Hands-On Network Programming with C# and .NET Core

Signal strength

The last major constraint of network communication that we'll look at is variable signal strength. Over any non-trivial network, the strength of a given signal can be impacted by anything from the distance between wireless transmitters and receivers to the sheer length of the wire connecting two gateways. This isn't much of a concern on modern fiber optic networks, since those rely on the transmission of pulses of light through glass or plastic fiber, and are thus not subject to many of the confounding factors that interfere with older physical network standards. However, reliable signal strength can be a major concern for wireless networks, or for wired networks that carry electrical signals over copper wiring.

If you're at all familiar with the impact of resistance on signal strength (for those of you who remember your college physics or computer hardware classes), you'll know that the longer the wire over which you want to send a signal, the weaker that signal will be at the receiving end. If you define a bit as being a 1 whenever the voltage on a wire is above a given threshold, and the resistance of your wire reduces the voltage of a signal over distance, there's a non-zero chance that some bits of your packet will be rendered indeterminable at the receiving end due to the degradation of your signal. A weak signal means a lower reliability of transmission.
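
To make the threshold idea concrete, here's a minimal C# sketch of that decision at the receiving end. The specific voltage values, the attenuation factor, and the ReadBit helper are invented purely for illustration; they aren't drawn from any real electrical standard:

```csharp
// A toy model of threshold-based bit detection. The receiver treats any
// voltage above ThresholdVolts as a 1 and any voltage below NoiseFloorVolts
// as a 0; readings in between are indeterminate. Attenuation over a long
// wire can push a transmitted 1 down into that indeterminate band.
using System;

public static class SignalDemo
{
    const double ThresholdVolts = 3.0;   // assumed "logical 1" threshold
    const double NoiseFloorVolts = 1.0;  // assumed "logical 0" ceiling

    // Returns true (1), false (0), or null when the bit is indeterminable.
    public static bool? ReadBit(double receivedVolts)
    {
        if (receivedVolts >= ThresholdVolts) return true;
        if (receivedVolts <= NoiseFloorVolts) return false;
        return null; // too degraded to classify; the packet must be discarded
    }

    public static void Main()
    {
        double transmittedVolts = 5.0;
        // Crude attenuation: the signal loses half its voltage over the run.
        double receivedVolts = transmittedVolts * 0.5;  // 2.5 V

        bool? bit = ReadBit(receivedVolts);
        Console.WriteLine(bit.HasValue
            ? $"Received bit: {(bit.Value ? 1 : 0)}"
            : "Bit indeterminable; discard packet and request re-transmission");
    }
}
```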

And mere resistance isn't the only thing that can weaken your signal. Most electrical signals are subject to interference from any other nearby electrical signals, or simply the electromagnetic fields that permeate the earth naturally. Of course, over time, electrical engineers have devised innumerable ways to mitigate those effects: everything from wire insulation to reduce the impact of electromagnetic interference, to signal relays that reduce the impact of resistance by amplifying a signal along its route. However, as your software is deployed to wider and wider networks, the extent to which you can rely on a modern and well-designed network infrastructure diminishes significantly. Data loss is inevitable, and that can introduce a number of problems for those responsible for ensuring the reliable delivery of your requests.

So, how does this intermittent data loss impact the design of network transmission formats? It enforces a few necessary attributes of our packets that we'll explore in greater depth later, but we'll mention them here quickly. Firstly, it demands the transmission of the smallest packets that can reasonably be composed. This is for the simple reason that any data corruption invalidates the whole payload of a packet. In a sequence of zeroes and ones, uncertainty about the value of a single bit can make a world of difference to the actual meaning of the payload. Since each payload is only a segment of the overall request or response object, we can't rely on having sufficient context within a given packet itself to correctly infer the value of an indeterminate bit. So, if one bit goes bad and is deemed indeterminable, the entire payload is invalidated and must be thrown out. By reducing packets to the smallest reasonable size achievable, we minimize the impact of invalid bits on the whole of our request payload. It's much more palatable to re-request a single 64-byte packet due to an indeterminable bit than it is to restart an entire 5 MB transmission.
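
As a rough illustration of that segmentation, the following C# sketch splits a large payload into fixed-size chunks. The 64-byte packet size and the Segment helper are assumptions made purely for this example, not part of any particular protocol:

```csharp
// A minimal sketch of segmenting a large payload into small, fixed-size
// packets, so that a corrupted packet only forces the re-request of its
// own small slice of the payload.
using System;
using System.Collections.Generic;

public static class Segmenter
{
    public static IEnumerable<byte[]> Segment(byte[] payload, int packetSize = 64)
    {
        for (int offset = 0; offset < payload.Length; offset += packetSize)
        {
            int length = Math.Min(packetSize, payload.Length - offset);
            var packet = new byte[length];
            Array.Copy(payload, offset, packet, 0, length);
            yield return packet;
        }
    }

    public static void Main()
    {
        var payload = new byte[5_000_000]; // the 5 MB request body from the text
        int count = 0;
        foreach (var packet in Segment(payload)) count++;

        // If one packet is corrupted, only that 64-byte slice is re-requested,
        // not the full 5 MB payload.
        Console.WriteLine($"Payload split into {count:N0} packets");
    }
}
```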

Astute readers may have already identified the second attribute of packets that is driven by unreliable signal strength. While variable signal strength and external interference could simply render a single bit indeterminable, they could also very well flip the bit entirely. So, while the recipient might read the received value with complete certainty, the value it reads is simply wrong. This is a much more subtle problem since, as I mentioned before, a packet will likely contain insufficient information to determine the appropriate value for a specific bit in its payload. This means packets will have to have some mechanism for, at the very least, error detection baked into their standard headers. So long as the consuming device can detect an error, it knows to discard the contents of the erroneous packet and request re-transmission.
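
The following C# sketch illustrates that principle with a deliberately simple XOR checksum; real protocols rely on stronger codes such as CRCs or the Internet checksum, but the detect-and-discard workflow is the same:

```csharp
// A simplified illustration of header-level error detection. The sender
// stores a checksum of the payload in the packet header; the receiver
// recomputes it and discards the packet on a mismatch.
using System;
using System.Linq;

public static class ErrorDetectionDemo
{
    // A toy checksum: XOR of every payload byte.
    public static byte Checksum(byte[] payload) =>
        payload.Aggregate((byte)0, (acc, b) => (byte)(acc ^ b));

    public static void Main()
    {
        byte[] payload = { 0x01, 0x02, 0x03, 0x04 };
        byte headerChecksum = Checksum(payload);  // sent alongside the payload

        payload[2] ^= 0b0000_1000;                // simulate a bit flipped in transit

        bool corrupted = Checksum(payload) != headerChecksum;
        Console.WriteLine(corrupted
            ? "Checksum mismatch; discard packet and request re-transmission"
            : "Packet accepted");
    }
}
```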

It's worth noting that the benefits of decomposing a request into smaller and smaller packets reach a limit beyond which further decomposition ceases to help network performance. Take this line of thinking to its reductio ad absurdum and you'll quickly find yourself with a full-fledged packet for every single bit in your payload, error detection and all. With our imagined request payload of 5 MB, that's 40,000,000 packets for a single request. Obviously, this is an absurd number of packets for such a small request. Instead, network engineers have found that a reliable range of packet sizes for a given protocol falls somewhere between a few hundred bytes and a few kilobytes.
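
A quick back-of-the-envelope calculation, sketched here in C# with illustrative packet sizes, shows how dramatically the packet count falls as the packet size grows:

```csharp
// The arithmetic behind the reductio ad absurdum: packet counts for a
// 5 MB payload at a few example packet sizes.
using System;

public static class PacketCountDemo
{
    public static void Main()
    {
        long payloadBytes = 5_000_000;        // the 5 MB payload from the text
        long payloadBits  = payloadBytes * 8; // 40,000,000 bits

        Console.WriteLine($"1 bit per packet:   {payloadBits:N0} packets");
        Console.WriteLine($"64-byte packets:    {payloadBytes / 64:N0} packets");
        Console.WriteLine($"1,500-byte packets: {payloadBytes / 1500:N0} packets");
    }
}
```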

Now that we know why network communication is done with small, isolated packets, we should take a look at what those are.