Atomic data
If you have any experience with database design, you might already have a pretty clear idea of what constitutes atomic data. Typically, it means the smallest components into which a record can be broken down without losing their meaning. In the context of network communication, though, we're not really concerned with the payload of a packet losing its meaning. It will be recomposed into the original data structure by the recipient of the payload, and so it's fine if the small chunk of data that moves over the network is meaningless on its own.
Instead, when we talk about atomic data in the context of network transactions, we're really talking about the minimum size that we can break our data down into, beyond which we stop seeing the desired benefits of shrinking it into smaller and smaller chunks. Those chunks may well splice a double-precision decimal value in two, sending one half in one packet and the other half in an entirely separate packet. In that case, neither packet has enough information to make sense of the data in its original form. It wouldn't be considered atomic in the same way that a FIRST_NAME field is the most atomic way to store the first name of a user record in a database. But if that decomposition results in the most efficient distribution of packets for transmission over the current network, with minimum latency and maximum bandwidth utilization, then it is the most atomic way to represent that data in a network packet.
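To make that concrete, here is a minimal sketch in C# that simulates the scenario just described. The split into two four-byte "packets" is done by hand purely for illustration; in reality the transport layer decides where those boundaries fall. The point is that neither chunk is meaningful on its own, yet the value is fully recoverable once the chunks are recomposed:

using System;

class AtomicityDemo
{
    static void Main()
    {
        // Serialize a double-precision value into its 8-byte representation.
        double original = 12345.6789;
        byte[] bytes = BitConverter.GetBytes(original);

        // Simulate the transport layer splitting the payload mid-value:
        // neither half carries enough information to reconstruct the double.
        byte[] firstPacket = new byte[4];
        byte[] secondPacket = new byte[4];
        Array.Copy(bytes, 0, firstPacket, 0, 4);
        Array.Copy(bytes, 4, secondPacket, 0, 4);

        // Only after recomposition does the payload recover its meaning.
        byte[] reassembled = new byte[8];
        firstPacket.CopyTo(reassembled, 0);
        secondPacket.CopyTo(reassembled, 4);

        Console.WriteLine(BitConverter.ToDouble(reassembled, 0)); // 12345.6789
    }
}

Printing either firstPacket or secondPacket alone would yield four context-free bytes, which is exactly the sense in which an individual packet's payload can be "meaningless" without being lost.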
For an example of this, just look at any arbitrary packet you recorded in your Wireshark capture. Looking at a packet in my data stream, we've got the following Transmission Control Protocol (TCP) packet (or segment, as TCP's units are properly called):
As you can see in the selected text of the raw data view at the bottom of my Wireshark panel, the payload of that particular packet was 117 bytes of nonsensical garbage. That might not seem very useful to you or me, but once that specific TCP packet is reassembled with the rest of the packets in that request, the resulting data should make sense to the consuming software (in this case, the instance of Google Chrome running on my computer). So, this is what is meant by an atomic unit of data. Fortunately, that's not something we'll have to concern ourselves with, since it's handled by the operating system's implementation of the transport layer. So, even though we can implement software that directly leverages a transport layer protocol of our choice, the actual act of decomposing and recomposing packets or datagrams will always be out of our hands when we're working on the .NET Core platform.
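To see what that looks like from the application's side, here is a minimal sketch using TcpClient; the host example.com and port 80 are placeholder values, not anything from the capture above. From managed code, we only ever observe a contiguous byte stream, and the segment boundaries visible in Wireshark never surface in these calls:

using System;
using System.Net.Sockets;
using System.Text;

class StreamDemo
{
    static void Main()
    {
        // example.com and port 80 are placeholders; any TCP endpoint works.
        using (var client = new TcpClient("example.com", 80))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] request = Encoding.ASCII.GetBytes(
                "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
            stream.Write(request, 0, request.Length);

            var buffer = new byte[4096];
            int bytesRead;
            // Each Read() may span, or fall inside, any number of underlying
            // TCP segments; decomposition and reassembly happen in the
            // operating system's network stack before Read() ever returns.
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                Console.Write(Encoding.ASCII.GetString(buffer, 0, bytesRead));
            }
        }
    }
}

Notice that nothing in this code acknowledges packets at all; that opacity is precisely the point of the preceding paragraph.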