The catchphrases used to describe Ethernet are CSMA/CD, with the various forms of media described as 10BASE-5, 10BASE-2, 10BASE-T, and so on. CSMA/CD is an acronym for ``Carrier Sense Multiple Access with Collision Detection'' and BASE is an abbreviation for ``baseband'' (as opposed to ``broadband''). We will look at these phrases in a logical sequence:
Basic Electrical Engineering
A given signal-carrying medium has a capacity called its ``bandwidth,'' measured by electrical engineers in Hertz (Hz). Bandwidth is literally the width of a band of frequencies: one simply subtracts the lower limit of the frequencies used from the upper limit of the frequencies used. Nyquist's theorem relates bandwidth to data rate: a bandwidth of W is sufficient to carry a data signal with a transmission rate of 2W. The converse is also true: given bandwidth W, the highest signal rate is 2W. The data signal need not be encoded in binary, but if it is, then the data capacity in bits per second (bps) is twice the bandwidth in Hertz. Multilevel signaling can increase this capacity by transmitting more bits per data signal unit.
The bad news about multilevel signaling is that one must be able to distinguish between the levels in the presence of outside interference, called ``noise.'' Shannon's law sets an upper limit on the bps/Hz ratio, which increases logarithmically with the signal-to-noise ratio. Theoretically, one should be able to obtain between 2 and 12 bps/Hz, but current technology is only capable of 1 to 4 bps/Hz. For simplicity, we will not attempt to make a serious distinction between the two ways of measuring capacity and will simply talk about ``bandwidth'' in terms of bits/second. However, one must remember that these are two very different things: bandwidth is a measure of the range of frequencies used in an analog signal, while bits/second is a measure of digital data rate.
As an example, consider the typical telephone, where the audio signals are limited to the 300 to 3300 Hz range. A total of 3 kHz of bandwidth is required to transmit this signal. If we are sufficiently clever, we can encode data onto this bandwidth at about 10 bps/Hz and get a modem to transmit data at 28.8 kbps.
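These limits can be checked with a short calculation. This sketch uses Shannon's law in its usual form, C = B * log2(1 + S/N); the 35 dB signal-to-noise ratio is an illustrative figure assumed for a good voice line, not a value from the text:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon's law: C = B * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Telephone channel: 3 kHz of bandwidth (300-3300 Hz).
bandwidth = 3000
# Nyquist limit for a binary signal: twice the bandwidth.
nyquist_binary_bps = 2 * bandwidth
# Assumed 35 dB SNR for illustration.
capacity = shannon_capacity_bps(bandwidth, 35)
print(f"Nyquist binary limit: {nyquist_binary_bps} bps")
print(f"Shannon limit at 35 dB SNR: {capacity / 1000:.1f} kbps")
```

The Shannon limit comes out near 35 kbps, so a 28.8 kbps modem (about 10 bps/Hz) sits below the theoretical ceiling, as the text suggests.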
Baseband vs. Broadband Multiple Access
As we saw in the previous section, the term ``band'' refers to a range of frequencies. In sharing a given band, one can divide the band into pieces called ``channels.'' Each channel will have a fraction of the bandwidth of the total band. For example, one can use the 9 MHz wide band from 1 MHz to 10 MHz and divide it into 3000 channels, each with 3 kHz of bandwidth. Doing so would allow many simultaneous telephone sessions to be transmitted over a single pair of wires, but requires manipulating the analog signal to ``tune'' into one particular channel.
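The channel arithmetic above is simple enough to verify directly:

```python
BAND_LOW_HZ = 1_000_000      # 1 MHz
BAND_HIGH_HZ = 10_000_000    # 10 MHz
CHANNEL_HZ = 3_000           # one 3 kHz telephone channel

band_width_hz = BAND_HIGH_HZ - BAND_LOW_HZ   # 9 MHz of total bandwidth
channels = band_width_hz // CHANNEL_HZ       # how many channels fit
print(channels)  # 3000
```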
Alternatively, one could use the entire band as a signal carrier. No fancy manipulation of the analog signal need be done to place a two-level digital signal on a wire pair, but sharing is now non-trivial. In RS-232 serial communications, sharing of a wire is not done: one wire is used to send data one way, a second wire sends data the other way, and a third is used as the signal ground (a basic fact from physics: voltages must be measured as the difference in electrical potential between two wires - one of these is said to be the ``ground'' while the other is said to carry the signal).
Ethernet is a baseband system that uses a Manchester encoding of high and low voltages to place bits on a wire pair. In a Manchester encoding, each bit time contains a transition in the middle: a transition from low to high represents a 0 bit and a transition from high to low represents a 1 bit. With repeated bits of the same value, a transition is also needed at the edge of the bit time. To achieve 10 Mbps, one needs a capacity for 20 million transitions per second, doubling the bandwidth requirement over the simplistic encoding of high for 1 and low for 0. However, it ensures a transition on a predictable basis, allowing for synchronization.
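A Manchester encoder is easy to sketch. This toy version (function and level names are illustrative) follows the convention stated above, splitting each bit time into two half-bit signal levels:

```python
def manchester_encode(bits):
    """Manchester-encode a string of '0'/'1' characters, using the
    convention above: a low-to-high mid-bit transition is a 0, a
    high-to-low transition is a 1. Each bit becomes two half-bit
    signal levels, 'L' (low) or 'H' (high)."""
    levels = []
    for b in bits:
        if b == "0":
            levels += ["L", "H"]   # rising edge in the middle of the bit
        else:
            levels += ["H", "L"]   # falling edge in the middle of the bit
    return "".join(levels)

print(manchester_encode("0011"))  # LHLHHLHL
```

Note that n bits always become 2n signal levels - exactly the doubled bandwidth requirement mentioned above - and that repeated bits (e.g. ``00'' giving LHLH) force the extra transition at the bit-time boundary.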
Consider the ALOHA system mentioned earlier. Suppose one broadcasts frames that are all the same size (to simplify the analysis). The amount of time to transmit a frame, the ``frame time,'' is the ratio of the frame length to the bit rate. A collision will result with a given frame if any other frame is broadcast within one frame time of the start of the given frame. This window of vulnerability is two frame times in duration, since the collision may be either with a frame that started before the given frame or after it.
Suppose that the collection of stations generates an average of X frames per frame time (with a Poisson distribution), with 0<X<1. Then, statistically, one would expect (after much careful analysis) at most about 0.18 frames successfully transmitted per frame time. This maximum occurs with X=0.5.
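The ``careful analysis'' yields the standard pure-ALOHA throughput formula S = X * e^(-2X), which follows from the two-frame-time vulnerability window. A quick check of the peak:

```python
import math

def aloha_throughput(x):
    """Pure ALOHA: with X attempted frames per frame time (Poisson),
    the expected successes per frame time is S = X * e^(-2X)."""
    return x * math.exp(-2 * x)

peak = aloha_throughput(0.5)
print(f"Throughput at X = 0.5: {peak:.3f}")  # about 0.184, i.e. 1/(2e)
```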
In order to make this more efficient, Ethernet immediately stops the transmission of a frame as soon as a collision has been detected. Thus, frames involved in a collision are shorter than those that are not.
To recover from the collision, the network interface waits a short while and then retransmits. If a collision occurs on the second attempt, it waits twice as long as it did the first time before retransmitting. This process is called ``backoff.'' Each time the interface increases the time between transmission attempts, it increases it by a small constant factor, leading to ``exponential backoff.'' Ethernet specifies 2 as the constant factor, making it ``binary exponential backoff.'' Retransmission is attempted up to 16 times, after which the data to be transmitted is simply discarded. Hence, a busy Ethernet will drop packets.
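The backoff rule can be sketched as follows. This is a toy model assuming the usual IEEE 802.3 parameters (16 attempts, a random delay of up to 2^n - 1 slot times, with the exponent capped at 10); `transmit_once` is a hypothetical stand-in for the hardware's transmit-and-detect-collision step:

```python
import random

SLOT_TIME_US = 51.2      # one slot = 512 bit times at 10 Mbps
MAX_ATTEMPTS = 16        # attempt limit in the Ethernet specification
BACKOFF_CEILING = 10     # the exponent stops doubling after 10 collisions

def backoff_slots(collision_count):
    """After the n-th collision, wait a random whole number of slot
    times chosen uniformly from [0, 2**min(n, 10) - 1]."""
    return random.randint(0, 2 ** min(collision_count, BACKOFF_CEILING) - 1)

def send(transmit_once):
    """transmit_once() returns True on success, False on a collision.
    After MAX_ATTEMPTS failures the frame is simply dropped."""
    for n in range(1, MAX_ATTEMPTS + 1):
        if transmit_once():
            return True
        wait_us = backoff_slots(n) * SLOT_TIME_US  # real hardware waits here
    return False  # busy network: the packet is dropped
```

After the first collision the delay is 0 or 1 slot; after the second, 0 to 3 slots; and so on, which is the doubling described above.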
To further reduce the probability of collisions, an interface can simply not transmit when another one is transmitting. Since signal propagation takes time (signals do not travel faster than the speed of light), some time elapses between the beginning of a transmission and the time a collision might occur with an interface that had not yet begun receiving the transmitted signal. Both interfaces ``believe'' they were the first to begin transmitting, and both are right in their own frame of reference; the analogous paradox is well documented in studies of Einstein's special relativity. The ``contention time,'' as this period of uncertainty is known, is twice the propagation time between the most distant pair of interfaces in the Ethernet. Propagation time is about 5 nanoseconds per meter on a coaxial cable. For a 500 meter cable, one gets a contention time of 5 microseconds. Since a bit time is 100 nanoseconds in a 10 Mbps Ethernet, 50 bits are transmitted during the contention period.
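The contention-time figures above reduce to a couple of multiplications:

```python
PROP_NS_PER_M = 5        # propagation delay on coaxial cable, ns per meter
CABLE_M = 500            # cable length in the example
BIT_TIME_NS = 100        # one bit time at 10 Mbps

one_way_ns = PROP_NS_PER_M * CABLE_M          # 2500 ns to cross the cable
contention_ns = 2 * one_way_ns                # 5000 ns = 5 microseconds
bits_during_contention = contention_ns // BIT_TIME_NS
print(contention_ns, bits_during_contention)  # 5000 50
```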
A system employing CSMA/CD will thus be in one of three states: transmission, contention, or idle. A small amount of idle time is required before and after each transmission and contention state for the carrier sensing to do its job. However, in a busy network it may happen that several interfaces attempt to transmit when they sense an idle network.
The combination of carrier sensing to prevent transmissions that would otherwise result in collisions and collision detection to terminate incomplete transmissions that have collided increases the efficiency of Ethernet over ALOHA. Under the same kind of statistical analysis as was applied to ALOHA, one gets a result of 0.37 frames per frame time being transmitted, with the peak occurring at about 0.8 attempted transmissions per frame time.
In a real network, things are not as tidy as in the statistical studies. Frames vary in size and traffic tends to come in bursts rather than being fairly uniformly spread out. Contention comes only at the beginning of the frame, so the simplest measure of the network load as it relates to collisions is the packet rate. Ethernet has a lower limit on frame size which limits the maximum theoretical packet rate to about 14 thousand packets per second, but with packets approaching the maximum frame size one would expect a few hundred packets per second on a busy network and about a thousand packets per second on a pathologically busy one.
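The 14-thousand figure comes from dividing the bit rate by the on-the-wire cost of a minimum-size packet. A sketch, assuming the standard 64-byte minimum frame plus the 8-byte preamble and 96-bit interframe gap:

```python
BIT_RATE = 10_000_000        # 10 Mbps
MIN_FRAME_BITS = 64 * 8      # 64-byte minimum Ethernet frame
PREAMBLE_BITS = 8 * 8        # preamble plus start-of-frame delimiter
INTERFRAME_GAP_BITS = 96     # mandatory idle time between frames

bits_per_packet = MIN_FRAME_BITS + PREAMBLE_BITS + INTERFRAME_GAP_BITS
max_pps = BIT_RATE / bits_per_packet
print(f"{max_pps:.0f} packets per second")  # roughly 14,880
```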
With a sufficiently small number of hosts contending for the total network bandwidth, one can hope to see about 8 Mbps transmitted on a 10 Mbps network. Hence, if one performs a file transfer between two hosts, one might actually see a 20 MB file (20 MB = 160 Mb) transferred in 20 seconds (1 MBps). Most disk drives only sustain a few MBps, so there is not a significant penalty for network access on such a lightly loaded network. However, as more hosts place demands on the network, the share each gets of the total available bandwidth decreases. In addition, as the demand for network bandwidth increases, efficiency begins to drop.
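A minimal check of the file-transfer arithmetic in that example:

```python
FILE_MB = 20
EFFECTIVE_MBPS = 8                            # usable share of a 10 Mbps Ethernet

file_megabits = FILE_MB * 8                   # 20 MB = 160 Mb
transfer_s = file_megabits / EFFECTIVE_MBPS   # seconds to move the file
rate_MB_per_s = FILE_MB / transfer_s          # effective MB per second
print(transfer_s, rate_MB_per_s)              # 20.0 1.0
```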
In the extreme case, an Ethernet can achieve a condition called ``collapse'' in which it does not leave the contention state for any significant amount of time. In theory, it can be triggered by a bad combination of timing with just a handful of interfaces all initiating a transmission as soon as the network appears idle, but the probability is astronomically small that such an event would occur. Typically, collapse is triggered by an interface that has a failure in its circuitry. Sometimes a single interface will ``jabber'' by sending a continuous stream of bad frames (often without stopping to do carrier sensing or collision detection).