
Original Ethernet

The oldest, "original" Ethernet standard is now referred to as 10BASE-5 and is based on half-inch thick coaxial cable. The "-5" indicates a 500 meter limit on a single cable. The 500 meter limit is due to signal degradation, not propagation time. To make an Ethernet larger, one can add "repeaters" between cables to regenerate a "clean" signal.


Essentially, a repeater is equivalent to a pair of back-to-back transceivers. The transmit wire on one transceiver is hooked to the receive wire on the other, so that bits received by one transceiver are immediately transmitted by the other. Doing so adds at least one bit time of delay (100 nanoseconds in a 10 Mbps Ethernet), plus whatever extra delay the circuitry might add. A 100 ns delay is equivalent to the propagation delay of about a hundred feet of cable, since the speed of light is about a foot per nanosecond.
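The delay arithmetic can be checked with a few lines of Python; the one-foot-per-nanosecond figure is the rough approximation used above, not an exact constant:

```python
# One bit time at 10 Mbps, and the rule of thumb that a signal
# travels about a foot per nanosecond.
BIT_RATE_BPS = 10_000_000
bit_time_ns = 1e9 / BIT_RATE_BPS       # 100 ns per bit at 10 Mbps
FEET_PER_NS = 1.0                      # approximate signal speed

# A one-bit repeater delay is therefore like adding this much cable:
equivalent_cable_ft = bit_time_ns * FEET_PER_NS
print(bit_time_ns, equivalent_cable_ft)   # 100.0 100.0
```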

Figure 2: A repeater is equivalent to 2 transceivers.

The standard "Rule Of Thumb" (ROT) is that no more than 3 repeaters may be located between any pair of interfaces in the network. This ROT prevents the propagation delay from getting out of hand, since one seldom knows how much cable is really buried in the walls, ceilings, floors, and landscaping. Clearly, one could use 3 repeaters to form a 2 kilometer long chain of four 500 meter cables and adhere to the ROT, but such a network may still have problems because some network interfaces do not deal well with such a long contention time.

One need not be restricted to simple chains of cables. A typical arrangement would be to string a cable vertically from floor to floor in a building's utility area and then place a repeater on each floor to connect to individual interfaces. This arrangement, called a "backbone" cable, requires only two repeaters between any pair of interfaces even though a potentially large number of repeaters is in use. The term "backbone" comes from imagining the repeater in the middle of one side of a square floor with two rib-shaped half-loops extending out along the two neighboring sides of the square, as would commonly be the case in a building with a hallway down the middle. Several of these "U" shaped ribs stacked on top of each other form a ribcage, with the vertical cable in the position of the backbone.


A bridge can be used to extend an Ethernet network beyond the size constraints imposed by propagation times. A bridge is a simple device consisting of a pair of interfaces with some packet buffers and some simple logic. The bridge receives a packet on one interface, stores it in a buffer, and immediately queues it for transmission on the other interface. The two cables each experience collisions, but collisions on one cable do not cause collisions on the other. They are in separate "collision domains."

Note that a bridge exhibits two forms of unusual behavior. First, the bridge runs its interfaces in "promiscuous" mode. In promiscuous mode, an interface extracts each and every packet from the cable, not just the ones with a specific destination address. If a computer system put its interface into promiscuous mode, it could eavesdrop on any and all network communications. Devices that put their interfaces into promiscuous mode are called "sniffers," a pun on the word "ether," which is also the name of a smelly gas once used as an anesthetic. Second, when forwarding a frame, a bridge keeps the original source address in the frame header rather than substituting the address of the interface doing the forwarding. Such behavior contradicts the notion that the source address is firmly fixed by the interface hardware. In fact, most interface hardware can be controlled by software so as to "forge" the source address on each frame.

A bridge will introduce a delay of a frame time, which may be tens of thousands of bit times. If too many bridges are used between interfaces, some data link layer software systems will be confused by the large delay. Hence, the ROT for bridges is similar to the ROT for repeaters: no more than a few between any pair of interfaces.
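A quick calculation supports the frame-time figure. A maximum-size Ethernet frame is 1518 bytes (a standard Ethernet constant), which works out to over twelve thousand bit times, or a bit more than a millisecond at 10 Mbps:

```python
# Standard Ethernet constants: maximum frame size and 10 Mbps bit rate.
MAX_FRAME_BYTES = 1518
BIT_RATE_BPS = 10_000_000

bit_times = MAX_FRAME_BYTES * 8                 # bits in a maximum frame
delay_ms = bit_times / BIT_RATE_BPS * 1000      # store-and-forward delay

print(bit_times, delay_ms)   # 12144 1.2144
```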

Minimally, the logic must manage the queuing of packets in each direction. However, most bridges also apply some filtering rules before queuing to prevent unnecessary retransmission. The rules cannot be very complex because each and every packet must be processed, and packets may be arriving at over 10,000 per second on each interface.

The standard bridge is a "transparent" learning filtering bridge. It is transparent in the sense that no software needs to be configured on it or any other device on the network in order for it to do its job effectively. It examines the sender address on each packet to learn which interfaces are on which side of the bridge. Then, if the destination address on a packet coming from one direction belongs to an interface in the same direction, the packet need not be forwarded to the opposite side of the bridge. Note that if the bridge does not know where a packet needs to go, it must forward it rather than filter it. In a few seconds, a bridge is likely to learn about a large portion of the addresses in use on a particular Ethernet network.
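The learn-then-filter logic described above can be sketched in a few lines of Python. The class and method names here are illustrative only, not from any real bridge firmware:

```python
# A minimal sketch of a two-port transparent learning/filtering bridge.
class Bridge:
    def __init__(self):
        # Maps a source address to the port it was last seen on.
        self.table = {}

    def handle_frame(self, in_port, src, dst):
        # Learn: the sender is reachable via the arrival port.
        self.table[src] = in_port
        if self.table.get(dst) == in_port:
            return None          # filter: destination is on the same side
        return 1 - in_port       # forward (or flood, if dst is unknown)

bridge = Bridge()
print(bridge.handle_frame(0, "A", "B"))  # dst unknown: must forward -> 1
print(bridge.handle_frame(1, "B", "A"))  # "A" learned on port 0: forward -> 0
print(bridge.handle_frame(0, "C", "A"))  # "A" is on the same side: None
```

Note the fallback in the last line of `handle_frame`: a destination the bridge has not yet learned is forwarded, never dropped, exactly as the text requires.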

Since collision rates increase with increasing packet rates, a filtering bridge can increase the performance of a network significantly. Theoretically, each of the two "segments" (also called "collision domains") can get 10 Mbps of data bandwidth for an aggregate bandwidth of 20 Mbps for the network formed by their union.

This is a very important concept to understand, so we will look at an example to clarify the situation. Suppose one has 7 computers that will be generating traffic in a pairwise fashion as shown in the illustration below. Some computers do not communicate with each other at all. Some pairs interact more intensely than others.

If these seven computers are all attached to a single Ethernet cable, the overall packet rate can be found by summing up the packet rates for each of the pairwise relationships.

At a high packet rate one would expect a high collision rate, with a resulting loss in the throughput experienced by each of the computers in the network. To reduce the packet rate, one can insert a bridge near the middle in an attempt to split the network into two collision domains, each with a minimal packet rate. Ideally, one wants to divide the computers into two groups that interact as little as possible and install the bridge to tie these two groups together. Such an arrangement results in the bridge doing maximal filtering and minimal forwarding. However, if one of these two groups is much less busy than the other, the split may not reduce the packet rate by much in the busier group. Practical matters, such as the desire to avoid installing extra cabling, also enter into the decision of where to locate a bridge.
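The effect of the split can be sketched with invented pairwise packet rates; the figures below are made up for illustration and are not the ones from the original illustration:

```python
# Hypothetical pairwise packet rates in packets/second among 7 computers.
rates = {
    ("A", "B"): 200, ("B", "C"): 150, ("A", "C"): 50,   # busy cluster
    ("D", "E"): 100, ("E", "F"): 80,  ("F", "G"): 60,   # second cluster
    ("C", "D"): 10,                                     # light cross traffic
}

single_cable_rate = sum(rates.values())  # all 7 on one collision domain

# Put the bridge between {A,B,C} and {D,E,F,G}: only frames whose
# endpoints straddle the bridge get forwarded to the other side.
left = {"A", "B", "C"}
cross = sum(r for (a, b), r in rates.items() if (a in left) != (b in left))
left_rate = sum(r for (a, b), r in rates.items() if a in left or b in left)
right_rate = single_cable_rate - left_rate + cross

print(single_cable_rate, left_rate, right_rate)   # 650 410 250
```

With these numbers, each collision domain sees well under the 650 packets/second the single shared cable would carry; the bridge forwards only the 10 packets/second of cross traffic.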


© Paul Buis, Associate Professor
Computer Science Department
Ball State University
September, 1996