Satellite ATM Networks
Increasing demand for high-speed, reliable network transmission has motivated the design and implementation of new technologies capable of meeting lofty performance standards. A relatively new form of network transmission known as Asynchronous Transfer Mode (ATM) has emerged as a potential solution to the ever-increasing bandwidth demand imposed upon networks. ATM is well suited to many kinds of traffic, including voice, data, video, and multimedia. Coupling ATM network implementations with the benefits associated with satellite networks may prove to be a fruitful merger in the pursuit of faster, more reliable, far-reaching networks.
Using ATM network technology over a satellite network medium can provide many advantages over the typical terrestrial-based network system. Some of these advantages include remote area coverage, immunity from earth-bound disasters, and transmission distance insensitivity. Additionally, these networks can offer bandwidth on demand, broadband links, and easy network user addition and deletion.
Adapting ATM technology for use over a satellite network is a new concept still in its infancy. Numerous problems such as signal adaptation, network congestion, and error control will have to be overcome in order to produce a fully functioning, viable system from the marriage of these two distinct technologies. Although ATM over satellite networks are not yet commonplace, many military, research, and business entities are expressing interest in their development. It is a promising idea that may become one of the solutions to the growing demand for network performance.
SATELLITE NETWORKS
A satellite network is a specialized type of wireless transmission and reception system. Satellites send and receive signals to and from the earth in order to facilitate data transfer between various points on the planet. A typical satellite is rocket-launched and placed in a specific type of orbit around the globe. In 1945, scientist and author Arthur C. Clarke first proposed that satellites in orbit around the earth could be used for communication purposes. (The Geosynchronous Earth Orbit described below is sometimes called the Clarke orbit in honor of the author’s suggestion.) The first satellite successfully launched into space was the Russian artificial satellite Sputnik 1, roughly the size of a beach ball. This satellite, launched in 1957, simply transmitted a short radio beep repeatedly. Today, there are hundreds of satellites circling the globe serving many diverse purposes such as communications, weather tracking and reporting, military functions, photo and video imaging, and global positioning.
Satellites communicate with the earth by transmitting radio waves between the satellites and earth-bound reception stations. The radio frequencies and corresponding wavelengths used depend on the satellite system, as described below. The signals transmitted between the earth and satellites are sent and received by various sized antennas known as satellite dishes, typically located near the earth-station receivers. Signals sent from earth to the satellites are referred to as uplinks, whereas the signals emanating from the satellites are called downlinks. The range of coverage area on earth that the frequencies can reach is known as the satellite’s footprint. (See Figure 1 below.) Radio waves transmitted to (and from) satellites usually travel over very large distances, which creates relatively long propagation delays since the speed of transmission is limited by the speed of light.
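To put that delay in perspective, here is a back-of-the-envelope sketch, assuming the roughly 22,238-mile geosynchronous altitude discussed in the next section:

    # One-way propagation delay for a single ground-to-GEO hop,
    # assuming the ~22,238-mile altitude quoted below.
    altitude_km = 22_238 * 1.609      # miles -> km, roughly 35,800 km
    c_km_per_s = 299_792              # speed of light in vacuum
    one_way_s = altitude_km / c_km_per_s
    print(round(one_way_s * 1000, 1), "ms")   # ~119 ms per hop

Two such hops (up to the satellite and back down to another earth station) put a single ground-to-ground path at roughly a quarter of a second, which is why the delay matters for interactive traffic.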
Figure 1.
Three commonly used satellite frequency bands are C, Ku, and Ka, with C and Ku being the most frequently used in today’s satellite systems. C-band transmissions occupy the 4 to 8 GHz frequency range, whereas the Ku and Ka bands occupy the 11 to 17 GHz and 20 to 30 GHz frequency ranges respectively. There is a relationship between transmission frequency, wavelength, and antenna (dish) size: the higher the frequency, the smaller the wavelength and, accordingly, the smaller the dish. Conversely, a lower frequency corresponds to a larger wavelength, which in turn requires a larger dish. A C-band antenna is generally 2-3 meters in diameter, whereas a Ku-band antenna can be as small as 18 inches in diameter. Ku is the band of choice for many home DSS systems in use today.
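To make the frequency-to-wavelength relationship concrete, here is a small sketch using one representative frequency from each band; the specific frequencies chosen are illustrative, not fixed assignments:

    # Wavelength = c / frequency: higher frequency -> shorter wavelength -> smaller dish.
    # The frequencies below are representative points inside each band quoted above.
    c = 3.0e8                                   # speed of light, m/s
    for band, freq_ghz in [("C", 6.0), ("Ku", 14.0), ("Ka", 30.0)]:
        wavelength_cm = c / (freq_ghz * 1e9) * 100
        print(f"{band}-band at {freq_ghz} GHz: ~{wavelength_cm:.1f} cm wavelength")

The roughly 5 cm C-band wavelength versus the roughly 2 cm Ku-band wavelength is what allows the Ku dish to shrink to the 18-inch size mentioned above.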
The majority of satellites circling the globe today are in Geosynchronous Earth Orbit (GEO). These satellites are positioned roughly 22,238 miles above the earth’s surface. As the geosynchronous name implies, they circle the globe once every 24 hours, completing one orbit for every earth rotation. Viewed from the earth, the satellites therefore appear stationary, remaining in a fixed position, which is why they are also occasionally called geostationary satellites. (There is a difference between the two terms, however: geosynchronous orbits can be circular or elliptical, whereas geostationary orbits must be circular and located above the earth’s equator.) This orbit allows the satellite dishes located on the earth to be aimed at the orbiting satellite once, without requiring continual repositioning. A satellite in this type of orbit can provide a coverage footprint equal to roughly 40% of the earth’s surface. Therefore, three evenly spaced GEO satellites (120 angular degrees apart) can provide transmission coverage for nearly the entire globe, excluding only the polar regions.
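As a quick sanity check on the 24-hour figure, Kepler's third law can be applied to that altitude; the sketch below uses standard round values for the earth's radius and gravitational parameter:

    # Kepler's third law, T = 2*pi*sqrt(a^3 / mu), applied to the GEO altitude above.
    import math
    mu_km3_s2 = 398_600                 # earth's gravitational parameter, km^3/s^2
    earth_radius_km = 6_378             # equatorial radius, km
    a_km = earth_radius_km + 22_238 * 1.609
    period_h = 2 * math.pi * math.sqrt(a_km**3 / mu_km3_s2) / 3600
    print(round(period_h, 2), "hours")  # ~23.9 hours, i.e. one sidereal day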
In recent years, technological innovations have paved the way for new types of satellite orbits and designs. One of these new orbits is the Medium Earth Orbit (MEO). Satellites in this type of orbit are located at an altitude of approximately 8,000 miles above the earth. Placing satellites at this level allows for shorter transmission distances, thereby increasing signal strength and decreasing transmission delay. This in turn means that the receiving equipment on earth can be smaller, lighter, and less expensive. The downside to these altitudes is the smaller footprint provided by MEO satellites as opposed to their GEO counterparts.
Another relatively new category of satellite orbits is the Low Earth Orbit (LEO). There are three categories of LEO satellites: Little LEO, Big LEO, and Mega LEO. LEO satellites typically orbit at a distance of only 500 to 1000 miles above the earth. As with the MEO satellites described above, LEO satellites further reduce the transmission delay and equipment expense, maintain a strong signal strength, and project a smaller footprint.
ATM NETWORKS
A networking technology that has gained considerable popularity in the recent past is Asynchronous Transfer Mode (ATM). This switching technology, also known as cell switching, has been embraced by various parts of the network transmission community such as telephone companies, scientific research firms, and the military. ATM was designed to operate on transmission media at speeds of 155 Mbps or more, giving it the advantage of good performance. ATM is connection-oriented, offering a high Quality of Service (QoS) level. The ATM network technology was originally envisioned as a way to create large public networks for the transmission of data, voice, and video. ATM has subsequently been embraced by the LAN community as well, where it competes with Ethernet and Gigabit Ethernet.
In its simplest form, an ATM network is a switched network that creates a connection and path from a sender to one or more receivers for the transmission of fixed-size frames known as cells. Cell transport is accomplished using a statistical multiplexing algorithm for transmission decisions and a technique called cell segmentation and reassembly. Since the majority of frames passed to an ATM network are not the required size, the ATM protocol must segment the frames into proper-sized cells prior to transmission and subsequently reassemble the cells into their original form at the receiver.
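A minimal sketch of the segmentation-and-reassembly idea follows, assuming zero-padding of the final cell and an out-of-band length value; a real ATM adaptation layer such as AAL5 instead carries the original length and a CRC in a trailer:

    # Split a variable-length frame into 48-byte cell payloads and rebuild it.
    def segment(frame: bytes, payload_size: int = 48) -> list[bytes]:
        cells = []
        for i in range(0, len(frame), payload_size):
            chunk = frame[i:i + payload_size]
            cells.append(chunk.ljust(payload_size, b"\x00"))   # pad the final cell
        return cells

    def reassemble(cells: list[bytes], original_length: int) -> bytes:
        return b"".join(cells)[:original_length]               # strip the padding

    frame = b"example frame that is longer than one 48-byte ATM cell payload"
    cells = segment(frame)
    assert reassemble(cells, len(frame)) == frame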
ATM cells are fixed-size data packets of 53 bytes each, always composed of a 5-byte header and a 48-byte payload. This fixed size has several beneficial side effects: it reduces the hardware complexity of the switches themselves, improves cell queueing, and makes switching operations uniform since each cell takes the same amount of time to process. ATM cells have two possible formats depending upon where the cell happens to be in the network: the User-Network Interface (UNI) format, used between a user and the network, and the Network-Network Interface (NNI) format, used between networks.
Figure 2 below shows the format for a UNI version of the ATM cell. The NNI version of the cell differs only in that the Generic Flow Control (GFC) of the UNI version is replaced by 4 additional bits for the Virtual Path Identifier (VPI).
As can be seen from Figure 2, the cell starts with the GFC, which is intended to arbitrate access to a link in the event that the local site uses a shared medium to attach to an ATM configuration. Following the GFC are the VPI and Virtual Channel Identifier (VCI) bits, used to identify the path and channel created for this transmission. Next come the Type bits, which are used to facilitate management functions and indicate whether or not user data is contained in the cell. The final two fields prior to the 48-byte payload are the Cell Loss Priority (CLP) bit and the Header Error Check (HEC) bits. The CLP establishes a priority value in case the network becomes congested and needs to drop one or more cells. (See below for a discussion on traffic management.) The HEC is used for header error checking and incorporates the CRC-8 polynomial. (CRC-8 is one of several commonly used polynomials for Cyclic Redundancy Checking within network protocols.)
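A small sketch of how these fields pack into the 5-byte header follows. The field widths follow Figure 2 (GFC 4 bits, VPI 8, VCI 16, Type 3, CLP 1), and the HEC calculation assumes the CRC-8 generator x^8 + x^2 + x + 1 with the 0x55 offset used by ITU-T I.432; treat it as an illustration rather than a reference implementation:

    # Build one 53-byte UNI cell: 4 header bytes, the HEC byte, then 48 payload bytes.
    def crc8(data: bytes, poly: int = 0x07) -> int:
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    def build_uni_cell(gfc: int, vpi: int, vci: int, pti: int, clp: int, payload: bytes) -> bytes:
        assert len(payload) == 48, "ATM payload is always 48 bytes"
        word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
        header4 = word.to_bytes(4, "big")
        hec = crc8(header4) ^ 0x55           # assumed offset per ITU-T I.432
        return header4 + bytes([hec]) + payload

    cell = build_uni_cell(gfc=0, vpi=1, vci=42, pti=0, clp=0, payload=bytes(48))
    print(len(cell))                         # 53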
ATM can be run over several different physical media and physical-layer protocols including SONET and FDDI. To adapt ATM to these various media layers and accomplish the necessary segmentation and reassembly of the ATM cells, a protocol layer called AAL (ATM Adaptation Layer) is inserted into the network protocol stack between the ATM layers and other protocol layers desiring ATM use. (See Figure 3 for a typical ATM protocol stack.)
ATM OVER SATELLITE
To take full advantage of the benefits of both satellite and ATM networks, a network architecture and protocol stack must be implemented that allows communication between these very different technologies. The solution to this problem comes in the form of the key component of this system known as the ATM Satellite Internetworking Unit (ASIU). (This component is also referred to by some as the ATM Adaptation Unit.) The ASIU is the essential piece of equipment for the properly-functioning interface between the satellite and ATM system. It is responsible for handling many complex issues of the system interface such as management and control of system resources, real-time bandwidth allocation, network access control, and call monitoring. In addition, the ASIU must take care of system timing and synchronization control, error control, traffic control, and overall system administrative functions. (See Figure 4.)
Figure 4.
The ASIU is the single interface that acts as a bridge between the ATM and satellite systems, allowing data to be exchanged back and forth across these two distinctly different types of transmission media. As can be seen on both sides of the typical ATM-to-satellite protocol stack of Figure 5, the ASIU is placed between the last leg of the ATM network and the front of the satellite system equipment. From this position, the ASIU can perform all of its necessary functions, such as data segmentation and reassembly and the features described above.
As mentioned above, the ASIU performs many functions, all contributing to the proper operation of the ATM-to-satellite interface. Five of the most interesting and important of these are the cell transport method, satellite link access, error control, traffic management, and bandwidth management. These five issues are explored in the following discussions.
Figure 5.
Cell Transport Methods
Cell transport across an ATM over satellite network can make use of an existing digital cell transport format. The three existing cell transport protocols that have been considered for potential use with this type of system are Plesiochronous Digital Hierarchy (PDH), Synchronous Digital Hierarchy (SDH), and Physical Layer Convergence Protocol (PLCP). The most feasible and promising protocol for cell transport within this system turns out to be SDH for the reasons delineated below.
The PDH transmission system was originally developed to carry digitized voice efficiently in major urban areas. (PDH, however, is being replaced by other transport methods such as SONET and SDH.) In this scheme, the multiplexer at the sending end has access to multiple tributaries for transport that can have varying clock speeds. The multiplexer reads each tributary at the highest allowed clock speed and checks whether a tributary’s input buffer has run out of bits. If so, the multiplexer uses bit stuffing to bring the slower signal up to the common output clock speed. The multiplexer subsequently notifies the receiving demultiplexer that the data contains stuffed bits so that they can be properly deleted on the receiving end. The downside of the PDH system is the added overhead and complexity created by the redundant add and drop operations required by the bit stuffing. In addition, PDH has difficulty recovering and rerouting signals following a network failure.
SDH, on the other hand, is better suited to the ATM-to-satellite interface since it was originally designed to take advantage of a completely synchronized network. The fiber-optic transmission signal typically used for an ATM network distributes a very accurate clock throughout the network. The key ingredient of the SDH protocol is the inclusion of pointer bytes that indicate the beginning of the cell payload. This helps avoid data loss due to bit slippage caused by slight phase and/or frequency variations. SDH has other advantages over PDH: it can handle higher data rates, supports easier and less expensive multiplexing and demultiplexing, and has increased provisions for network management.
The PLCP transport method was originally designed to carry ATM cells over existing DS3 facilities. The PLCP format consists of 12 ATM cells in a sequential group, with each cell preceded by 4 overhead bytes. A frame trailer of either 13 or 14 nibbles is appended to the end of the group of 12 cells to facilitate nibble stuffing. Each 12-cell group, together with its overhead, requires a 125 microsecond interval for transmission. Unfortunately, PLCP is susceptible to corruption caused by burst errors that can affect the perceived number of nibbles required for stuffing, resulting in frame misalignment.
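A rough arithmetic check shows how those numbers line up with a DS3 link, assuming an average trailer length of 13.5 nibbles:

    # 12 cells of 53 bytes, each preceded by 4 overhead bytes, plus the trailer,
    # transmitted every 125 microseconds.
    frame_bytes = 12 * (4 + 53) + 13.5 / 2      # 13.5 nibbles ~= 6.75 bytes of trailer
    rate_mbps = frame_bytes * 8 / 125e-6 / 1e6
    print(round(rate_mbps, 2), "Mbps")          # ~44.2 Mbps, close to the 44.736 Mbps DS3 line rate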
SDH appears to be the logical choice for cell transport in this type of system. However, an important point to consider when using SDH is the possibility of an incorrect payload pointer. This situation may produce faulty payload extraction, corrupting previously received cells and requiring that they be discarded. It is imperative for the correct functioning of an SDH-based system to employ techniques capable of spreading out errors and performing enhanced error monitoring. (See the discussion on error control below.)
Satellite Link Access
Access methods typically seen in Local and Metropolitan Area Networks are not suited for use with satellite systems due to the high propagation delays created by the long distances to the satellites. LAN and MAN performance depends upon short transmission times, whereas satellite systems are effective only when utilized at maximum capacity. Therefore, the access method used in such a system must “keep the pipe full”.
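One way to see why keeping the pipe full matters is a rough bandwidth-delay estimate; the figures below assume a 155 Mbps ATM link and a round-trip time of roughly half a second over a GEO hop:

    # Bandwidth-delay product: the amount of data that must be in flight to
    # keep a GEO satellite link busy (rate and RTT are assumed round figures).
    rate_bps = 155e6        # assumed ATM link rate
    rtt_s = 0.5             # rough ground-satellite-ground round trip
    in_flight_mb = rate_bps * rtt_s / 8 / 1e6
    print(round(in_flight_mb, 1), "MB in flight")   # ~9.7 MB

An access scheme that leaves the link idle while waiting for acknowledgments or short reservation exchanges therefore wastes megabytes' worth of capacity on every exchange.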
There are presently three basic access methods used in satellite systems. Unfortunately, none of these schemes are optimized for use with ATM technology. These three methods, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Demand Assignment Multiple Access (DAMA) can be modified from their present form to a configuration more suited for use in an ATM over satellite implementation.
The FDMA access method divides the total available satellite bandwidth into equally sized portions. Each portion is assigned to one earth station for exclusive use by that station. This scheme thus eliminates errors and collisions since there is no signal interference between individual earth stations. In addition, FDMA can be used with smaller antennas. Unfortunately, however, FDMA requires guard bands for signal separation which is not conducive to the goal of maximum capacity usage in the system. (FDMA is also considered to be rather inflexible.)
Unlike the subchannel frequency division of FDMA, the conventional TDMA access method divides the bandwidth into time slots. These time slots are usually equal-sized; however, variable time slots or allocation-on-demand configurations are also possible. Using a round-robin scheme, each earth station receives the use of the entire bandwidth for a small period of time. This turns out to be a suitably flexible setup for packet traffic. TDMA unfortunately requires a larger antenna, and since time-slot synchronization adds complexity to the system, the cost of the earth-bound hardware is increased.
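A toy illustration of the round-robin slot assignment described above follows; the station names and slot length are made up purely for illustration:

    # Each earth station owns the whole channel for one fixed-length slot per frame.
    stations = ["station_A", "station_B", "station_C"]
    slot_ms = 2                          # assumed slot length
    frame_ms = slot_ms * len(stations)   # one full TDMA frame

    def owner_at(t_ms: float) -> str:
        slot_index = int(t_ms % frame_ms) // slot_ms
        return stations[slot_index]

    for t in (0, 2.5, 5.1, 7):
        print(t, "ms ->", owner_at(t))   # cycles A, B, C, A, ...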