
AIRCRAFT MARKER BEACON AND RECEIVERS BASIC INFORMATION



In order to provide a pilot with an indication of his distance from the runway, marker-beacon transmitters are installed along the approach path, with the outer-marker transmitter at approximately 5 miles from the end of the runway and the mid-marker at approximately 2/3 mile from the end of the runway.

The marker-beacon transmitter operates at a frequency of 75 MHz and produces both aural and visual signals. The outer-marker transmitter produces a 400-Hz intermittent signal which causes a blue indicator light on the instrument panel to glow intermittently.

The midmarker transmitter produces a signal modulated at 1,300 Hz which causes the amber marker-beacon light on the instrument panel to glow. Thus, when the airplane is approaching the runway and is approximately 5 miles from its end, the blue light will flash.

A short time later, when the airplane is within 2/3 mile of the runway, the amber light will flash. This system provides an excellent indication to the pilot of his distance from the runway.

THE MARKER-BEACON RECEIVER
The marker-beacon receiver for a typical large aircraft navigation system is a crystal-controlled fixed frequency superheterodyne designed to operate only on a frequency of 75 MHz. The receiver is equipped with output circuits which enable it to deliver both aural and visual signals to the flight crew.

A portion of the output signal is fed through a transformer to audio filters tuned to 400, 1,300, and 3,000 Hz. The 75-MHz signal from the marker-beacon transmitters is modulated with one of these three audio tones, depending upon whether the transmitter is a midmarker, an outer marker, or an airways Z, or fan, marker.

Each of the audio filters is designed to select one of the frequencies and with this signal activate a switching circuit which causes the appropriate signal light on the instrument panel to flash. The indicator lights are white, amber, and blue, thus making it possible for the pilot to know what type of marker he is passing over.

For example, if the blue light flashes, the pilot will know that he is passing over the outer marker.

AIRCRAFT HOMING SYSTEM BASIC INFORMATION



A homing guidance system requires that a missile contain such electronic sensing and control devices that the missile will seek a target on its own without the need for command signals from outside the missile. Homing systems are classified as active, semiactive, and passive, depending upon the nature of the sensing and controlling mechanisms.

An active homing system generates and transmits a signal which is sent to the target and reflected back to the missile from the target. The guidance system utilizes this reflected signal as a beam to guide the missile to the target; it is common practice to employ a radar signal for this purpose.

In a semiactive homing system the target is "illuminated" by a signal from a source outside the missile. The illuminating signal may be transmitted from a ground station or a mother ship in flight. In any case, the missile system receives a reflected signal from the target and rides the reflected signal to the target.

The passive homing system employs some form of energy radiation from the target to serve as a beam for guidance; the energy may be in the form of heat, light, or sound.

Infrared Homing System
Among the most effective homing systems for guided missiles is the infrared sensing system used on many airborne missiles. The infrared homing system is passive, because the energy for guidance of the missile is supplied by the target.

The missile guidance system uses this energy to provide the directional signals needed to home on the target. The heat from the exhausts of a flying aircraft provides an excellent form of radiated energy which is easily detected by an infrared sensor not too distant from the source.

Infrared energy, although often called heat, is not actually heat. It would be better to state that heat causes the radiation of infrared energy and that infrared radiation will cause heat in an object exposed to it. Infrared radiation is actually an electromagnetic wave of the same type as radio waves and light waves; it lies in the frequency range between the two.

The full range of wavelengths for infrared radiation is 0.7 to 30 microns (1 micron is one-millionth of a meter), and the range employed for infrared homing is about 1.5 to 6 microns. This corresponds to a frequency range of 200 million to 50 million MHz, which may also be expressed as 2 × 10^8 to 5 × 10^7 MHz.

The principal device in an infrared detector is usually a parabolic mirror which collects the infrared radiation and focuses it upon a heat-sensitive device such as a thermopile or bolometer. A thermopile consists of a group of thermocouples connected in series to increase the total voltage output, which is then amplified and used to actuate the control system of the missile.

A bolometer consists of thin metal strips (usually nickel) connected in series and arranged so that they may be exposed to infrared radiation. The metal strips change resistance whenever there is
a change in temperature, and hence a current flowing through the unit will be affected by temperature
change. In an infrared homing system, the radiation received by the infrared detector is focused by the
parabolic mirror onto the bolometer, and the resultant signal is employed for the operation of the
guidance system.

In order to provide a directional reference, the mirror which picks up the infrared radiation must be rotated eccentrically to produce a conical scan. If the signal is equal throughout a complete rotation of the mirror, the target must be centered in the scan area.

If the target is not centered, there will be a stronger signal on one side of the scanned circle than the other. This signal will be employed in the missile computer to indicate the position of the target with respect to the missile heading and will also cause guidance commands to be sent to the missile autopilot.

POINT-TO-POINT COMMUNICATION ROUTINES BASIC INFORMATION



MPI point-to-point communication routines are utilized for data communications between a sending processor and a receiving processor. The argument list of these routines takes one of the following formats:

Blocking communication
MPI_Send (buffer, count, type, dest, tag, comm)
MPI_Recv (buffer, count, type, source, tag, comm, status)
Non-blocking communication
MPI_Isend (buffer, count, type, dest, tag, comm, request)
MPI_Irecv (buffer, count, type, source, tag, comm, request)

The arguments of these communication routines are explained as follows:

A buffer is the application address space that references the data to be sent or received. In most cases, this is simply the name of the variable to be sent or received. For C programs, this argument is passed by reference and usually must be prepended with an ampersand.

A data count indicates the number of data elements of a particular type to be sent or received. A data type must be one of the data types predefined by MPI, though programmers can also create their own derived types. We should note that the MPI types MPI_BYTE and MPI_PACKED do not correspond to standard C or Fortran types.

A dest indicates the process to which a message should be delivered; it is specified as the rank of the receiving process. A source indicates the originating process of the message; as the counterpart of dest, it specifies the rank of the sending process.

The source may be set to the wild card MPI_ANY_SOURCE to receive a message from any task. The tag is an arbitrary non-negative integer assigned by the programmer to uniquely identify a message. The send and receive operations should match message tags.

For a receive operation, the wild card MPI_ANY_TAG can be used to receive any message regardless of its tag. The MPI standard guarantees that integers 0-32767 can be used as tags, but most implementations allow a much wider range than this.

A comm represents the designated communicator and indicates the communication context, or the set of processes for which the source or destination fields are valid. The predefined communicator MPI_COMM_WORLD is usually used, unless a programmer is explicitly creating new communicators.

A status indicates the source of the message and the tag of the message for a receive operation. In C, this argument is a pointer to a predefined structure MPI_Status. In Fortran, it is an integer array of size MPI_STATUS_SIZE. Additionally, the actual number of elements received is obtainable from status via the MPI_Get_count routine.

A request is used by non-blocking send and receive operations. Since non-blocking operations may return before the requested system buffer space is obtained, the system issues a unique request number. The programmer uses this system-assigned handle later to determine completion of the non-blocking operation. In C, this argument is a pointer to a predefined structure MPI_Request. In Fortran, it is an integer.

In the following, we present an example of point-to-point communication. Suppose that in the distributed load flow computation, SACCi and SACCj will exchange boundary state variables. For simplicity but without losing generality, we assume that SACCi will only send the magnitude of one boundary voltage variable.

If the ranks of SACCi and SACCj assigned by the distributed system are x and y, respectively, then SACCi and SACCj can use the following function calls to realize synchronous communication:

SACCi: MPI_Send (vk, 1, MPI_FLOAT, y, 8, comm)
SACCj: MPI_Recv (vb, 1, MPI_FLOAT, x, 8, comm, status)

SACCi and SACCj can use the following function calls to realize asynchronous communication:

SACCi: MPI_Isend (vk, 1, MPI_FLOAT, y, 8, comm, request)
SACCj: MPI_Irecv (vb, 1, MPI_FLOAT, x, 8, comm, request)
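A self-contained C sketch of the blocking exchange might look as follows. The rank assignments, tag value, and variable initializations here are assumptions for illustration, not taken from the original text; the program must be built with an MPI wrapper compiler such as mpicc and launched with two processes (e.g., mpiexec -n 2 ./a.out).

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    float vk = 1.05f;   /* boundary voltage magnitude to send (assumed value) */
    float vb = 0.0f;    /* buffer for the received magnitude */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {            /* plays the role of SACCi (rank x = 0) */
        MPI_Send(&vk, 1, MPI_FLOAT, 1, 8, MPI_COMM_WORLD);
    } else if (rank == 1) {     /* plays the role of SACCj (rank y = 1) */
        MPI_Status status;
        MPI_Recv(&vb, 1, MPI_FLOAT, 0, 8, MPI_COMM_WORLD, &status);
        printf("SACCj received boundary voltage %.2f\n", vb);
    }

    MPI_Finalize();
    return 0;
}
```

Note that the buffers are passed by reference with an ampersand, as described above, and that the tag (8) must match on both sides.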

A UHF/MICROWAVE CONVERTER BASIC INFORMATION



Down conversion is often used to allow reception of ultra-high-frequency (UHF) and microwave signals (above 300 MHz). The UHF or microwave input is mixed with an LO to provide an output that falls within the tuning range of a shortwave VHF receiver.

A block diagram of a down converter for UHF/microwave reception is shown in Fig. 27-11B.


At B, a down converter that allows UHF/microwave reception on a shortwave receiver.

This converter has an output that covers a huge band of frequencies. In fact, a single frequency allocation at UHF or microwave might be larger than the entire frequency range of a shortwave receiver.

An example is a UHF converter designed to cover 1.000 GHz to 1.100 GHz. This is a span of 100 MHz, more than three times the whole range of a shortwave radio.

To receive 1.000 to 1.100 GHz using a down converter and a shortwave receiver, the LO frequency must be switchable. Suppose you have a communications receiver that tunes in 1-MHz bands.

You might choose one of these bands, say 7.000 to 8.000 MHz, and use a keypad to choose LO frequencies from 0.993 GHz to 1.092 GHz. This will produce a difference-frequency output at 7.000 to 8.000 MHz for 100 segments, each 1 MHz wide, in the desired band of reception.

If you want to hear the segment 1.023 to 1.024 GHz, you set the LO at 1.016 GHz (1016 MHz). This produces an output range from 1023 − 1016 = 7 MHz to 1024 − 1016 = 8 MHz.

ECCM – RADAR PROBLEMS



Jammers are typically barrage noise or repeater jammers. The former try to prevent all radar detections whereas the latter attempt to inject false targets to overload processing or attempt to pull trackers off the target.

A standoff jammer attempts to protect a penetrating aircraft by increasing the level of noise in the radar’s receiver. In such an environment, the radar should be designed with electronic counter-countermeasures (ECCM).

These can include:

- adaptive receive antennas (e.g., adaptive array or sidelobe canceler);
- polarization cancelers (defeated easily by a jammer using independent jamming on horizontal and vertical polarizations);
- sidelobe blankers to prevent false pulses through the sidelobes;
- frequency and PRF agility to make life more difficult for the repeater jammer;
- low probability of intercept (LPI) waveforms;
- spread-spectrum waveforms that will decorrelate CW jammers;
- a spoofer waveform with a false frequency on the leading edge of the pulse to defeat set-on repeaters, or a spoofer antenna having an EIRP that covers the sidelobes of the main antenna and masks the transmitted pulses in those directions;
- receiver techniques such as CFAR/Dicke-fix, guard-band blanking, excision of impulsive noise in the time domain, and excision of narrow-band jammers in the frequency domain.

In stressing cases, the radar can employ burn-through (i.e., long dwells with noncoherent integration of pulses). Bistatic radars can also be used to avoid jamming. For example, a standoff (sanctuary) transmitter can be used with forward-based netted receive-only sensors [to avoid antiradiation missiles (ARMs) and responsive jammers] to locate targets via multilateration.

Ultralow sidelobe antennas can be complemented with remote ARM decoy transmitters that cover the radar’s sidelobes. Adaptive antennas include both adaptive arrays and sidelobe cancelers. The adaptive array includes a number of low-gain elements whereas the sidelobe canceler has a large main antenna and one or more low-gain auxiliary elements having sufficient gain margin to avoid carryover noise degradation.

The processing algorithms are either analog (e.g., Applebaum or Widrow LMS feedback), which can compensate for nonlinearities, or digital (sample matrix inversion or various eigenvector approaches, including Gram–Schmidt and singular value decomposition (SVD)). Systolic configurations have been implemented for increased speed using Givens rotations or Householder (conventional and hyperbolic) transformations.

In a sidelobe canceler (SLC), the jamming signal is received in the sidelobe of the main antenna as well as in the low-gain auxiliary element. By weighting the auxiliary signal to match that of the main antenna and setting the phase difference to 180°, the auxiliary signal can be added to the main channel, yielding cancellation of the jammer.

The weighting is determined adaptively since the main antenna is usually rotating. Target returns in the mainbeam are not canceled because they have much higher gain than their associated return in the auxiliary antenna. Since they are pulsed vs. the jammer being continuous, target returns have little effect in setting the adaptive weight. Since the closed-loop gain of an analog canceler is proportional to jamming level, the weights will converge faster on larger jammers creating an eigenvalue spread.

To prevent the loop from becoming unstable, receiver gains must be set for a given convergence time on the largest expected jammer. Putting limiters or AGC in the loops will minimize the effect of the eigenvalue spread on settling time. The performance of jammer cancellation depends on the nulling bandwidth, since the antenna pattern is frequency sensitive and the receivers may not track over the bandwidth (i.e., weights at one edge of the band may not yield good nulling at the other end of the band).

Broader bandwidth nulling is achieved through more advanced space-time processing; that is, channelize the spectrum into subbands that are more easily nulled or, equivalently, use adaptive tapped delay lines in each element to provide equalization of the bandpasses; that is, the adaptive filter for each element is frequency sensitive and can provide the proper weight at each frequency within the band.

A Frost constraint can be included in digital implementations to maintain beamwidth, monopulse slope, etc., of the adapted patterns. If the jammers are closely spaced, mainlobe nulling may be required. Nulling the jammer will cause some undesired nulling of the target as the jammer-target angular separation decreases.

This is limited by the aperture resolution. Difference patterns can be used as auxiliary elements with the sum beam. The adaptation will place nulls in the mainlobe of the sum pattern. They are actually more like conical scan where a difference pattern is added to a sum pattern to move the beam over.

The mainbeam squints such that the jammer is placed in the null on the side of the mainbeam. Better angular resolution can be achieved by nulling with two separated array faces. The adaptive pattern can now have sharp nulls that cancel jammers with minimal target loss since the angular resolution is set by the much wider interferometric baseline.

COAXIAL TRANSMISSION LINES SKIN EFFECT BASIC INFORMATION



The components that connect, interface, transfer, and filter RF energy within a given system or between systems are critical elements in the operation of vacuum tube devices. Such hardware, usually passive, determines to a large extent the overall performance of the RF generator.

To optimize the performance of power vacuum devices, it is first necessary to understand and optimize the components upon which the tube depends. The mechanical and electrical characteristics of the transmission line, waveguide, and associated hardware that carry power from a power source (usually a transmitter) to the load (usually an antenna) are critical to proper operation of any RF system.

Mechanical considerations determine the ability of the components to withstand temperature extremes, lightning, rain, and wind, that is, they determine the overall reliability of the system.

The effective resistance offered by a given conductor to radio frequencies is considerably higher than the ohmic resistance measured with direct current. This is because of an action known as the skin effect, which causes the currents to be concentrated in certain parts of the conductor and leaves the remainder of the cross-section to contribute little or nothing toward carrying the applied current.

When a conductor carries an alternating current, a magnetic field is produced that surrounds the wire. This field continually expands and contracts as the ac wave increases from zero to its maximum positive value and back to zero, then through its negative half-cycle.

The changing magnetic lines of force cutting the conductor induce a voltage in the conductor in a direction that tends to retard the normal flow of current in the wire. This effect is more pronounced at the center of the conductor.

Thus, current within the conductor tends to flow more easily toward the surface of the wire. The higher the frequency, the greater the tendency for current to flow at the surface. The depth of current flow d is a function of frequency and is determined from the following equation:

d = 2.6/√(μf)

where d is the depth of current in mils, μ is the permeability (copper=1, steel=300), and f is the frequency of signal in MHz. It can be calculated that at a frequency of 100 kHz, current flow penetrates a conductor by 8 mils.

At 1 MHz, the skin effect causes current to travel in only the top 2.6 mils in copper, and even less in almost all other conductors. Therefore, the series impedance of conductors at high frequencies is significantly higher than at low frequencies.

When a circuit is operating at high frequencies, the skin effect causes the current to be redistributed over the conductor cross-section in such a way as to make most of the current flow where it is encircled by the smallest number of flux lines. This general principle controls the distribution of current, regardless of the shape of the conductor involved.

With a flat-strip conductor, the current flows primarily along the edges, where it is surrounded by the smallest amount of flux.

GIGABIT ETHERNET MEDIA HANDLING CAPABILITIES AND SUPPORT BASIC INFORMATION



Gigabit Ethernet represents an extension to the 10 Mbps and 100 Mbps IEEE 802.3 Ethernet standards. Providing a data transmission capability of 1000 Mbps, Gigabit Ethernet supports the CSMA/CD access protocol, which makes various types of Ethernet networks scalable from 10 Mbps to 1 Gbps.

Similar to 10BASE-T and Fast Ethernet, Gigabit Ethernet can be used as a shared network through the attachment of network devices to a 1 Gbps repeater hub, providing shared use of the 1 Gbps operating rate, or as a switch, the latter providing 1 Gbps ports to accommodate high-speed access to servers while lower operating rate ports provide access to 10 Mbps and 100 Mbps workstations and hubs. Very few organizations, however, can be expected to require the use of a 1 Gbps shared-media network.

Similar to the recognition that Fast Ethernet would be required to operate over different types of media, the IEEE 802.3z committee recognized that Gigabit Ethernet would also be required to operate over multiple types of media.

This recognition resulted in the development of a series of specifications, each designed to accommodate different types of media. Thus, any discussion of Gigabit Ethernet involves an examination of the types of media the technology supports and how it provides this support.

There are five types of media supported by Gigabit Ethernet – single-mode fiber, multi-mode fiber, short runs of coaxial cable, short runs of shielded twisted pair, and longer runs of unshielded twisted pair.

The actual relationship of the Gigabit 802.3z reference model to the ISO Reference Model is very similar to that of Fast Ethernet. Instead of a Medium Independent Interface (MII), Gigabit Ethernet uses a Gigabit Media Independent Interface (GMII). The GMII provides the interconnection between the MAC sublayer and the physical layer, to include the use of an 8-bit data bus that operates at 125 MHz plus such control signals as transmit and receive clocks, carrier indicators, and error conditions.

BIT ERROR RATE TESTER BASIC INFORMATION AND TUTORIALS



To determine the bit error rate, a device called a bit error rate tester (BERT) is used. Bit error rate testing involves transmitting a known data sequence through a transmission device and examining the received sequence at the same device or at a remote device for errors.

Normally, BERT testing capability is built into another device, such as a ‘sophisticated’ break-out box or a protocol analyzer; however, several vendors manufacture hand-held BERT equipment. Since a BERT generates a sequence of known bits and compares a received bit stream to the transmitted bit stream, it can be used to test both communications equipment and line facilities.

You would employ a BERT in the same manner to determine the bit error rate on a digital circuit, with the BERT used with CSU/DSUs instead of modems. Placing the modem closest to the terminal into a loopback mode allows that modem to be tested. Since a modem should always correctly modulate and demodulate data, if the result of the BERT shows even one bit in error, the modem is considered defective.

If the distant modem is placed into a digital loop-back mode of operation, where its transmitter is tied to its receiver to avoid demodulation and remodulation of data, the line quality in terms of its BER can be determined. This is because the data stream from the BERT is looped back by the distant modem without that modem functioning as a modem.

Since a leased line consists of two wire pairs, this type of test can be used to determine whether the line quality of one pair is better than that of the other pair. On occasion, due to the engineering of leased lines through telephone company offices and their routing onto microwave facilities, it is quite possible that the two pairs are separated by a considerable bandwidth.

Since some frequencies are more susceptible to atmospheric disturbances than other frequencies, it becomes quite possible to determine that the quality of one pair is better than the other pair. In one situation the author is aware of, an organization that originally could not transmit data on a leased line determined that one wire pair had a low BER while the other pair had a very high BER.

Rather than turn the line over to the communications carrier for repair during the workday, this organization switched their modems from full-duplex to half-duplex mode of operation and continued to use the circuits. Then, after business hours, they turned the circuit over to the communications carrier for analysis and repair.

MEDIUM EARTH ORBIT SATELLITES BASIC INFORMATION



Some satellites revolve in orbits higher than those normally considered low-earth, but at altitudes lower than the geostationary level of 22,300 miles. These intermediate “birds” are called medium-earth-orbit (MEO) satellites.

A MEO satellite takes several hours to complete each orbit. MEO satellites operate in fleets, in a manner similar to the way LEO satellites are deployed. Because the average MEO altitude is higher than the average LEO altitude, each “bird” can cover a larger region on the surface at any given time.

A fleet of MEO satellites can be smaller than a comparable fleet of LEO satellites, and still provide continuous, worldwide communications. The orbits of GEO satellites are essentially perfect circles, and most LEO satellites orbit in near-perfect circles.

But MEO satellites often have elongated, or elliptical, orbits. The point of lowest altitude is called perigee; the point of greatest altitude is called apogee. The apogee can be, and often is, much greater than the perigee.

Such a satellite orbits at a speed that depends on its altitude. The lower the altitude, the faster the satellite moves. A satellite with an elliptical orbit crosses the sky rapidly when it is near perigee, and slowly when it is near apogee; it is easiest to use when its apogee is high above the horizon, because then it stays in the visible sky for a long time.

Every time a MEO satellite completes one orbit, the earth rotates beneath it. The rotation of the earth need not, and usually does not, correspond to the orbital period of the satellite. Therefore, successive apogees for a MEO satellite occur over different points on the earth’s surface. This makes the tracking of individual satellites a complicated business, requiring computers programmed with accurate orbital data.

For a MEO system to be effective in providing worldwide coverage without localized periodic blackouts, the orbits must be diverse, yet coordinated in a precise and predictable way. In addition, there must be enough satellites so that each point on the earth is always on a line of sight with one or more satellites, and preferably, there should be at least one “bird” in sight near apogee at all times.

LOW EARTH ORBIT SATELLITES BASIC INFORMATION



The earliest communications satellites orbited only a few hundred miles above the earth. They were low earth-orbit (LEO) satellites. Because of their low orbits, LEO satellites took only about 90 minutes to complete one revolution.

This made communication spotty and inconvenient, because a satellite was in range of any given ground station for only a few minutes at a time. Because of this, GEO satellites became predominant.

However, GEO satellites have certain limitations. A geostationary orbit requires constant adjustment, because a tiny change in altitude will cause the satellite to get out of sync with the earth’s rotation.

Geostationary satellites are expensive to launch and maintain. When communicating through them, there is always a delay because of the path length. It takes high transmitter power, and a sophisticated, precisely aimed antenna, to communicate reliably.

These problems with GEO satellites have brought about a revival of the LEO scheme. Instead of one single satellite, the new concept is to have a large fleet of them.

Imagine dozens of LEO satellites in orbits such that, for any point on the earth, there is always at least one satellite in range. Further, suppose that the satellites can relay messages throughout the fleet. Then any two points on the surface can always make, and maintain, contact through the satellites.

A LEO system employs satellites in orbits strategically spaced around the globe. The satellites are placed in polar orbits (routes that pass over or near the earth’s geographic poles) because such orbits optimize the coverage of the system.

A LEO satellite wireless communications link is easier to access and use than a GEO satellite link. A small, simple antenna will suffice, and it doesn’t have to be aimed in any particular direction.

The transmitter can reach the network using only a few watts of power. The propagation delay is much shorter than is the case with a geostationary link, usually much less than 0.1 second.

GEOSTATIONARY ORBIT SATELLITES BASIC INFORMATION



For any satellite in a circular orbit around the earth, the revolution period gets longer as the altitude increases. At an altitude of about 22,300 miles, a satellite in a circular orbit takes precisely one day to complete each revolution.

If a satellite is placed in such an orbit over the equator, and if it revolves in the same direction as the earth rotates, it is called a geostationary-orbit (GEO) satellite or simply a geostationary satellite. From the viewpoint of someone on the earth, a GEO satellite stays in the same spot in the sky all the time.

One GEO satellite can cover about 40 percent of the earth’s surface. A satellite over Ecuador, for example, can link most cities in North America and South America.

Three satellites in geostationary orbits spaced 120 degrees apart (one-third of a circle) provide coverage over the entire civilized world. Geostationary satellites are used in television (TV) broadcasting, telephone and data communication, for gathering weather and environmental data, and for radiolocation.

In GEO-satellite networks, earth-based stations can communicate via a single “bird” only when the stations are both on a line of sight with the satellite. If two stations are nearly on opposite sides of the planet, say in Australia and Wisconsin, they must operate through two satellites to obtain a link.

In this situation, signals are relayed between the two satellites, as well as between either satellite and its respective earth-based station. The main problem with two-way GEO-satellite communication is the fact that the signal path is long: at least 22,300 miles up to the satellite, and at least 22,300 miles back down to the earth.

If two satellites are used in the circuit, the path is substantially longer. This doesn’t cause problems in television broadcasting or in one-way data transfers, but it slows things down when computers are linked with the intention of combining their processing power. It is also noticeable in telephone conversations.

What is the minimum round-trip signal delay when a GEO satellite is used? Assume that the satellite retransmits signals at the same instant they are received. Radio waves travel at the speed of light (186,282 miles per second). The minimum path length to and from a geostationary satellite is 22,300 miles, for a total round-trip distance of 44,600 miles. Therefore, the delay is at least 44,600/186,282 second, or about 0.239 seconds, or 239 milliseconds.

The delay will be a little longer if the transmitting and receiving stations are located a great distance from each other as measured over the earth’s surface. In practice, a slight additional delay might also be caused by conditions in the ionosphere.

LAN CONNECTIVITY BASIC INFORMATION AND TUTORIALS



The migration of LAN cabling infrastructure to the use of twisted pair cable has been accompanied by a significant increase in the use of the modular RJ-45 connector. Today most network adapter cards, hubs and concentrators are manufactured to accept the use of the RJ-45 connector.

One of the first networks to use the RJ-45 connector was 10BASE-T, which represents a 10 Mbps version of Ethernet designed for operation over unshielded twisted pair (UTP). Since UTP cable previously installed in commercial buildings commonly contains either three or four wire pairs, the RJ-45 connector, or jack, was selected for use with 10BASE-T, even though this version of Ethernet only supports the use of four pins.

Although 10BASE-T only uses four of the eight RJ-45 pins, other versions of Ethernet and different LANs use the additional pins in the connector. For example, a full duplex version of Ethernet requires the use of eight pins.
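As a small illustration of the pin usage described above, the sketch below records the conventional 10BASE-T assignment (pins 1 and 2 for the transmit pair, 3 and 6 for the receive pair, per the usual T568 wiring convention) and lists which pins remain free:

```python
# 10BASE-T uses only four of the eight RJ-45 pins.
# Assignments follow the common T568 wiring convention.
TEN_BASE_T_PINS = {
    1: "TX+",  # transmit pair
    2: "TX-",
    3: "RX+",  # receive pair
    6: "RX-",
}

# The remaining pins are available to other services, e.g. a
# full-duplex Ethernet variant that needs all eight.
unused = [pin for pin in range(1, 9) if pin not in TEN_BASE_T_PINS]
print(unused)  # [4, 5, 7, 8]
```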

Thus, the original selection of the RJ-45 connector has proven to be a wise choice. Ethernet is a term that actually references a series of local area networks that use the same access protocol (CSMA/CD) but can use different types of cable.

Early versions of Ethernet operated over either thick or thin coaxial cable. The attachment of an Ethernet workstation to a thick coaxial cable is accomplished through the use of a short cable which connects the workstation to a device known as a transceiver. This connection is accomplished through the use of a DB-15 connector.

The attachment of an Ethernet workstation to a thin coaxial cable requires the use of a T connector on the coax. The T connector is then cabled to a BNC connector on the network adapter card installed in the workstation.

Recognizing that a workstation could be connected to thick coaxial cable, thin coaxial cable, or a twisted-pair based network, manufacturers would otherwise have to support three separate types of adapter cards, each based upon a different connector.

Rather than face this inventory nightmare, most Ethernet network adapter card manufacturers now incorporate all three connectors on the cards they produce. Not only does this type of card simplify the manufacturer’s inventory but, in addition, it provides end-users with the ability to easily migrate from one wiring infrastructure to another without having to replace network interface cards.

INTERNATIONAL TELECOMMUNICATIONS UNION (ITU) BASIC INFORMATION



The International Telecommunications Union (ITU) is a specialized agency of the United Nations, headquartered in Geneva, Switzerland. The telecommunications section of the ITU (ITU-T) is tasked with direct responsibility for developing data communications standards and consists of 15 Study Groups, each tasked with a specific area of responsibility. The work of the ITU-T is performed on a four-year cycle which is known as a Study Period. At the conclusion of each Study Period, a Plenary Session occurs.

During the Plenary Session, the work of the ITU-T during the previous four years is reviewed, proposed recommendations are considered for adoption, and items to be investigated during the next four-year cycle are selected.

The ITU-T Tenth Plenary Session met in 1992 and its eleventh session occurred in 1996. Although approval of recommended standards is not intended to be mandatory, ITU-T recommendations have the effect of law in some Western European countries and many of its recommendations have been adopted by both communications carriers and vendors in the United States.

ITU-T recommendations
Recommendations promulgated by the ITU-T are designed to serve as a guide for technical, operating and tariff questions related to data and telecommunications. ITU-T recommendations are designated according to the letters of the alphabet, from Series A to Series Z, with technical standards included in Series G to Z.

In the field of data communications, the most well-known ITU-T recommendations include: Series I, which pertains to Integrated Services Digital Network (ISDN) transmission; Series Q, which describes ISDN switching and signaling systems; Series V, which covers facilities and transmission systems used for data transmission over the PSTN and leased telephone circuits, the DTE-DCE interface, and modem operations; and Series X, which covers data communications networks, including Open Systems Interconnection (OSI).

One emerging series of ITU-T recommendations that can be expected to become relatively well known in the next few years is the G.992 recommendations. The G.992 recommendations define standards for different types of digital subscriber lines, including the splitterless G.lite variant (G.992.2).

The ITU-T V-Series
For international use, the V.3 recommendation specifies national currency symbols in place of the dollar sign ($), as well as a few other minor differences. You should also note that certain ITU-T recommendations, such as V.21, V.22 and V.23, among others, while similar to AT&T Bell modem standards, may or may not provide operational compatibility with modems manufactured to Bell specifications.

SWITCHED NETWORK VS LEASED LINE ECONOMICS COMPARISON



A leased line is billed on a monthly basis regardless of its level of utilization. In comparison, a switched network service, such as the use of the switched telephone network, is billed on a usage basis.

To illustrate an example of the manner by which you would compare the usage of the switched network with the use of a leased line, let’s assume you are investigating an application that is estimated to require two hours of communications per day.

Let’s further assume there are 22 business days in the month, your long distance communications carrier’s rate plan permits calls to be made any time during the day for 10 cents per minute, and a leased line used to connect the two locations would have a monthly cost of $300.

Because our estimate involves two hours of communications per business day and there are 22 business days per month, this results in 44 hours of communications per month. Based upon the use of a flat rate dialing plan that charges 10 cents per minute, the cost of using the switched network becomes:

44 hours/month × 60 min/hr × 10¢/min = $264.00/month

If we compare the cost of anticipated switched network usage to the cost of a leased line, it is apparent that the cost of the switched network is more economical. However, what happens if we expect daily usage to increase to three hours per day?

Based upon communicating three hours per day, the total usage of the switched network becomes 3 hours/day X 22 days/month, or 66 hours per month. Continuing our use of a flat rate calling plan of 10¢/min, the monthly cost of the switched network becomes:

66 hours/month × 60 min/hr × 10¢/min = $396.00/month

Note that the cost of using the switched network now exceeds the cost of a leased line. The preceding computations illustrate several key communications concepts. First, as usage of the switched network increases, it is possible to reach a point where the monthly cost of a leased line is more economical.
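The two computations above can be generalized into a small sketch that also locates the break-even point; the constants simply restate the example's figures (10¢/minute, 22 business days, $300/month leased line):

```python
# Switched-network cost versus a fixed-price leased line,
# using the figures from the example in the text.
RATE_PER_MIN = 0.10     # dollars per minute, flat-rate dialing plan
DAYS_PER_MONTH = 22     # business days
LEASED_COST = 300.00    # dollars per month, fixed

def switched_cost(hours_per_day):
    """Monthly cost of the switched network for a given daily usage."""
    return hours_per_day * DAYS_PER_MONTH * 60 * RATE_PER_MIN

print(round(switched_cost(2), 2))   # 264.0  -> switched network cheaper
print(round(switched_cost(3), 2))   # 396.0  -> leased line cheaper

# Daily usage at which the two costs are equal:
breakeven_hours = LEASED_COST / (DAYS_PER_MONTH * 60 * RATE_PER_MIN)
print(round(breakeven_hours, 2))    # about 2.27 hours per day
```

Anything beyond roughly 2.27 hours of daily use tips the economics toward the leased line under these rates.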

Second, because the monthly cost of a leased line is fixed regardless of usage, a decision to install a leased line often makes sense even when the switched network appears more economical. If the estimate of switched network usage turns out to be low, the leased line could prove cheaper, and its fixed expense makes the cost easier to budget.

In addition, if estimated usage is on the low side but the cost of using the switched network and a leased line are within close proximity of one another, we should probably select the usage of a leased line. This is because its use caps our communications cost.

To illustrate the use of rate tables, let’s examine a second comparison of the use of the PSTN versus a leased line. For this example, assume that a personal computer located 50 miles from a mainframe has a requirement to communicate between 8 a.m. and 5 p.m. with the mainframe once each business day for a period of 30 minutes.

Suppose the communications application lengthened in duration to 2 hours per day. Then, from Table 4.1, the cost per call would become $0.31 × 1 + $0.19 × 119, or $22.92. Again assuming 22 working days per month, the monthly PSTN charge would increase to $504.24, making the leased line more economical.
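The rate-table computation follows a first-minute/additional-minute schedule ($0.31 for the first minute and $0.19 for each additional minute, per the Table 4.1 figures used in this example); a quick sketch:

```python
# Per-call and monthly PSTN cost under a first-minute /
# additional-minute rate schedule (rates from Table 4.1).
FIRST_MIN_RATE = 0.31   # dollars for the first minute
ADDL_MIN_RATE = 0.19    # dollars for each additional minute
DAYS_PER_MONTH = 22     # business days

def call_cost(minutes):
    return FIRST_MIN_RATE + ADDL_MIN_RATE * (minutes - 1)

cost_2h = call_cost(120)                     # one 2-hour call per day
print(round(cost_2h, 2))                     # 22.92
print(round(cost_2h * DAYS_PER_MONTH, 2))    # 504.24 per month
```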

If data communications requirements to a computer involve occasional random contact from a number of terminals at different locations, and each call is of short duration, dial-up service is normally employed. If a large amount of transmission occurs between a computer and a few terminals, leased lines are usually installed between the terminals and the computer.

Since an analog leased line is fixed as to its routing, it can be conditioned to obtain a uniform level of attenuation and delay distortion, as well as provide a minimum signal-to-noise ratio which will reduce errors in transmission.

INTEGRATED SERVICES DIGITAL NETWORK (ISDN) BASIC INFORMATION



The integrated services digital network (ISDN) concept is designed to allow voice and data to be sent in the same way along the same lines. Currently, most subscribers are connected to the switched telephone network by local loops and interface cards designed for analog signals. This is reasonably well-suited to voice communication, but data can be accommodated only by the use of modems.

As the telephone network gradually goes digital, it seems logical to send data directly over telephone lines without modems. If the local loop could be made digital, with the codec installed in the telephone instrument, there is no reason why the 64 kb/s data rate required for PCM voice could not also be used for data, at the user’s discretion.

The integrated services digital network concept provides a way to standardize the above idea. The standard encompasses two types of connections to the network. Large users connect at a primary-access point with a data rate of 1.544 Mb/s. This, you will recall, is the same rate as for the DS-1 signal described earlier. It includes 24 channels with a data rate of 64 kb/s each.

One of these channels is the D (data) channel and is used for common-channel signaling, that is, for setting up and monitoring calls. The other 23 channels are called B (bearer) channels and can be used for voice or data, or combined to handle high-speed data or digitized video signals, for example.

Individual terminals connect to the network through a basic interface at the basic access rate of 192 kb/s. Individual terminals in a large organization use the basic access rate to communicate with a private branch exchange (PBX), a small switch dedicated to that organization.

Residences and small businesses connect directly to the central office by way of a digital local loop. Two twisted pairs can be used for this, though more use of fiber optics is expected in the future.

Basic-interface users have two 64 kb/s B channels for voice or data, one 16 kb/s D channel, and 48 kb/s for network overhead. The D channel is used to set up and monitor calls and can also be employed for low-data-rate applications such as remote meter-reading.

All channels are carried on one physical line, using time-division multiplexing. Two pairs are used, one for signals in each direction.
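The channel arithmetic described above can be summarized in a short sketch (the 48 kb/s figure is the basic-interface overhead stated in the text; the primary-rate framing overhead is the difference between the 24-channel payload and the 1.544 Mb/s line rate):

```python
# ISDN access-rate arithmetic as described in the text.
B = 64            # kb/s per bearer (B) channel
D_BASIC = 16      # kb/s basic-interface D channel
OVERHEAD = 48     # kb/s basic-interface network overhead

# Basic access: 2B + D + overhead
basic_rate = 2 * B + D_BASIC + OVERHEAD
print(basic_rate)            # 192 kb/s

# Primary access: 23 B channels plus one 64 kb/s D channel
primary_payload = 23 * B + 64
print(primary_payload)       # 1536 kb/s; framing brings the DS-1
                             # line rate to 1544 kb/s (1.544 Mb/s)
print(1544 - primary_payload)  # 8 kb/s of framing
```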

Implementation of the ISDN has been slow, leading some telecommunications people to claim, tongue-in-cheek, that the abbreviation means “it still does nothing.” Several reasons can be advanced for this.

First, converting local loops to digital technology is expensive, and it is questionable whether the results justify the cost for most small users. Residential telephone users would not notice the difference (except that they would have to replace all their telephones or buy terminal adapters), and analog lines with low-cost modems or fax machines are quite satisfactory for occasional data users.

Newer techniques like Asymmetrical Digital Subscriber Line (ADSL), which is described in the next section, and modems using cable-television cable have higher data rates and are more attractive for residential and small-office data communication.

Very large data users often need a data rate well in excess of the primary interface rate for ISDN. They are already using other types of networks. It appears possible that the ISDN standard is becoming obsolete before it can be fully implemented.

With this in mind, work has already begun on a revised and improved version of ISDN called broadband ISDN (B-ISDN). The idea is to use much larger bandwidths and higher data rates, so that high-speed data and video can be transmitted. B-ISDN uses data rates of 100 to 600 Mb/s.

DIGITAL TELEPHONY BASIC INFORMATION AND TUTORIALS



The telephone system was completely analog at its beginning in the nineteenth century, of course. Over the past 30 years or so, it has gradually been converted to digital technology and the process is not yet complete.

The digital techniques were originally designed to work in conjunction with the existing analog system. This, and the fact that many of the standards for digital technology are quite old, should help the reader understand some of the rather peculiar ways things are done in digital telephony.

We can summarize the results here as they apply to the North American system. The numbers vary slightly in some other countries but the principles are the same.

The analog voice signal is low-pass filtered at about 3.4 kHz and then digitized, using 8-bit samples at a sampling rate of 8 kHz. The signal is compressed, either before or after digitization, to improve its signal-to-noise ratio.

The bit rate for one voice signal is then f_b(voice) = 8 bits/sample × 8000 samples/second = 64 kb/s. The sample rate is determined by the maximum frequency to be transmitted, which was chosen for compatibility with existing analog FDM transmission (which uses SSBSC AM with a bandwidth of 4 kHz per channel, including guardbands between channels).
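The bit-rate formula above reduces to a one-line computation:

```python
# PCM bit rate for one telephone voice channel.
BITS_PER_SAMPLE = 8
SAMPLE_RATE = 8000   # samples/second, safely above twice the 3.4 kHz cutoff

bit_rate = BITS_PER_SAMPLE * SAMPLE_RATE
print(bit_rate)      # 64000 b/s, i.e. 64 kb/s
```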

An upper frequency limit of about 3.4 kHz has long been considered adequate for voice transmission. A much lower bit rate could be used for telephone-quality voice using data compression and vocoders.

These techniques are not employed in the ordinary telephone system, though data compression is used in special situations where bandwidth is limited and expensive, as in intercontinental undersea cables.

The connection of wireless systems to the PSTN requires conversion of standards in both directions. In general, wireless systems use ordinary telephone quality as a guide, though many of these systems fall short of what has been considered toll quality, that is, good enough to charge long-distance rates for.

Until now, users have been so delighted to have portable telephones that they have been willing to put up with lower quality. This situation is changing quickly now that wireless phones are everywhere.

RECEPTION OF SPREAD-SPECTRUM SIGNALS BASIC INFORMATION AND TUTORIALS



The type of receiver required for spread-spectrum reception depends on how the signal is generated. For frequency-hopped transmissions, what is needed is a relatively conventional narrowband receiver that hops in the same way as and is synchronized with the transmitter.

This requires that the receiver be given the frequency-hopping sequence, and there be some form of synchronizing signal (such as the signal usually sent at the start of a data frame in digital communication) to keep the transmitter and receiver synchronized.

Some means must also be provided to allow the receiver to detect the start of a transmission, since, if this is left to chance, the transmitter and receiver will most likely be on different frequencies when a transmission begins.

One way to synchronize the transmitter and receiver is to have the transmitter send a tone on a prearranged channel at the start of each transmission, before it begins hopping. The receiver can synchronize by detecting the end of the tone and then begin hopping according to the prearranged PN sequence.

Of course, this method fails if there happens to be an interfering signal on the designated synchronizing channel at the time synchronization is attempted.

A more reliable method of synchronizing frequency-hopping systems is for the transmitter to visit several channels in a prearranged order before beginning a normal transmission. The receiver can monitor all of these channels sequentially, and once it detects the transmission, it can sample the next channel in the sequence for verification and synchronization.

Direct-sequence spread-spectrum transmissions require different reception techniques. Narrowband receivers will not work with these signals, which occupy a wide bandwidth on a continuous basis. A wideband receiver is required, but a conventional wideband receiver would output only noise.

In order to distinguish the desired signal from noise and interfering signals, which over the bandwidth of the receiver are much stronger than the desired signal, a technique called autocorrelation is used. Essentially this involves multiplying the received signal by a signal generated at the receiver from the PN code.

When the input signal corresponds to the PN code, the output from the autocorrelator will be large; at other times this output will be very small. Of course, once again the transmitter and receiver will probably not be synchronized at the start of a transmission, so the transmitter sends a preamble signal, which is a prearranged sequence of ones and zeros, to let the receiver synchronize with the transmitter.
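A toy numerical sketch makes the correlation idea concrete. Here a data bit is spread by multiplying it with a ±1 chip sequence, and the receiver multiplies by the same sequence and sums; the shared random seed simply stands in for the prearranged PN code (the specific sequence length and seed are illustrative, not from any real system):

```python
# Toy illustration of de-spreading a direct-sequence signal
# by correlating it with the receiver's copy of the PN code.
import random

random.seed(7)  # shared seed stands in for the prearranged PN code
pn = [random.choice((-1, 1)) for _ in range(64)]

def spread(bit):
    """Spread a single data bit (+1 or -1) across 64 chips."""
    return [bit * chip for chip in pn]

def despread(chips):
    """Correlate received chips against the local PN code."""
    return sum(c * chip for c, chip in zip(chips, pn))

print(despread(spread(+1)))   # 64: strong peak for the wanted signal
print(despread(spread(-1)))   # -64: opposite peak for a zero bit

# An unrelated signal does not match the code and averages out
# to a small value relative to the +/-64 correlation peaks.
other = [random.choice((-1, 1)) for _ in range(64)]
print(despread(other))
```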

SPREAD SPECTRUM SYSTEMS RADIO COMMUNICATION BASIC INFORMATION



As radio communication systems proliferate and traffic increases, interference problems become more severe. Interference is nothing new, of course, but it has been managed reasonably successfully in the past by careful regulatory control of transmitter locations, frequencies, and power levels.

There are some exceptions to this, however. Two examples are CB radio and cordless telephones. In fact, wherever government regulation of frequency use is informal or nonexistent, interference is likely to become a serious problem.

Widespread use of such systems as cordless phones, wireless local-area networks, and wireless modems by millions of people obviously precludes the tight regulation associated with services such as broadcasting, making bothersome interference almost inevitable. One approach to the problem, used by cellular radio systems, is to employ a complex system of frequency reuse, with computers choosing the best channel at any given time.

However, this system too implies strong central control, if not by government then by one or more service providers, each having exclusive rights to certain radio channels. That is, in fact, the current situation with respect to cellular telephony, but it can cause problems where several widely different services use the same frequency range.

The 49-MHz band, for instance, is currently used by cordless phones, baby monitors, remote-controlled models, and various other users in an almost completely unregulated way. Similarly, the 2.4-GHz band is shared by wireless LANs, wireless modems, cordless phones—and even microwave ovens!

Another problem with channelized communication, even when tightly controlled, is that the number of channels is strictly limited. If all available channels are in use in a given cell of a cellular phone system, the next attempt to complete a call will be blocked, that is, the call will not go through. Service does not degrade gracefully as traffic increases; rather, it continues as normal until the traffic density reaches the limits of the system and then ceases altogether for new calls.

There is a way to reduce interference that does not require strong central control. That technique, known as spread-spectrum communication, has been used for some time in military applications where interference often consists of deliberate jamming of signals. This interference, of course, is not under the control of the communicator, nor is it subject to government regulation.

Military communication systems need to avoid unauthorized eavesdropping on confidential transmissions, a problem alleviated by the use of spread spectrum techniques. Privacy is also a concern for personal communication systems, but many current analog systems, such as cordless and cellular telephone systems, have nonexistent or very poor protection of privacy.

For these reasons, and because the availability of large-scale integrated circuits has reduced the costs involved, there has recently been a great deal of interest in the use of spread-spectrum technology in personal communication systems for both voice and data.

The basic idea in spread-spectrum systems is, as the name implies, to spread the signal over a much wider portion of the spectrum than usual. A simple audio signal that would normally occupy only a few kilohertz of spectrum can be expanded to cover many megahertz.

Thus only a small portion of the signal is likely to be masked by any interfering signal. Of course, the average power density, expressed in watts per hertz of bandwidth, is also reduced, and this often results in a signal-to-noise ratio of less than one (that is, the signal power in any given frequency range is less than the noise power in the same bandwidth).

It may seem at first glance that this would make the signal almost impossible to detect, which is true unless special techniques are used to “de-spread” the signal while at the same time spreading the energy from interfering signals. In fact, the low average power density of spread-spectrum signals is responsible for their relative immunity from both interference and eavesdropping.

FREQUENCY HOPPING SYSTEM BASIC INFORMATION AND TUTORIALS



Frequency hopping is the simpler of the two spread-spectrum techniques. A frequency synthesizer is used to generate a carrier in the ordinary way.

There is one difference, however: instead of operating at a fixed frequency, the synthesizer changes frequency many times per second according to a preprogrammed sequence of channels.

This sequence is known as a pseudo-random noise (PN) sequence because, to an outside observer who has not been given the sequence, the transmitted frequency appears to hop about in a completely random and unpredictable fashion.

In reality, the sequence is not random at all, and a receiver which has been programmed with the same sequence can easily follow the transmitter as it hops and the message can be decoded normally.
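The shared-sequence idea can be sketched as follows: transmitter and receiver derive the same "pseudo-random" hop sequence from a prearranged seed, so the hops look random only to an outsider who lacks it (the channel count and seed below are hypothetical):

```python
# Sketch: a shared seed lets the receiver reproduce the
# transmitter's pseudo-random noise (PN) hop sequence exactly.
import random

CHANNELS = list(range(50))   # hypothetical set of channel numbers
SHARED_SEED = 1234           # prearranged between transmitter and receiver

def hop_sequence(seed, n_hops):
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

tx_hops = hop_sequence(SHARED_SEED, 10)
rx_hops = hop_sequence(SHARED_SEED, 10)
print(tx_hops == rx_hops)    # True: receiver follows every hop
```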

Since the frequency-hopping signal typically spends only a few milliseconds or less on each channel, any interference to it from a signal on that frequency will be of short duration. If an analog modulation scheme is used for voice, the interference will appear as a click and may pass unnoticed.

If the spread-spectrum signal is modulated using digital techniques, an error-correcting code can be employed that will allow these brief interruptions in the received signal to be ignored, and the user will probably not experience any signal degradation at all. Thus reliable communication can be achieved in spite of interference.

Sample Problem
A frequency-hopping spread-spectrum system hops to each of 100 frequencies every ten seconds. How long does it spend on each frequency?

SOLUTION

The amount of time spent on each frequency is

t = 10 seconds / 100 hops = 0.1 second per hop
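The dwell-time calculation generalizes directly:

```python
# Dwell time per channel for the sample problem above.
n_frequencies = 100
cycle_time_s = 10.0   # time to visit all frequencies once

dwell_s = cycle_time_s / n_frequencies
print(dwell_s)        # 0.1 second per hop
```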

If the frequency band used by the spread-spectrum system contains known sources of interference, such as carriers from other types of service, the frequency-hopping scheme can be designed to avoid these frequencies entirely.

Otherwise, the communication system will degrade gracefully as the number of interfering signals increases, since each new signal will simply increase the noise level slightly.

LOSSY AND LOSSLESS COMPRESSION ELECTRONIC COMMUNICATION BASIC INFORMATION



There are two main categories of data compression. Lossless compression involves transmitting all of the data in the original signal but using fewer bits. Lossy compression, on the other hand, allows for some reduction in the quality of the transmitted signal.

Obviously there has to be some limit on the loss in quality, depending on the application. For instance, up until now the expectation of voice quality has been less for a mobile telephone than for a wireline telephone.

This expectation is now changing as wireless telephones become more common. People are no longer impressed with the fact that wireless telephony works at all; they want it to work as well as a fixed telephone.

Lossless compression schemes generally look for redundancies in the data. For instance, a string of zeros can be replaced with a code that tells the receiver the length of the string. This technique is called run-length encoding.

It is very useful in some applications: facsimile (fax) transmission, for instance, where it is unnecessary to transmit as much data for white space on the paper as for the message. In voice transmission it is possible to greatly reduce the bit rate, or even stop transmitting altogether, during time periods in which there is no speech.
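A minimal run-length encoder of the kind described above can be sketched in a few lines; each run of repeated symbols is replaced by a (symbol, count) pair, and decoding recovers the original exactly:

```python
# Minimal run-length encoding (RLE): replace each run of repeated
# symbols with a (symbol, run-length) pair.
from itertools import groupby

def rle_encode(data):
    return [(sym, len(list(run))) for sym, run in groupby(data)]

def rle_decode(pairs):
    return "".join(sym * count for sym, count in pairs)

line = "0000000111000000"   # e.g. a mostly-white scan line of a fax
encoded = rle_encode(line)
print(encoded)               # [('0', 7), ('1', 3), ('0', 6)]
print(rle_decode(encoded) == line)   # True: compression is lossless
```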

For example, during a typical conversation each person generally talks for less than half the time. Taking advantage of this to increase transmission capacity in real time requires that more than one signal be multiplexed.

When the technique is applied to a radio system, it also allows battery-powered transmitters to conserve power by shutting off or reducing power during pauses in speech.

Lossy compression can involve reducing the number of bits per sample or reducing the sampling rate. As we have seen, the first reduces the signal-to-noise ratio and the second limits the high-frequency response of the signal, so there are limits to both methods.
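The two simple lossy operations just mentioned can be sketched directly; the sample values below are arbitrary 8-bit illustrations:

```python
# Two simple lossy operations: requantizing to fewer bits per sample
# (hurts signal-to-noise ratio) and dropping samples (limits
# high-frequency response).

def requantize(samples, from_bits, to_bits):
    """Keep only the top to_bits of each sample, discarding the rest."""
    shift = from_bits - to_bits
    return [s >> shift for s in samples]

def downsample(samples, factor):
    """Keep every factor-th sample (no anti-alias filtering here)."""
    return samples[::factor]

samples_8bit = [0, 64, 128, 192, 255, 130, 7, 99]
print(requantize(samples_8bit, 8, 4))   # [0, 4, 8, 12, 15, 8, 0, 6]
print(downsample(samples_8bit, 2))      # [0, 128, 255, 7]
```

A real system would low-pass filter before downsampling to avoid aliasing; this sketch shows only the data reduction itself.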

Other lossy compression methods rely on knowledge of the type of signal and, often, on knowledge of human perception. This means that voice, music, and video signals must be treated differently.

These more advanced methods often involve the need for quite extensive digital signal processing. Because of this, they have only recently become practical for real-time use with portable equipment. A couple of brief examples will show the sort of thing that is possible.