Ali Grami, in Introduction to Digital Communications , 2016
12.3.3 Delay Spread and Coherence Bandwidth
Multipath propagation, an inherent feature of a mobile communications channel, results in a received signal that is dispersed in time. Each path has its own delay, and the time dispersion leads to a form of intersymbol interference. Delay spread is a measure of the multipath profile of a mobile communications channel. It is generally defined as the difference between the time of arrival of the earliest component (e.g., the line-of-sight wave, if one exists) and the time of arrival of the latest multipath component. Delay spread is a random variable, and its standard deviation is a common way to measure it. This measure is widely known as the root-mean-square delay spread στ.
Coherence bandwidth Bc is a statistical measure of the range of frequencies over which the channel can be considered flat (i.e., it passes all spectral components with approximately equal gain and linear phase). All frequency components of the transmitted signal within the coherence bandwidth will fade simultaneously. The coherence bandwidth is inversely proportional to the delay spread, and we thus have the following commonly used approximation (for a frequency correlation of about 0.5):

Bc ≈ 1/(5στ)
Delay spread στ and coherence bandwidth Bc are parameters that describe the time-dispersive nature of the mobile channel, and their values relative to the transmitted signal bandwidth Bs and the symbol duration Ts can help determine whether the channel experiences flat fading or frequency-selective fading.
The difference between path lengths is rarely greater than a few kilometers, so the delay spread στ is rarely more than several microseconds. The coherence bandwidth Bc is thus typically greater than 100 kHz. If a channel fades at one frequency, the frequency must be changed by very roughly the coherence bandwidth Bc to find an unfaded frequency.
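As a rough numerical illustration (not from the text), the RMS delay spread can be computed from a discrete power-delay profile and the coherence bandwidth estimated from it with the 0.5-correlation rule of thumb Bc ≈ 1/(5στ); the three-path profile below is an illustrative assumption:

```python
import numpy as np

def rms_delay_spread(delays_s, powers):
    """RMS delay spread (second central moment) of a power-delay profile."""
    tau = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers, dtype=float)
    p = p / p.sum()                        # normalize the profile
    mean_tau = np.sum(p * tau)             # mean excess delay
    return float(np.sqrt(np.sum(p * tau ** 2) - mean_tau ** 2))

# Illustrative three-path profile (delays in seconds, powers in linear scale)
tau = [0.0, 1e-6, 3e-6]
pwr = [1.0, 0.5, 0.1]
st = rms_delay_spread(tau, pwr)
Bc = 1.0 / (5.0 * st)                      # 0.5-correlation rule of thumb
print(f"sigma_tau = {st * 1e6:.2f} us, Bc ~ {Bc / 1e3:.0f} kHz")
```

With στ below one microsecond, the resulting Bc lands in the hundreds of kilohertz, consistent with the figures quoted above.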
Digital Underwater Acoustic Communication Signal Processing
Tianzeng Xu, Lufen Xu, in Digital Underwater Acoustic Communications , 2017
Nonlinear Effect due to Multipath Interference
We have seen that multipath propagation due to sound reflections from both the sea surface and the sea bottom appears in shallow-water acoustic channels. Moreover, sound refraction in deep-sea channels also generates multipath effects. Complex and variable interference effects therefore appear as the coherent sound waves combine. In the case of a carrier-frequency pulse signal, a multipath structure is formed in which the sound interference effect is also included.
Whether the interference effect appears depends not only on the characteristics of the sound channel and the operating conditions, such as communication range and the depths at which the transducers are located, but also on the pulse duration τs, that is, the effective bandwidth B, of a given communication sonar.
Assume a carrier-frequency pulse signal with finite pulse duration τs, or finite bandwidth B corresponding to the required data rate R, is transmitted by the communication sonar. Obviously, the multipath interference is reduced when τs is decreased or B is increased. Provided τs is less than the minimal time delay between adjacent multipaths, the multipath pulses (wave packets) are separated from each other and form a comb multipath structure. In this case the underwater acoustic channel obeys the linear superposition principle for an infinitesimal-amplitude sound wave. As mentioned previously, the nonlinear effect due to a finite-amplitude sound wave disappears at longer-range underwater acoustic communications, in which the nonlinear effect is mostly caused by multipath effects.
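The resolvability condition just stated — the multipath pulses separate into a comb structure when τs is smaller than the minimal delay between adjacent arrivals — can be sketched as a simple check; the arrival times below are illustrative, not measured values:

```python
import numpy as np

def comb_resolved(tau_s, arrival_times_s):
    """True if a pulse of duration tau_s is shorter than every gap between
    adjacent multipath arrivals, so the wave packets form a separated comb."""
    gaps = np.diff(np.sort(np.asarray(arrival_times_s, dtype=float)))
    return bool(np.all(gaps > tau_s))

arrivals = [0.0, 1.2e-3, 2.9e-3]        # illustrative arrival times, seconds
print(comb_resolved(0.5e-3, arrivals))  # True: 0.5 ms pulses stay separated
print(comb_resolved(3.0e-3, arrivals))  # False: 3 ms pulses superpose
```

The two pulse durations mirror the 0.5 ms and 3 ms waveforms discussed later for Fig. 3.42.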
The transmission loss of energy for band-limited sound signals may be determined by

TLB = 10 log [E(1,z)/E(r,z)],

in which E(1,z) is the signal energy at unit distance and depth z, and E(r,z) is the signal energy at range r and depth z.
TLB for signals with different bandwidths passing through a surface sound channel in the Arctic Ocean at different communication distances was calculated; the results are plotted in Fig. 3.36. The channel has a depth of 500 m; the sound velocity is 1490 m/s just below the surface and 1496 m/s near the sea bottom. Both the transmitting and receiving transducers are placed at a depth of 50 m. The center frequency of the signal is 150 Hz.
The solid lines in Fig. 3.41A–D show TLB for signals with a single frequency of 150 Hz and with bandwidths of 10, 20, 50, and 100 Hz around the center frequency of 150 Hz, respectively. The dashed lines show the distance-averaged fields for the 150 Hz signal at different ranges. We see that: (1) TLB for pulse signals becomes smoother with increasing bandwidth. Remarkable convergence zones and attenuation zones appear due to multipath interference effects for a single-frequency signal, whereas for pulse signals both the convergence and attenuation zones are gradually blurred as the bandwidth increases, and the TLB curves become smooth. (2) The smoothing effect is more remarkable with increasing distance for pulse signals of the same bandwidth. (3) Provided the bandwidth is larger than a certain value (here 50 Hz), the TLB of pulse signals is consistent with the distance-averaged field of the center frequency at longer ranges.
Similar results with respect to the interference effect are obtained in shallow-water acoustic channels with a weaker multipath effect. The recorded waveforms for a pulse signal sequence are shown in Fig. 3.42. When the pulse duration τs is narrow (0.5 ms), the multipath pulses (wave packets) are separated from each other, as shown in Fig. 3.42A. Once τs is larger (3 ms), the direct signal pulses are superposed by multipath pulses, as shown in Fig. 3.42B. The characteristics of the waveforms are similar to those shown in Fig. 1.9, but the durations of the wave peaks differ, reflecting different arrival-time differences at the receiving point between the direct pulse and the multipath pulse reflected from the sea surface.
The interference effect of multipath propagation caused by sound refraction in a deep-sea sound channel appears provided τs is larger than the arrival-time difference at the receiving point between two adjacent pulses, as shown in Figs. 1.8 and 2.29, in which waveforms with double peaks also appear.
The data rate R is a basic specification for an applied underwater acoustic communication sonar and corresponds to a certain τs or B. Therefore, for a specific communication channel and operating conditions, only if τs is narrow enough, that is, B is wide enough, will the interference caused by multipath effects disappear. In this case the underwater acoustic channel can be treated as a linear system and described by means of the impulse response function h(τ,r,t) in the time domain, or the transfer function H(ω,r,t) in the frequency domain, as mentioned previously.
Generally speaking, whether an underwater acoustic communication channel satisfies the linear additivity theorem depends not only on the transmission characteristics of the channel, but also on τs, that is, on the bandwidth B of a specific communication sonar. Therefore, h(τ,r,t) and H(ω,r,t), which correspond to a δ-function input signal of infinite bandwidth, cannot reflect the transmission characteristics under general communication conditions. In their place, a band-limited impulse response function hl(τ,r,t,B) in the time domain and a band-limited transfer function Hl(ω,r,t,B) in the frequency domain are introduced for actual underwater acoustic communication channels. Moreover, the sound signals traveling over the channels become random processes; therefore, it is not suitable to use pre-known transmitted signals as the reference signals in a correlation receiver, as in a copy cross-correlator. In this case, the real-time output of the channel, that is, the field band-limited multipath structure, must be taken as the input of the communication receiver, which then adapts to it by means of effective signal-processing schemes, as shown in Fig. 3.43, in which an additive white noise background is assumed.
The symbol T in Fig. 3.43 represents a transform, or operator, through which the input signal s(t) is mapped to the output signal of the channel according to a certain law or formula. The theoretical analyses and pre-estimation models for multipath structures in Chapter 2 attempt to reveal this law. In underwater acoustic communication engineering, the pre-estimated multipath structures can be used as primary references in designing a communication sonar; then, based on the multipath structure acquired in situ, an adaptive rake receiver is used to adapt to it. By matching the field multipath structure, a large processing gain is obtained, and the transmitted signal s(t) can be reconstructed at a preset BER Pe.
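As a hedged sketch of the adaptive rake idea described above (not the book's actual receiver), each finger despreads the received signal at one measured path delay and the finger outputs are combined with path-gain weights; the reference code, delays, and gains below are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=63)   # known reference sequence (illustrative)
delays, gains = [0, 7], [1.0, 0.45]       # multipath structure acquired in situ (assumed)

def rake_combine(r, code, delays, gains):
    """Despread each finger at its path delay and combine with gain weights."""
    out = 0.0
    for d, g in zip(delays, gains):
        out += g * np.dot(code, r[d:d + len(code)])   # one finger per path
    return out

# One symbol b transmitted over the assumed two-path channel
b = -1.0
r = np.zeros(len(code) + max(delays))
for d, g in zip(delays, gains):
    r[d:d + len(code)] += g * b * code
decision = rake_combine(r, code, delays, gains)
print(decision < 0)    # decide b = -1 when the combiner output is negative
```

Because every finger collects the energy of its own path coherently, the combined output scales with the sum of the squared path gains, which is the processing-gain advantage of matching the field multipath structure.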
Fundamentals of Airborne Acoustic Positioning Systems
Fernando J. Álvarez Franco, in Geographical and Fingerprinting Data to Create Systems for Indoor Positioning and Indoor/Outdoor Navigation , 2019
5.2 Strong Multipath Propagation
As we have already stated in Section 2.3, multipath propagation is a common effect in indoor AAPS due to the specular reflections of the acoustic emissions at the room boundaries. This phenomenon gives rise to typical room impulse responses where the direct path is followed by a pattern of early reflections and then by a late-field reverberant tail, like the one shown in Fig. 5A. Since the pattern of early reflections is basically the representation of a sparse channel whose number of coefficients with nonnegligible magnitude is much lower than the total number of coefficients (see Fig. 5B), a matching pursuit (MP) algorithm can be used as a low complexity approximation to the maximum likelihood solution to estimate the TOA of the direct wave (Kim and Iltis, 2004).
If we consider N different beacons, the digitized samples of the received signal r can be represented by

r = Σl=1,…,N El hl + n,
where hl is the lth channel coefficient vector, El is the characteristic signal matrix containing samples of the lth beacon emission, and n is a vector of zero-mean white Gaussian noise samples. The MP algorithm estimates the channel coefficients ĥqjl one at a time, using a greedy approach in which the detected path index qjl and the corresponding coefficient ĥqjl are computed from the following set of equations:

qjl = arg maxi |EilT rj| / ‖Eil‖    (15)
ĥqjl = EqjlT rj / ‖Eqjl‖2    (16)
rj+1 = rj − ĥqjl Eqjl    (17)
where Eil represents the ith column vector of matrix El and r1 = r. Every new iteration of the algorithm, j = 1, 2, …, Nf, computes Eqs. (15) and (16) N times (one per channel), and only the largest coefficient ĥqjl is stored. Next, the newly estimated signal ĥqjlEqjl is subtracted from the current residue rj to obtain the updated residue rj+1, as indicated by Eq. (17). This multipath cancellation technique has proven to notably decrease the mean positioning error measured under strong multipath conditions in an AAPS where a 16 kHz sonic carrier was modulated with 63-bit Kasami sequences (Álvarez et al., 2017a), and in an AAPS based on a time-multiplexing strategy where a 41.67 kHz ultrasonic carrier was modulated with 255-bit Kasami sequences (Aguilera et al., 2018a).
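The greedy loop described above can be sketched as follows; this is a minimal matching-pursuit illustration with an assumed dictionary of delayed copies of a known emission, not the exact formulation of Kim and Iltis (2004):

```python
import numpy as np

def matching_pursuit(r, E, n_iter):
    """Greedy MP: pick the dictionary column (candidate path) best matching
    the residue, record its coefficient, and subtract its contribution."""
    residue = r.copy()
    h = np.zeros(E.shape[1])
    for _ in range(n_iter):
        coef = E.T @ residue / np.sum(E ** 2, axis=0)  # per-column LS coefficient
        q = int(np.argmax(np.abs(coef)))               # detected path index
        h[q] += coef[q]
        residue = residue - coef[q] * E[:, q]          # updated residue
    return h, residue

# Toy sparse channel: dictionary columns are delayed copies of a known emission
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=31)
L = 10
E = np.zeros((len(s) + L, L))
for d in range(L):
    E[d:d + len(s), d] = s                             # column d = emission delayed by d
h_true = np.zeros(L)
h_true[[0, 4]] = [1.0, 0.4]                            # two-path sparse channel (assumed)
r = E @ h_true + 0.01 * rng.standard_normal(E.shape[0])
h_hat, _ = matching_pursuit(r, E, n_iter=2)
print(np.flatnonzero(np.abs(h_hat) > 0.2))             # detected path delays
```

Two iterations suffice here because the channel has only two significant coefficients; the earliest detected index then gives the TOA of the direct wave.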
P J Howard, in Telecommunications Engineer’s Reference Book , 1993
15.5.2 Single mode fibre
The bandwidth limitations of multimode fibres arise from the variation in multipath propagation times. The best graded index fibre offers a bandwidth of about 1.5GHz/km and is expensive and difficult to produce. (On a 30km route this would reduce to something like 140MHz which is inadequate for high bit rate systems.)
This major limitation is overcome in single mode fibre designs. The number of modes propagated by a fibre is given approximately by Equation 15.26:

N ≈ V2/2, with V = (2πa/λ)√(ηco2 − ηcl2)

where a is the core radius,
λ is the wavelength,
ηcl is the refractive index of the cladding, and
ηco is the refractive index of the core.
As the fibre diameter is reduced, the number of modes which can be propagated falls, and in the extreme only a single mode is transmitted. In order to obtain a usable core size the core cladding index difference is also reduced from the order of 1% to 0.1%.
In a single mode fibre the core size is about one tenth of that of a graded index fibre (5 micron) and obviously the practical problems of injecting light into the fibre, jointing fibres and connector design are more difficult. However, solutions are now well established and single mode fibres have become the standard for virtually all new telecommunication work.
In a single mode fibre, part of the power is carried in the fibre cladding, and is still guided. The concept of mode field diameter is therefore more useful than core diameter and can be defined as the width between the points across the fibre where the optical field amplitude measured is 1/e of the maximum value.
The overall diameter over the cladding of the fibre as drawn is 125 micron, the same as for standard multimode fibre. The cladding is thus very much larger than the core and is of optically low loss. One consequence of this is that care has to be taken to exclude cladding light when making measurements, for example by taking the fibre through a bath of liquid of higher refractive index than the cladding. Some fibre protective coatings are also designed to act as cladding mode strippers.
A single mode guide will only be single mode above a certain wavelength and below this value second order modes will also be propagated. The cut-off wavelength is usually in the range 1100–1280nm.
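The single-mode condition can be illustrated numerically. Using the standard step-index result that only the fundamental mode propagates when the normalized frequency V is below about 2.405, a hypothetical core radius and index step give a cut-off wavelength in the quoted range:

```python
import math

def v_number(a_m, wavelength_m, n_core, n_clad):
    """Normalized frequency V of a step-index fibre."""
    na = math.sqrt(n_core ** 2 - n_clad ** 2)          # numerical aperture
    return 2 * math.pi * a_m * na / wavelength_m

def cutoff_wavelength(a_m, n_core, n_clad):
    """Wavelength above which only the fundamental mode propagates (V = 2.405)."""
    na = math.sqrt(n_core ** 2 - n_clad ** 2)
    return 2 * math.pi * a_m * na / 2.405

# Hypothetical single-mode geometry: 4 um core radius, small index step
a, n_co, n_cl = 4e-6, 1.465, 1.460
print(f"V at 1550 nm: {v_number(a, 1550e-9, n_co, n_cl):.2f}")        # below 2.405
print(f"cut-off wavelength: {cutoff_wavelength(a, n_co, n_cl) * 1e9:.0f} nm")
```

With these assumed values the cut-off falls near 1260 nm, inside the 1100–1280 nm range quoted above, and V at 1550 nm stays below 2.405, so the fibre is single mode in both low-loss windows above cut-off.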
Reference has already been made to dispersion shifted fibres in the earlier section on material dispersion. This involves increasing the dopant concentrations to shift the naturally occurring zero value at 1270nm in silica fibre to the lower loss window of 1550nm. However, this also serves to increase the loss at 1550nm. The effect can be mitigated by changing the refractive index profile of the fibre from a rectangular to a triangular shape.
Single mode fibre provides a low loss and high bandwidth transmission bearer which is economic over a wide range of digital bit rates using either laser or LED sources, and thus provides an operating administration with a “future-proof” investment.
The low values of pulse dispersion make system planning a relatively straightforward task up to about 600Mbit/s. At higher bit rates additional allowances may have to be made for very short term laser wavelength changes during the ‘on’ period which are converted by the small but finite dispersion of the fibre to an additional noise source.
Revised by Douglass D. Crombie, in Reference Data for Engineers (Ninth Edition) , 2002
In high-frequency transmission, the communication bandwidth is limited by multipath propagation. The greatest limitation occurs when two or more paths exist with a different number of hops. The bandwidth may then be as small as 100 hertz, but such multipath may be minimized by operating near the muf. Operation at a frequency within approximately 10% of the muf is necessary for paths less than about 600 kilometers to obtain bandwidths greater than, say, 1 kilohertz. The multipath reduction factor (mrf) is defined as the smallest ratio of muf to operating frequency for which the range of multipath propagation time difference is less than a specified value. The mrf thus defines the frequency above which a specified minimum protection against multipath is provided. Fig. 10 shows the mrf for various lengths of path.*
Systems and Applications
NICHOLAS FOURIKIS, in Advanced Array Systems, Applications and RF Technologies , 2000
Airborne Early Warning Radars
Airborne radar systems look down at low-flying targets, so multipath propagation is not an issue. With these systems a low-flying target is located and tracked by the airborne system and intercepted by ship-launched missiles. These systems effectively extend the ship’s horizon so that their defense becomes viable. The AWACS perform this function effectively.
While fully airborne early warning (AEW) systems are expensive, they can be used anywhere at short notice; airship-borne systems are more affordable but slow moving. Typical speeds of 70 knots enable the airships to keep pace with naval vessels in virtually all weather conditions. Their horizon is typically 240 km for very small-RCS airborne targets and 650 km for conventional targets. The evolution of AEW systems, together with an outline of the technical problems solved with the passage of time, is given in the reference.
A demonstration utilizing a drone launched from a ship to increase its horizon was successfully completed at Kauai, Hawaii, during January and February 1996. The demonstration (the cruise missile defense advanced concept technology) is also known as Mountain Top.
As air threats having ever-decreasing RCS are developed and fly closer to the land or sea, a defensive missile faces problems similar to those confronting surveillance radars, i.e., how to discriminate the threat from background clutter. Detailed studies of the seeker's perspective of the radar environment are time consuming and expensive. A series of experiments in which a seeker is 'captive' to an aircraft has been reported; with this arrangement, valuable data are collected when the low-flying threat is illuminated by a ship and the scattered radiation is received by the captive seeker. The assembled data are valuable for the evolution of the next generation of seekers.
OFDM-Based MIMO Systems
Henrik Asplund, … Erik Larsson, in Advanced Antenna Systems for 5G Network Deployments , 2020
5.2.3 Frequency Domain Model and Equalization
A CP with duration larger than the time dispersion in a radio channel with multipath propagation enables the receiver to recover the symbols transmitted on different subcarriers in different OFDM symbols without any mutual interference. The channel coefficient describes how the radio channel impacts the symbol transmitted on a subcarrier, as will be seen next.
Due to the propagation delays between the transmitter and the receiver, the receiver needs to align its window accordingly so that it starts at the delay of the first arriving path.
The demodulator output in (5.6) for subcarrier k is then taken as
Next, for notational simplicity, it is assumed that the path delays are defined from a receiver perspective, so that the first path corresponds to zero delay.
By using the expression for the received signal in (5.8) for the case with multipath propagation in the expression for the output of the OFDM demodulator in (5.12), and using the expression for the transmitted signal x(t) from (5.3) with the assumptions on receiver alignment in (5.13) and time dispersion less than the CP length (5.10), it follows that the subcarriers are still orthogonal and the demodulator output (5.14) can be shown to be

yk = Hk xk + ek,
where ek stems from the impairments and the channel coefficient experienced by subcarrier k is given by

Hk = Σl hl e−j2πkΔfτl,

with hl and τl the complex gain and delay of the lth path.
Before discussing the properties of the channel coefficient Hk, the first observation is that despite the time dispersion in the radio channel, the orthogonality between the subcarriers is maintained in the sense that the output for subcarrier k, yk in (5.14), contains no contributions from symbols transmitted on any other subcarriers xl for l≠k. As mentioned above in the introduction in Section 5.2, this is the reason why equalization becomes simple and straightforward for OFDM with CP also in time dispersive radio channels.
To detect xk from yk in (5.14), the channel Hk first needs to be estimated, and this is done by transmitting a few known modulated symbols. This is referred to as inserting a demodulation reference signal (DM-RS) among the transmitted symbols xk on certain subcarriers where data symbols are not transmitted. By using these known transmitted signals, typically QAM symbols, the receiver can, with appropriate averaging and interpolation between the reference-symbol-carrying subcarriers, generate estimates of the channel Hk for all subcarriers and all OFDM symbols. The placement of reference symbols is illustrated in Fig. 5.4, and the possible DM-RS configurations available in NR are described later in the book.
The channel coefficient for subcarrier k in (5.15) can be recognized as the channel transfer function defined earlier, or equivalently the Fourier transform of the channel impulse response for the baseband signal, evaluated at the frequency of the subcarrier, kΔf. In a practical digital implementation, the channel coefficients are given by a discrete-time Fourier transform rather than a continuous-time Fourier transform, and the impact of filtering is included. In both cases, the channel coefficient is referred to as the frequency domain channel.
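The relationship between the channel impulse response, the frequency-domain channel Hk, and per-subcarrier equalization can be sketched as follows; the tap values, CP length, and subcarrier count are illustrative assumptions:

```python
import numpy as np

N, cp = 64, 8                                    # subcarriers and CP length (assumed)
h = np.array([0.8, 0.0, 0.5, 0.0, 0.3j])         # multipath taps, shorter than the CP

# One OFDM symbol of QPSK data
rng = np.random.default_rng(2)
x = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
t = np.fft.ifft(x) * np.sqrt(N)                  # time-domain symbol
t_cp = np.concatenate([t[-cp:], t])              # prepend cyclic prefix

y = np.convolve(t_cp, h)[cp:cp + N]              # channel, then CP removal at the receiver
Y = np.fft.fft(y) / np.sqrt(N)                   # demodulator output y_k

H = np.fft.fft(h, N)                             # frequency-domain channel H_k
x_hat = Y / H                                    # one-tap equalizer per subcarrier
print(np.max(np.abs(x_hat - x)))                 # numerically ~0: subcarriers stay orthogonal
```

Because the CP turns the linear convolution into a circular one, each demodulator output is exactly Hk xk, and the symbols are recovered with a single complex division per subcarrier despite the time dispersion.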
Another important aspect related to orthogonality is the need for synchronization. In light of (5.12), the receiver needs to align the integration window and know the delay of the first tap, τmin, at least to the extent that a decent amount of the signal power falls within the assumed CP window. However, there is also a need to align in frequency, and in practice there is often a frequency mismatch between the receiver and the transmitter. In fact, if the frequency offset is as large as the subcarrier spacing, then no signal power would be received from the intended subcarrier. For this purpose, in standards such as LTE and NR, synchronization signals are transmitted from the base stations, and additional reference signals are used not only for channel estimation and measurements but also for fine-tuning the time and frequency synchronization. This fine-tuning is critical for receiving higher-order, highly spectrally efficient QAM constellations, as they are more sensitive to misalignment. In NR, the tracking RS was introduced for this purpose.
It should be noted that the orthogonality is lost, which may degrade demodulation performance, if there are channel variations during the OFDM symbol time, for example, caused by fast user equipment (UE) movement. In that case, the basic relation (5.13) needs to be modified for correct modeling, for example, by including intercarrier interference contributions in the impairments term ek.
Cognitive radio based smart grid communications
Ersan Kabalci, Yasin Kabalci, in From Smart Grid to Internet of Energy , 2019
6.4.4 Cooperative spectrum sensing based spectrum detection
Non-cooperative detection methods suffer from wireless communication problems such as shadowing, fading, multipath propagation effects, and the hidden terminal problem. Unlike with non-cooperative methods, CR devices can cooperate to provide better spectrum detection reliability and to handle the hidden terminal problem that occurs in the presence of multipath fading and shadowing, as stated before [16, 64]. An example of the hidden terminal problem caused by shadowing is shown in Fig. 6.14.
The cooperative spectrum sensing (CSS) based detection process can be divided into two categories: centralized and distributed CSS. By using the diversity gains supplied by the various CR devices, CSS methods can generally provide higher accuracy in the SS process. Even though this type of spectrum detection ensures a performance improvement, the advantage comes with several drawbacks, such as higher power consumption, higher system complexity, and increased computational complexity. A common control channel (CCC) is responsible for exchanging the necessary information in the CSS. The signal/traffic load level on the CCC may change depending on the type of transferred data. For instance, a one-bit decision from each CR device may be enough for some CSS algorithms, while multiple decision bits may be required for others. In addition, insufficient clustering of CR devices for cooperation may not ensure the desired performance in the CSS. For instance, the sensing knowledge acquired from CR devices concentrated in a narrow region may show high similarity, because those devices may suffer from the same destructive effects. In centralized CSS, the collaborating CR devices sense the spectrum bands and the sensed data are sent to a central data center, where the received data are analyzed to decide whether the spectrum is idle or not. In distributed CSS, the CR devices exchange their sensed data with each other over the CCC, and each CR device makes its own sensing decision by combining the collected data.
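A minimal sketch of centralized CSS with one-bit hard decisions might look as follows; the energy-detector threshold and the OR/majority fusion rules are illustrative assumptions, not a specific algorithm from the text:

```python
import numpy as np

def energy_detect(samples, threshold):
    """One CR device's local hard decision: 1 if the band seems occupied."""
    return int(np.mean(np.abs(samples) ** 2) > threshold)

def fusion_center(decisions, rule="or"):
    """Centralized CSS: combine one-bit decisions from cooperating devices."""
    if rule == "or":                                   # occupied if ANY device says so
        return int(any(decisions))
    return int(sum(decisions) > len(decisions) / 2)    # simple majority rule

rng = np.random.default_rng(3)
thr = 1.2                                              # assumed detection threshold
noise_only = [rng.standard_normal(1000) for _ in range(3)]
with_signal = [rng.standard_normal(1000) + 0.7 for _ in range(3)]

idle = fusion_center([energy_detect(s, thr) for s in noise_only])
busy = fusion_center([energy_detect(s, thr) for s in with_signal])
print(idle, busy)   # idle-band vs occupied-band fusion decisions
```

Each device sends a single bit over the CCC, matching the low-traffic case mentioned above; sending soft (multi-bit) statistics instead would raise the CCC load but improve fusion accuracy.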
The Fading Channel Problem and Its Impact on Wireless Communication Systems in Uganda
L.L. Kaluuba, … D. Waigumbulizi, in Proceedings from the International Conference on Advances in Engineering and Technology , 2006
3.7 Doppler Spread
When a single-frequency sinusoid is transmitted in a free-space propagation environment where there is no multipath propagation, the relative motion between the transmitter and receiver results in an apparent change in the frequency of the received signal. This apparent frequency change is called the Doppler shift (see Fig. 3).
The receiver moves at a constant velocity v along a direction that forms an angle α with the incident wave.
The difference in path lengths traveled by the wave from the transmitter to the mobile receiver at points X and Y is given by

Δl = vΔt cos α
where Δt is the time required for the mobile to travel from X to Y. The phase change in the received signal due to the difference in path lengths is therefore

Δφ = 2πΔl/λ = (2πvΔt/λ) cos α
where λ is the wavelength of the carrier signal. Hence the apparent change in the received frequency, or Doppler shift, is given by

fd = (1/2π)(Δφ/Δt) = (v/λ) cos α = (vfc/c) cos α    (5)
In the last equation, c is the speed of light and fc is the frequency of the transmitted sinusoid (carrier). Note that c = fcλ. Equation (5) shows that the Doppler shift is a function of, among other parameters, the angle of arrival of the transmitted signal.
In a multipath propagation environment in which multiple signal copies propagate to the receiver with different angles of arrival, the Doppler shift will be different for different propagation paths. The resulting signal at the receiver antenna is the sum of the multipath components. Consequently, the frequency spectrum of the received signal will in general be “broader or wider” than that of the transmitted signal, i.e., it contains more frequency components than were transmitted. This phenomenon is referred to as Doppler spread.
Since a multipath propagation channel is time-varying when there is relative motion, the amount of Doppler spread characterizes the rate of channel variations. Doppler spread can be quantitatively characterized by the Doppler spectrum:
The Doppler spectrum is the power spectral density of the received signal when a single-frequency sinusoid is transmitted over a multipath propagation environment. The bandwidth of the Doppler spectrum, or equivalently, the maximum Doppler shift fmax, is a measure of the rate of channel variations.
When the Doppler bandwidth is small compared to the bandwidth of the signal, the channel variations are slow relative to the signal variations. This is often referred to as “slow fading”. On the other hand, when the Doppler bandwidth is comparable to or greater than the bandwidth of the signal, the channel variations are as fast or faster than the signal variations. This is often called “fast fading”.
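The Doppler-shift formula and the slow/fast distinction above can be illustrated numerically; the 10% threshold used to separate the two regimes is an assumption for illustration, not a standard definition:

```python
import math

def doppler_shift(v_mps, fc_hz, angle_rad):
    """Doppler shift f_d = (v * fc / c) * cos(angle)."""
    c = 3e8                                   # speed of light, m/s
    return v_mps * fc_hz * math.cos(angle_rad) / c

def fading_type(f_max_hz, signal_bw_hz):
    """Classify fading; the 10% threshold is an illustrative assumption."""
    return "slow" if f_max_hz < 0.1 * signal_bw_hz else "fast"

fd = doppler_shift(v_mps=30.0, fc_hz=900e6, angle_rad=0.0)  # 108 km/h toward the TX
print(f"maximum Doppler shift: {fd:.0f} Hz")                # 90 Hz
print(fading_type(fd, signal_bw_hz=200e3))                  # slow
```

A vehicular speed of 30 m/s at 900 MHz gives a maximum Doppler shift of only 90 Hz, so against a 200 kHz signal bandwidth the channel varies slowly relative to the signal, i.e., slow fading.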
FBMC Channel Equalization Techniques
Leonardo Gomes Baltar, … Vincent Savaux, in Orthogonal Waveforms and Filter Banks for Future Communication Systems , 2017
One of the main advantages of MultiCarrier Modulation (MCM) schemes for broadband wireless communications is their robustness to multipath propagation1 channels, stemming from the fact that they divide the channel spectrum into very narrow subbands, and, in the extreme case, no frequency selectivity, i.e., only flat fading, is observed in each of them. For an increased spectral efficiency, most practical MCM schemes have their subbands overlapped in frequency. In Cyclic Prefix Orthogonal Frequency-Division Multiplexing (CP-OFDM), Inter-Symbol Interference (ISI) and Inter-Carrier Interference (ICI) can be completely removed if the Cyclic Prefix (CP) is at least as long as the channel delay spread. Thus, at the expense of reducing the spectral efficiency due to the CP redundancy, the subchannels corresponding to the different subbands are completely decoupled. The equalization in CP-OFDM then becomes trivial and can be performed by a single complex multiplication per subcarrier, giving rise to the so-called single-tap equalizer. Usually, this is of the Zero Forcing (ZF) type, which directly inverts the frequency response of the channel at each subcarrier. More sophisticated equalizers, such as the Minimum Mean-Squared Error (MMSE) one, are adopted if the noise and channel statistics are known or estimated.
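The single-tap ZF and MMSE equalizers described above can be sketched per subcarrier; the channel frequency-response values and SNR below are illustrative:

```python
import numpy as np

def single_tap_zf(H):
    """Zero Forcing: directly invert the channel frequency response."""
    return 1.0 / H

def single_tap_mmse(H, snr_linear):
    """MMSE: regularized inverse; avoids noise enhancement in deep fades."""
    return np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)

# One well-behaved subcarrier and one in a deep fade (illustrative CFR values)
H = np.array([1.0 + 0.0j, 0.1 + 0.05j])
print(np.abs(single_tap_zf(H)))                      # ZF gain explodes on the fade
print(np.abs(single_tap_mmse(H, snr_linear=100.0)))  # MMSE keeps the gain bounded
```

On the faded subcarrier the ZF tap applies a gain near 9, amplifying the noise by the same factor, while the MMSE tap limits the gain by the noise term in its denominator, which is precisely why MMSE is preferred when noise statistics are available.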
FilterBank MultiCarrier with Offset-QAM subcarrier modulation (FBMC/OQAM) systems do not have to employ a CP, and they enjoy (real-field) orthogonality in ideal propagation scenarios. This also means that full orthogonality exists by considering the Quadrature Amplitude Modulation (QAM) symbols before the Offset Quadrature Amplitude Modulation (OQAM) staggering at the Synthesis FilterBank (SFB) and after OQAM destaggering at the Analysis FilterBank (AFB), as originally proved for Perfect Reconstruction (PR) Modified Discrete Fourier Transform (MDFT) FilterBank (FB) and later for FBMC/OQAM systems. In other words, the so-called self-interference can easily be removed by the OQAM destaggering. For realistic propagation scenarios, where channel distortions are present, the symbols received at the AFB output are contaminated by both ISI and ICI, which are channel induced. With mildly frequency selective channels, a single-tap equalizer, like that presented in Section 12.2, should be sufficient to compensate for the channel effects and minimize both kinds of interference. However, with moderate to highly frequency selective channels, more elaborate equalizers have to be used, which will also increase the receiver complexity. Such equalizers can be designed and implemented in the time or frequency domain, and they have the ability to compensate also for time and phase shifts.
Another advantage of FBMC/OQAM systems, apart from the CP-free transmission, is the flexibility they provide in choosing the subcarrier spacing. In cases where a higher symbol rate per subcarrier is necessary to reduce latency, or where frequency offset and phase noise are relevant impairments, a higher subcarrier bandwidth (larger subcarrier spacing) can be considered. The consequences of this include an increase in the complexity of the per-subcarrier equalization due to the larger number of taps required. In other cases, where a higher granularity in the frequency domain is desired and a higher Peak-to-Average Power Ratio (PAPR) can be tolerated,2 a longer symbol duration can be accepted. This results in narrower subbands (smaller subcarrier spacing) and the possibility of employing low complexity equalizers, the single-tap one being the simplest possible.
The compensation of the effects of multipath propagation in FBMC/OQAM systems was first presented in , where it was shown that it is possible to completely eliminate ISI and ICI, and to compensate for time and carrier phase deviations, if a per-subcarrier T/2-spaced3 equalizer with a sufficient number of taps is employed. Here T is the symbol period. The equalizer coefficients are computed using an MMSE steepest descent adaptive algorithm. The analytical solution for the multitap equalizer presented in Section 12.3.1 shares many objectives and properties with that in . It is worth mentioning that in  two structures for the implementation of the fractionally spaced equalizer are introduced, which correspond to the equalizer operating at the 2/T or 1/T sampling rate. Much later, in , both fractionally and symbol-spaced adaptive steepest descent equalizers were proposed. In the nonfractionally spaced case, three equalizers per subcarrier are employed to remove ISI and ICI. These equalizers are placed after the OQAM destaggering and combine the output of the subcarrier of interest with its neighbors. In the fractionally spaced case, also three equalizers per subcarrier are employed, and two variants are provided: With and without the OQAM destaggering in the adaptation loop. In , a combined equalization and echo cancellation solution was presented, where a fractionally spaced Finite Impulse Response (FIR) filter for the equalization and another for the echo cancellation part are employed. For the latter, a pre-processing before the FIR filter is included to emulate the SFB- and AFB-equivalent response. The per-subcarrier equalization for odd-stacked FBMC/OQAM systems was revisited in , where specific equalizer structures were presented to compensate different levels of frequency selectivity. In , an equalizer similar to that presented in Section 12.3.1 was designed so as to cope with ICI from all subcarriers and channel time selectivity. 
An evaluation of the spectral efficiency as a function of the time and frequency spread was also performed, showing that the MMSE multitap equalizer significantly increases it. To improve robustness to ISI and ICI, a combination of the Walsh–Hadamard transform with FBMC/OQAM was proposed in  (see also  and ). The effect of the transform is to spread the symbols over all subcarriers, thereby providing frequency diversity; an MMSE equalizer was employed at the receiver. It is worth mentioning that frequency diversity can also be achieved by combining bit-interleaved channel coding with the equalization schemes presented in this chapter. The authors in  performed an analysis of ISI and ICI and proposed a new equalizer structure that exploits the interference in a constructive way: a single-tap ZF equalizer before the OQAM destaggering is combined with an interference estimation and cancellation scheme applied after the destaggering on a per-subcarrier basis.
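A minimal sketch of the spreading idea (the channel gains and symbol vector below are assumed for illustration, and the MMSE receiver of the cited scheme is replaced by plain despreading): a deep fade on one subcarrier is shared among all symbols after despreading, instead of destroying a single symbol.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = hadamard(N) / np.sqrt(N)            # orthonormal: H.T @ H = I
s = np.array([1, 1, 1, 1, 1, 1, 1, -1], dtype=float)

spread = H @ s                          # every subcarrier now carries all symbols
gains = np.ones(N)
gains[3] = 0.05                         # one deeply faded subcarrier (assumed)
rx = gains * spread                     # per-subcarrier flat fading

recovered = H.T @ rx                    # despreading at the receiver
print("max symbol distortion:", np.max(np.abs(recovered - s)))
```

Without spreading, the symbol on subcarrier 3 would be attenuated to 0.05 and its decision margin nearly lost; with spreading, the fade is diluted across all symbols and every hard decision remains correct.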
For mildly frequency selective channels, the classical single-tap ZF equalizer, applied before the OQAM destaggering, was compared in  to two alternative equalizers: a dispersion receiver, in which the AFB is designed to match the combination of the SFB and the channel before ZF equalization is applied; and an interference-free receiver, which preprocesses the received signal before the AFB so as to transform the equivalent channel into one with a purely real or imaginary Channel Frequency Response (CFR), thus allowing the interference to be completely eliminated. It was shown that all three receivers behave similarly for mildly frequency selective channels; as the channel dispersion increases, the first presents better performance than the other two. In , also for mildly frequency selective channels and considering channel coding, a method for calculating the Log-Likelihood Ratio (LLR) values was derived for the specific OQAM signaling in FilterBank MultiCarrier (FBMC) systems when single-tap ZF equalizers are employed. More recently, a single-tap equalizer that maximizes the Signal-to-Interference Ratio (SIR) was derived in , again for mildly frequency selective channels; it was shown that the maximum-SIR criterion leads to improved performance compared to the ZF one. The authors of  propose to jointly design the per-subchannel equalizer and the AFB prototype filter, based on maximizing the Signal-to-Interference-plus-Noise Ratio (SINR); an iterative two-step approach is followed, in which the equalizer (first step) and the AFB prototype filter (second step) are optimized in an alternating manner. In , for mildly doubly selective channels and single-tap equalizers, the authors derive Bit Error Probability (BEP) expressions to compare FBMC/OQAM with OFDM. For channels with strong frequency selectivity,  extends previous results on MMSE decision-feedback equalization by including two ICI-suppressing filters at each subcarrier.
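Under the flat-per-subcarrier approximation that justifies these mild-selectivity designs, single-tap ZF equalization amounts to one complex division per subcarrier. A minimal sketch follows (the channel impulse response and sizes are assumptions, and the residual filter-bank ISI/ICI that makes the real problem harder is deliberately ignored):

```python
import numpy as np

h = np.array([0.8, 0.0, 0.3, 0.0, -0.1])   # assumed mildly selective channel
M = 16                                      # number of subcarriers
cfr = np.fft.fft(h, M)                      # CFR sampled at the subcarrier centers

rng = np.random.default_rng(1)
tx = np.sign(rng.standard_normal(M)).astype(complex)  # toy symbols, one per tone
rx = cfr * tx                               # each subcarrier sees a flat complex gain
eq = rx / cfr                               # single-tap ZF: divide by the CFR sample
print("max error:", np.max(np.abs(eq - tx)))
```

In this idealized noiseless model the division is exact; with noise, dividing by a small CFR value amplifies it, which is precisely the weakness that the maximum-SIR and MMSE-based alternatives discussed above address.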
It should be noted that many of the equalizers discussed in this chapter, along with some other solutions, were studied within the Framework Programme 7 (FP7) Information and Communications Technology (ICT) projects PHYDYAS [17,18] and EMPhAtiC [19,20] (see also the references therein). With the exception of Sections 12.2 and 12.6, the focus of this chapter will be on equalizers for highly frequency selective channels.
Widely Linear Processing (WLP) generally yields better performance for OQAM-based systems than strictly linear processing. For this reason, this chapter includes an introduction to WLP and a literature review of how it can be applied to wireless communications and, more specifically, to FBMC/OQAM systems (Section 12.3.2). In Section 12.4, as in Sections 12.3.1 and 12.3.3, another receiver structure specifically designed for highly frequency selective channels is presented; it involves multiple AFBs and equalizers operating in parallel. In Section 12.5, equalizer solutions specific to Fast-Convolution FilterBank (FC-FB) systems are presented and evaluated. The FC-FB scheme allows flexible MultiCarrier (MC) systems to be realized efficiently, with different subcarriers having different bandwidths and data rates. Finally, Section 12.6 discusses blind equalizer solutions for FBMC/OQAM systems, in which no channel knowledge or training sequences are involved. Section 12.7 concludes the chapter, outlining some possible directions for future research in this area.
A large part of the definitions, notation, and system models employed in this chapter are described in detail in Chapters 9 and 11. Hence, the reader is advised to consult those chapters first, especially Section 11.2.