Principles of Digital Communication (English edition) is a recent work by Dr. Robert G. Gallager, a world authority on communications and a leading figure in information theory. Distilled from the fundamentals of digital communication, it emphasizes the relationship between theory, problems, and engineering design. It covers the basic problems of digital communication, including discrete source coding, quantization, channel waveforms, vector spaces and signal space, random processes and noise, coding, and decoding, and closes with a brief introduction to wireless digital communication.

The book is intended as a textbook for senior undergraduates and graduate students in communications, and also serves as a reference for practicing engineers.

The latest major work of Dr. Robert G. Gallager, a world-leading authority in information theory. An important result of Shannon's information theory is the source/channel separation theorem: if a source can be transmitted over a channel in any way at all, it can also be transmitted through a binary interface. This provides the theoretical basis for digital communication as the standard form of communication systems. In addition, digital circuits are inexpensive, reliable, easy to miniaturize, and comparatively easy to implement. Digital communication technology has therefore developed rapidly over the past decade, reaching every corner of daily life and drawing large numbers of talented people into the field.

The book distills Dr. Gallager's decades of experience in teaching and research. It introduces the basic concepts of information theory and their role in communication system design, with the aim of helping students, teachers, and practicing engineers in digital communication understand the fundamental principles behind it. Its coverage is comprehensive: fundamentals of digital communication, coding for discrete sources, quantization, source and channel waveforms, vector spaces and signal space, channels, modulation and demodulation, random processes and noise, signal detection, coding and decoding, wireless communication, and more. The book builds a simple framework for the principles of digital communication, presenting complex modern communication systems in an intuitive, concise way.

Hailed as a classic upon publication, the book has been adopted as a textbook at MIT and other world-class universities. Solutions to the exercises are available; readers may register for free on the Turing company (图灵公司) website to obtain them.
Digital communication is an enormous and rapidly growing industry, roughly comparable in size to the computer industry. The objective of this text is to study those aspects of digital communication systems that are unique to such systems. That is, rather than focusing on hardware and software for these systems (which is much like that in many other fields), we focus on the fundamental system aspects of modern digital communication.
Digital communication is a field in which theoretical ideas have had an unusually powerful impact on system design and practice. The basis of the theory was developed in 1948 by Claude Shannon, and is called information theory. For the first 25 years or so of its existence, information theory served as a rich source of academic research problems and as a tantalizing suggestion that communication systems could be made more efficient and more reliable by using these approaches. Other than small experiments and a few highly specialized military systems, the theory had little interaction with practice. By the mid-1970s, however, mainstream systems using information-theoretic ideas began to be widely implemented. The first reason for this was the increasing number of engineers who understood both information theory and communication system practice. The second reason was that the low cost and increasing processing power of digital hardware made it possible to implement the sophisticated algorithms suggested by information theory. The third reason was that the increasing complexity of communication systems required the architectural principles of information theory.
The theoretical principles here fall roughly into two categories: the first provides analytical tools for determining the performance of particular systems, and the second puts fundamental limits on the performance of any system. Much of the first category can be understood by engineering undergraduates, while the second category is distinctly graduate in nature. It is not that graduate students know so much more than undergraduates, but rather that undergraduate engineering students are trained to master enormous amounts of detail and the equations that deal with that detail. They are not used to the patience and deep thinking required to understand abstract performance limits. This patience comes later with thesis research.
My original purpose was to write an undergraduate text on digital communication, but experience teaching this material over a number of years convinced me that I could not write an honest exposition of principles, including both what is possible and what is not possible, without losing most undergraduates. There are many excellent undergraduate texts on digital communication describing a wide variety of systems, and I did not see the need for another. Thus this text is now aimed at graduate students, but is accessible to patient undergraduates.
The relationship between theory, problem sets, and engineering/design in an academic subject is rather complex. The theory deals with relationships and analysis for models of real systems. A good theory (and information theory is one of the best) allows for simple analysis of simplified models. It also provides structural principles that allow insights from these simple models to be applied to more complex and realistic models. Problem sets provide students with an opportunity to analyze these highly simplified models, and, with patience, to start to understand the general principles. Engineering deals with making the approximations and judgment calls to create simple models that focus on the critical elements of a situation, and from there to design workable systems.
The important point here is that engineering (at this level) cannot really be separated from theory. Engineering is necessary to choose appropriate theoretical models, and theory is necessary to find the general properties of those models. To oversimplify, engineering determines what the reality is and theory determines the consequences and structure of that reality. At a deeper level, however, the engineering perception of reality heavily depends on the perceived structure (all of us carry oversimplified models around in our heads). Similarly, the structures created by theory depend on engineering common sense to focus on important issues. Engineering sometimes becomes overly concerned with detail, and theory becomes overly concerned with mathematical niceties, but we shall try to avoid both these excesses here.
Each topic in the text is introduced with highly oversimplified toy models. The results about these toy models are then related to actual communication systems, and these are used to generalize the models. We then iterate back and forth between analysis of models and creation of models. Understanding the performance limits on classes of models is essential in this process.
Robert G. Gallager, PhD, is a world-renowned authority in information theory and a member of both the US National Academy of Engineering and the National Academy of Sciences. His honors include the 1990 IEEE Medal of Honor, the 2003 Marconi Prize, and the 2004 Dijkstra Prize. A student of Claude Shannon, the founder of information theory, he has not only produced outstanding research of his own but has also trained many leading figures in the communications field, including David Tse, author of Fundamentals of Wireless Communication.
Preface page xi
Acknowledgements xiv
1 Introduction to digital communication 1
1.1 Standardized interfaces and layering 3
1.2 Communication sources 5
1.2.1 Source coding 6
1.3 Communication channels 7
1.3.1 Channel encoding (modulation) 10
1.3.2 Error correction 11
1.4 Digital interface 12
1.4.1 Network aspects of the digital interface 12
1.5 Supplementary reading 14
2 Coding for discrete sources 16
2.1 Introduction 16
2.2 Fixed-length codes for discrete sources 18
2.3 Variable-length codes for discrete sources 19
2.3.1 Unique decodability 20
2.3.2 Prefix-free codes for discrete sources 21
2.3.3 The Kraft inequality for prefix-free codes 23
2.4 Probability models for discrete sources 26
2.4.1 Discrete memoryless sources 26
2.5 Minimum L for prefix-free codes 27
2.5.1 Lagrange multiplier solution for the minimum L 28
2.5.2 Entropy bounds on L 29
2.5.3 Huffman’s algorithm for optimal source codes 31
2.6 Entropy and fixed-to-variable-length codes 35
2.6.1 Fixed-to-variable-length codes 37
2.7 The AEP and the source coding theorems 38
2.7.1 The weak law of large numbers 39
2.7.2 The asymptotic equipartition property 40
2.7.3 Source coding theorems 43
2.7.4 The entropy bound for general classes of codes 44
2.8 Markov sources 46
2.8.1 Coding for Markov sources 48
2.8.2 Conditional entropy 48
2.9 Lempel–Ziv universal data compression 51
2.9.1 The LZ77 algorithm 51
2.9.2 Why LZ77 works 53
2.9.3 Discussion 54
2.10 Summary of discrete source coding 55
2.11 Exercises 56
3 Quantization 67
3.1 Introduction to quantization 67
3.2 Scalar quantization 68
3.2.1 Choice of intervals for given representation points 69
3.2.2 Choice of representation points for given intervals 69
3.2.3 The Lloyd–Max algorithm 70
3.3 Vector quantization 72
3.4 Entropy-coded quantization 73
3.5 High-rate entropy-coded quantization 75
3.6 Differential entropy 76
3.7 Performance of uniform high-rate scalar quantizers 78
3.8 High-rate two-dimensional quantizers 81
3.9 Summary of quantization 84
3.10 Appendixes 85
3.10.1 Nonuniform scalar quantizers 85
3.10.2 Nonuniform 2D quantizers 87
3.11 Exercises 88
4 Source and channel waveforms 93
4.1 Introduction 93
4.1.1 Analog sources 93
4.1.2 Communication channels 95
4.2 Fourier series 96
4.2.1 Finite-energy waveforms 98
4.3 L2 functions and Lebesgue integration over [-T/2, T/2] 101
4.3.1 Lebesgue measure for a union of intervals 102
4.3.2 Measure for more general sets 104
4.3.3 Measurable functions and integration over [-T/2,T/2] 106
4.3.4 Measurability of functions defined by other functions 108
4.3.5 L1 and L2 functions over [-T/2,T/2]
4.4 Fourier series for L2 waveforms 109
4.4.1 The T-spaced truncated sinusoid expansion 111
4.5 Fourier transforms and L2 waveforms 114
4.5.1 Measure and integration over R 116
4.5.2 Fourier transforms of L2 functions 118
4.6 The DTFT and the sampling theorem 120
4.6.1 The discrete-time Fourier transform 121
4.6.2 The sampling theorem 122
4.6.3 Source coding using sampled waveforms 124
4.6.4 The sampling theorem for [Δ-W, Δ+W]
4.7 Aliasing and the sinc-weighted sinusoid expansion 126
4.7.1 The T-spaced sinc-weighted sinusoid expansion 127
4.7.2 Degrees of freedom 128
4.7.3 Aliasing – a time-domain approach 129
4.7.4 Aliasing – a frequency-domain approach 130
4.8 Summary 132
4.9 Appendix: Supplementary material and proofs 133
4.9.1 Countable sets 133
4.9.2 Finite unions of intervals over [-T/2,T/2]
4.9.3 Countable unions and outer measure over [-T/2,T/2]
4.9.4 Arbitrary measurable sets over [-T/2, T/2]
4.10 Exercises 143
5 Vector spaces and signal space 153
5.1 Axioms and basic properties of vector spaces 154
5.1.1 Finite-dimensional vector spaces 156
5.2 Inner product spaces 158
5.2.1 The inner product spaces Rn and Cn 158
5.2.2 One-dimensional projections 159
5.2.3 The inner product space of L2 functions 161
5.2.4 Subspaces of inner product spaces 162
5.3 Orthonormal bases and the projection theorem 163
5.3.1 Finite-dimensional projections 164
5.3.2 Corollaries of the projection theorem 165
5.3.3 Gram–Schmidt orthonormalization 166
5.3.4 Orthonormal expansions in L2 167
5.4 Summary 169
5.5 Appendix: Supplementary material and proofs 170
5.5.1 The Plancherel theorem 170
5.5.2 The sampling and aliasing theorems 174
5.5.3 Prolate spheroidal waveforms 176
5.6 Exercises 177
6 Channels, modulation, and demodulation 181
6.1 Introduction 181
6.2 Pulse amplitude modulation (PAM) 184
6.2.1 Signal constellations 184
6.2.2 Channel imperfections: a preliminary view 185
6.2.3 Choice of the modulation pulse 187
6.2.4 PAM demodulation 189
6.3 The Nyquist criterion 190
6.3.1 Band-edge symmetry 191
6.3.2 Choosing {p(t-kT); k∈Z} to be an orthonormal set 193
6.3.3 Relation between PAM and analog source coding 194
6.4 Modulation: baseband to passband and back 195
6.4.1 Double-sideband amplitude modulation 195
6.5 Quadrature amplitude modulation (QAM) 196
6.5.1 QAM signal set 198
6.5.2 QAM baseband modulation and demodulation 199
6.5.3 QAM: baseband to passband and back 200
6.5.4 Implementation of QAM 201
6.6 Signal space and degrees of freedom 203
6.6.1 Distance and orthogonality 204
6.7 Carrier and phase recovery in QAM systems 206
6.7.1 Tracking phase in the presence of noise 207
6.7.2 Large phase errors 208
6.8 Summary of modulation and demodulation 208
6.9 Exercises 209
7 Random processes and noise 216
7.1 Introduction 216
7.2 Random processes 217
7.2.1 Examples of random processes 218
7.2.2 The mean and covariance of a random process 220
7.2.3 Additive noise channels 221
7.3 Gaussian random variables, vectors, and processes 221
7.3.1 The covariance matrix of a jointly Gaussian random vector 224
7.3.2 The probability density of a jointly Gaussian random vector 224
7.3.3 Special case of a 2D zero-mean Gaussian random vector 227
7.3.4 Z = AW, where A is orthogonal 228
7.3.5 Probability density for Gaussian vectors in terms of principal axes 228
7.3.6 Fourier transforms for joint densities 230
7.4 Linear functionals and filters for random processes 231
7.4.1 Gaussian processes defined over orthonormal expansions 232
7.4.2 Linear filtering of Gaussian processes 233
7.4.3 Covariance for linear functionals and filters 234
7.5 Stationarity and related concepts 235
7.5.1 Wide-sense stationary (WSS) random processes 236
7.5.2 Effectively stationary and effectively WSS random processes 238
7.5.3 Linear functionals for effectively WSS random processes 239
7.5.4 Linear filters for effectively WSS random processes 239
7.6 Stationarity in the frequency domain 242
7.7 White Gaussian noise 244
7.7.1 The sinc expansion as an approximation to WGN 246
7.7.2 Poisson process noise 247
7.8 Adding noise to modulated communication 248
7.8.1 Complex Gaussian random variables and vectors 250
7.9 Signal-to-noise ratio 251
7.10 Summary of random processes 254
7.11 Appendix: Supplementary topics 255
7.11.1 Properties of covariance matrices 255
7.11.2 The Fourier series expansion of a truncated random process 257
7.11.3 Uncorrelated coefficients in a Fourier series 259
7.11.4 The Karhunen–Loeve expansion 262
7.12 Exercises 263
8 Detection, coding, and decoding 268
8.1 Introduction 268
8.2 Binary detection 271
8.3 Binary signals in white Gaussian noise 273
8.3.1 Detection for PAM antipodal signals 273
8.3.2 Detection for binary nonantipodal signals 275
8.3.3 Detection for binary real vectors in WGN 276
8.3.4 Detection for binary complex vectors in WGN 279
8.3.5 Detection of binary antipodal waveforms in WGN 281
8.4 M-ary detection and sequence detection 285
8.4.1 M-ary detection 285
8.4.2 Successive transmissions of QAM signals in WGN 286
8.4.3 Detection with arbitrary modulation schemes 289
8.5 Orthogonal signal sets and simple channel coding 292
8.5.1 Simplex signal sets 293
8.5.2 Biorthogonal signal sets 294
8.5.3 Error probability for orthogonal signal sets 294
8.6 Block coding 298
8.6.1 Binary orthogonal codes and Hadamard matrices 298
8.6.2 Reed–Muller codes 300
8.7 Noisy-channel coding theorem 302
8.7.1 Discrete memoryless channels 303
8.7.2 Capacity 304
8.7.3 Converse to the noisy-channel coding theorem 306
8.7.4 Noisy-channel coding theorem, forward part 307
8.7.5 The noisy-channel coding theorem for WGN 311
8.8 Convolutional codes 312
8.8.1 Decoding of convolutional codes 314
8.8.2 The Viterbi algorithm 315
8.9 Summary of detection, coding, and decoding 317
8.10 Appendix: Neyman–Pearson threshold tests 317
8.11 Exercises 322
9 Wireless digital communication 330
9.1 Introduction 330
9.2 Physical modeling for wireless channels 334
9.2.1 Free-space, fixed transmitting and receiving antennas 334
9.2.2 Free-space, moving antenna 337
9.2.3 Moving antenna, reflecting wall 337
9.2.4 Reflection from a ground plane 340
9.2.5 Shadowing 340
9.2.6 Moving antenna, multiple reflectors 341
9.3 Input/output models of wireless channels 341
9.3.1 The system function and impulse response for LTV systems 343
9.3.2 Doppler spread and coherence time 345
9.3.3 Delay spread and coherence frequency 348
9.4 Baseband system functions and impulse responses 350
9.4.1 A discrete-time baseband model 353
9.5 Statistical channel models 355
9.5.1 Passband and baseband noise 358
9.6 Data detection 359
9.6.1 Binary detection in flat Rayleigh fading 360
9.6.2 Noncoherent detection with known channel magnitude 363
9.6.3 Noncoherent detection in flat Rician fading 365
9.7 Channel measurement 367
9.7.1 The use of probing signals to estimate the channel 368
9.7.2 Rake receivers 373
9.8 Diversity 376
9.9 CDMA: the IS95 standard 379
9.9.1 Voice compression 380
9.9.2 Channel coding and decoding 381
9.9.3 Viterbi decoding for fading channels 382
9.9.4 Modulation and demodulation 383
9.9.5 Multiaccess interference in IS95 386
9.10 Summary of wireless communication 388
9.11 Appendix: Error probability for noncoherent detection 390
9.12 Exercises 391
References 398
Index 400
For any encoder, there must be a decoder that maps the encoded bit sequence back into the source symbol sequence. For prefix-free codes on k-tuples of source symbols, the decoder waits for each variable-length codeword to arrive, maps it into the corresponding k-tuple of source symbols, and then starts decoding the next k-tuple. For fixed-to-fixed-length schemes, the decoder waits for a block of code symbols and then decodes the corresponding block of source symbols.

In general, the source produces a nonending sequence X1, X2, ... of source letters which are encoded into a nonending sequence of encoded binary digits. The decoder observes this encoded sequence and decodes source symbol Xn when enough bits have arrived to make a decision on it.

For any given coding and decoding scheme for a given iid source, define the rv Dn as the number of received bits that permit a decision on the n-tuple Xn = (X1, ..., Xn). This includes the possibility of coders and decoders for which some sample source strings xn are decoded incorrectly or postponed infinitely. For these xn, the sample value of Dn is taken to be infinite. It is assumed that all decisions are final in the sense that the decoder cannot decide on a particular xn after observing an initial string of the encoded sequence and then change that decision after observing more of the encoded sequence. What we would like is a scheme in which decoding is correct with high probability and the sample value of the rate, Dn/n, is small with high probability. What the following theorem shows is that, for large n, the sample rate can be strictly below the entropy only with vanishingly small probability. This then shows that the entropy lower-bounds the data rate in this strong sense.
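In symbols, the claim is roughly that for every fixed eps > 0, Pr(Dn <= n(H[X] - eps)) -> 0 as n -> infinity, where H[X] is the entropy of the source. The prefix-free decoding procedure described above is also easy to make concrete. The Python sketch below is a minimal illustration, not taken from the book; the codebook is a hypothetical example, and any prefix-free code would do. It walks the received bit string codeword by codeword and records, for each n, the number of bits that permitted a decision on X1, ..., Xn, i.e. a sample value of the rv Dn.

```python
# Minimal sketch of prefix-free decoding and the rv Dn (hypothetical example,
# not from the text). Any prefix-free codebook works in place of this one.
codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}  # prefix-free
decode_map = {cw: sym for sym, cw in codebook.items()}

def decode(bits):
    """Decode a bit string, returning (symbols, decision_points).

    decision_points[n-1] is the number of received bits that permitted a
    decision on X1, ..., Xn -- a sample value of the rv Dn.
    """
    symbols, decision_points = [], []
    buffer = ""
    for i, bit in enumerate(bits, start=1):
        buffer += bit
        if buffer in decode_map:        # reached a leaf of the code tree: decide
            symbols.append(decode_map[buffer])
            decision_points.append(i)   # Dn for n = len(symbols); decisions are final
            buffer = ""                 # start accumulating the next codeword
    return symbols, decision_points

# Example: "b", "a", "d" encode to 10 0 111; decisions fall after bits 2, 3, 6,
# so the sample rate D3/3 here is 6/3 = 2 bits per source symbol.
syms, dn = decode("100111")
print(syms, dn)   # ['b', 'a', 'd'] [2, 3, 6]
```

Because the code is prefix-free, each decision is made the instant a codeword is completed and is never revised, matching the "all decisions are final" assumption above.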