
MIT Gallager: Principles of Digital Communication (English Edition)

  • Author: Robert G. Gallager (US) | Executive editors: Chen Liang / Xia Dan
  • Publisher: World Publishing Corporation
  • ISBN: 9787519210083
  • Publication date: 2020/07/01
  • Binding: Paperback
  • Pages: 407
List price: RMB 119

Synopsis
    This book is a classic work by Robert Gallager, a doyen of the international information field and the originator of the LDPC codes used in 5G communication. It is the culmination of Professor Gallager's more than 50 years of teaching and research at MIT, and it is still used there today as a graduate textbook. His presentation shows a master's clarity and deep insight, giving readers an intuitive understanding of why models matter for solving practical problems. The book not only explains the principles of source coding, channel transmission, and signal detection, but also lays bare the unified mathematical structure behind these principles of communication. It is essential reading for a deep understanding of digital communication, suitable both as a textbook for senior undergraduates and graduate students in communications and as a reference for practicing engineers.

About the Author
Robert G. Gallager (US) | Executive editors: Chen Liang / Xia Dan
    Professor Robert G. Gallager is a member of both the US National Academy of Sciences and the National Academy of Engineering. He served as president of the IEEE Information Theory Society. In 1983 he received the Shannon Award, the highest honor in information theory (comparable to a Nobel Prize for that field); in 1990 he received the IEEE Medal of Honor (comparable to a Nobel Prize in electrical and electronic engineering); in 2003 the Marconi Prize (comparable to a Nobel Prize in communications); and in 2020 the Japan Prize (comparable to a Nobel Prize across the applied sciences). After earning his doctorate at MIT in 1960, he joined the faculty and has taught there ever since. The low-density parity-check (LDPC) codes proposed in his 1960 doctoral thesis are now a channel code used in every 5G device, and his doctoral student Erdal Arikan proposed the polar code (Polar code), another key channel code in 5G communication.

Contents
Preface
Acknowledgements
1  Introduction to digital communication
  1.1  Standardized interfaces and layering
  1.2  Communication sources
    1.2.1  Source coding
  1.3  Communication channels
    1.3.1  Channel encoding (modulation)
    1.3.2  Error correction
  1.4  Digital interface
    1.4.1  Network aspects of the digital interface
  1.5  Supplementary reading
2  Coding for discrete sources
  2.1  Introduction
  2.2  Fixed-length codes for discrete sources
  2.3  Variable-length codes for discrete sources
    2.3.1  Unique decodability
    2.3.2  Prefix-free codes for discrete sources
    2.3.3  The Kraft inequality for prefix-free codes
  2.4  Probability models for discrete sources
    2.4.1  Discrete memoryless sources
  2.5  Minimum L for prefix-free codes
    2.5.1  Lagrange multiplier solution for the minimum L
    2.5.2  Entropy bounds on L
    2.5.3  Huffman's algorithm for optimal source codes
  2.6  Entropy and fixed-to-variable-length codes
    2.6.1  Fixed-to-variable-length codes
  2.7  The AEP and the source coding theorems
    2.7.1  The weak law of large numbers
    2.7.2  The asymptotic equipartition property
    2.7.3  Source coding theorems
    2.7.4  The entropy bound for general classes of codes
  2.8  Markov sources
    2.8.1  Coding for Markov sources
    2.8.2  Conditional entropy
  2.9  Lempel-Ziv universal data compression
    2.9.1  The LZ77 algorithm
    2.9.2  Why LZ77 works
    2.9.3  Discussion
  2.10  Summary of discrete source coding
  2.11  Exercises
3  Quantization
  3.1  Introduction to quantization
  3.2  Scalar quantization
    3.2.1  Choice of intervals for given representation points
    3.2.2  Choice of representation points for given intervals
    3.2.3  The Lloyd-Max algorithm
  3.3  Vector quantization
  3.4  Entropy-coded quantization
  3.5  High-rate entropy-coded quantization
  3.6  Differential entropy
  3.7  Performance of uniform high-rate scalar quantizers
  3.8  High-rate two-dimensional quantizers
  3.9  Summary of quantization
  3.10  Appendixes
    3.10.1  Nonuniform scalar quantizers
    3.10.2  Nonuniform 2D quantizers
  3.11  Exercises
4  Source and channel waveforms
  4.1  Introduction
    4.1.1  Analog sources
    4.1.2  Communication channels
  4.2  Fourier series
    4.2.1  Finite-energy waveforms
  4.3  L2 functions and Lebesgue integration over [-T/2, T/2]
    4.3.1  Lebesgue measure for a union of intervals
    4.3.2  Measure for more general sets
    4.3.3  Measurable functions and integration over [-T/2, T/2]
    4.3.4  Measurability of functions defined by other functions
    4.3.5  L1 and L2 functions over [-T/2, T/2]
  4.4  Fourier series for L2 waveforms
    4.4.1  The T-spaced truncated sinusoid expansion
  4.5  Fourier transforms and L2 waveforms
    4.5.1  Measure and integration over R
    4.5.2  Fourier transforms of L2 functions
  4.6  The DTFT and the sampling theorem
    4.6.1  The discrete-time Fourier transform
    4.6.2  The sampling theorem
    4.6.3  Source coding using sampled waveforms
    4.6.4  The sampling theorem for [Δ-W, Δ+W]
  4.7  Aliasing and the sinc-weighted sinusoid expansion
    4.7.1  The T-spaced sinc-weighted sinusoid expansion
    4.7.2  Degrees of freedom
    4.7.3  Aliasing-a time-domain approach
    4.7.4  Aliasing-a frequency-domain approach
  4.8  Summary
  4.9  Appendix: Supplementary material and proofs
    4.9.1  Countable sets
    4.9.2  Finite unions of intervals over [-T/2, T/2]
    4.9.3  Countable unions and outer measure over [-T/2, T/2]
    4.9.4  Arbitrary measurable sets over [-T/2, T/2]
  4.10  Exercises
5  Vector spaces and signal space
  5.1  Axioms and basic properties of vector spaces
    5.1.1  Finite-dimensional vector spaces
  5.2  Inner product spaces
    5.2.1  The inner product spaces Rn and Cn
    5.2.2  One-dimensional projections
    5.2.3  The inner product space of L2 functions
    5.2.4  Subspaces of inner product spaces
  5.3  Orthonormal bases and the projection theorem
    5.3.1  Finite-dimensional projections
    5.3.2  Corollaries of the projection theorem
    5.3.3  Gram-Schmidt orthonormalization
    5.3.4  Orthonormal expansions in L2
  5.4  Summary
  5.5  Appendix: Supplementary material and proofs
    5.5.1  The Plancherel theorem
    5.5.2  The sampling and aliasing theorems
    5.5.3  Prolate spheroidal waveforms
  5.6  Exercises
6  Channels, modulation, and demodulation
  6.1  Introduction
  6.2  Pulse amplitude modulation (PAM)
    6.2.1  Signal constellations
    6.2.2  Channel imperfections: a preliminary view
    6.2.3  Choice of the modulation pulse
    6.2.4  PAM demodulation
  6.3  The Nyquist criterion
    6.3.1  Band-edge symmetry
    6.3.2  Choosing {p(t-kT); k ∈ Z} as an orthonormal set
    6.3.3  Relation between PAM and analog source coding
  6.4  Modulation: baseband to passband and back
    6.4.1  Double-sideband amplitude modulation
  6.5  Quadrature amplitude modulation  (QAM)
    6.5.1  QAM signal set
    6.5.2  QAM baseband modulation and demodulation
    6.5.3  QAM: baseband to passband and back
    6.5.4  Implementation of QAM
  6.6  Signal space and degrees of freedom
    6.6.1  Distance and orthogonality
  6.7  Carrier and phase recovery in QAM systems
    6.7.1  Tracking phase in the presence of noise
    6.7.2  Large phase errors
  6.8  Summary of modulation and demodulation
  6.9  Exercises
7  Random processes and noise
  7.1  Introduction
  7.2  Random processes
    7.2.1  Examples of random processes
    7.2.2  The mean and covariance of a random process
    7.2.3  Additive noise channels
  7.3  Gaussian random variables, vectors, and processes
    7.3.1  The covariance matrix of a jointly Gaussian random vector
    7.3.2  The probability density of a jointly Gaussian random vector
    7.3.3  Special case of a 2D zero-mean Gaussian random vector
    7.3.4  Z=AW, where A is orthogonal
    7.3.5  Probability density for Gaussian vectors in terms of principal axes
    7.3.6  Fourier transforms for joint densities
  7.4  Linear functionals and filters for random processes
    7.4.1  Gaussian processes defined over orthonormal expansions
    7.4.2  Linear filtering of Gaussian processes
    7.4.3  Covariance for linear functionals and filters
  7.5  Stationarity and related concepts
    7.5.1  Wide-sense stationary (WSS) random processes
    7.5.2  Effectively stationary and effectively WSS random processes
    7.5.3  Linear functionals for effectively WSS random processes
    7.5.4  Linear filters for effectively WSS random processes
  7.6  Stationarity in the frequency domain
  7.7  White Gaussian noise
    7.7.1  The sinc expansion as an approximation to WGN
    7.7.2  Poisson process noise
  7.8  Adding noise to modulated communication
    7.8.1  Complex Gaussian random variables and vectors
  7.9  Signal-to-noise ratio
  7.10  Summary of random processes
  7.11  Appendix: Supplementary topics
    7.11.1  Properties of covariance matrices
    7.11.2  The Fourier series expansion of a truncated random process
    7.11.3  Uncorrelated coefficients in a Fourier series
    7.11.4  The Karhunen-Loeve expansion
  7.12  Exercises
8  Detection, coding, and decoding
  8.1  Introduction
  8.2  Binary detection
  8.3  Binary signals in white Gaussian noise
    8.3.1  Detection for PAM antipodal signals
    8.3.2  Detection for binary nonantipodal signals
    8.3.3  Detection for binary real vectors in WGN
    8.3.4  Detection for binary complex vectors in WGN
    8.3.5  Detection of binary antipodal waveforms in WGN
  8.4  M-ary detection and sequence detection
    8.4.1  M-ary detection
    8.4.2  Successive transmissions of QAM signals in WGN
    8.4.3  Detection with arbitrary modulation schemes
  8.5  Orthogonal signal sets and simple channel coding
    8.5.1  Simplex signal sets
    8.5.2  Biorthogonal signal sets
    8.5.3  Error probability for orthogonal signal sets
  8.6  Block coding
    8.6.1  Binary orthogonal codes and Hadamard matrices
    8.6.2  Reed-Muller codes
  8.7  Noisy-channel coding theorem
    8.7.1  Discrete memoryless channels
    8.7.2  Capacity
    8.7.3  Converse to the noisy-channel coding theorem
    8.7.4  Noisy-channel coding theorem, forward part
    8.7.5  The noisy-channel coding theorem for WGN
  8.8  Convolutional codes
    8.8.1  Decoding of convolutional codes
    8.8.2  The Viterbi algorithm
  8.9  Summary of detection, coding, and decoding
  8.10  Appendix: Neyman-Pearson threshold tests
  8.11  Exercises
9  Wireless digital communication
  9.1  Introduction
  9.2  Physical modeling for wireless channels
    9.2.1  Free-space, fixed transmitting and receiving antennas
    9.2.2  Free-space, moving antenna
    9.2.3  Moving antenna, reflecting wall
    9.2.4  Reflection from a ground plane
    9.2.5  Shadowing
    9.2.6  Moving antenna, multiple reflectors
  9.3  Input/output models of wireless channels
    9.3.1  The system function and impulse response for LTV systems
    9.3.2  Doppler spread and coherence time
    9.3.3  Delay spread and coherence frequency
  9.4  Baseband system functions and impulse responses
    9.4.1  A discrete-time baseband model
  9.5  Statistical channel models
    9.5.1  Passband and baseband noise
  9.6  Data detection
    9.6.1  Binary detection in flat Rayleigh fading
    9.6.2  Noncoherent detection with known channel magnitude
    9.6.3  Noncoherent detection in flat Rician fading
  9.7  Channel measurement
    9.7.1  The use of probing signals to estimate the channel
    9.7.2  Rake receivers
  9.8  Diversity
  9.9  CDMA: the IS95 standard
    9.9.1  Voice compression
    9.9.2  Channel coding and decoding
    9.9.3  Viterbi decoding for fading channels
    9.9.4  Modulation and demodulation
    9.9.5  Multiaccess interference in IS95
  9.10  Summary of wireless communication
  9.11  Appendix: Error probability for noncoherent detection
  9.12  Exercises
References
Index
