
Computer Architecture: A Quantitative Approach (Original 6th Edition, English) / Classic Original Books Series

  • Authors: John L. Hennessy and David A. Patterson (USA)
  • Publisher: China Machine Press
  • ISBN: 9787111631101
  • Publication date: 2019/07/01
  • Binding: Paperback
  • Pages: 932
Price: RMB 269

About the Book
    For more than 20 years, this book has been essential reading for teachers, students, and architecture designers in the computing field. Its two authors, Hennessy and Patterson, received the 2017 Turing Award in recognition of their lasting and significant technical contributions to the field. The sixth edition has been thoroughly revised to reflect the latest developments in processors and system architectures. This edition adopts the RISC-V instruction set architecture, a modern RISC instruction set designed as a free and openly adoptable standard. It also adds a new chapter on domain-specific architectures and updates the chapter on warehouse-scale computing, which now describes Google's newest WSC. As in earlier editions, the goal of this book is to demystify computer architecture, highlighting exciting technical innovations while emphasizing good engineering design.

About the Authors
John L. Hennessy and David A. Patterson (USA)

Table of Contents
Chapter 1  Fundamentals of Quantitative Design and Analysis
  1.1  Introduction
  1.2  Classes of Computers
  1.3  Defining Computer Architecture
  1.4  Trends in Technology
  1.5  Trends in Power and Energy in Integrated Circuits
  1.6  Trends in Cost
  1.7  Dependability
  1.8  Measuring, Reporting, and Summarizing Performance
  1.9  Quantitative Principles of Computer Design
  1.10  Putting It All Together: Performance, Price, and Power
  1.11  Fallacies and Pitfalls
  1.12  Concluding Remarks
  1.13  Historical Perspectives and References
  Case Studies and Exercises by Diana Franklin
Chapter 2  Memory Hierarchy Design
  2.1  Introduction
  2.2  Memory Technology and Optimizations
  2.3  Ten Advanced Optimizations of Cache Performance
  2.4  Virtual Memory and Virtual Machines
  2.5  Cross-Cutting Issues: The Design of Memory Hierarchies
  2.6  Putting It All Together: Memory Hierarchies in the ARM Cortex-A53 and Intel Core i7 6700
  2.7  Fallacies and Pitfalls
  2.8  Concluding Remarks: Looking Ahead
  2.9  Historical Perspectives and References
  Case Studies and Exercises by Norman P. Jouppi, Rajeev Balasubramonian, Naveen Muralimanohar, and Sheng Li
Chapter 3  Instruction-Level Parallelism and Its Exploitation
  3.1  Instruction-Level Parallelism: Concepts and Challenges
  3.2  Basic Compiler Techniques for Exposing ILP
  3.3  Reducing Branch Costs With Advanced Branch Prediction
  3.4  Overcoming Data Hazards With Dynamic Scheduling
  3.5  Dynamic Scheduling: Examples and the Algorithm
  3.6  Hardware-Based Speculation
  3.7  Exploiting ILP Using Multiple Issue and Static Scheduling
  3.8  Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation
  3.9  Advanced Techniques for Instruction Delivery and Speculation
  3.10  Cross-Cutting Issues
  3.11  Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput
  3.12  Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53
  3.13  Fallacies and Pitfalls
  3.14  Concluding Remarks: What's Ahead?
  3.15  Historical Perspective and References
  Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell
Chapter 4  Data-Level Parallelism in Vector, SIMD, and GPU Architectures
  4.1  Introduction
  4.2  Vector Architecture
  4.3  SIMD Instruction Set Extensions for Multimedia
  4.4  Graphics Processing Units
  4.5  Detecting and Enhancing Loop-Level Parallelism
  4.6  Cross-Cutting Issues
  4.7  Putting It All Together: Embedded Versus Server GPUs and Tesla Versus Core i7
  4.8  Fallacies and Pitfalls
  4.9  Concluding Remarks
  4.10  Historical Perspective and References
  Case Study and Exercises by Jason D. Bakos
Chapter 5  Thread-Level Parallelism
  5.1  Introduction
  5.2  Centralized Shared-Memory Architectures
  5.3  Performance of Symmetric Shared-Memory Multiprocessors
  5.4  Distributed Shared-Memory and Directory-Based Coherence
  5.5  Synchronization: The Basics
  5.6  Models of Memory Consistency: An Introduction
  5.7  Cross-Cutting Issues
  5.8  Putting It All Together: Multicore Processors and Their Performance
  5.9  Fallacies and Pitfalls
  5.10  The Future of Multicore Scaling
  5.11  Concluding Remarks
  5.12  Historical Perspectives and References
  Case Studies and Exercises by Amr Zaky and David A. Wood
Chapter 6  Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
  6.1  Introduction
  6.2  Programming Models and Workloads for Warehouse-Scale Computers
  6.3  Computer Architecture of Warehouse-Scale Computers
  6.4  The Efficiency and Cost of Warehouse-Scale Computers
  6.5  Cloud Computing: The Return of Utility Computing
  6.6  Cross-Cutting Issues
  6.7  Putting It All Together: A Google Warehouse-Scale Computer
  6.8  Fallacies and Pitfalls
  6.9  Concluding Remarks
  6.10  Historical Perspectives and References
  Case Studies and Exercises by Parthasarathy Ranganathan
Chapter 7  Domain-Specific Architectures
  7.1  Introduction
  7.2  Guidelines for DSAs
  7.3  Example Domain: Deep Neural Networks
  7.4  Google's Tensor Processing Unit, an Inference Data Center Accelerator
  7.5  Microsoft Catapult, a Flexible Data Center Accelerator
  7.6  Intel Crest, a Data Center Accelerator for Training
  7.7  Pixel Visual Core, a Personal Mobile Device Image Processing Unit
  7.8  Cross-Cutting Issues
  7.9  Putting It All Together: CPUs Versus GPUs Versus DNN Accelerators
  7.10  Fallacies and Pitfalls
  7.11  Concluding Remarks
  7.12  Historical Perspectives and References
  Case Studies and Exercises by Cliff Young
Appendix A  Instruction Set Principles
  A.1  Introduction
  A.2  Classifying Instruction Set Architectures
  A.3  Memory Addressing
  A.4  Type and Size of Operands
  A.5  Operations in the Instruction Set
  A.6  Instructions for Control Flow
  A.7  Encoding an Instruction Set
  A.8  Cross-Cutting Issues: The Role of Compilers
  A.9  Putting It All Together: The RISC-V Architecture
  A.10  Fallacies and Pitfalls
  A.11  Concluding Remarks
  A.12  Historical Perspective and References
  Exercises by Gregory D. Peterson
Appendix B  Review of Memory Hierarchy
  B.1  Introduction
  B.2  Cache Performance
  B.3  Six Basic Cache Optimizations
  B.4  Virtual Memory
  B.5  Protection and Examples of Virtual Memory
  B.6  Fallacies and Pitfalls
  B.7  Concluding Remarks
  B.8  Historical Perspective and References
  Exercises by Amr Zaky
Appendix C  Pipelining: Basic and Intermediate Concepts
  C.1  Introduction
  C.2  The Major Hurdle of Pipelining—Pipeline Hazards
  C.3  How Is Pipelining Implemented?
  C.4  What Makes Pipelining Hard to Implement?
  C.5  Extending the RISC V Integer Pipeline to Handle Multicycle Operations
  C.6  Putting It All Together: The MIPS R4000 Pipeline
  C.7  Cross-Cutting Issues
  C.8  Fallacies and Pitfalls
  C.9  Concluding Remarks
  C.10  Historical Perspective and References
  Updated Exercises by Diana Franklin
References
Index
Online Appendices
Appendix D  Storage Systems
Appendix E  Embedded Systems
  by Thomas M. Conte
Appendix F  Interconnection Networks
  by Timothy M. Pinkston and José Duato
Appendix G  Vector Processors in More Depth
  by Krste Asanovic
Appendix H  Hardware and Software for VLIW and EPIC
Appendix I  Large-Scale Multiprocessors and Scientific Applications
Appendix J  Computer Arithmetic
  by David Goldberg
Appendix K  Survey of Instruction Set Architectures
Appendix L  Advanced Concepts on Address Translation
  by Abhishek Bhattacharjee
Appendix M  Historical Perspectives and References
