
Fundamentals of Deep Learning (Reprint Edition, English)

  • Author: Nikhil Buduma (US)
  • Publisher: Southeast University Press
  • ISBN: 9787564175177
  • Publication date: 2018/02/01
  • Binding: Paperback
  • Pages: 283
  • Price: RMB 80

Synopsis
    With the resurgence of neural networks in the twenty-first century, deep learning has become an extremely active area of research, one that is paving the way for modern machine learning. In this practical book, Fundamentals of Deep Learning (Reprint Edition, English), author Nikhil Buduma provides clear explanations to guide you through the major concepts of this complicated field.
    Companies such as Google, Microsoft, and Facebook are actively building in-house deep learning teams. For the rest of us, however, deep learning remains a complex and difficult subject to grasp. If you are familiar with Python and have a background in calculus, along with a basic understanding of machine learning, this book will help you begin your deep learning journey.

About the Author
Nikhil Buduma (US)
    Nikhil Buduma is the cofounder and chief scientist of Remedy, a San Francisco-based company building a new data-driven system for managing healthcare. At the age of 16, he managed a drug discovery laboratory at San Jose State University, developing novel, low-cost screening methodologies for resource-constrained communities. By 19, he was a two-time gold medalist at the International Biology Olympiad. He went on to MIT, where he focused on developing large-scale data systems to impact healthcare delivery, mental health, and medical research. At MIT he cofounded Lean On Me, a national nonprofit organization that provides an anonymous texting hotline for effective one-on-one peer support on college campuses and uses data to positively influence mental and physical health. Today, Nikhil invests in hard technology and data companies through his venture fund, Q Venture Partners, and manages a data analytics team for the Milwaukee Brewers baseball team.

Table of Contents
Preface
1. The Neural Network
  Building Intelligent Machines
  The Limits of Traditional Computer Programs
  The Mechanics of Machine Learning
  The Neuron
  Expressing Linear Perceptrons as Neurons
  Feed-Forward Neural Networks
  Linear Neurons and Their Limitations
  Sigmoid, Tanh, and ReLU Neurons
  Softmax Output Layers
  Looking Forward
2. Training Feed-Forward Neural Networks
  The Fast-Food Problem
  Gradient Descent
  The Delta Rule and Learning Rates
  Gradient Descent with Sigmoidal Neurons
  The Backpropagation Algorithm
  Stochastic and Minibatch Gradient Descent
  Test Sets, Validation Sets, and Overfitting
  Preventing Overfitting in Deep Neural Networks
  Summary
3. Implementing Neural Networks in TensorFlow
  What Is TensorFlow?
  How Does TensorFlow Compare to Alternatives?
  Installing TensorFlow
  Creating and Manipulating TensorFlow Variables
  TensorFlow Operations
  Placeholder Tensors
  Sessions in TensorFlow
  Navigating Variable Scopes and Sharing Variables
  Managing Models over the CPU and GPU
  Specifying the Logistic Regression Model in TensorFlow
  Logging and Training the Logistic Regression Model
  Leveraging TensorBoard to Visualize Computation Graphs and Learning
  Building a Multilayer Model for MNIST in TensorFlow
  Summary
4. Beyond Gradient Descent
  The Challenges with Gradient Descent
  Local Minima in the Error Surfaces of Deep Networks
  Model Identifiability
  How Pesky Are Spurious Local Minima in Deep Networks?
  Flat Regions in the Error Surface
  When the Gradient Points in the Wrong Direction
  Momentum-Based Optimization
  A Brief View of Second-Order Methods
  Learning Rate Adaptation
    AdaGrad--Accumulating Historical Gradients
    RMSProp--Exponentially Weighted Moving Average of Gradients
    Adam--Combining Momentum and RMSProp
  The Philosophy Behind Optimizer Selection
  Summary
5. Convolutional Neural Networks
  Neurons in Human Vision
  The Shortcomings of Feature Selection
  Vanilla Deep Neural Networks Don't Scale
  Filters and Feature Maps
  Full Description of the Convolutional Layer
  Max Pooling
  Full Architectural Description of Convolutional Networks
  Closing the Loop on MNIST with Convolutional Networks
  Image Preprocessing Pipelines Enable More Robust Models
  Accelerating Training with Batch Normalization
  Building a Convolutional Network for CIFAR-10
  Visualizing Learning in Convolutional Networks
  Leveraging Convolutional Filters to Replicate Artistic Styles
  Learning Convolutional Filters for Other Problem Domains
  Summary
6. Embedding and Representation Learning
  Learning Lower-Dimensional Representations
  Principal Component Analysis
  Motivating the Autoencoder Architecture
  Implementing an Autoencoder in TensorFlow
  Denoising to Force Robust Representations
  Sparsity in Autoencoders
  When Context Is More Informative than the Input Vector
  The Word2Vec Framework
  Implementing the Skip-Gram Architecture
  Summary
7. Models for Sequence Analysis
  Analyzing Variable-Length Inputs
  Tackling seq2seq with Neural N-Grams
  Implementing a Part-of-Speech Tagger
  Dependency Parsing and SyntaxNet
  Beam Search and Global Normalization
  A Case for Stateful Deep Learning Models
  Recurrent Neural Networks
  The Challenges with Vanishing Gradients
  Long Short-Term Memory (LSTM) Units
  TensorFlow Primitives for RNN Models
  Implementing a Sentiment Analysis Model
  Solving seq2seq Tasks with Recurrent Neural Networks
  Augmenting Recurrent Networks with Attention
  Dissecting a Neural Translation Network
  Summary
8. Memory Augmented Neural Networks
  Neural Turing Machines
  Attention-Based Memory Access
  NTM Memory Addressing Mechanisms
  Differentiable Neural Computers
  Interference-Free Writing in DNCs
  DNC Memory Reuse
  Temporal Linking of DNC Writes
  Understanding the DNC Read Head
  The DNC Controller Network
  Visualizing the DNC in Action
  Implementing the DNC in TensorFlow
  Teaching a DNC to Read and Comprehend
  Summary
9. Deep Reinforcement Learning
  Deep Reinforcement Learning Masters Atari Games
  What Is Reinforcement Learning?
  Markov Decision Processes (MDP)
    Policy
    Future Return
    Discounted Future Return
  Explore Versus Exploit
  Policy Versus Value Learning
    Policy Learning via Policy Gradients
  Pole-Cart with Policy Gradients
    OpenAI Gym
    Creating an Agent
    Building the Model and Optimizer
    Sampling Actions
    Keeping Track of History
    Policy Gradient Main Function
    PGAgent Performance on Pole-Cart
  Q-Learning and Deep Q-Networks
    The Bellman Equation
    Issues with Value Iteration
    Approximating the Q-Function
    Deep Q-Network (DQN)
    Training DQN
    Learning Stability
    Target Q-Network
    Experience Replay
    From Q-Function to Policy
    DQN and the Markov Assumption
    DQN's Solution to the Markov Assumption
    Playing Breakout with DQN
    Building Our Architecture
    Stacking Frames
    Setting Up Training Operations
    Updating Our Target Q-Network
    Implementing Experience Replay
    DQN Main Loop
    DQNAgent Results on Breakout
  Improving and Moving Beyond DQN
    Deep Recurrent Q-Networks (DRQN)
    Asynchronous Advantage Actor-Critic Agent (A3C)
    UNsupervised REinforcement and Auxiliary Learning (UNREAL)
  Summary
Index
