Contents
Prologue to 2nd Edition
Prologue to 1st Edition
CHAPTER 1 Introduction
1.1 IF DATA HAD MASS, THE EARTH WOULD BE A BLACK HOLE
1.2 LEARNING
1.2.1 Machine Learning
1.3 TYPES OF MACHINE LEARNING
1.4 SUPERVISED LEARNING
1.4.1 Regression
1.4.2 Classification
1.5 THE MACHINE LEARNING PROCESS
1.6 A NOTE ON PROGRAMMING
1.7 A ROADMAP TO THE BOOK
FURTHER READING
CHAPTER 2 Preliminaries
2.1 SOME TERMINOLOGY
2.1.1 Weight Space
2.1.2 The Curse of Dimensionality
2.2 KNOWING WHAT YOU KNOW: TESTING MACHINE LEARNING ALGORITHMS
2.2.1 Overfitting
2.2.2 Training, Testing, and Validation Sets
2.2.3 The Confusion Matrix
2.2.4 Accuracy Metrics
2.2.5 The Receiver Operator Characteristic (ROC) Curve
2.2.6 Unbalanced Datasets
2.2.7 Measurement Precision
2.3 TURNING DATA INTO PROBABILITIES
2.3.1 Minimising Risk
2.3.2 The Naive Bayes' Classifier
2.4 SOME BASIC STATISTICS
2.4.1 Averages
2.4.2 Variance and Covariance
2.4.3 The Gaussian
2.5 THE BIAS-VARIANCE TRADEOFF
FURTHER READING
PRACTICE QUESTIONS
CHAPTER 3 Neurons, Neural Networks, and Linear Discriminants
3.1 THE BRAIN AND THE NEURON
3.1.1 Hebb's Rule
3.1.2 McCulloch and Pitts Neurons
3.1.3 Limitations of the McCulloch and Pitts Neuronal Model
3.2 NEURAL NETWORKS
3.3 THE PERCEPTRON
3.3.1 The Learning Rate η
3.3.2 The Bias Input
3.3.3 The Perceptron Learning Algorithm
3.3.4 An Example of Perceptron Learning: Logic Functions
3.3.5 Implementation
3.4 LINEAR SEPARABILITY
3.4.1 The Perceptron Convergence Theorem
3.4.2 The Exclusive Or (XOR) Function
3.4.3 A Useful Insight
3.4.4 Another Example: The Pima Indian Dataset
3.4.5 Preprocessing: Data Preparation
3.5 LINEAR REGRESSION
3.5.1 Linear Regression Examples
FURTHER READING
PRACTICE QUESTIONS
CHAPTER 4 The Multi-layer Perceptron
4.1 GOING FORWARDS
4.1.1 Biases
4.2 GOING BACKWARDS: BACK-PROPAGATION OF ERROR
4.2.1 The Multi-layer Perceptron Algorithm
4.2.2 Initialising the Weights
4.2.3 Different Output Activation Functions
CHAPTER 5 Radial Basis Functions and Splines
CHAPTER 6 Dimensionality Reduction
CHAPTER 7 Probabilistic Learning
CHAPTER 8 Support Vector Machines
CHAPTER 9 Optimisation and Search
CHAPTER 10 Evolutionary Learning
CHAPTER 11 Reinforcement Learning
CHAPTER 12 Learning with Trees
CHAPTER 13 Decision by Committee: Ensemble Learning
CHAPTER 14 Unsupervised Learning
CHAPTER 15 Markov Chain Monte Carlo (MCMC) Methods
CHAPTER 16 Graphical Models
CHAPTER 17 Symmetric Weights and Deep Belief Networks
CHAPTER 18 Gaussian Processes
APPENDIX A Python
Index