Statistical Learning with Sparsity: The Lasso and Generalizations (Full-Color English Edition) / Shannon Information Science Classics

  • Authors: Trevor Hastie, Robert Tibshirani, and Martin Wainwright (US) | Editor: Chen Liang
  • Publisher: World Publishing Corporation (世圖出版公司)
  • ISBN: 9787523201329
  • Publication date: 2023/09/01
  • Binding: Paperback
  • Pages: 351
Price: RMB 179

Synopsis
Sparse statistical models have only a small number of nonzero parameters or weights. They are a classic embodiment of the principle of reducing the complex to the simple, and for that reason they are used widely across many fields. This book is a summary of sparse statistical learning: taking the lasso as its centerpiece, it builds outward step by step to encompass related methods, and explores in depth how a wide range of sparsity problems are solved and applied. With numerous examples and clear figures, plus bibliographic notes and end-of-chapter exercises, it serves as a thorough reference for advanced study of statistics. The book is suitable for students and researchers in computer science, statistics, and machine learning.
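For orientation, the method at the center of the book (Chapter 2) is the lasso: a linear regression fit under an ℓ1 penalty. In the Lagrangian form used throughout the text, the estimator solves

    \hat{\beta} = \operatorname*{arg\,min}_{\beta_0,\,\beta \in \mathbb{R}^p}
      \left\{ \frac{1}{2N} \sum_{i=1}^{N} \bigl( y_i - \beta_0 - x_i^{\mathsf{T}} \beta \bigr)^{2}
              + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert \right\},

where the tuning parameter λ ≥ 0 trades off goodness of fit against sparsity: λ = 0 recovers ordinary least squares, while larger values of λ set more coefficients exactly to zero.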

About the Authors
Trevor Hastie, Robert Tibshirani, and Martin Wainwright (US) | Editor: Chen Liang

Table of Contents
Preface
1  Introduction
2  The Lasso for Linear Models
  2.1  Introduction
  2.2  The Lasso Estimator
  2.3  Cross-Validation and Inference
  2.4  Computation of the Lasso Solution
    2.4.1  Single Predictor: Soft Thresholding
    2.4.2  Multiple Predictors: Cyclic Coordinate Descent
    2.4.3  Soft-Thresholding and Orthogonal Bases
  2.5  Degrees of Freedom
  2.6  Uniqueness of the Lasso Solutions
  2.7  A Glimpse at the Theory
  2.8  The Nonnegative Garrote
  2.9  ℓq Penalties and Bayes Estimates
  2.10  Some Perspective
  Exercises
3  Generalized Linear Models
  3.1  Introduction
  3.2  Logistic Regression
    3.2.1  Example: Document Classification
    3.2.2  Algorithms
  3.3  Multiclass Logistic Regression
    3.3.1  Example: Handwritten Digits
    3.3.2  Algorithms
    3.3.3  Grouped-Lasso Multinomial
  3.4  Log-Linear Models and the Poisson GLM
    3.4.1  Example: Distribution Smoothing
  3.5  Cox Proportional Hazards Models
    3.5.1  Cross-Validation
    3.5.2  Pre-Validation
  3.6  Support Vector Machines
    3.6.1  Logistic Regression with Separable Data
  3.7  Computational Details and glmnet
  Bibliographic Notes
  Exercises
4  Generalizations of the Lasso Penalty
  4.1  Introduction
  4.2  The Elastic Net
  4.3  The Group Lasso
    4.3.1  Computation for the Group Lasso
    4.3.2  Sparse Group Lasso
    4.3.3  The Overlap Group Lasso
  4.4  Sparse Additive Models and the Group Lasso
    4.4.1  Additive Models and Backfitting
    4.4.2  Sparse Additive Models and Backfitting
    4.4.3  Approaches Using Optimization and the Group Lasso
    4.4.4  Multiple Penalization for Sparse Additive Models
  4.5  The Fused Lasso
    4.5.1  Fitting the Fused Lasso
      4.5.1.1  Reparametrization
      4.5.1.2  A Path Algorithm
      4.5.1.3  A Dual Path Algorithm
      4.5.1.4  Dynamic Programming for the Fused Lasso
    4.5.2  Trend Filtering
    4.5.3  Nearly Isotonic Regression
  4.6  Nonconvex Penalties
  Bibliographic Notes
  Exercises
5  Optimization Methods
  5.1  Introduction
  5.2  Convex Optimality Conditions
    5.2.1  Optimality for Differentiable Problems
    5.2.2  Nondifferentiable Functions and Subgradients
  5.3  Gradient Descent
    5.3.1  Unconstrained Gradient Descent
    5.3.2  Projected Gradient Methods
    5.3.3  Proximal Gradient Methods
    5.3.4  Accelerated Gradient Methods
  5.4  Coordinate Descent
    5.4.1  Separability and Coordinate Descent
    5.4.2  Linear Regression and the Lasso
    5.4.3  Logistic Regression and Generalized Linear Models
  5.5  A Simulation Study
  5.6  Least Angle Regression
  5.7  Alternating Direction Method of Multipliers
  5.8  Minorization-Maximization Algorithms
  5.9  Biconvexity and Alternating Minimization
  5.10  Screening Rules
  Bibliographic Notes
  Appendix
  Exercises
6  Statistical Inference
  6.1  The Bayesian Lasso
  6.2  The Bootstrap
  6.3  Post-Selection Inference for the Lasso
    6.3.1  The Covariance Test
    6.3.2  A General Scheme for Post-Selection Inference
      6.3.2.1  Fixed-λ Inference for the Lasso
      6.3.2.2  The Spacing Test for LAR
    6.3.3  What Hypothesis Is Being Tested?
    6.3.4  Back to Forward Stepwise Regression
  6.4  Inference via a Debiased Lasso
  6.5  Other Proposals for Post-Selection Inference
  Bibliographic Notes
  Exercises
7  Matrix Decompositions, Approximations, and Completion
  7.1  Introduction
  7.2  The Singular Value Decomposition
  7.3  Missing Data and Matrix Completion
    7.3.1  The Netflix Movie Challenge
    7.3.2  Matrix Completion Using Nuclear Norm
    7.3.3  Theoretical Results for Matrix Completion
    7.3.4  Maximum Margin Factorization and Related Methods
  7.4  Reduced-Rank Regression
  7.5  A General Matrix Regression Framework
  7.6  Penalized Matrix Decomposition
  7.7  Additive Matrix Decomposition
  Bibliographic Notes
  Exercises
8  Sparse Multivariate Methods
  8.1  Introduction
  8.2  Sparse Principal Components Analysis
    8.2.1  Some Background
    8.2.2  Sparse Principal Components
      8.2.2.1  Sparsity from Maximum Variance
      8.2.2.2  Methods Based on Reconstruction
    8.2.3  Higher-Rank Solutions
      8.2.3.1  Illustrative Application of Sparse PCA
    8.2.4  Sparse PCA via Fantope Projection
    8.2.5  Sparse Autoencoders and Deep Learning
    8.2.6  Some Theory for Sparse PCA
  8.3  Sparse Canonical Correlation Analysis
    8.3.1  Example: Netflix Movie Rating Data
  8.4  Sparse Linear Discriminant Analysis
    8.4.1  Normal Theory and Bayes' Rule
    8.4.2  Nearest Shrunken Centroids
    8.4.3  Fisher's Linear Discriminant Analysis
      8.4.3.1  Example: Simulated Data with Five Classes
    8.4.4  Optimal Scoring
      8.4.4.1  Example: Face Silhouettes
  8.5  Sparse Clustering
    8.5.1  Some Background on Clustering
      8.5.1.1  Example: Simulated Data with Six Classes
    8.5.2  Sparse Hierarchical Clustering
    8.5.3  Sparse K-Means Clustering
    8.5.4  Convex Clustering
  Bibliographic Notes
  Exercises
9  Graphs and Model Selection
  9.1  Introduction
  9.2  Basics of Graphical Models
    9.2.1  Factorization and Markov Properties
      9.2.1.1  Factorization Property
      9.2.1.2  Markov Property
      9.2.1.3  Equivalence of Factorization and Markov Properties
    9.2.2  Some Examples
      9.2.2.1  Discrete Graphical Models
      9.2.2.2  Gaussian Graphical Models
  9.3  Graph Selection via Penalized Likelihood
    9.3.1  Global Likelihoods for Gaussian Models
    9.3.2  Graphical Lasso Algorithm
    9.3.3  Exploiting Block-Diagonal Structure
    9.3.4  Theoretical Guarantees for the Graphical Lasso
    9.3.5  Global Likelihood for Discrete Models
  9.4  Graph Selection via Conditional Inference
    9.4.1  Neighborhood-Based Likelihood for Gaussians
    9.4.2  Neighborhood-Based Likelihood for Discrete Models
    9.4.3  Pseudo-Likelihood for Mixed Models
  9.5  Graphical Models with Hidden Variables
  Bibliographic Notes
  Exercises
10  Signal Approximation and Compressed Sensing
  10.1  Introduction
  10.2  Signals and Sparse Representations
    10.2.1  Orthogonal Bases
    10.2.2  Approximation in Orthogonal Bases
    10.2.3  Reconstruction in Overcomplete Bases
  10.3  Random Projection and Approximation
    10.3.1  Johnson–Lindenstrauss Approximation
    10.3.2  Compressed Sensing
  10.4  Equivalence between ℓ0 and ℓ1 Recovery
    10.4.1  Restricted Nullspace Property
    10.4.2  Sufficient Conditions for Restricted Nullspace
    10.4.3  Proofs
      10.4.3.1  Proof of Theorem 10.1
      10.4.3.2  Proof of Proposition 10.1
  Bibliographic Notes
  Exercises
11  Theoretical Results for the Lasso
  11.1  Introduction
    11.1.1  Types of Loss Functions
    11.1.2  Types of Sparsity Models
  11.2  Bounds on Lasso ℓ2-Error
    11.2.1  Strong Convexity in the Classical Setting
    11.2.2  Restricted Eigenvalues for Regression
    11.2.3  A Basic Consistency Result
  11.3  Bounds on Prediction Error
  11.4  Support Recovery in Linear Regression
    11.4.1  Variable-Selection Consistency for the Lasso
      11.4.1.1  Some Numerical Studies
  11.5  Beyond the Basic Lasso
  Bibliographic Notes
  Exercises
Bibliography
Author Index
Index
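
To give a flavor of the computational side of the book: Sections 2.4.1–2.4.2 (revisited in Section 5.4.2) solve the lasso by cyclic coordinate descent, in which each coefficient update is a closed-form soft-threshold. Below is a minimal Python sketch under the usual assumptions of standardized predictors and a centered response; the function names and toy data are illustrative, not taken from the book or any accompanying code.

import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: S(z, gamma) = sign(z) * max(|z| - gamma, 0).
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    # Cyclic coordinate descent for the lasso objective
    #   (1 / (2N)) * ||y - X @ beta||^2 + lam * ||beta||_1,
    # assuming each column of X has mean 0 and variance 1 and y has mean 0,
    # so every coordinate update reduces to a single soft-threshold step.
    N, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()          # full residual r = y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]      # partial residual: add back j's contribution
            beta[j] = soft_threshold(X[:, j] @ r / N, lam)
            r -= X[:, j] * beta[j]      # subtract the updated contribution
    return beta

# Toy check: recover a sparse signal with three nonzero coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize columns
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y -= y.mean()                                  # center the response
print(np.flatnonzero(lasso_cd(X, y, lam=0.1)))  # expect roughly [0 1 2]

In practice one would use a tested implementation such as the glmnet package, whose computational details the book covers in Section 3.7.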
