Product Details
- ISBN: 9787519261870
- Binding: offset paper
- Format: other
- Pages: 351
- Publication date: 2020-06-01
- Barcode: 9787519261870 ; 978-7-5192-6187-0
About the Book
Statistical learning is a rapidly developing field. The authors describe the lasso for linear regression and a simple coordinate descent algorithm for computing it. A sparse statistical model has only a small number of nonzero parameters or weights, and is therefore far easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.
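The coordinate descent algorithm mentioned in the description can be sketched as follows. This is an illustrative sketch only, not the book's own code: the function names `soft_threshold` and `lasso_cd` are our own, and we assume the standard lasso objective (1/2n)·||y − Xβ||² + λ·||β||₁ with cyclic updates via soft-thresholding.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: S(z, gamma) = sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for the lasso objective
    (1/2n) * ||y - X @ beta||^2 + lam * ||beta||_1  (illustrative sketch)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # (1/n) * x_j' x_j for each column j
    resid = y.copy()                    # current residual y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            resid = resid + X[:, j] * beta[j]   # form partial residual (drop j)
            rho = X[:, j] @ resid / n           # correlation with partial residual
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            resid = resid - X[:, j] * beta[j]   # restore residual with new beta_j
    return beta
```

Each coordinate update has a closed form via the soft-thresholding operator, which is what makes the cyclic scheme so simple: sufficiently small correlations are thresholded exactly to zero, producing a sparse solution.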
Contents
Preface
1 Introduction
2 The Lasso for Linear Models
2.1 Introduction
2.2 The Lasso Estimator
2.3 Cross-Validation and Inference
2.4 Computation of the Lasso Solution
2.4.1 Single Predictor: Soft Thresholding
2.4.2 Multiple Predictors: Cyclic Coordinate Descent
2.4.3 Soft-Thresholding and Orthogonal Bases
2.5 Degrees of Freedom
2.6 Uniqueness of the Lasso Solutions
2.7 A Glimpse at the Theory
2.8 The Nonnegative Garrote
2.9 Penalties and Bayes Estimates
2.10 Some Perspective
Exercises
3 Generalized Linear Models
3.1 Introduction
3.2 Logistic Regression
3.2.1 Example: Document Classification
3.2.2 Algorithms
3.3 Multiclass Logistic Regression
3.3.1 Example: Handwritten Digits
3.3.2 Algorithms
3.3.3 Grouped-Lasso Multinomial
3.4 Log-Linear Models and the Poisson GLM
3.4.1 Example: Distribution Smoothing
3.5 Cox Proportional Hazards Models
3.5.1 Cross-Validation
3.5.2 Pre-Validation
3.6 Support Vector Machines
3.6.1 Logistic Regression with Separable Data
3.7 Computational Details and glmnet
Bibliographic Notes
Exercises
4 Generalizations of the Lasso Penalty
4.1 Introduction
4.2 The Elastic Net
4.3 The Group Lasso
4.3.1 Computation for the Group Lasso
4.3.2 Sparse Group Lasso
4.3.3 The Overlap Group Lasso
4.4 Sparse Additive Models and the Group Lasso
4.4.1 Additive Models and Backfitting
4.4.2 Sparse Additive Models and Backfitting
4.4.3 Approaches Using Optimization and the Group Lasso
4.4.4 Multiple Penalization for Sparse Additive Models
4.5 The Fused Lasso
4.5.1 Fitting the Fused Lasso
4.5.1.1 Reparametrization
4.5.1.2 A Path Algorithm
4.5.1.3 A Dual Path Algorithm
4.5.1.4 Dynamic Programming for the Fused Lasso
4.5.2 Trend Filtering
4.5.3 Nearly Isotonic Regression
4.6 Nonconvex Penalties
Bibliographic Notes
Exercises
5 Optimization Methods
5.1 Introduction
5.2 Convex Optimality Conditions
5.2.1 Optimality for Differentiable Problems
5.2.2 Nondifferentiable Functions and Subgradients
5.3 Gradient Descent
5.3.1 Unconstrained Gradient Descent
5.3.2 Projected Gradient Methods
5.3.3 Proximal Gradient Methods
5.3.4 Accelerated Gradient Methods
5.4 Coordinate Descent
5.4.1 Separability and Coordinate Descent
5.4.2 Linear Regression and the Lasso
5.4.3 Logistic Regression and Generalized Linear Models
5.5 A Simulation Study
5.6 Least Angle Regression
5.7 Alternating Direction Method of Multipliers
5.8 Minorization-Maximization Algorithms
5.9 Biconvexity and Alternating Minimization
5.10 Screening Rules
Bibliographic Notes
Appendix
Exercises
6 Statistical Inference
6.1 The Bayesian Lasso
6.2 The Bootstrap
6.3 Post-Selection Inference for the Lasso
6.3.1 The Covariance Test
6.3.2 A General Scheme for Post-Selection Inference
6.3.2.1 Fixed-λ Inference for the Lasso
6.3.2.2 The Spacing Test for LAR
6.3.3 What Hypothesis Is Being Tested?
6.3.4 Back to Forward Stepwise Regression
6.4 Inference via a Debiased Lasso
6.5 Other Proposals for Post-Selection Inference
Bibliographic Notes
Exercises
7 Matrix Decompositions, Approximations, and Completion
7.1 Introduction
7.2 The Singular Value Decomposition
7.3 Missing Data and Matrix Completion
7.3.1 The Netflix Movie Challenge
7.3.2 Matrix Completion Using Nuclear Norm
7.3.3 Theoretical Results for Matrix Completion
7.3.4 Maximum Margin Factorization and Related Methods
7.4 Reduced-Rank Regression
7.5 A General Matrix Regression Framework
7.6 Penalized Matrix Decomposition
7.7 Additive Matrix Decomposition
Bibliographic Notes
Exercises
8 Sparse Multivariate Methods
8.1 Introduction
8.2 Sparse Principal Components Analysis
8.2.1 Some Background
8.2.2 Sparse Principal Components
8.2.2.1 Sparsity from Maximum Variance
8.2.2.2 Methods Based on Reconstruction
8.2.3 Higher-Rank Solutions
8.2.3.1 Illustrative Application of Sparse PCA
8.2.4 Sparse PCA via Fantope Projection
8.2.5 Sparse Autoencoders and Deep Learning
8.2.6 Some Theory for Sparse PCA
8.3 Sparse Canonical Correlation Analysis
8.3.1 Example: Netflix Movie Rating Data
8.4 Sparse Linear Discriminant Analysis
8.4.1 Normal Theory and Bayes' Rule
8.4.2 Nearest Shrunken Centroids
8.4.3 Fisher's Linear Discriminant Analysis
8.4.3.1 Example: Simulated Data with Five Classes
8.4.4 Optimal Scoring
8.4.4.1 Example: Face Silhouettes
8.5 Sparse Clustering
8.5.1 Some Background on Clustering
8.5.1.1 Example: Simulated Data with Six Classes
8.5.2 Sparse Hierarchical Clustering
8.5.3 Sparse K-Means Clustering
8.5.4 Convex Clustering
Bibliographic Notes
Exercises
9 Graphs and Model Selection
9.1 Introduction
9.2 Basics of Graphical Models
9.2.1 Factorization and Markov Properties
9.2.1.1 Factorization Property
9.2.1.2 Markov Property
9.2.1.3 Equivalence of Factorization and Markov Properties
9.2.2 Some Examples
9.2.2.1 Discrete Graphical Models
9.2.2.2 Gaussian Graphical Models
9.3 Graph Selection via Penalized Likelihood
9.3.1 Global Likelihoods for Gaussian Models
9.3.2 Graphical Lasso Algorithm
9.3.3 Exploiting Block-Diagonal Structure
9.3.4 Theoretical Guarantees for the Graphical Lasso
9.3.5 Global Likelihood for Discrete Models
9.4 Graph Selection via Conditional Inference
9.4.1 Neighborhood-Based Likelihood for Gaussians
9.4.2 Neighborhood-Based Likelihood for Discrete Models
9.4.3 Pseudo-Likelihood for Mixed Models
9.5 Graphical Models with Hidden Variables
Bibliographic Notes
Exercises
10 Signal Approximation and Compressed Sensing
10.1 Introduction
10.2 Signals and Sparse Representations
10.2.1 Orthogonal Bases
10.2.2 Approximation in Orthogonal Bases
10.2.3 Reconstruction in Overcomplete Bases
10.3 Random Projection and Approximation
10.3.1 Johnson-Lindenstrauss Approximation
10.3.2 Compressed Sensing
10.4 Equivalence between ℓ0 and ℓ1 Recovery
10.4.1 Restricted Nullspace Property
10.4.2 Sufficient Conditions for Restricted Nullspace
10.4.3 Proofs
10.4.3.1 Proof of Theorem 10.1
10.4.3.2 Proof of Proposition 10.1
Bibliographic Notes
Exercises
11 Theoretical Results for the Lasso
11.1 Introduction
11.1.1 Types of Loss Functions
11.1.2 Types of Sparsity Models
11.2 Bounds on Lasso ℓ2-Error
11.2.1 Strong Convexity in the Classical Setting
11.2.2 Restricted Eigenvalues for Regression
11.2.3 A Basic Consistency Result
11.3 Bounds on Prediction Error
11.4 Support Recovery in Linear Regression
11.4.1 Variable-Selection Consistency for the Lasso
11.4.1.1 Some Numerical Studies
11.4.2 Proof of Theorem 11.3
11.5 Beyond the Basic Lasso
Bibliographic Notes
Exercises
Bibliography
Author Index
Index