Details
  • ISBN: 9787302599814
  • Binding: standard offset paper
  • Volumes: N/A
  • Weight: N/A
  • Format: other
  • Pages: 347
  • Publication date: 2022-04-01
  • Barcode: 9787302599814 ; 978-7-302-59981-4

Features

Building on dynamic programming, this book uses the monotonicity of abstract mappings and contraction mapping theory to study a number of canonical problems in exact and approximate dynamic programming. Its principal feature is that it relies neither on the stochastic character of the problems discussed nor on certain interesting features special to particular classes of dynamic programming problems. The theoretical methods presented are at the frontier of stochastic operations research and stochastic control; their rigorous analysis and techniques have significant theoretical value and broad application prospects at the intersection of mathematics and artificial intelligence.
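The contraction mapping machinery referred to above rests on the Banach fixed-point theorem: iterating a contraction converges geometrically to its unique fixed point. A minimal sketch of this idea (the mapping `T`, tolerance, and iteration cap below are invented for illustration and are not from the book):

```python
def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = 0.5*x + 1 is a contraction on the reals with modulus 0.5;
# its unique fixed point is x* = 2.
print(fixed_point(lambda x: 0.5 * x + 1, x0=0.0))  # ≈ 2.0
```

Because the modulus is 0.5, the error halves at every step, which is the geometric convergence that the book's contractive models generalize to abstract mappings.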

Description

The principal aim of the 2nd edition is to expand the semicontractive models of Chapters 3 and 4 of the 1st (2013) edition, and to supplement them with research results published by the author in journals and reports since the 1st edition appeared. The mathematical content of the book is elegant and rigorous, relying on the power of abstraction to focus on essentials. The book offers a comprehensive synthesis of the field while presenting much new research, some of it related to currently very active areas such as approximate dynamic programming. Many examples are scattered throughout, unified by rigorous theory and applied to specific classes of problems, such as discounted, stochastic shortest path, semi-Markov, minimax, sequential game, multiplicative, and risk-sensitive models. The book also includes exercises (with complete solutions), supplemented by examples, counterexamples, and theoretical extensions. Like several of Bertsekas's other works, it is well written and well suited for self-study, and it can serve as a supplement for a graduate course on dynamic programming.
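The discounted models mentioned in the description are the simplest contractive case: the Bellman operator is a contraction with modulus equal to the discount factor, so value iteration converges to its fixed point J*. A minimal sketch for a two-state problem (the costs, transition probabilities, and discount factor below are invented for illustration, not taken from the book):

```python
import numpy as np

alpha = 0.9  # discount factor; also the contraction modulus of T

# cost[s, a]: stage cost of taking action a in state s (invented numbers)
cost = np.array([[1.0, 2.0],
                 [0.5, 3.0]])

# P[a, s, s']: probability of moving from s to s' under action a
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.5, 0.5],
               [0.9, 0.1]]])

def bellman(J):
    """(TJ)(s) = min_a [ cost(s, a) + alpha * sum_{s'} P(s'|s, a) J(s') ]."""
    Q = cost + alpha * np.stack([P[a] @ J for a in range(len(P))], axis=1)
    return Q.min(axis=1)

# Value iteration: J_{k+1} = T J_k converges geometrically to J*.
J = np.zeros(2)
for _ in range(200):
    J = bellman(J)
print(J)  # approximate fixed point of Bellman's equation
```

The resulting J satisfies Bellman's equation J = TJ up to numerical tolerance, which is the optimality condition analyzed for contractive models in Chapter 2.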

Contents

1 Introduction
1.1 Structure of Dynamic Programming Problems
1.2 Abstract Dynamic Programming Models
1.2.1 Problem Formulation
1.2.2 Monotonicity and Contraction Properties
1.2.3 Some Examples
1.2.4 Approximation Models-Projected and Aggregation Bellman Equations
1.2.5 Multistep Models-Temporal Difference and Proximal Algorithms
1.3 Organization of the Book
1.4 Notes, Sources, and Exercises

2 Contractive Models
2.1 Bellman's Equation and Optimality Conditions
2.2 Limited Lookahead Policies
2.3 Value Iteration
2.4 Policy Iteration
2.4.1 Approximate Policy Iteration
2.4.2 Approximate Policy Iteration Where Policies Converge
2.5 Optimistic Policy Iteration and λ-Policy Iteration
2.5.1 Convergence of Optimistic Policy Iteration
2.5.2 Approximate Optimistic Policy Iteration
2.5.3 Randomized Optimistic Policy Iteration
2.6 Asynchronous Algorithms
2.6.1 Asynchronous Value Iteration
2.6.2 Asynchronous Policy Iteration
2.6.3 Optimistic Asynchronous Policy Iteration with a Uniform Fixed Point
2.7 Notes, Sources, and Exercises

3 Semicontractive Models
3.1 Pathologies of Noncontractive DP Models
3.1.1 Deterministic Shortest Path Problems
3.1.2 Stochastic Shortest Path Problems
3.1.3 The Blackmailer's Dilemma
3.1.4 Linear-Quadratic Problems
3.1.5 An Intuitive View of Semicontractive Analysis
3.2 Semicontractive Models and Regular Policies
3.2.1 S-Regular Policies
3.2.2 Restricted Optimization over S-Regular Policies
3.2.3 Policy Iteration Analysis of Bellman's Equation
3.2.4 Optimistic Policy Iteration and λ-Policy Iteration
3.2.5 A Mathematical Programming Approach
3.3 Irregular Policies/Infinite Cost Case
3.4 Irregular Policies/Finite Cost Case-A Perturbation Approach
3.5 Applications in Shortest Path and Other Contexts
3.5.1 Stochastic Shortest Path Problems
3.5.2 Affine Monotonic Problems
3.5.3 Robust Shortest Path Planning
3.5.4 Linear-Quadratic Optimal Control
3.5.5 Continuous-State Deterministic Optimal Control
3.6 Algorithms
3.6.1 Asynchronous Value Iteration
3.6.2 Asynchronous Policy Iteration
3.7 Notes, Sources, and Exercises

4 Noncontractive Models
4.1 Noncontractive Models-Problem Formulation
4.2 Finite Horizon Problems
4.3 Infinite Horizon Problems
4.3.1 Fixed Point Properties and Optimality Conditions
4.3.2 Value Iteration
4.3.3 Exact and Optimistic Policy Iteration-λ-Policy Iteration
4.4 Regularity and Nonstationary Policies
4.4.1 Regularity and Monotone Increasing Models
4.4.2 Nonnegative Cost Stochastic Optimal Control
4.4.3 Discounted Stochastic Optimal Control
4.4.4 Convergent Models
4.5 Stable Policies for Deterministic Optimal Control
……
Appendix A: Notation and Mathematical Conventions
Appendix B: Contraction Mappings
References
Index

About the Author

Dimitri P. Bertsekas is a tenured professor at MIT (USA), a member of the US National Academy of Engineering, and a guest professor at the Center for Complex and Networked Systems at Tsinghua University. He is an internationally renowned author in electrical engineering and computer science, having written more than a dozen best-selling textbooks and monographs, including Nonlinear Programming, Network Optimization, and Convex Optimization.
