Dynamic Programming
Dynamic programming (DP) is a sub-field of optimization concerned with sequential decision making over time. The essential ideas of DP have been adopted in many applications, from robotics and AI to the sequencing of DNA. It is used around the world to control aircraft, route shipping, test products, recommend information on media platforms, and solve major research problems.

Dynamic Programming: Finite States treats the theory of dynamic programming and its applications in economics, finance, and operations research. It contains classical results on dynamic programming as well as extensions created by researchers and practitioners as they wrestle with formulating and solving dynamic models that can explain patterns observed in data. Adopting an abstract framework that provides great generality, this book facilitates rapid progress to the research frontier by combining rigorous theory with numerous applications, many solved exercises, and detailed open-source computer code.
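To give a flavor of the subject, the sketch below implements value function iteration for a small finite-state Markov decision process in Python. The rewards, transition probabilities, and discount factor are hypothetical placeholders for illustration only; they are not taken from the book or its accompanying code.

```python
import numpy as np

# Illustrative value function iteration for a finite-state MDP.
# All numbers below are assumed for demonstration purposes.

beta = 0.95                          # discount factor (assumed)
R = np.array([[1.0, 0.0],            # R[s, a]: reward in state s under action a
              [0.0, 2.0]])
P = np.array([[[0.9, 0.1],           # P[a, s, s']: transition probabilities
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])

v = np.zeros(R.shape[0])             # initial guess for the value function
for _ in range(10_000):
    # Bellman operator: reward plus discounted expected continuation
    # value for each (state, action) pair, then maximize over actions
    Q = R + beta * np.einsum('asp,p->sa', P, v)
    v_new = Q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:   # stop at (approximate) fixed point
        break
    v = v_new

sigma = Q.argmax(axis=1)             # greedy policy at the fixed point
print("value function:", v_new, "policy:", sigma)
```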
- Presents major advances in the field of dynamic programming
- Allows researchers and graduate students to better understand dynamic programming and handle new and important applications
- Gives readers a deeper understanding of the theory and brings them closer to implementing their own research programs
Product details
July 2025
Hardback
ISBN 9781009540797
300 pages
244 × 170 mm
Not yet published - available from July 2025
Table of Contents
- 1. Introduction
- 2. Operators and Fixed Points
- 3. Markov Dynamics
- 4. Optimal Stopping
- 5. Markov Decision Processes
- 6. Stochastic Discounting
- 7. Nonlinear Valuation
- 8. Recursive Decision Processes
- 9. Abstract Dynamic Programming
- 10. Continuous Time