Control Systems and Reinforcement Learning

Sean Meyn, University of Florida
May 2022
Adobe eBook Reader
9781009063395
$69.00 USD

    A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.

    • Presents optimal control as an accessible path to understanding the goals and behavior of current reinforcement learning techniques
    • Focuses on the ODE method to provide a large toolbox for algorithm design, methods to estimate the speed of learning, and insight as to why algorithms sometimes fail (a brief, generic sketch of the ODE viewpoint follows this list)
    • Contains summaries of most reinforcement learning algorithms, and worked examples to guide the choice of 'meta parameters' that appear in each of these recursive algorithms
    • Over 100 exercises - theoretical and computational - illustrate key concepts and applications
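
    To make the ODE bullet above concrete, here is a generic, minimal sketch of the idea as it appears in the stochastic-approximation literature; this is an illustration, not code from the book. A noisy recursion theta_{n+1} = theta_n + alpha_{n+1} f(theta_n, W_{n+1}) is analysed by comparing it with the deterministic "mean flow" ODE d/dt theta = E[f(theta, W)]. The scalar example, step-size rule, and noise model below are illustrative assumptions.

```python
# A minimal, generic illustration of the ODE-method viewpoint (not code from
# the book): the stochastic-approximation recursion
#     theta_{n+1} = theta_n + alpha_{n+1} * f(theta_n, W_{n+1})
# is compared with the Euler discretization of its mean flow
#     d/dt theta = E[f(theta, W)].
# Here f(theta, w) = w - theta, so the mean flow is d/dt theta = wbar - theta,
# and both trajectories should converge to wbar. All parameter choices
# (step-size rule, noise model, horizon) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
wbar, horizon, theta0 = 2.0, 5000, 0.0

# Stochastic approximation: noisy observations W_n with mean wbar.
theta_sa = theta0
for n in range(1, horizon + 1):
    alpha = 1.0 / n                      # diminishing step size
    w = wbar + rng.standard_normal()     # noisy sample
    theta_sa += alpha * (w - theta_sa)

# Mean-flow ODE, solved by Euler with the same (deterministic) step sizes.
theta_ode = theta0
for n in range(1, horizon + 1):
    alpha = 1.0 / n
    theta_ode += alpha * (wbar - theta_ode)

print(f"SA estimate:  {theta_sa:.3f}")
print(f"ODE estimate: {theta_ode:.3f}")  # both should be close to wbar = 2.0
```

    With the diminishing 1/n step size both trajectories settle near the target value; changing the step-size rule changes how quickly they do so, which is the kind of convergence-speed question the ODE toolbox described above addresses.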

    Reviews & endorsements

    'Control Systems and Reinforcement Learning is a densely packed book with a vivid, conversational style. It speaks both to computer scientists interested in learning about the tools and techniques of control engineers and to control engineers who want to learn about the unique challenges posed by reinforcement learning and how to address these challenges. The author, a world-class researcher in control and probability theory, is not afraid of strong and perhaps controversial opinions, making the book entertaining and attractive for open-minded readers. Everyone interested in the "why" and "how" of RL will use this gem of a book for many years to come.' Csaba Szepesvári, Canada CIFAR AI Chair, University of Alberta, and Head of the Foundations Team at DeepMind

    'This book is a wild ride, from the elements of control through to bleeding-edge topics in reinforcement learning. Aimed at graduate students and very good undergraduates who are willing to invest some effort, the book is a lively read and an important contribution.' Shane G. Henderson, Charles W. Lake, Jr. Chair in Productivity, Cornell University

    'Reinforcement learning, now the de facto workhorse powering most AI-based algorithms, has deep connections with optimal control and dynamic programming. Meyn explores these connections in a marvelous manner and uses them to develop fast, reliable iterative algorithms for solving RL problems. This excellent, timely book from a leading expert on stochastic optimal control and approximation theory is a must-read for all practitioners in this active research area.' Panagiotis Tsiotras, David and Andrew Lewis Chair and Professor, Guggenheim School of Aerospace Engineering, Georgia Institute of Technology

    Product details

    May 2022
    Adobe eBook Reader
    9781009063395
    This ISBN is for an eBook version which is distributed on our behalf by a third party.

    Table of Contents

    • 1. Introduction
    • Part I. Fundamentals Without Noise:
    • 2. Control crash course
    • 3. Optimal control
    • 4. ODE methods for algorithm design
    • 5. Value function approximations
    • Part II. Reinforcement Learning and Stochastic Control:
    • 6. Markov chains
    • 7. Stochastic control
    • 8. Stochastic approximation
    • 9. Temporal difference methods
    • 10. Setting the stage, return of the actors
    • A. Mathematical background
    • B. Markov decision processes
    • C. Partial observations and belief states
    • References
    • Glossary of Symbols and Acronyms
    • Index.
    Author
    • Sean Meyn, University of Florida

      Sean Meyn is a professor and holds the Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering, University of Florida. He is well known for his research on stochastic processes and their applications. His award-winning monograph with R. L. Tweedie, Markov Chains and Stochastic Stability, is now a standard reference. In 2015 he and Prof. Ana Busic received a Google Research Award recognizing research on renewable energy integration. He is an IEEE Fellow and an IEEE Control Systems Society distinguished lecturer on topics related to both reinforcement learning and energy systems.