
Reference Number EP/X03917X/1
Title Robust and Efficient Model-based Reinforcement Learning
Status Started
Energy Categories Nuclear Fission and Fusion 5%;
Not Energy Related 95%;
Research Types Basic and strategic applied research 100%
Science and Technology Fields PHYSICAL SCIENCES AND MATHEMATICS (Physics) 5%;
PHYSICAL SCIENCES AND MATHEMATICS (Applied Mathematics) 45%;
PHYSICAL SCIENCES AND MATHEMATICS (Computer Science and Informatics) 50%;
UKERC Cross Cutting Characterisation Not Cross-cutting 100%
Principal Investigator Dr I Bogunovic

Electronic and Electrical Engineering
University College London
Award Type Standard
Funding Source EPSRC
Start Date 01 December 2023
End Date 30 November 2026
Duration 36 months
Total Grant Value £398,101
Industrial Sectors
Region London
Programme NC : ICT
 
Investigators Principal Investigator Dr I Bogunovic, Electronic and Electrical Engineering, University College London (100.000%)
  Industrial Collaborator Project Contact, EURATOM/CCFE (0.000%)
Web Site
Objectives
Abstract Reinforcement learning (RL) is concerned with training data-driven agents to make decisions. In particular, an RL agent interacting with an environment must learn an optimal policy, i.e., which actions to take in different states to maximize its rewards. RL has recently become one of the most prominent areas of machine learning, since RL methods have tremendous potential for solving complex tasks across fields such as autonomous driving, nuclear fusion, healthcare, and hardware design. However, a number of challenges still stand in the way of its widespread adoption. Contemporary RL algorithms are often data-intensive and lack robustness guarantees. Established (deep) RL approaches require vast amounts of data, which are readily available in some environments (e.g., video games) but often not in real-world tasks, where data acquisition is costly. Another major challenge is deploying learned control policies in the real world while ensuring reliable, robust, and safe performance.

This research aims to provide practical model-based RL algorithms with rigorous statistical and robustness guarantees. This is significant in safety-critical applications where obtaining data is expensive; in nuclear fusion, for example, policies for controlling plasmas are learned via expensive simulators. The key novelty will be to incorporate versatile robustness aspects into model-based RL, enabling its broad adoption across domains. The project focuses on designing algorithms that use powerful non-linear statistical models to learn about the world and can tackle the large state spaces present in modern RL tasks. The focus is on obtaining near-optimal policies that are robust against distributional shifts in the environment dynamics and (adversarial) data corruptions/outliers, and that satisfy application-dependent safety constraints during exploration.
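As a purely illustrative sketch of the model-based RL setting described in the abstract (not the project's algorithms), the snippet below fits a tabular transition model from experience and then plans on the learned model with value iteration. The toy chain MDP and all names are invented for illustration.

```python
# Hypothetical sketch of tabular model-based RL: estimate a transition model
# from experience, then plan (value iteration) on the learned model.
import numpy as np

rng = np.random.default_rng(0)

# A tiny 5-state chain MDP: action 1 moves right, action 0 moves left;
# landing on the last state yields reward 1.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

def true_step(s, a):
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == N_STATES - 1 else 0.0
    return s_next, reward

# 1) Collect experience and fit a count-based transition/reward model.
counts = np.zeros((N_STATES, N_ACTIONS, N_STATES))
rewards = np.zeros((N_STATES, N_ACTIONS))
for _ in range(2000):
    s = rng.integers(N_STATES)
    a = rng.integers(N_ACTIONS)
    s_next, r = true_step(s, a)
    counts[s, a, s_next] += 1
    rewards[s, a] = r  # rewards are deterministic here, so overwriting is fine

P_hat = counts / counts.sum(axis=2, keepdims=True)  # estimated dynamics

# 2) Plan on the learned model with value iteration.
V = np.zeros(N_STATES)
for _ in range(200):
    Q = rewards + GAMMA * (P_hat @ V)  # shape (states, actions)
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)  # greedy policy w.r.t. the learned model
```

In this toy setting the learned policy moves right in every state; the project's concern is obtaining analogous guarantees without enumerating states, under distributional shift and corrupted data.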
A major contribution will be novel, rigorous sample complexity guarantees for the designed algorithms that characterize convergence to optimal robust and safe policies. These guarantees will be efficient in the sense of being independent of the number of states, and hence applicable to complex applications. This will require designing new robust estimators and confidence intervals for popular statistical models. Moreover, the project will deliver a complete testbed of distributional shifts and attacking strategies for benchmarking the robustness of standard and novel robust RL algorithms. This project will be among the first contributions to achieve both robustness and efficiency in model-based RL by providing practical algorithms that can be readily applied to emerging, impactful real-world tasks such as robust control of fusion plasmas (an exciting and promising path toward sustainable energy) and efficient discovery of system-on-chip designs.
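To illustrate why robust estimators matter (this is a generic textbook example, not one of the project's estimators): with even a small fraction of adversarially corrupted observations, the empirical mean breaks down, while simple robust statistics such as the median or a trimmed mean remain close to the truth.

```python
# Illustrative only: empirical mean vs. robust estimates under 5% adversarial
# corruption of the observations.
import numpy as np

rng = np.random.default_rng(1)
true_mean = 0.5

samples = rng.normal(true_mean, 0.1, size=1000)
samples[:50] = 100.0           # 5% adversarial outliers

naive = samples.mean()         # dragged far from 0.5 by the outliers
med = np.median(samples)       # essentially unaffected

# 10%-trimmed mean: discard the 100 smallest and 100 largest values.
s = np.sort(samples)
trimmed = s[100:-100].mean()
```

Here `naive` lands far above 0.5, while `med` and `trimmed` stay near it; designing estimators with this kind of resilience, together with valid confidence intervals, for the non-linear statistical models used in model-based RL is one of the project's stated technical challenges.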
Publications (none)
Final Report (none)
Added to Database 17/04/24