Title: Sample-efficient reinforcement learning for CERN accelerator control
Authors: Kain, Verena
Hirlander, Simon
Goddard, Brennan
Velotti, Francesco Maria
Zevi Della Porta, Giovanni
Bruchon, Niky
Valentino, Gianluca
Keywords: Reinforcement learning
Particle accelerators
Issue Date: 2020-12
Publisher: American Physical Society
Citation: Kain, V., Hirlander, S., Goddard, B., Velotti, F. M., Zevi Della Porta, G., Bruchon, N., & Valentino, G. (2020). Sample-efficient reinforcement learning for CERN accelerator control. Physical Review Accelerators and Beams, 23(12), 124801.
Abstract: Numerical optimization algorithms are already established tools to increase and stabilize the performance of particle accelerators. These algorithms have many advantages, are available out of the box, and can be adapted to a wide range of optimization problems in accelerator operation. The next boost in efficiency is expected to come from reinforcement learning algorithms that learn the optimal policy for a certain control problem and hence, once trained, can do without the time-consuming exploration phase needed for numerical optimizers. To investigate this approach, continuous model-free reinforcement learning with up to 16 degrees of freedom was developed and successfully tested at various facilities at CERN. The approach and algorithms used are discussed and the results obtained for trajectory steering at the AWAKE electron line and LINAC4 are presented. The necessary next steps, such as uncertainty-aware model-based approaches, and the potential for future applications at particle accelerators are addressed.
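To make the control problem in the abstract concrete, the sketch below poses trajectory steering as a continuous control task: the state is the set of beam position monitor (BPM) readings, the action is a change to the corrector magnet settings, and a good policy drives the RMS trajectory offset toward zero. This is a toy illustration only, not the paper's actual environment or algorithm; the linear response matrix `R`, the 16-dimensional sizing, and the pseudo-inverse "policy" (which a well-trained agent on a linear machine would approximate) are all assumptions made for the example.

```python
import numpy as np

# Toy model (assumption, not the paper's setup): beam positions at the
# BPMs respond approximately linearly to corrector settings via a
# response matrix R. State = BPM readings, action = corrector deltas.
rng = np.random.default_rng(0)
N_DOF = 16  # up to 16 degrees of freedom, as in the abstract

R = rng.normal(size=(N_DOF, N_DOF))  # hypothetical response matrix

def read_bpms(correctors):
    """Return BPM readings for the given corrector settings (noiseless toy)."""
    return R @ correctors

def rms(x):
    """RMS trajectory offset -- the negative of this serves as the reward."""
    return float(np.sqrt(np.mean(x ** 2)))

def policy(bpm_readings):
    """Stand-in for a trained RL policy: map BPM readings to corrector moves.

    On a linear machine the optimal single-step correction is the
    (pseudo-)inverse of the response matrix applied to the measured offsets;
    a converged agent would approximate this mapping.
    """
    return -np.linalg.pinv(R) @ bpm_readings

correctors = rng.normal(size=N_DOF)       # badly steered initial machine
before = rms(read_bpms(correctors))
correctors = correctors + policy(read_bpms(correctors))
after = rms(read_bpms(correctors))
print(f"RMS offset: {before:.3f} -> {after:.2e}")
```

The point of the exercise is the one the abstract makes: once the state-to-action mapping has been learned, correction takes a single step per episode, with no per-use exploration phase of the kind a numerical optimizer needs.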
Appears in Collections: Scholarly Works - FacICTCCE

Files in This Item:
File: PhysRevAccelBeams.23.124801.pdf (1.41 MB, Adobe PDF)

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.