Tutorials

Following the tradition initiated with L4DC 2023, this year we are pleased to offer a series of pre-conference tutorials that will run on Monday, July 15th. These tutorials aim to offer a gentle introduction to key topics anticipated to be of significant interest to the L4DC community. This year, three tutorials are offered - one in the morning in a plenary format, and two in the afternoon in a semi-plenary format - as detailed below. These sessions will cover optimization, machine learning, and systems and control - the three primary scientific areas that L4DC aims to unite. Note: participation in all tutorials is included in the registration fee.

 

Distributionally Robust Optimization for Control

The first part of the tutorial will focus on two recent topics in Markov decision processes (MDPs). We will first discuss the construction of data-driven MDPs that combine the tasks of estimating the system’s behavior and selecting a policy that performs well out of sample. We will then discuss the exploitation of problem structure to scale to large problem sizes via weakly coupled MDPs, which combine a potentially large number of MDPs through a small number of linking constraints. The second part of the tutorial will provide an introduction to distributionally robust optimization with Wasserstein ambiguity sets. In particular, we will address robust Linear-Quadratic-Gaussian (LQG) control problems, where the noise distributions are unknown and belong to Wasserstein ambiguity sets centered at nominal Gaussian distributions. We will derive structural properties of the optimal primal and dual solutions and develop an efficient Frank-Wolfe algorithm to solve robust LQG problems.
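To fix ideas, a robust LQG problem of the type addressed in the second part can be sketched as follows (a minimal sketch with generic system matrices A, B and cost matrices Q, R; the exact formulation covered in the tutorial may differ, e.g., by including output feedback):

\[
\min_{\pi}\;\sup_{\mathbb{P}\in\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})}\;\mathbb{E}^{\mathbb{P}}\Bigg[\sum_{t=0}^{T} x_t^\top Q\, x_t + u_t^\top R\, u_t\Bigg]
\quad\text{s.t.}\quad x_{t+1}=A x_t + B u_t + w_t,\;\; u_t=\pi_t(x_0,\dots,x_t),
\]

where \(\mathbb{B}_{\varepsilon}(\widehat{\mathbb{P}})\) is a Wasserstein ball of radius \(\varepsilon\) centered at the nominal Gaussian noise distribution \(\widehat{\mathbb{P}}\), over which an adversary chooses the true distribution of the noise process \(\{w_t\}\).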

Daniel Kuhn (EPFL)
Wolfram Wiesemann (Imperial College London)

L1, Mathematical Institute

July 15, 2024

9:00 - 10:30 Lecture
10:30 - 11:00 Coffee break
11:00 - 12:30 Lecture

 

Learning under Requirements: Supervised and Reinforcement Learning with Constraints

Requirements are inherent to systems: systems are always defined as tradeoffs between multiple competing specifications, such as stability, safety, robustness, and efficiency. Learning to satisfy requirements is, however, antithetical to the standard ML practice of minimizing individual losses. To close this gap, we develop the theory and practice of constrained learning. This tutorial provides an overview of theoretical and algorithmic developments that show when and how it is possible to learn with constraints. We describe how theoretical guarantees and viable learning algorithms are hindered by the lack of convexity of the resulting optimization problems, and explain how a near-duality theory circumvents this challenge. Throughout the tutorial we explore supervised learning, robust learning, and reinforcement learning with constraints. We put emphasis on showcasing the breadth of potential applications by discussing fairness, robust classification, federated learning, learning under invariance, and safe reinforcement learning. Attendees will be prepared to start conducting research in this emerging frontier.
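As an illustration, constrained learning problems of this kind take the generic form below (a sketch with a hypothetical objective loss \(\ell_0\), constraint losses \(\ell_i\), and tolerances \(c_i\), chosen here purely for exposition):

\[
P^\star \;=\; \min_{\theta}\; \mathbb{E}\big[\ell_0\big(f_\theta(x),y\big)\big]
\quad\text{s.t.}\quad \mathbb{E}\big[\ell_i\big(f_\theta(x),y\big)\big]\le c_i,\quad i=1,\dots,m,
\]

which is typically tackled through its Lagrangian dual,

\[
D^\star \;=\; \max_{\lambda\ge 0}\;\min_{\theta}\; \mathbb{E}\big[\ell_0\big(f_\theta(x),y\big)\big] + \sum_{i=1}^{m}\lambda_i\Big(\mathbb{E}\big[\ell_i\big(f_\theta(x),y\big)\big]-c_i\Big),
\]

e.g., by alternating stochastic descent steps on \(\theta\) with ascent steps on the multipliers \(\lambda\). Weak duality gives \(D^\star\le P^\star\); results of the kind discussed in the tutorial characterize when this gap is small despite the lack of convexity in \(\theta\).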

Further information here: https://luizchamon.com/l4dc/

Alejandro Ribeiro (Penn)
Miguel Calvo-Fullana (Universitat Pompeu Fabra)
Luiz Chamon (Stuttgart)
Santiago Paternain (Rensselaer Polytechnic Institute)

L1, Mathematical Institute

July 15, 2024

14:00 - 15:30 Lecture 
15:30 - 16:00 Coffee Break
16:00 - 17:30 Lecture

 

Safety Filters for Control: Concepts, Theory and Practice

Autonomous systems impose numerous challenging requirements on the underlying control algorithms. In particular, while ensuring safety is often critical in the design, the complexity and uncertainty of the dynamics and the multifaceted nature of the control objective make it hard to achieve. Classical control methods are often ill-suited to such unstructured tasks with potentially conflicting safety specifications. While impressive progress has been achieved through learning-based policies, missing safety certificates often still prohibit the widespread application of these techniques outside of research environments. This tutorial presents safety filter techniques, which provide general and modular approaches to augment any control policy with safety guarantees in the form of constraint satisfaction. The first part provides a comprehensive overview of the three main classes of safety filters, introducing invariance-based methods relying on Hamilton-Jacobi reachability, control barrier functions, and predictive control techniques. The second part then focuses on extensions towards improved practical applicability, including data-driven enhancements and reduced-order models. The tutorial aims to provide a comprehensive introduction for researchers from different backgrounds, covering the main concepts and theoretical guarantees and addressing practical aspects, accompanied by application examples.
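As one concrete instance of these classes, a control-barrier-function filter minimally modifies a given (possibly learned) policy \(u_{\mathrm{learn}}\) by solving, at each state \(x\), a small optimization of the form (a sketch assuming control-affine dynamics \(\dot{x}=f(x)+g(x)u\); the notation is illustrative):

\[
u_{\mathrm{safe}}(x)\;=\;\arg\min_{u\in\mathcal{U}}\;\big\|u-u_{\mathrm{learn}}(x)\big\|^2
\quad\text{s.t.}\quad \nabla h(x)^\top\big(f(x)+g(x)\,u\big)\;\ge\;-\alpha\big(h(x)\big),
\]

where \(h\) is a control barrier function whose zero-superlevel set is the safe set and \(\alpha\) is a class-\(\mathcal{K}\) function. Hamilton-Jacobi and predictive safety filters replace this constraint with a reachability-based or receding-horizon invariance condition, respectively.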

Melanie Zeilinger (ETH Zurich)
Claire Tomlin (UC Berkeley)

  • Melanie Zeilinger, ETH Zurich
  • Jason Choi, UC Berkeley
  • Kim Wabersich, Robert Bosch GmbH
  • Max Cohen, Caltech

L3, Mathematical Institute

July 15, 2024

14:00 - 15:30 Lecture 
15:30 - 16:00 Coffee Break
16:00 - 17:30 Lecture