Poster sessions

Poster Session 1

Tuesday, July 16, 13:30 - 14:30

Paper Title Authors
Leveraging Hamilton-Jacobi PDEs with time-dependent Hamiltonians for continual scientific machine learning Paula Chen, Tingwei Meng, Zongren Zou, Jerome Darbon and George Em Karniadakis
Learning to Stabilize High-dimensional Unknown Systems Using Lyapunov-guided Exploration Songyuan Zhang and Chuchu Fan
Tracking Object Positions in Reinforcement Learning: A Metric for Keypoint Detection Emma Cramer, Jonas Reiher and Sebastian Trimpe
Controlgym: Large-Scale Control Environments for Benchmarking Reinforcement Learning Algorithms Xiangyuan Zhang, Weichao Mao, Saviz Mowlavi, Mouhacine Benosman and Tamer Basar
Physics-informed neural networks with unknown measurement noise Philipp Pilar and Niklas Wahlström
Adapting Image-based RL Policies via Predicted Rewards Weiyao Wang, Xinyuan Fang and Gregory Hager
Learning Soft Constrained MPC Value Functions: Efficient MPC Design and Implementation providing Stability and Safety Guarantees Nicolas Chatzikiriakos, Kim Peter Wabersich, Felix Berkel, Patricia Pauli and Andrea Iannelli
Learning-based Rigid Tube Model Predictive Control Yulong Gao, Shuhao Yan, Jian Zhou, Mark Cannon, Alessandro Abate and Karl Henrik Johansson
Uncertainty Quantification and Robustification of Model-based Controllers using Conformal Prediction Kong Yao Chee, Thales C. Silva, M. Ani Hsieh and George J. Pappas
Learning for CasADi: Data-driven Models in Numerical Optimization Tim Salzmann, Jon Arrizabalaga, Joel Andersson, Marco Pavone and Markus Ryll
Learning flow functions of spiking systems Miguel Aguiar, Amritam Das and Karl H. Johansson
Combining Model-based Controller and ML Advice via Convex Reparameterization Junxuan Shen, Adam Wierman and Guannan Qu
Random Features Approximation for Control-Affine Systems Kimia Kazemian, Yahya Sattar and Sarah Dean
Safe Multi-Task Bayesian Optimization Jannis Lübsen, Christian Hespe and Annika Eichler
Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off Yatong Bai, Brendon G. Anderson and Somayeh Sojoudi
Probabilistic ODE Solvers for Integration Error-Aware Model Predictive Control Amon Lahr, Filip Tronarp, Nathanael Bosch, Jonathan Schmidt, Philipp Hennig and Melanie N. Zeilinger
Event-Triggered Safe Bayesian Optimization on Quadcopters Antonia Holzapfel, Paul Brunzema and Sebastian Trimpe
On Task-Relevant Loss Functions in Meta-Reinforcement Learning and Online LQR Jaeuk Shin, Giho Kim, Howon Lee, Joonho Han and Insoon Yang
Multi-agent assignment via state augmented reinforcement learning Leopoldo Agorio, Sean Van Alen, Miguel Calvo-Fullana, Santiago Paternain and Juan Andrés Bazerque
Distributed Learning and Function Fusion in Reproducing Kernel Hilbert Space Aneesh Raghavan and Karl Henrik Johansson
From Raw Data to Safety: Reducing Conservatism by Set Expansion Mohammad Bajelani and Klaske Van Heusden
Multi-Modal Conformal Prediction Regions by Optimizing Convex Shape Templates Renukanandan Tumu, Matthew Cleaveland, Rahul Mangharam, George Pappas and Lars Lindemann
Increasing Information for Model Predictive Control with Semi-Markov Decision Processes Rémy Hosseinkhan Boucher, Stella Douka, Onofrio Semeraro and Lionel Mathelin
Multi-Agent Coverage Control with Transient Behavior Consideration Runyu Zhang, Haitong Ma and Na Li
Adaptive Learning from Demonstration in Heterogeneous Agents: Concurrent Minimization and Maximization of Surprise in Sparse Reward Environments Emma Clark, Kanghyun Ryu and Negar Mehr
DC4L: Distribution Shift Recovery via Data-Driven Control for Deep Learning Models Vivian Lin, Kuk Jin Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky and Insup Lee
A framework for evaluating human driver models using neuroimaging Christopher Strong, Kaylene Stocking, Jingqi Li, Tianjiao Zhang, Jack Gallant and Claire Tomlin
Robust Exploration with Adversary via Langevin Monte Carlo Hao-Lun Hsu and Miroslav Pajic
Wasserstein Distributionally Robust Regret-Optimal Control in the Infinite-Horizon Taylan Kargin, Joudi Hajar, Vikrant Malik and Babak Hassibi
Physics-Constrained Learning for PDE Systems with Uncertainty Quantified Port-Hamiltonian Models Kaiyuan Tan, Peilun Li and Thomas Beckers


Poster Session 2

Tuesday, July 16, 16:45 - 17:45

Paper Title Authors
Data-efficient, Explainable and Safe Box Manipulation: Illustrating the Advantages of Physical Priors in Model-Predictive Control Achkan Salehi and Stephane Doncieux
An Investigation of Time Reversal Symmetry in Reinforcement Learning Brett Barkley, Amy Zhang and David Fridovich-Keil
The Behavioral Toolbox Ivan Markovsky
Learning “Look-Ahead” Nonlocal Traffic Dynamics in a Ring Road Chenguang Zhao and Huan Yu
On the convergence of adaptive first order methods: proximal gradient and alternating minimization algorithms Puya Latafat, Andreas Themelis and Panagiotis Patrinos
Global Rewards in Multi-Agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems Heiko Hoppe, Tobias Enders, Quentin Cappart and Maximilian Schiffer
Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization Tanmay Gautam, Reid Pryzant, Ziyi Yang, Chenguang Zhu and Somayeh Sojoudi
Piecewise regression via mixed-integer programming for MPC Dieter Teichrib and Moritz Schulze Darup
On the Uniqueness of Solution for the Bellman Equation of LTL Objectives Zetong Xuan, Alper Bozkurt, Miroslav Pajic and Yu Wang
Nonconvex Scenario Optimization for Data-Driven Reachability Elizabeth Dietrich, Alex Devonport and Murat Arcak
Efficient Skill Acquisition for Insertion Tasks in Obstructed Environments Jun Yamada, Jack Collins and Ingmar Posner
An Invariant Information Geometric Method for High-Dimensional Online Optimization Zhengfei Zhang, Yunyue Wei and Yanan Sui
Pointwise-in-Time Diagnostics for Reinforcement Learning During Training and Runtime Noel Brindise, Andres Posada Moreno, Cedric Langbort and Sebastian Trimpe
Verification of Neural Reachable Tubes via Scenario Optimization and Conformal Prediction Albert Lin and Somil Bansal
Improving sample efficiency of high dimensional Bayesian optimization with MCMC Zeji Yi, Yunyue Wei, Chu Xin Cheng, Kaibo He and Yanan Sui
A Large Deviations Perspective on Policy Gradient Algorithms Wouter Jongeneel, Daniel Kuhn and Mengmeng Li
Deep model-free KKL observer: A switching approach Johan Peralez and Madiha Nadri
Parameterized Fast and Safe Tracking (FaSTrack) using DeepReach Hyun Joe Jeong, Zheng Gong, Somil Bansal and Sylvia Herbert
Convergence guarantees for adaptive model predictive control with kinky inference Riccardo Zuliani, Raffaele Soloperto and John Lygeros
CoVO-MPC: Theoretical Analysis of Sampling-based MPC and Optimal Covariance Design Zeji Yi, Chaoyi Pan, Guanqi He, Guannan Qu and Guanya Shi
A Learning-based Framework to Adapt Legged Robots On-the-fly to Unexpected Disturbances Nolan Fey, He Li, Nicholas Adrian, Patrick Wensing and Michael Lemmon
Mapping back and forth between model predictive control and neural networks Ross Drummond, Pablo Baldivieso and Giorgio Valmorbida
Recursively Feasible MPC in Dynamic Environments with Conformal Prediction Guarantees Charis Stamouli, Lars Lindemann and George Pappas
Physically Consistent Modeling & Identification of Nonlinear Friction with Dissipative Gaussian Processes Rui Dai, Giulio Evangelisti and Sandra Hirche
STEMFold: Stochastic Temporal Manifold for Multi-Agent Interactions in the Presence of Hidden Agents Hemant Kumawat, Biswadeep Chakraborty and Saibal Mukhopadhyay
A Deep Learning Approach for Distributed Aggregative Optimization with Users’ Feedback Riccardo Brumali, Guido Carnevale and Giuseppe Notarstefano
Deep Hankel matrices with random elements Nathan Lawrence, Philip Loewen, Shuyuan Wang, Michael Forbes and Bhushan Gopaluni
Generalized Safe Reinforcement Learning Weiqin Chen and Santiago Paternain
Do No Harm: A Counterfactual Approach to Safe Reinforcement Learning Sean Vaskov, Wilko Schwarting and Chris Baker
Pontryagin Neural Operator for Solving General-Sum Differential Games with Parametric State Constraints Lei Zhang, Mukesh Ghimire, Zhe Xu, Wenlong Zhang and Yi Ren
Proto-MPC: An Encoder-Prototype-Decoder Approach for Quadrotor Control in Challenging Winds Yuliang Gu, Sheng Cheng and Naira Hovakimyan
Restless Bandits with Rewards Generated by a Linear Gaussian Dynamical System Jonathan Gornet and Bruno Sinopoli


Poster Session 3

Wednesday, July 17, 13:00 - 14:00

Paper Title Authors
Gradient Shaping for Multi-Constraint Safe Reinforcement Learning Yihang Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu and Ding Zhao
Real-Time Safe Control of Neural Network Dynamic Models with Sound Approximation Hanjiang Hu, Jianglin Lan and Changliu Liu
Linearised Data-Driven LSTM-Based Control of Multi-Input HVAC Systems Andreas Hinderyckx and Florence Guillaume
Safe Online Convex Optimization with Multi-Point Feedback Spencer Hutchinson and Mahnoosh Alizadeh
Interpretable Data-Driven Model Predictive Control of Building Energy Systems using SHAP Patrick Henkel, Tobias Kasperski, Phillip Stoffel and Dirk Müller
Adaptive Online Non-stochastic Control Naram Mhaisen and George Iosifidis
An efficient data-based off-policy Q-learning algorithm for optimal output feedback control of linear systems Mohammad Alsalti, Victor G. Lopez and Matthias A. Müller
Bridging the Gaps: Learning Verifiable Model-Free Quadratic Programming Controllers Inspired by Model Predictive Control Yiwen Lu, Zishuo Li, Yihan Zhou, Na Li and Yilin Mo
Decision Boundary Learning For Safe Vision-based Navigation via Hamilton-Jacobi Reachability Analysis and Support Vector Machine Tara Toufighi, Minh Bui, Rakesh Shrestha and Mo Chen
Online Decision Making with History-Average Dependent Costs Vijeth Hebbar and Cedric Langbort
A Data-driven Riccati Equation Anders Rantzer
Neural Operators for Boundary Stabilization of Stop-and-go Traffic Yihuai Zhang, Ruiguo Zhong and Huan Yu
Balanced Reward-inspired Reinforcement Learning for Autonomous Vehicle Racing Zhen Tian, Dezong Zhao, Zhihao Lin, David Flynn, Wenjing Zhao and Daxin Tian
Expert with Clustering: Hierarchical Online Preference Learning Framework Tianyue Zhou, Jung-Hoon Cho, Babak Rahimi Ardabili, Hamed Tabkhi and Cathy Wu
Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security Marsalis Gibson, David Babazadeh, Claire Tomlin and Shankar Sastry
SpOiLer: Offline Reinforcement Learning using Scaled Penalties Padmanaba Srinivasan and William J. Knottenbelt
Design of observer-based finite-time control for inductively coupled power transfer system with random gain fluctuations Satheesh Thangavel and Sakthivel Rathinasamy
Convex Approximations for a Bi-level Formulation of Data-Enabled Predictive Control Xu Shang and Yang Zheng
Towards Bio-Inspired Control of Aerial Vehicle: Distributed Aerodynamic Parameters for State Prediction Yikang Wang, Adolfo Perrusquia and Dmitry Ignatyev
Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning Mohammadreza Nakhaeinezhadfard, Aidan Scannell and Joni Pajarinen
Stable Modular Control via Contraction Theory for Reinforcement Learning Bing Song, Jean-Jacques Slotine and Quang-Cuong Pham
State-Wise Safe Reinforcement Learning with Pixel Observations Sinong Zhan, Yixuan Wang, Qingyuan Wu, Ruochen Jiao, Chao Huang and Qi Zhu
PlanNetX: Learning an Efficient Neural Network Planner from MPC for Longitudinal Control Jasper Hoffmann, Diego Fernandez Clausen, Julien Brosseit, Julian Bernhard, Klemens Esterle, Moritz Werling, Michael Karg and Joschka Bödecker
Safety Filters for Black-Box Dynamical Systems by Learning Discriminating Hyperplanes Will Lavanakul, Jason Choi, Koushil Sreenath and Claire Tomlin
Lagrangian Inspired Polynomial Estimator for black-box learning and control of underactuated systems Giulio Giacomuzzo, Riccardo Cescon, Diego Romeres, Ruggero Carli and Alberto Dalla Libera
Convex neural network synthesis for robustness in the 1-norm Ross Drummond, Chris Guiver and Matthew Turner
CACTO-SL: Using Sobolev Learning to improve Continuous Actor-Critic with Trajectory Optimization Elisa Alboni, Gianluigi Grandesso, Gastone Pietro Rosati Papini, Justin Carpentier and Andrea Del Prete
Data-Driven Simulator for Mechanical Circulatory Support with Domain Adversarial Neural Process Sophia Sun, Wenyuan Chen, Zihao Zhou, Sonia Fereidooni, Elise Jortberg and Rose Yu
Data-Driven Strategy Synthesis for Stochastic Systems with Unknown Nonlinear Disturbances Ibon Gracia, Dimitris Boskos, Luca Laurenti and Morteza Lahijanian
Probably approximately correct stability of allocations in uncertain coalitional games with private sampling George Pantazis, Filiberto Fele, Filippo Fabiani, Sergio Grammatico and Kostas Margellos
Neural Processes with Event Triggers for Fast Adaptation to Changes Paul Brunzema, Paul Kruse and Sebastian Trimpe


Poster Session 4

Wednesday, July 17, 16:15 - 17:15

Paper Title Authors
Continual Learning of Multi-modal Dynamics with External Memory Abdullah Akgül, Gozde Unal and Melih Kandemir
HSVI-based Online Minimax Strategies for Partially Observable Stochastic Games with Neural Perception Mechanisms Rui Yan, Gabriel Santos, Gethin Norman, David Parker and Marta Kwiatkowska
Safe Dynamic Pricing for Nonstationary Network Resource Allocation Berkay Turan, Spencer Hutchinson and Mahnoosh Alizadeh
Strengthened stability analysis of discrete-time Lurie systems involving ReLU neural networks Carl Richardson, Matthew Turner and Steve Gunn
Minimax dual control with finite-dimensional information state Olle Kjellqvist
Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin Riedmiller and Jonas Buchli
Understanding the Difficulty of Solving Cauchy Problems with PINNs Tao Wang, Bo Zhao, Sicun Gao and Rose Yu
Signatures Meet Dynamic Programming: Generalizing Bellman Equations for Trajectory Following Motoya Ohnishi, Iretiayo Akinola, Jie Xu, Ajay Mandlekar and Fabio Ramos
Submodular Information Selection for Hypothesis Testing with Misclassification Penalties Jayanth Bhargav, Mahsa Ghasemi and Shreyas Sundaram
Learning and Deploying Robust Locomotion Policies with Minimal Dynamics Randomization Luigi Campanaro, Siddhant Gangapurwala, Wolfgang Merkt and Ioannis Havoutis
Safe Learning in Nonlinear Model Predictive Control Johannes Buerger, Mark Cannon and Martin Doff-Sotta
On the Nonsmooth Geometry and Neural Approximation of the Optimal Value Function of Infinite-Horizon Pendulum Swing-up Haoyu Han and Heng Yang
Robust Cooperative Multi-Agent Reinforcement Learning: A Mean-Field Type Game Perspective Muhammad Aneeq Uz Zaman, Mathieu Laurière, Alec Koppel and Tamer Başar
Uncertainty Informed Optimal Resource Allocation with Gaussian Process based Bayesian Inference Samarth Gupta and Saurabh Amin
Conditions for Parameter Unidentifiability of Linear ARX Systems for Enhancing Security Xiangyu Mao, Jianping He, Chengpu Yu and Chongrong Fang
Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods Feng-Yi Liao, Lijun Ding and Yang Zheng
Finite-Time Complexity of Incremental Policy Gradient Methods for Solving Multi-Task Reinforcement Learning Yitao Bai and Thinh Doan
PDE Control Gym: A Gym for Data-driven Control and Reinforcement Learning of Partial Differential Equations Luke Bhan, Yuexin Bian, Miroslav Krstic and Yuanyuan Shi
Data-Driven Bifurcation Analysis via Learning of Homeomorphism Wentao Tang
Learning True Objectives: Linear Algebraic Characterizations of Identifiability in Inverse Reinforcement Learning Mohamad Louai Shehab, Antoine Aspeel, Nikos Arechiga, Andrew Best and Necmiye Ozay
Dynamics Harmonic Analysis of Robotic Systems: Application in Data-Driven Koopman Modelling Daniel Ordonez-Apraez, Vladimir Kostic, Giulio Turrisi, Pietro Novelli, Carlos Mastalli, Claudio Semini and Massimiliano Pontil
Learning Locally Interacting Discrete Dynamical Systems: Towards Data-Efficient and Scalable Prediction Beomseok Kang, Harshit Kumar, Minah Lee, Biswadeep Chakraborty and Saibal Mukhopadhyay
How Safe Am I Given What I See? Calibrated Prediction of Safety Chances for Image-Controlled Autonomy Zhenjiang Mao, Carson Sobolewski and Ivan Ruchkin
Distributed On-the-Fly Control of Multi-Agent Systems With Unknown Dynamics: Using Limited Data to Obtain Near-optimal Control Shayan Meshkat Alsadat, Nasim Baharisangari and Zhe Xu
Can a Transformer Represent a Kalman Filter? Gautam Goel and Peter Bartlett
QCQP-Net: Reliably Learning Feasible Alternating Current Optimal Power Flow Solutions Under Constraints Sihan Zeng, Youngdae Kim, Yuxuan Ren and Kibaek Kim
Growing Q-Networks: Increasing Control Resolution via Decoupled Q-learning and Action Masking Tim Seyde, Peter Werner, Wilko Schwarting, Markus Wulfmeier and Daniela Rus
Hamiltonian GAN Christine Allen-Blanchette
Reinforcement Learning-Driven Parametric Curve Fitting for Snake Robot Gait Design Jack Naish, Jacob Rodriguez, Jenny Zhang, Bryson Jones, Guglielmo Daddi, Andrew Orekhov, Rob Royce, Michael Paton, Howie Choset, Masahiro Ono and Rohan Thakker
Adaptive neural network based control approach for building energy control under changing environmental conditions Lilli Frison and Simon Gölzhäuser


Poster Instructions

Please prepare your poster in A0 portrait format, print it, and bring it with you. We will not be able to print posters on site.

Please follow these preparation guidelines closely: the display frames hold posters by sliding them in through side bars, so pins cannot be used. Posters should also be printed on paper rather than fabric, as fabric tends to slump in the frames.