Invited Speakers

You will find our list of invited speakers with their bios, talk titles and abstracts on this page.

All speakers listed on this page have been confirmed.

Prof. Gagandeep Singh (UIUC)

Gagandeep Singh is an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). He also holds an Affiliated Researcher position with VMware Research. His current focus is on combining ideas from Formal Logic, Machine Learning, and Systems research to construct intelligent computer systems with formal guarantees about their behavior and safety. He obtained a PhD in Computer Science from ETH Zurich in 2020, working with Prof. Markus Püschel and Prof. Martin Vechev. He completed a Master's in Computer Science at ETH in 2014, receiving the ETH Master Medal, and a Bachelor's in Computer Science and Engineering from IIT Patna in 2012, receiving the President of India Gold Medal. Gagandeep co-received the ACM SIGPLAN Doctoral Dissertation Award for his work on scalable and precise automated reasoning methods and tools for programs and deep neural networks.

Invited Talk: Trust and Safety with Certified AI

Real-world adoption of deep neural networks (DNNs) in critical applications requires ensuring strong generalization beyond testing datasets. Unfortunately, the standard practice of measuring DNN performance on a finite set of test inputs cannot ensure DNN safety on inputs in the wild. In this talk, I will focus on how certified AI can be leveraged as a service to bridge this gap by building DNNs with strong generalization on an infinite set of unseen inputs. In the process, I will discuss some of our recent work for building trust and safety in diverse domains such as vision, systems, finance, and more. I will also describe a path toward making certified AI more scalable, easy to develop, and accessible to DNN developers lacking formal backgrounds.

Prof. Aws Albarghouthi (UW-Madison)

Aws Albarghouthi is an Associate Professor at the University of Wisconsin-Madison. He studies the art and science of program verification and synthesis. He is a member of the madPL group, which studies problems in automated verification and synthesis. Over the past few years, the group has applied formal methods and programming-language techniques to problems in data privacy, fairness, machine-learning correctness, human-robot interaction, and beyond.

Invited Talk: Certifying Robustness: From Training to Inference (joint with Yuhao Zhang)

Researchers have demonstrated that the machine-learning pipeline is susceptible to attacks at both training and inference time, including poisoning, backdoor, and evasion attacks. In this talk, we will describe new results on holistic approaches for certifying robustness. Our techniques draw upon ideas from test-time certification and ensembling to simultaneously establish formal robustness guarantees for both training and inference.

Yuhao Zhang (UW-Madison)

Yuhao Zhang is a final-year PhD student in computer science at the University of Wisconsin–Madison, under the supervision of Loris D'Antoni and Aws Albarghouthi. His research focuses on software engineering, programming languages, and deep learning. One of his primary objectives is to develop formal methods for trustworthy deep learning. He is particularly interested in developing formal methods that can certify and enhance the robustness of neural networks, as well as verify the correctness of deep learning software.

Invited Talk: Certifying Robustness: From Training to Inference (joint with Prof. Aws Albarghouthi)


Prof. Armando Solar-Lezama (MIT)

Armando Solar-Lezama is a Professor of Electrical Engineering and Computer Science and Associate Director and COO of the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He is best known for his work on program synthesis and the development of the Sketch program synthesis system. He is currently the lead PI of the NSF-funded Expeditions project “Understanding the World through Code” and is also the founder of playskript.com, an online platform for creating interactive presentations.

Invited Talk: Neurosymbolic Learning as a Path to Learning with Guarantees

Learning representations that combine program structure with neural components can help provide a path to learned systems with stronger safety guarantees. In this talk, I will describe some of our recent work on learning with program representations and its connection to safety and verification. 

Prof. Marta Kwiatkowska (Oxford)

Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford, and Associate Head of MPLS. She spearheaded the development of probabilistic and quantitative methods in verification on the international scene and is currently working on safety and robustness for machine learning and AI. She led the development of the PRISM model checker, the leading software tool in the area, widely used for research and teaching and winner of the HVC 2016 Award. Applications of probabilistic model checking have spanned communication and security protocols, nanotechnology designs, power management, game theory, planning, and systems biology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska gave the Milner Lecture in 2012 in recognition of "excellent and original theoretical work which has a perceived significance for practical computing". In 2018 she became the first female winner of the Royal Society Milner Award and Lecture, and she won the BCS Lovelace Medal in 2019. She was invited to give keynotes at the LICS 2003, ESEC/FSE 2007 and 2019, ETAPS/FASE 2011, ATVA 2013, ICALP 2016, CAV 2017, CONCUR 2019, and UbiComp 2019 conferences.

Invited Talk: Robustness Guarantees for Bayesian Neural Networks

Bayesian neural networks (BNNs), a family of neural networks with a probability distribution placed on their weights, have the advantage of being able to reason about uncertainty in their predictions as well as data. Their deployment in safety-critical applications demands rigorous robustness guarantees. This talk summarises recent progress in developing algorithmic methods to ensure certifiable safety and robustness guarantees for BNNs, with the view to support design automation for systems incorporating BNN components.
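As background for readers new to this setting, the minimal Python sketch below illustrates one simple way to estimate probabilistic robustness for a BNN: sample weight realisations from an assumed Gaussian posterior and certify each sampled deterministic network on an L-infinity ball around an input via interval bound propagation. The toy two-layer network, the posterior, and all parameter values are illustrative assumptions, not the algorithms presented in the talk.

# Minimal sketch: sampling-based estimate of the probability that a BNN is
# certifiably robust on an L-infinity ball around x. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def ibp_linear(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> W @ x + b.
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def certified_robust(x, eps, layers, target):
    # True if every input within eps of x keeps the target logit highest.
    lo, hi = x - eps, x + eps
    (W1, b1), (W2, b2) = layers
    lo, hi = ibp_linear(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU
    lo, hi = ibp_linear(lo, hi, W2, b2)
    others = np.delete(hi, target)
    return lo[target] > others.max()

# Assumed posterior: independent Gaussians around toy mean weights.
mean_layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
               (rng.normal(size=(3, 8)), np.zeros(3))]
posterior_std = 0.05

x, eps, target, n_samples = rng.normal(size=4), 0.01, 0, 200
certified = 0
for _ in range(n_samples):
    sample = [(W + posterior_std * rng.normal(size=W.shape),
               b + posterior_std * rng.normal(size=b.shape))
              for W, b in mean_layers]
    certified += certified_robust(x, eps, sample, target)

print(f"estimated probability of certified robustness: {certified / n_samples:.2f}")

The fraction of certified weight samples gives a sampling-based estimate of the probability, over the weight posterior, that the network is provably robust at the given input.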

Prof. Sam Coogan (GA Tech)

Sam Coogan is an Associate Professor at Georgia Tech. He received the B.S. degree in Electrical Engineering from Georgia Tech and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Berkeley. In 2015, he was a postdoctoral research engineer at Sensys Networks, Inc., and in 2012 he spent time at NASA's Jet Propulsion Lab. Before joining Georgia Tech in 2017, he was an assistant professor in the Electrical Engineering department at UCLA from 2015 to 2017. His awards and recognitions include the 2020 Donald P Eckman Award from the American Automatic Control Council recognizing "an outstanding young engineer in the field of automatic control", a Young Investigator Award from the Air Force Office of Scientific Research in 2019, a CAREER Award from the National Science Foundation in 2018, and the Outstanding Paper Award for the IEEE Transactions on Control of Network Systems in 2017.

Invited Talk: Contraction-guided safety and reachability analysis of neural network controlled systems

In this talk, we study safety properties of a dynamical system in feedback with a neural network controller from a reachability perspective. We first embed the closed-loop dynamics into a larger system using the theory of mixed monotone dynamical systems with favorable control-theoretic properties. In particular, we show that hyper-rectangular over-approximations of the reachable sets can be computed efficiently using a single trajectory of the embedding system. Numerically computing this trajectory requires bounds on the input-output behavior of the neural network controller, which we obtain via carefully selected and infrequent queries to an oracle. We assume the oracle provides these input-output bounds as intervals or as affine bounding functions, which is common for many state-of-the-art methods. Moreover, we show that, if this embedding system is constructed in a certain way, the contraction rate of the embedding system is the same as that of the original closed-loop system. Thus, this embedding provides a scalable approach to reachability analysis of the neural network control loop while preserving the nonlinear structure of the system. We design an algorithm to leverage this computational advantage through partitioning strategies, improving our reachable set estimates while balancing runtime with tunable parameters.
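As a rough, self-contained illustration of the embedding idea (not the method presented in the talk), the Python sketch below constructs the 2n-dimensional embedding system for a toy linear system in feedback with a controller whose output is only known through interval bounds returned by a hypothetical oracle; integrating a single trajectory of the embedding yields a hyper-rectangular over-approximation of the reachable set. The dynamics, the decomposition, the controller, and the query schedule are all illustrative assumptions.

# Minimal sketch: interval reachability via a mixed-monotone embedding system
# for dx/dt = A x + B u with interval-bounded controller output. Illustrative only.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])

# Decomposition of A: diagonal and nonnegative off-diagonal entries act on the
# "own" argument, negative off-diagonal entries act on the "opposite" argument,
# so that decomp(x, x, u, u) = A x + B u.
off_diag = A - np.diag(np.diag(A))
A_own = np.diag(np.diag(A)) + np.maximum(off_diag, 0.0)
A_opp = np.minimum(off_diag, 0.0)
B_pos, B_neg = np.maximum(B, 0.0), np.minimum(B, 0.0)

def decomp(x, x_opp, u, u_opp):
    return A_own @ x + A_opp @ x_opp + B_pos @ u + B_neg @ u_opp

def controller_bounds(x_lo, x_hi):
    # Hypothetical oracle: interval bounds on a controller u = -K x over the
    # current state box (a real verifier would bound the neural network here).
    K = np.array([[1.0, 0.5]])
    c, r = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    u_c, u_r = -K @ c, np.abs(K) @ r
    return u_c - u_r, u_c + u_r

# A single Euler-integrated trajectory of the 2n-dimensional embedding system
# carries the lower and upper corners of the reachable-set over-approximation.
lo, hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])   # initial state box
dt, steps, query_every = 0.01, 200, 10
for k in range(steps):
    if k % query_every == 0:
        # Infrequent oracle queries; a careful implementation must keep these
        # bounds valid until the next query.
        u_lo, u_hi = controller_bounds(lo, hi)
    d_lo = decomp(lo, hi, u_lo, u_hi)
    d_hi = decomp(hi, lo, u_hi, u_lo)
    lo, hi = lo + dt * d_lo, hi + dt * d_hi

print("box over-approximating the reachable set at t = 2.0:")
print("lower corner:", lo, "upper corner:", hi)

Partitioning the initial box and running one embedding trajectory per piece, in the spirit of the partitioning strategies mentioned in the abstract, tightens the over-approximation at the cost of additional oracle queries.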

Prof. Chuchu Fan (MIT)

Chuchu Fan is an Assistant Professor in the Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS) at MIT. Before that, she was a postdoctoral researcher at Caltech and received her Ph.D. from the ECE department at the University of Illinois at Urbana-Champaign. Her research group Realm at MIT works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems. Chuchu is the recipient of an NSF CAREER Award, an AFOSR Young Investigator Program (YIP) Award, and the 2020 ACM Doctoral Dissertation Award.

Invited Talk: Density of Reachable States for Safe Autonomous Motion Planning

State density distribution, in contrast to worst-case reachability, can be leveraged for safety-related problems to better quantify the likelihood of potentially hazardous situations. We developed a data-driven method to compute the density distribution of reachable states for nonlinear and even black-box systems. Our approach can estimate the set of all possible future states as well as their density, and it supports online safety verification with probability ranges for unsafe behaviors. We show that our approach learns the density distribution of the reachable set more accurately with less data, and quantifies risks less conservatively and more flexibly, compared with worst-case analysis. We also study the use of this approach in combination with model predictive control for verifiably safe path planning under uncertainties.
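For contrast with the data-driven estimator described above, the Python sketch below computes a naive Monte Carlo baseline for the reachable-state density of a toy nonlinear system: sample initial states, simulate forward, and fit a kernel density estimate to the final states. The Van der Pol dynamics, horizon, sample counts, and "unsafe" box are illustrative assumptions; the talk's approach targets the same quantities with a learned estimator rather than brute-force simulation.

# Minimal sketch: Monte Carlo baseline for the density of reachable states
# of a toy nonlinear system. Illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def vdp_step(x, dt=0.01, mu=1.0):
    # One Euler step of the Van der Pol oscillator applied to a batch of states.
    dx = np.stack([x[:, 1], mu * (1 - x[:, 0] ** 2) * x[:, 1] - x[:, 0]], axis=1)
    return x + dt * dx

# Sample initial states uniformly from a small box and simulate to t = 2.0.
n_samples, steps = 5000, 200
x = rng.uniform(low=[0.9, -0.1], high=[1.1, 0.1], size=(n_samples, 2))
for _ in range(steps):
    x = vdp_step(x)

# Kernel density estimate of the reachable-state distribution at t = 2.0.
kde = gaussian_kde(x.T)

# Empirical probability that the state lies in a hypothetical unsafe box.
unsafe = (x[:, 0] > 1.5) & (x[:, 1] > 0.0)
print(f"estimated probability of entering the unsafe box: {unsafe.mean():.3f}")
print(f"peak estimated density among the samples: {kde(x.T).max():.3f}")

The empirical frequency of landing in the unsafe box plays the role of the probability range for unsafe behaviors mentioned in the abstract, here obtained by simulation rather than by the learned density model.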

Prof. Işıl Dillig (UT Austin)

Işıl Dillig is a Professor in the Computer Science Department of the University of Texas at Austin, where she leads the UToPiA research group. Her main research interests are program analysis and verification, program synthesis, and automated logical reasoning. She obtained all her degrees (BS, MS, PhD) at Stanford University. She is a 2015 Sloan Fellow and a recipient of an NSF CAREER award.