Invited Speakers

You will find our invited speakers, with their bios, talk titles, and abstracts, on this page.

Prof. Somayeh Sojoudi (UC Berkeley)

Somayeh Sojoudi is an Assistant Professor in the Departments of Electrical Engineering & Computer Sciences and Mechanical Engineering at the University of California, Berkeley. She is an Associate Editor of IEEE Transactions on Smart Grid, Systems & Control Letters, IEEE Access, and the IEEE Open Journal of Control Systems, and a member of the conference editorial board of the IEEE Control Systems Society. She has received several awards and honors, including the INFORMS Optimization Society Prize for Young Researchers, the INFORMS Energy Best Publication Award, the INFORMS Data Mining Best Paper Award, an NSF CAREER Award, and an ONR Young Investigator Award. She has also received several best student conference paper awards (as advisor or co-author) from the IEEE Control Systems Society.

Invited Talk: Computational Methods for Non-convex Machine Learning Problems

We need efficient computational methods with provable guarantees that can cope with the complexity and high nonlinearity of many real-world systems. Practitioners often design heuristic algorithms tailored to specific applications, but the theoretical underpinnings of these methods remain poorly understood, which limits their use in safety-critical systems. In this talk, we address this issue for several machine learning problems. First, we study the problem of certifying the robustness of neural networks against adversarial inputs. Then we study when simple local search algorithms can solve a class of nonlinear problems to global optimality. We discuss our recent results on these problems and demonstrate them on tensor decomposition with outliers and on video processing.
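For readers unfamiliar with the certification problem, a minimal sketch of one classical (and much looser) certificate may help; this is not the approach from the talk, and the toy bias-free network and the radius formula below are illustrative assumptions. A global Lipschitz bound L on the network implies every logit difference changes by at most 2L·‖δ‖₂ under a perturbation δ, so the prediction provably survives any perturbation smaller than margin/(2L):

```python
import numpy as np

def certified_l2_radius(weights, x):
    """Certify the network's prediction at x via a global Lipschitz bound
    (a classical, sound-but-loose certificate; illustrative only).

    The product of spectral norms of the weight matrices upper-bounds the
    network's L2 Lipschitz constant L (ReLU is 1-Lipschitz).  Each logit
    difference is then 2L-Lipschitz, so the predicted label cannot change
    under any perturbation of L2 norm below margin / (2L)."""
    L = np.prod([np.linalg.norm(W, 2) for W in weights])  # spectral norms
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)        # hidden ReLU layers (no biases)
    logits = weights[-1] @ h
    y = int(np.argmax(logits))
    margin = logits[y] - np.max(np.delete(logits, y))
    return y, margin / (2.0 * L)

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
label, radius = certified_l2_radius(Ws, rng.standard_normal(4))
print(f"prediction {label} is provably unchanged within L2 radius {radius:.4f}")
```

Bound-propagation and branch-and-bound verifiers, discussed in the following talks, compute far tighter certificates at higher computational cost.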

Prof. Suman Jana (Columbia University)

Suman Jana is an Associate Professor in the Department of Computer Science at Columbia University. His primary research interests lie at the intersection of computer security and machine learning. More specifically, he is interested both in using machine learning to improve software security and in improving the security and reliability of machine learning models themselves. Suman’s work is well recognized in computer security; he has received the 2021 OSDI Best Paper Award, the 2019 NSF CAREER Award, the 2018 ARO Young Investigator Award, the 2017 SOSP Best Paper Award, a 2017 Google Faculty Research Award, the IEEE S&P 2014 Best Practical Paper Award, and the IEEE S&P 2012 Best Student Paper Award.

Invited Talk: Efficient Neural Network Verification using Branch and Bound

In this talk, I will describe two recent branch-and-bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier, independent subproblems by splitting input or hidden neurons; and (2) for each subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds on the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of activation-function nonlinearities (e.g., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach that jointly optimizes the corresponding Lagrange multipliers for each ReLU with gradient ascent. This approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimates than existing bound propagation methods but also fully leverage GPUs through massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
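To make the two steps concrete, here is a minimal, illustrative Python sketch on an assumed tiny random network: step (2) is plain interval bound propagation (far looser than the optimized linear relaxations of α- and β-CROWN, which this does not implement), and step (1) is a single split of one input dimension, which provably never loosens and usually tightens the bounds:

```python
import numpy as np

def ibp(weights, biases, lo, hi):
    """Sound but incomplete output bounds for a ReLU network on the input
    box [lo, hi], via plain interval arithmetic."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:              # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy 2 -> 4 -> 1 network (an assumption for illustration).
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
bs = [np.zeros(4), np.zeros(1)]
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

whole = ibp(Ws, bs, lo, hi)
# Step (1): split the first input dimension into two subproblems.  Interval
# arithmetic is inclusion-monotone, so the union of the subproblem bounds is
# never looser than the unsplit bound, and is usually strictly tighter.
mid = (lo[0] + hi[0]) / 2
left = ibp(Ws, bs, lo, np.array([mid, hi[1]]))
right = ibp(Ws, bs, np.array([mid, lo[1]]), hi)
union = (np.minimum(left[0], right[0]), np.maximum(left[1], right[1]))
print("unsplit bounds:", whole)
print("after one split:", union)
```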

Prof. Changliu Liu (CMU)

Changliu Liu is an Assistant Professor in the Robotics Institute at CMU, where she leads the Intelligent Control Lab. Prior to joining CMU in 2019, she was a postdoc at the Stanford Intelligent Systems Laboratory. She obtained her Ph.D. from UC Berkeley in 2017, where she worked in the Mechanical Systems & Control Lab, and received her bachelor’s degree from Tsinghua University in 2012. Changliu’s primary research focus is the design and verification of intelligent systems that work with people, with applications to manufacturing and transportation. She published the book “Designing Robot Behavior in Human-Robot Interactions” with CRC Press in 2019 and co-organized the International Verification of Neural Networks Competition (VNN-COMP) in 2020 and 2021.

Invited Talk: Applications of Neural Verification to Robotics

In this talk, I will discuss how algorithms for neural network verification can be used to solve robotics problems. I will first introduce our toolbox NeuralVerification.jl, followed by a discussion of two applications. The first is building verified models for human behavior prediction in human-robot systems. The second is developing safe controllers for dynamic models encoded in deep neural networks. To support real-time computation in robotics tasks, I will also discuss potential approaches for speeding up the verification algorithms.
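As a rough sketch of the second application, one can check one-step safety of a closed loop with interval arithmetic. Everything below is a made-up illustration in Python rather than material from the talk or from NeuralVerification.jl (a Julia toolbox): the linear controller K, the random “learned” dynamics network, and the safe set [-1, 1]² are assumptions.

```python
import numpy as np

def affine_relu_bounds(weights, biases, lo, hi):
    """Interval bounds through an affine/ReLU network (last layer linear)."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hypothetical pieces: a "learned" dynamics model f(x, u) -> x_next with
# state and control each 2-dimensional, a linear controller u = K x, an
# initial state box, and the safe set [-1, 1]^2.
rng = np.random.default_rng(1)
Ws = [0.3 * rng.standard_normal((8, 4)), 0.3 * rng.standard_normal((2, 8))]
bs = [np.zeros(8), np.zeros(2)]
K = 0.5 * np.eye(2)

x_lo, x_hi = np.array([-0.2, -0.2]), np.array([0.2, 0.2])
Kp, Kn = np.maximum(K, 0.0), np.minimum(K, 0.0)
u_lo, u_hi = Kp @ x_lo + Kn @ x_hi, Kp @ x_hi + Kn @ x_lo  # bounds on u = K x

# Bound the next state over all (x, u) in the box and test containment in
# the safe set; containment is a sound one-step safety certificate.
z_lo, z_hi = np.concatenate([x_lo, u_lo]), np.concatenate([x_hi, u_hi])
nx_lo, nx_hi = affine_relu_bounds(Ws, bs, z_lo, z_hi)
safe = bool(np.all(nx_lo >= -1.0) and np.all(nx_hi <= 1.0))
print("one-step safety certified" if safe else "inconclusive: bounds too loose")
```

A real safety analysis would iterate this reachability step over a horizon and use tighter bounding methods to limit the growth of the interval over-approximation.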

Prof. Gagandeep Singh (UIUC)

Gagandeep Singh is an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC). He also holds an Affiliated Researcher position with VMware Research. His current focus is on combining ideas from formal logic, machine learning, and systems research to construct intelligent computer systems with formal guarantees about their behavior and safety. He obtained a PhD in Computer Science from ETH Zurich in 2020, working with Prof. Markus Püschel and Prof. Martin Vechev; a Master’s in Computer Science from ETH in 2014, receiving the ETH Master Medal; and a Bachelor’s in Computer Science and Engineering from IIT Patna in 2012, receiving the President of India Gold Medal. Gagandeep co-received the ACM SIGPLAN Doctoral Dissertation Award for his work on scalable and precise automated reasoning methods and tools for programs and deep neural networks.

Invited Talk: Proof Sharing and Transfer for Boosting Neural Network Verification

The deployment pipeline of deep neural networks (DNNs) in safety-critical settings involves formally verifying their trustworthiness. Domain experts design a large number of specifications (~10-100K), typically defined for inputs in the test set, to cover DNN behavior under diverse real-world scenarios. If the DNN is proved trustworthy, it can be deployed; otherwise, it is repaired or retrained. While there has been a lot of work in recent years on developing precise and scalable DNN verifiers, existing verifiers are inefficient when applied inside this deployment pipeline, because the verifier must be run from scratch for every new specification and DNN.
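For concreteness, here is a hypothetical sketch of the pipeline described above; the specification format (L∞ robustness balls around test inputs), the make_specs helper, and the verify stub are all placeholders rather than details from the talk:

```python
import numpy as np

def make_specs(x_test, y_test, eps=0.01):
    """One local robustness specification per test input: inside the
    L-infinity ball of radius eps around x, the label must stay y."""
    return [{"lower": x - eps, "upper": x + eps, "label": int(y)}
            for x, y in zip(x_test, y_test)]

def verify(network, spec):
    """Stub for a complete DNN verifier (e.g., a BaB-based tool).  Here it
    merely marks every specification unverified; a real tool goes here."""
    return False

# 1,000 test inputs give 1,000 specifications (real pipelines use 10-100K).
rng = np.random.default_rng(0)
specs = make_specs(rng.random((1000, 784)), rng.integers(0, 10, 1000))

network = None  # stands in for the trained DNN under verification
# The naive loop below reruns the verifier from scratch on every single
# specification -- exactly the inefficiency described in the abstract.
verified = sum(verify(network, s) for s in specs)
print(f"{verified}/{len(specs)} specifications verified")
```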

Dr. M. Pawan Kumar (DeepMind)

M. Pawan Kumar is a research scientist at DeepMind. Prior to that, he was a faculty member in the Department of Engineering Science at the University of Oxford from 2015 to 2021, where he led the OVAL group, which focused on the design and analysis of optimization algorithms for problems arising in computer vision and machine learning. From 2012 to 2015, he was a faculty member in the Center for Visual Computing at Ecole Centrale Paris. His recent research focuses on reliable parameter estimation for deep neural networks and scalable neural network verification.

Invited Talk: Neural Networks for Neural Network Verification

Can neural networks help us better exploit the inherent structure in the neural network verification problem? In this talk, I will present some recent efforts to speed up the formal verification of neural networks using deep learning. Specifically, we use deep learning to make branching decisions for branch-and-bound, compute descent directions for bound computation, and reduce the search space for counterexample generation. The common methodology behind all these approaches is a graph neural network (GNN) whose architecture closely resembles the network we wish to verify. Using shared parameters, the GNN can be trained on smaller networks and tested on larger ones. Using standard verification tasks, I will show some promising results for our approach.
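As a loose illustration of the branching use case, the sketch below embeds a toy verification problem as a graph whose nodes are neurons, with pre-activation bounds as node features, and scores each neuron as a branching candidate via one round of message passing. The GNN here is untrained and randomly initialized, and the features, adjacency, and architecture are assumptions for illustration, not the model from the talk; what it does show is that parameter sharing across nodes makes the model size-agnostic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Node features: each hidden neuron's pre-activation bounds.  A ReLU with
# lo < 0 < hi is "unstable" and therefore a candidate for branching.
lo = rng.uniform(-1.0, 0.0, size=16)       # toy pre-activation lower bounds
hi = rng.uniform(0.0, 1.0, size=16)        # toy pre-activation upper bounds
feats = np.stack([lo, hi, hi - lo], axis=1)            # (16 nodes, 3 feats)

# One round of message passing over a toy connectivity graph.  The same
# (W_self, W_msg, w_out) parameters are used at every node, which is what
# lets a GNN trained on small networks run unchanged on larger ones.
A = (rng.random((16, 16)) < 0.2).astype(float)         # toy adjacency
W_self, W_msg = rng.standard_normal((3, 8)), rng.standard_normal((3, 8))
h = np.maximum(feats @ W_self + (A @ feats) @ W_msg, 0.0)  # node embeddings

w_out = rng.standard_normal(8)
scores = h @ w_out                         # one branching score per neuron
print("branch on neuron", int(np.argmax(scores)))
```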

Dr. Anton Dahbura (Johns Hopkins University)

Anton (Tony) Dahbura is the executive director of the Johns Hopkins University Information Security Institute, co-director of the Johns Hopkins Institute of Assured Autonomy, and an associate research scientist in computer science. His research focuses on security, fault-tolerant computing, distributed systems, and testing. From 1983 until 1996, he was a researcher at AT&T Bell Laboratories, was an invited lecturer in the Department of Computer Science at Princeton University, and served as research director of the Motorola Cambridge Research Center in Cambridge, Massachusetts. Since 1996, Dahbura has led several entrepreneurial efforts in the areas of printing, professional baseball operations, and commercial real estate. From 2000 to 2002, he served as chair of the Johns Hopkins University Engineering Alumni. He chaired the Johns Hopkins Computer Science Department Advisory Board and also served on the Johns Hopkins University Whiting School of Engineering National Advisory Council from 1998 until 2012. He received the Johns Hopkins Heritage Award in 2004 in recognition of his service to the university.

Invited Talk: Undeterminism and the AI Uncertainty Principle

As AI/ML-based systems are deployed in increasingly autonomous and safety-critical applications, there is growing concern, for many reasons, about how the ML will behave in the wild, perhaps most of all because the problems to which ML is applied are so complex that it is not possible to know in all cases what the ML will do. In this talk, I will introduce the concept of undeterminism, drawn from my experience developing techniques for communication protocol conformance testing, and argue that undeterminism yields inherent uncertainty. I will show why systems cannot have both certainty and ML simultaneously, and that there is a clear set of tradeoffs that should guide research in formal methods for ML for the foreseeable future.