Accepted Papers

Backward Reachability for Neural Feedback Loops

Nicholas Rober (MIT)*; Michael Everett (MIT); Jonathan How (MIT)

On Quantum Computing for Neural Network Robustness Verification

Nicola Franco (Fraunhofer IKS)*; Tom H Wollschläger (Technical University of Munich); Jeanette M Lorenz (Fraunhofer IKS); Stephan Günnemann (Technical University of Munich)

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

Alessandro De Palma (University of Oxford)*; Rudy Bunel (DeepMind); Krishnamurthy Dvijotham (DeepMind); M. Pawan Kumar (University of Oxford); Robert Stanforth (DeepMind)

Verification-friendly Networks: the Case for Parametric ReLUs

Patrick Henriksen (Imperial College London)*; Francesco Leofante (Imperial College London); Alessio Lomuscio (Imperial College London)

Formal Privacy Guarantees for Neural Network queries by estimating local Lipschitz constant

Abhishek Singh (MIT)*; Praneeth Vepakomma (MIT); Vivek Sharma (MIT); Ramesh Raskar (MIT)

Sound randomized smoothing in floating-point arithmetics

Václav Voráček (University of Tübingen)*; Matthias Hein (University of Tübingen)

Optimized Symbolic Interval Propagation for Neural Network Verification

Philipp D Kern (Karlsruhe Institute of Technology (KIT))*; Marko Kleine Büning (Karlsruhe Institute of Technology (KIT)); Carsten Sinz (Karlsruhe Institute of Technology (KIT))

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation

Wenxiao Wang (University of Maryland)*; Alexander J Levine (University of Maryland); Soheil Feizi (University of Maryland)

Sound and Complete Verification of Polynomial Networks

Elias Abad Rocamora (EPFL)*; Mehmet Fatih Sahin (EPFL); Fanghui Liu (EPFL); Grigorios Chrysos (EPFL); Volkan Cevher (EPFL)

Safety Verification and Repair of Deep Neural Networks

Xiaodong Yang (Vanderbilt University); Tomoya Yamaguchi (Toyota Motor North America); Bardh Hoxha (Toyota Research Institute North America); Danil Prokhorov (Toyota Research Institute); Taylor T Johnson (Vanderbilt University)*

Robustness Verification for Contrastive Learning

Zekai Wang (Wuhan University)*; Weiwei Liu (Wuhan University)

CertiFair: A Framework for Certified Global Fairness of Neural Networks

Haitham Khedr (University of California, Irvine)*; Yasser Shoukry (University of California, Irvine)

Neural Network Compression of ACAS Xu Early Prototype is Unsafe: Closed-Loop Verification through Quantized State Backreachability

Stanley Bak (Stony Brook University); Dung Tran (University of Nebraska-Lincoln)*

Towards Optimal Randomized Smoothing: A Semi-Infinite Linear Programming Approach

Brendon G Anderson (University of California, Berkeley)*; Samuel Pfrommer (University of California, Berkeley); Somayeh Sojoudi (University of California, Berkeley)

Programmatic Reinforcement Learning with Formal Verification

Yuning Wang (Rutgers University); He Zhu (Rutgers University)*

Toward Certified Robustness Against Real-World Distribution Shifts

Haoze Wu (Stanford University)*; Teruhiro Tagomori (Stanford University); Alexander Robey (University of Pennsylvania); Fengjun Yang (University of Pennsylvania); Nikolai Matni (University of Pennsylvania); George J. Pappas (University of Pennsylvania); Hamed Hassani (University of Pennsylvania); Corina Pasareanu (Carnegie Mellon University); Clark Barrett (Stanford University)

Verification of Neural Ordinary Differential Equations using Reachability Analysis

Diego Manzanas Lopez (Vanderbilt University)*; Patrick Musau (Vanderbilt University); Nathaniel P Hamilton (Vanderbilt University); Taylor T Johnson (Vanderbilt University)

Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach

Saber Jafarpour (Georgia Institute of Technology)*; Alexander Davydov (University of California, Santa Barbara); Matthew Abate (Georgia Institute of Technology); Francesco Bullo (University of California, Santa Barbara); Samuel Coogan (Georgia Institute of Technology)

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis

Melanie Ducoffe (Airbus)*; David Vigouroux (IRT Saint Exupéry); Thomas Serre (Brown University); Remi Cadene (LIP6); Thomas Fel (ANITI, Brown University); Mikael Capelle (Thales Alenia Space)

Improving adversarial robustness via joint classification and multiple explicit detection classes

Sina Baharlouei (University of Southern California)*; Fatemeh Sheikholeslami (Bosch Center for Artificial Intelligence); Meisam Razaviyayn (University of Southern California); Zico Kolter (Carnegie Mellon University)

Characterizing Neural Network Verification for Systems with NN4SysBench

Haoyu He (Northeastern University); Cheng Tan (Northeastern University)*

ReCIPH: Relational Coefficients for Input Partitioning Heuristic

Serge Durand (CEA)*; Augustin Lemesle (CEA LIST)

Certified Robustness Against Natural Language Attacks by Causal Intervention

Haiteng Zhao (Peking University)*; Chang Ma (Peking University); Xinshuai Dong (Nanyang Technological University); Anh Tuan Luu (Nanyang Technological University); Zhi-Hong Deng (Peking University); Hanwang Zhang (Nanyang Technological University)