SPLASH 2023
Sun 22 - Fri 27 October 2023, Cascais, Portugal
Wed 25 Oct 2023 15:12 - 15:30 at Room I - SE4AI Chair(s): Arjun Guha

While successful, neural networks have been shown to be vulnerable to adversarial example attacks. In $L_0$ adversarial attacks, also known as few-pixel attacks, the attacker picks $t$ pixels from the image and perturbs them arbitrarily. To understand a network's robustness to these attacks, one must check its robustness to perturbations of every set of $t$ pixels. Since the number of such sets is exponentially large, existing robustness verifiers, which reason about a single set of pixels at a time, are impractical for $L_0$ robustness verification. We introduce Calzone, an $L_0$ robustness verifier for neural networks. To the best of our knowledge, Calzone is the first to provide a sound and complete analysis for $L_0$ adversarial attacks. Calzone builds on the following observation: if a classifier is robust to any perturbation of a set of $k$ pixels, for $k>t$, then it is robust to any perturbation of that set's subsets of size $t$. Thus, to reduce the verification time, Calzone predicts the largest $k$ that can be proven robust, via dynamic programming and sampling. It then relies on covering designs to compute a covering of the image with sets of size $k$. For each set in the covering, Calzone submits its corresponding box neighborhood to an existing $L_\infty$ robustness verifier. If a set's neighborhood cannot be proven robust, Calzone repeats this process and covers that set with smaller sets of size $k'<k$. We evaluate Calzone on several datasets and networks, for $t\leq 5$. Typically, Calzone verifies $L_0$ robustness within a few minutes. On our most challenging instances (e.g., $t=5$), Calzone completes within a few hours. We compare to a MILP baseline and show that it does not scale even for $t=3$.
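To make the covering loop concrete, the Python sketch below mimics the strategy the abstract describes. All names here (naive_covering, verify_l0, verify_box) are illustrative, not Calzone's actual API: the box query is stubbed as a callback, the covering is built greedily rather than taken from precomputed covering designs, and the refinement simply decrements k instead of predicting it via dynamic programming and sampling as the paper does.

from itertools import combinations

def naive_covering(universe, k, t):
    # Greedily pick k-subsets of `universe` until every t-subset is
    # covered. Real covering designs (which Calzone relies on) are far
    # smaller; this greedy version only illustrates the covering property.
    uncovered = {frozenset(c) for c in combinations(universe, t)}
    blocks = []
    for block in combinations(universe, k):
        if not uncovered:
            break
        bset = frozenset(block)
        hit = {s for s in uncovered if s <= bset}
        if hit:
            blocks.append(bset)
            uncovered -= hit
    return blocks

def verify_l0(pixels, t, k, verify_box):
    # Covering-based loop: one box query over a k-subset discharges all
    # of its t-subsets at once; blocks that fail are re-covered with
    # smaller blocks. Returns True if every t-pixel perturbation is
    # proven robust, False if some size-t box could not be proven (with
    # an incomplete verify_box this means "unproven", not "attackable").
    assert t <= k <= len(pixels)
    worklist = [(k, b) for b in naive_covering(pixels, k, t)]
    while worklist:
        size, block = worklist.pop()
        if verify_box(block):        # proven robust: all its t-subsets done
            continue
        if size == t:                # smallest granularity: unproven
            return False
        smaller = max(t, size - 1)   # refine (the real tool picks k' smartly)
        worklist += [(smaller, b)
                     for b in naive_covering(sorted(block), smaller, t)]
    return True

# Toy usage: a stub verifier that fails any box containing pixels {0,1}.
if __name__ == "__main__":
    bad = frozenset({0, 1})
    print(verify_l0(list(range(6)), t=2, k=4,
                    verify_box=lambda b: not bad <= b))  # prints False

In the toy run, every block containing {0,1} fails and gets re-covered until the loop bottoms out at the size-t set {0,1} itself, reporting non-robustness. With an incomplete box verifier that answer only means "unproven"; the paper's sound-and-complete guarantee requires resolving such smallest sets exactly.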

Wed 25 Oct

Displayed time zone: Lisbon

14:00 - 15:30
SE4AI (OOPSLA) at Room I
Chair(s): Arjun Guha (Northeastern University; Roblox)
14:00
18m
Talk
Run-Time Prevention of Software Integration Failures of Machine Learning APIs
OOPSLA
Chengcheng Wan (East China Normal University), Yuhan Liu (University of Chicago), Kuntai Du (University of Chicago), Henry Hoffmann (University of Chicago), Junchen Jiang (University of Chicago), Michael Maire (University of Chicago), Shan Lu (Microsoft; University of Chicago)
14:18
18m
Talk
Compiling Structured Tensor Algebra
OOPSLA
Mahdi Ghorbani (University of Edinburgh), Mathieu Huot (University of Oxford), Shideh Hashemian (University of Edinburgh), Amir Shaikhha (University of Edinburgh)
14:36
18m
Talk
Perception Contracts for Safety of ML-Enabled Systems
OOPSLA
Angello Astorga (University of Illinois at Urbana-Champaign), Chiao Hsieh (Kyoto University), P. Madhusudan (University of Illinois at Urbana-Champaign), Sayan Mitra (University of Illinois at Urbana-Champaign)
14:54
18m
Talk
Languages with Decidable Learning: A Meta-theorem (Distinguished Paper)
OOPSLA
Paul Krogmeier (University of Illinois at Urbana-Champaign), P. Madhusudan (University of Illinois at Urbana-Champaign)
15:12
18m
Talk
Deep Learning Robustness Verification for Few-Pixel Attacks
OOPSLA
Yuval Shapira (Technion), Eran Avneri (Technion), Dana Drachsler-Cohen (Technion)