NeurIPS 2019: Empirically Measuring Concentration

Xiao Zhang will present our work (with Saeed Mahloujifar and Mohammad Mahmoody) as a spotlight at NeurIPS 2019 in Vancouver, 10 December 2019.

Recent theoretical results, starting with Gilmer et al.'s Adversarial Spheres (2018), show that if inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbations are inevitable. The key insight from this line of research is that concentration of measure gives a lower bound on the adversarial risk of a large collection of classifiers (e.g., imperfect classifiers with risk at least $\alpha$), which in turn implies impossibility results for robust learning against adversarial examples.
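To make this concrete (our paraphrase of the standard definitions; see the paper for the precise statements): for a metric probability space $(\mathcal{X}, d, \mu)$, let $\mathcal{E}_\epsilon = \{x \in \mathcal{X} : \exists\, y \in \mathcal{E},\ d(x, y) \le \epsilon\}$ denote the $\epsilon$-expansion of a subset $\mathcal{E}$. The concentration function

$$h_\mu(\alpha, \epsilon) \;=\; \inf\,\{\mu(\mathcal{E}_\epsilon) \;:\; \mathcal{E} \subseteq \mathcal{X},\ \mu(\mathcal{E}) \ge \alpha\}$$

then lower-bounds the adversarial risk of any classifier whose error region has measure at least $\alpha$, since an adversary that perturbs inputs by at most $\epsilon$ can cause errors on all of $\mathcal{E}_\epsilon$.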

However, it is not clear whether these theoretical results apply to real distributions such as images. This work presents a method for empirically measuring and bounding the concentration of a concrete dataset, which we prove converges to the actual concentration. More specifically, we prove that by simultaneously increasing the sample size and a complexity parameter of the selected collection of subsets $\mathcal{G}$, the concentration of the empirical measure based on samples converges asymptotically to the actual concentration.
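In symbols (again a paraphrase, writing $\hat{\mu}_S$ for the empirical measure over a sample $S$), the empirical problem restricts the search to subsets in $\mathcal{G}$:

$$h_{\hat{\mu}_S, \mathcal{G}}(\alpha, \epsilon) \;=\; \min_{\mathcal{E} \in \mathcal{G} \,:\, \hat{\mu}_S(\mathcal{E}) \ge \alpha} \hat{\mu}_S(\mathcal{E}_\epsilon),$$

and the convergence result says that as $|S|$ and the complexity of $\mathcal{G}$ grow together at appropriate rates, this quantity approaches $h_\mu(\alpha, \epsilon)$.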

To solve the empirical concentration problem, we propose heuristic algorithms to find error regions with small expansion under both the $\ell_\infty$ and $\ell_2$ metrics.

For instance, our algorithm for $\ell_\infty$ starts by sorting the dataset based on an empirical density estimated using k-nearest neighbors, and then obtains $T$ rectangular data clusters by performing k-means clustering on the top-$q$ densest images. After expanding each of the rectangles by $\epsilon$, the error region $\mathcal{E}$ is specified as the complement of the expanded rectangles (the reddish region in the following figure). Finally, we search for the best error region by tuning the number of rectangles $T$ and the initial coverage percentile $q$.
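Below is a minimal sketch of this $\ell_\infty$ heuristic in Python, using scikit-learn for the k-NN density estimate and the k-means step. The function name, parameters, and implementation details are illustrative assumptions, not the authors' exact code (see the linked repository for that):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def linf_concentration_sketch(X, eps, T, q, k=10):
    """Illustrative sketch (not the paper's exact code): estimate the
    empirical concentration under l_inf by covering dense regions with
    axis-aligned rectangles and taking the complement of their
    eps-expansions as the error region E.

    X   : (n, d) array of flattened images in [0, 1]^d
    eps : l_inf perturbation budget
    T   : number of rectangles (k-means clusters)
    q   : fraction of densest points to cover
    k   : neighbors used for the density estimate
    """
    n = X.shape[0]

    # 1. k-NN density estimate: a small distance to the k-th nearest
    #    neighbor indicates a dense region (index 0 is the point itself).
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    order = np.argsort(dists[:, -1])  # smallest k-NN distance = densest

    # 2. Cluster the top-q densest points into T groups; each cluster's
    #    axis-aligned bounding box is one rectangle.
    dense = X[order[: int(q * n)]]
    labels = KMeans(n_clusters=T, n_init=10).fit_predict(dense)
    rects = [(dense[labels == t].min(axis=0), dense[labels == t].max(axis=0))
             for t in range(T)]

    def frac_outside(grow):
        # Fraction of the dataset lying outside every rectangle, with
        # each rectangle grown by `grow` in every coordinate (l_inf).
        out = np.ones(n, dtype=bool)
        for lo, hi in rects:
            out &= ~np.all((X >= lo - grow) & (X <= hi + grow), axis=1)
        return out.mean()

    # 3. E is the complement of the eps-expanded rectangles, so its own
    #    eps-expansion E_eps is contained in the complement of the
    #    unexpanded rectangles.
    alpha = frac_outside(eps)      # empirical measure of E (the risk)
    adv_risk = frac_outside(0.0)   # upper bound on the measure of E_eps
    return alpha, adv_risk
```

Sweeping over $T$ and $q$ and keeping the region with the smallest expansion at a given risk level $\alpha$ then yields the reported bound.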

Based on the proposed algorithms, we empirically measure the concentration for image benchmarks such as MNIST and CIFAR-10. Compared with state-of-the-art robustly trained models, our estimated bounds show that, for most settings, a large gap exists between the robust error achieved by the best current models and the theoretical limit implied by concentration.


This suggests that concentration of measure is not the only reason behind the vulnerability of existing classifiers to adversarial perturbations. Thus, either there is room to improve the robustness of image classifiers, or we need a deeper understanding of the gap between intrinsic robustness and the robustness actually achieved by robust training methods.

Paper

Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, and David Evans. Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness. In NeurIPS 2019 (spotlight presentation), Vancouver, December 2019. [PDF] [arXiv]

Code

https://github.com/xiaozhanguva/Measure-Concentration