Tailoring Mixup to Data for Calibration
By Quentin Bouniot
Quentin Bouniot will give a talk on Tailoring Mixup to Data for Calibration.
Abstract
Among all data augmentation techniques proposed so far, linear interpolation of training samples, also called Mixup, has been found to be effective for a wide range of applications. Beyond improved performance, Mixup also helps calibration and predictive uncertainty. However, mixing data carelessly can lead to manifold intrusion, i.e., conflicts between the synthetic labels assigned and the true label distributions, which can deteriorate calibration. In this work, we argue that the likelihood of manifold intrusion increases with the distance between the data to mix. Building on this observation, we propose to dynamically change the underlying distributions of interpolation coefficients depending on the similarity between the samples to mix, and define a flexible framework to do so without losing diversity. We provide extensive experiments on classification and regression tasks, showing that our proposed method improves the performance and calibration of models while being much more efficient.
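The core idea of the abstract can be sketched as follows: mix two samples with a coefficient drawn from a Beta distribution whose concentration depends on how similar the samples are, so that distant pairs receive coefficients near 0 or 1 and the mixed point stays close to one of the originals. This is a minimal illustration, not the paper's exact scheme; the distance-to-concentration mapping and the parameter `tau` are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_warped_mixup(x1, y1, x2, y2, tau=1.0):
    """Mix two samples, adapting the interpolation coefficient to their distance.

    The mapping from distance to the Beta concentration parameter used here
    is an illustrative assumption, not the method from the talk.
    """
    dist = np.linalg.norm(x1 - x2)
    # Far-apart samples -> small alpha -> Beta(alpha, alpha) concentrates
    # lambda near 0 or 1, reducing the risk of manifold intrusion.
    alpha = 1.0 / (1.0 + tau * dist)
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix, lam
```

With `tau = 0`, every pair is mixed with `Beta(1, 1)` (uniform) coefficients, recovering a standard Mixup variant; increasing `tau` makes mixing more conservative for dissimilar pairs.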
Biography
Quentin Bouniot is currently a postdoctoral researcher at Telecom Paris, working in the LTCI under the supervision of Florence d’Alché-Buc and Pavlo Mozharovskyi.
He earned his PhD from Université Jean-Monnet Saint-Étienne, where he conducted research in the Vision and Learning Lab for Scene Analysis at CEA LIST and the Data Intelligence research group at Hubert Curien Laboratory, under the supervision of Amaury Habrard.
His current research focuses on Uncertainty and Explainability in Deep Learning models. During his PhD, he specialized in Few-shot Learning and Meta-Learning, particularly as applied to Object Detection and Image Segmentation. More broadly, Quentin is interested in Deep Representation Learning.