EDMH PhD Students
Four of our PhD students will give talks on their current work.
Abstract
14h-14h30 Arturo Castellanos
Title: Fast kernel half-space depth for data with non-convex supports
Abstract: Data depth is a statistical function that generalizes order statistics and quantiles to the multivariate setting and beyond, with applications spanning descriptive and visual statistics, anomaly detection, testing, etc. The celebrated halfspace depth exploits data geometry via an optimization program to deliver invariance, robustness, and non-parametricity. Nevertheless, it implicitly assumes a convex data support and incurs exponential computational cost. To tackle multimodal distributions, we extend the halfspace depth to a Reproducing Kernel Hilbert Space (RKHS). We show that the resulting depth is intuitive and establish its consistency, with provable concentration bounds that allow for homogeneity testing. The proposed depth can be computed with manifold gradient descent, making it faster than the halfspace depth. Its performance is demonstrated through numerical simulations as well as applications such as anomaly detection on real data and homogeneity testing.
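For context, the classical halfspace (Tukey) depth that this talk extends can be approximated by Monte Carlo over random projection directions: the depth of a point is the smallest fraction of data lying in any closed halfspace containing it. A minimal NumPy sketch of that baseline (not the kernel method of the talk):

```python
import numpy as np

def halfspace_depth(x, X, n_dirs=1000, rng=None):
    # Monte Carlo approximation of Tukey halfspace depth:
    # depth(x) = min over unit directions u of the fraction of sample
    # points whose projection on u is at least that of x.
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    U = rng.normal(size=(n_dirs, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # random unit directions
    proj_X = X @ U.T            # (n_samples, n_dirs)
    proj_x = x @ U.T            # (n_dirs,)
    frac = (proj_X >= proj_x).mean(axis=0)
    return frac.min()
```

A central point of a symmetric sample has depth close to 1/2, while an outlier has depth close to 0; this exponential-in-dimension minimization over directions is the cost the kernel extension aims to avoid.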
14h30-15h Joel Garde
Title: Sparse Learning with Discrete Measures
Abstract: We present a practical and theoretical overview of sparse learning with discrete measures. We approach the problem as an optimization program over measures, extending the LASSO algorithm. This framework handles set-valued learning and non-convex optimization, and we use it to solve sparse, non-convex inverse problems. Using optimal transport, in particular Kantorovich duality, we give a dual characterization of measure spaces on which the Fenchel transform can be employed. We show how Vaserstein spaces naturally complement the inverse-problem hypotheses and justify the sliding approach that has been used with great success in the literature. We apply the framework to recover brain bioelectromagnetic activity. Neurotransmitter sources are parametrized by both their location and their amplitude. We model this using vector-valued measures and treat locations and amplitudes separately, which leads to a practical algorithm based on accelerated coordinate descent.
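The simplest instance of optimizing over discrete measures is recovering a few Dirac spikes from their smoothed observation: greedily add the spike location most correlated with the residual, then refit all amplitudes jointly, a crude analogue of the sliding/Frank-Wolfe schemes referenced above. A hedged sketch, in which the Gaussian forward operator, the grid, and all names are illustrative assumptions rather than the talk's actual model:

```python
import numpy as np

def gaussian_atom(t, loc, width=0.05):
    # Unit-norm Gaussian bump centred at `loc` (illustrative forward operator).
    a = np.exp(-((t - loc) ** 2) / (2 * width ** 2))
    return a / np.linalg.norm(a)

def greedy_spikes(y, t, n_spikes, grid, width=0.05):
    # Greedily add the grid location most correlated with the residual,
    # then refit all spike amplitudes jointly by least squares.
    locs = []
    amps = np.array([])
    residual = y.copy()
    for _ in range(n_spikes):
        corrs = np.array([gaussian_atom(t, g, width) @ residual for g in grid])
        locs.append(grid[np.argmax(np.abs(corrs))])
        A = np.stack([gaussian_atom(t, l, width) for l in locs], axis=1)
        amps, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ amps
    return np.array(locs), amps
```

The "sliding" refinement mentioned in the abstract would additionally move the recovered locations continuously off the grid; the greedy step above only selects among fixed candidates.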
15h-15h30 Iyad Walwil
Title: The Smoothed Duality Gap as a Stopping Criterion
Abstract: We optimize the running time of primal-dual algorithms for convex optimization problems under affine equality constraints by optimizing their stopping criteria, that is, terminating the algorithm earlier with fewer iterations. We study the relations between four stopping criteria and show under which conditions they accurately detect optimal solutions: the uncomputable "optimality gap and feasibility error", and the computable "Karush-Kuhn-Tucker error", "projected duality gap", and "smoothed duality gap". Assuming metric sub-regularity or a quadratic error bound, we establish that all of the computable criteria provide practical upper bounds for the optimality gap and approximate it effectively. Furthermore, we establish comparability between some of the computable criteria under certain conditions. Numerical experiments on basis pursuit and on quadratic programs with and without non-negative weights corroborate these findings and show the superior stability of the smoothed duality gap over the other criteria.
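To illustrate how a smoothed duality gap can serve as a computable stopping test, consider the toy problem min_x ½‖x − c‖² subject to Ax = b. With the Lagrangian L(x, y) = ½‖x − c‖² + yᵀ(Ax − b) and quadratic smoothing centred at the current iterate, both inner problems have closed forms, so the gap can be evaluated at every iteration of a simple primal-dual (Arrow-Hurwicz-type) method. This is only a sketch under these assumed definitions (fixed smoothing parameter β, gradient steps), not the paper's general algorithm:

```python
import numpy as np

def smoothed_gap(x, y, A, b, c, beta=1.0):
    # G_beta(x, y) = max_{x', y'} L(x, y') - L(x', y)
    #                - (beta/2) * (||x' - x||^2 + ||y' - y||^2),
    # for L(x, y) = 0.5 * ||x - c||^2 + y @ (A x - b).
    r = A @ x - b
    # max over y' of y' @ r - (beta/2)||y' - y||^2 (closed form):
    dual_part = 0.5 * np.dot(x - c, x - c) + y @ r + np.dot(r, r) / (2 * beta)
    # min over x' of 0.5||x' - c||^2 + y @ (A x' - b) + (beta/2)||x' - x||^2:
    xp = (c - A.T @ y + beta * x) / (1 + beta)
    primal_min = (0.5 * np.dot(xp - c, xp - c) + y @ (A @ xp - b)
                  + 0.5 * beta * np.dot(xp - x, xp - x))
    return dual_part - primal_min

def primal_dual(A, b, c, step=0.05, tol=1e-8, max_iter=200_000):
    # Gradient descent in x, ascent in y, stopping on the smoothed gap.
    x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
    for _ in range(max_iter):
        if smoothed_gap(x, y, A, b, c) < tol:
            break
        x = x - step * ((x - c) + A.T @ y)
        y = y + step * (A @ x - b)
    return x, y
```

Note that here G_β(x, y) ≥ ‖Ax − b‖²/(2β) ≥ 0, so a small smoothed gap certifies near-feasibility as well, which is exactly what makes it usable as a termination test.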
15h30-16h Anton Emelchenkov
Title: Multifrequency Highly Oscillating Aperiodic Amplitude Estimation for Nonlinear Chirp Signal
Abstract: In this talk, I will address the challenge of estimating multiple highly oscillating amplitudes within the nonlinear chirp signal model. The problem is analogous to the mode detection task with fixed instantaneous frequencies, where the oscillating amplitudes signify mechanical vibrations concealing crucial information for predictive maintenance. Existing methods often focus on single-frequency estimation, employ simple amplitude functions, or impose strong noise assumptions. Furthermore, these methods frequently rely on arbitrarily chosen hyperparameters, leading to sub-optimal generalization across a diverse range of amplitudes. During the talk, I will describe several methods to obtain accurate amplitude estimates and propose data-driven approaches to select the hyperparameters. I will conclude by discussing ongoing work on the statistical analysis of the proposed estimators.
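When the instantaneous phase of a chirp is known (or has already been estimated), a baseline amplitude estimator, not the talk's method, fits the signal locally against its in-phase and quadrature components; the window length plays exactly the hyperparameter role the abstract highlights. A minimal sketch of that baseline:

```python
import numpy as np

def local_amplitude(y, phase, win=101):
    # Sliding-window least squares of y against cos(phase) and sin(phase):
    # locally, y(t) ~ a*cos(phase) + b*sin(phase), amplitude = sqrt(a^2 + b^2).
    n = len(y)
    half = win // 2
    amp = np.empty(n)
    for i in range(n):
        s = slice(max(0, i - half), min(n, i + half + 1))
        H = np.column_stack([np.cos(phase[s]), np.sin(phase[s])])
        coef, *_ = np.linalg.lstsq(H, y[s], rcond=None)
        amp[i] = np.hypot(coef[0], coef[1])
    return amp
```

Too short a window makes the fit ill-conditioned (fewer than a couple of oscillation cycles), while too long a window blurs fast amplitude variations; choosing `win` from the data is one instance of the hyperparameter-selection problem the talk addresses.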