Search Results for author: Tatjana Chavdarova

Found 18 papers, 5 papers with code

Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity

no code implementations • 28 Oct 2024 • Eric Zhao, Tatjana Chavdarova, Michael Jordan

Variational inequalities (VIs) are a broad class of optimization problems encompassing machine learning settings that range from standard convex minimization to more complex scenarios such as min-max optimization and computing the equilibria of multi-player games.
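
For reference, the standard (Stampacchia) formulation of a VI with an operator $F$ and a constraint set $\mathcal{X}$ is: find $x^\star \in \mathcal{X}$ such that $\langle F(x^\star), x - x^\star \rangle \geq 0$ for all $x \in \mathcal{X}$. Taking $F = \nabla f$ recovers convex minimization, and taking $F = (\nabla_x f, -\nabla_y f)$ captures min-max problems $\min_x \max_y f(x, y)$.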

On the Hypomonotone Class of Variational Inequalities

no code implementations • 11 Oct 2024 • Khaled Alomar, Tatjana Chavdarova

This paper studies the behavior of the extragradient algorithm when applied to hypomonotone operators, a class of problems that extends beyond the classical monotone setting.
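
For reference, extragradient takes a look-ahead step and then updates using the operator evaluated at the look-ahead point; a minimal NumPy sketch (a generic illustration, not this paper's code):

```python
import numpy as np

def extragradient(F, x0, step=0.1, iters=1000):
    """Minimal extragradient loop for an operator F: R^n -> R^n
    (a generic sketch, not this paper's implementation)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_half = x - step * F(x)   # extrapolation (look-ahead) step
        x = x - step * F(x_half)   # update with the look-ahead operator value
    return x
```

Hypomonotonicity, in its common form, relaxes monotonicity to $\langle F(x) - F(y), x - y \rangle \geq -\rho \|x - y\|^2$ for some $\rho > 0$, which is the regime where standard extragradient guarantees break down.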

A Primal-Dual Approach to Solving Variational Inequalities with General Constraints

2 code implementations • 27 Oct 2022 • Tatjana Chavdarova, Tong Yang, Matteo Pagliardini, Michael I. Jordan

We prove the convergence of this method and show that the gap function of the last iterate of the method decreases at a rate of $O(\frac{1}{\sqrt{K}})$ when the operator is $L$-Lipschitz and monotone.
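
For context, a standard gap function for a monotone VI on a set $\mathcal{X}$ is $\mathcal{G}(x) = \sup_{y \in \mathcal{X}} \langle F(x), x - y \rangle$; it is nonnegative on $\mathcal{X}$ and vanishes exactly at solutions of the VI, which is why a decreasing gap certifies convergence (the paper's precise definition may differ slightly).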

Continuous-time Analysis for Variational Inequalities: An Overview and Desiderata

no code implementations • 14 Jul 2022 • Tatjana Chavdarova, Ya-Ping Hsieh, Michael I. Jordan

Algorithms that solve zero-sum games, multi-objective agent objectives, or, more generally, variational inequality (VI) problems are notoriously unstable on general problems.

Solving Constrained Variational Inequalities via a First-order Interior Point-based Method

1 code implementation • 21 Jun 2022 • Tong Yang, Michael I. Jordan, Tatjana Chavdarova

We provide convergence guarantees for ACVI in two general classes of problems: (i) when the operator is $\xi$-monotone, and (ii) when it is monotone, some constraints are active and the game is not purely rotational.

Improving Generalization via Uncertainty Driven Perturbations

no code implementations • 11 Feb 2022 • Matteo Pagliardini, Gilberto Manunza, Martin Jaggi, Michael I. Jordan, Tatjana Chavdarova

We show that UDP is guaranteed to achieve the maximum-margin decision boundary on linear models, and that it notably increases the margin on challenging simulated datasets.

Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations

no code implementations • 27 Dec 2021 • Tatjana Chavdarova, Michael I. Jordan, Manolis Zampetakis

However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
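
A concrete illustration of this qualitative difference (my sketch, not from the paper): on the bilinear game $\min_x \max_y xy$, simultaneous gradient descent-ascent spirals outward while extragradient contracts to the equilibrium.

```python
import numpy as np

# Bilinear game min_x max_y x*y; its vector field is F(x, y) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])

def run(update, z0=(1.0, 1.0), step=0.1, iters=200):
    z = np.array(z0)
    for _ in range(iters):
        z = update(z, step)
    return np.linalg.norm(z)  # distance from the equilibrium (0, 0)

gda = lambda z, s: z - s * F(z)             # simultaneous gradient descent-ascent
eg  = lambda z, s: z - s * F(z - s * F(z))  # extragradient

print(f"GDA: {run(gda):.3f}")  # grows with iterations: GDA diverges
print(f"EG:  {run(eg):.3f}")   # shrinks with iterations: EG converges
```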

The Peril of Popular Deep Learning Uncertainty Estimation Methods

1 code implementation • 9 Dec 2021 • Yehao Liu, Matteo Pagliardini, Tatjana Chavdarova, Sebastian U. Stich

Second, we show on a 2D toy example that neither BNNs nor MCDropout gives high uncertainty estimates on OOD samples.
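
For context, MCDropout estimates uncertainty by keeping dropout active at test time and aggregating several stochastic forward passes; a generic PyTorch sketch (not the paper's code, and the dispersion measure below is one simple choice among several):

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo Dropout: multiple stochastic forward passes with
    dropout enabled, then mean and variance over the samples."""
    model.train()  # keeps Dropout layers stochastic (also affects BatchNorm)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(dim=0)                     # predictive distribution
    uncertainty = probs.var(dim=0).sum(dim=-1)   # per-input dispersion score
    return mean, uncertainty
```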

Tasks: Deep Learning

Improved Generalization-Robustness Trade-off via Uncertainty Targeted Attacks

no code implementations • 29 Sep 2021 • Matteo Pagliardini, Gilberto Manunza, Martin Jaggi, Tatjana Chavdarova

Deep learning models' sensitivity to small input perturbations raises security concerns and limits their use in applications where reliability is critical.

Semantic Perturbations with Normalizing Flows for Improved Generalization

1 code implementation • ICCV 2021 • Oguz Kaan Yuksel, Sebastian U. Stich, Martin Jaggi, Tatjana Chavdarova

We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective, yielding the first test-accuracy improvements via latent-space perturbations on real-world datasets (CIFAR-10/100).
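
Schematically, such semantic perturbations can be generated by perturbing a flow's latent code along the classifier's loss gradient and decoding back; in this sketch, `flow.forward` and `flow.inverse` are assumed interfaces of an invertible model, not a specific library's API:

```python
import torch
import torch.nn.functional as F

def latent_perturbation(flow, classifier, x, y, eps=0.1):
    """FGSM-style step in the latent space of an invertible flow
    (schematic sketch; `flow.forward`/`flow.inverse` are assumed)."""
    z = flow.forward(x).detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(flow.inverse(z)), y)
    loss.backward()
    z_adv = z + eps * z.grad.sign()     # ascend the loss in latent space
    return flow.inverse(z_adv).detach()
```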

Tasks: Data Augmentation, Decoder

Reducing Noise in GAN Training with Variance Reduced Extragradient

no code implementations • NeurIPS 2019 • Tatjana Chavdarova, Gauthier Gidel, François Fleuret, Simon Lacoste-Julien

We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges.
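
The paper's remedy pairs SVRG-style variance reduction with the extragradient update; a schematic sketch of one such step (names like `F_full`, `F_batch`, and `sample_batch` are illustrative placeholders, not the authors' API):

```python
def svre_step(x, snapshot, F_full, F_batch, sample_batch, step):
    """One variance-reduced extragradient step (schematic).
    F_full(z): operator averaged over all data; F_batch(z, b): operator on
    mini-batch b. Evaluating the SAME mini-batch at z and at the snapshot
    yields an unbiased estimate whose variance shrinks near the snapshot."""
    mu = F_full(snapshot)          # full-batch operator at the snapshot
    def vr(z):
        b = sample_batch()         # fresh mini-batch per evaluation
        return F_batch(z, b) - F_batch(snapshot, b) + mu
    x_half = x - step * vr(x)      # extrapolation step
    return x - step * vr(x_half)   # update step
```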

SGAN: An Alternative Training of Generative Adversarial Networks

no code implementations • CVPR 2018 • Tatjana Chavdarova, François Fleuret

Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis and are now used in a wide range of computer vision tasks.

Deep Multi-camera People Detection

no code implementations • 15 Feb 2017 • Tatjana Chavdarova, François Fleuret

The former does not exploit joint information, whereas the latter deals with ambiguous input, since foreground blobs become increasingly interconnected as the number of targets grows.
