Search Results for author: Vyacheslav Kungurtsev

Found 23 papers, 5 papers with code

Group Distributionally Robust Dataset Distillation with Risk Minimization

1 code implementation · 7 Feb 2024 · Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen

However, matching the training dataset should be regarded as only an auxiliary objective, since the training set is itself an approximate substitute for the population distribution, which is the data of actual interest.

Federated Learning · Neural Architecture Search · +1

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

1 code implementation · 3 Dec 2023 · Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.

Federated Learning

Efficient Dataset Distillation via Minimax Diffusion

1 code implementation · 27 Nov 2023 · Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen

Observing that key factors for constructing an effective surrogate dataset are representativeness and diversity, we design additional minimax criteria in the generative training to enhance these facets for the generated images of diffusion models.
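
As a toy illustration only (not the paper's actual training objective; the function and variable names below are hypothetical), representativeness can be scored as how close each generated feature is to the real features, and diversity as how spread out the generated features are:

    import numpy as np

    def surrogate_quality(gen_feats, real_feats):
        # Illustrative minimax-style criteria for a distilled dataset.
        # Representativeness: worst-case gap between a generated feature
        # and its nearest real feature (smaller is better).
        d_gr = np.linalg.norm(gen_feats[:, None, :] - real_feats[None, :, :], axis=-1)
        representativeness = d_gr.min(axis=1).max()
        # Diversity: distance between the closest pair of generated
        # features (larger is better), hence subtracted.
        d_gg = np.linalg.norm(gen_feats[:, None, :] - gen_feats[None, :, :], axis=-1)
        np.fill_diagonal(d_gg, np.inf)
        diversity = d_gg.min()
        return representativeness - diversity  # lower is better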

Quantum Solutions to the Privacy vs. Utility Tradeoff

no code implementations · 6 Jul 2023 · Sagnik Chatterjee, Vyacheslav Kungurtsev

In this work, we propose a novel architecture (and several variants thereof) based on quantum cryptographic primitives with provable privacy and security guarantees regarding membership inference attacks on generative models.

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

no code implementations · 28 Apr 2023 · Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, Qi Wang

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results.
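
A minimal sketch of the underlying idea, assuming a standard log-barrier for the bounds (the step size, barrier weight, and interior safeguard below are illustrative placeholders, not the analyzed algorithm):

    import numpy as np

    def stochastic_barrier_step(x, stoch_grad, lo, hi, mu=1e-2, eta=1e-3):
        # One stochastic-gradient step on f(x) + mu * barrier(x), where
        # barrier(x) = -sum(log(x - lo) + log(hi - x)) keeps iterates
        # strictly inside the box lo <= x <= hi.
        g = stoch_grad(x) + mu * (-1.0 / (x - lo) + 1.0 / (hi - x))
        x_new = x - eta * g
        eps = 1e-8  # safeguard: remain strictly interior
        return np.clip(x_new, lo + eps, hi - eps)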

Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions

no code implementations · 1 Feb 2023 · Johannes Aspman, Vyacheslav Kungurtsev, Reza Roohi Seraji

In many learning applications, the parameters of a model are structurally constrained in a way that is naturally modeled as lying on a Riemannian manifold.

Riemannian optimization
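
For example, a unit-norm constraint places the parameter on the sphere, and the textbook Riemannian SGD step (project the Euclidean gradient onto the tangent space, then retract; a generic recipe, not this paper's specific scheme) looks like:

    import numpy as np

    def riemannian_sgd_step(x, euclid_grad, eta=1e-2):
        # One Riemannian SGD step on the unit sphere {x : ||x|| = 1}.
        g = euclid_grad(x)
        riem_grad = g - np.dot(g, x) * x      # project onto tangent space at x
        x_new = x - eta * riem_grad
        return x_new / np.linalg.norm(x_new)  # retract back to the sphere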

When Do Curricula Work in Federated Learning?

no code implementations · ICCV 2023 · Saeed Vahidian, Sreevatsank Kadaveru, Woonjoon Baek, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL.

Federated Learning

Jump-Diffusion Langevin Dynamics for Multimodal Posterior Sampling

no code implementations · 2 Nov 2022 · Jacopo Guidolin, Vyacheslav Kungurtsev, Ondřej Kuželka

Bayesian methods of sampling from a posterior distribution are becoming increasingly popular due to their ability to precisely quantify the uncertainty of a model fit.

Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence

no code implementations · 13 Oct 2022 · Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli

In this paper, we focus on neural networks with two and three layers and provide a rigorous understanding of the properties of the solutions found by SHB: \emph{(i)} stability after dropping out part of the neurons, \emph{(ii)} connectivity along a low-loss path, and \emph{(iii)} convergence to the global optimum.
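
For reference, the stochastic heavy ball (SHB) iteration studied here is SGD with a momentum buffer; in its standard form (hyperparameters illustrative):

    def shb_step(theta, velocity, stoch_grad, eta=0.1, beta=0.9):
        # v_{t+1} = beta * v_t - eta * g(theta_t)
        # theta_{t+1} = theta_t + v_{t+1}
        velocity = beta * velocity - eta * stoch_grad(theta)
        return theta + velocity, velocity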

Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces

1 code implementation · 21 Sep 2022 · Saeed Vahidian, Mahdi Morafah, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

This small set of principal vectors is provided to the server so that the server can directly identify distribution similarities among the clients to form clusters.

Federated Learning
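
A minimal sketch of the subspace comparison this relies on, assuming each client summarizes its data by its top-k left singular vectors: the principal angles between two such subspaces are the arccosines of the singular values of U_a^T U_b (the actual client/server protocol is the paper's):

    import numpy as np

    def principal_angles(data_a, data_b, k=3):
        # Top-k left singular subspaces of two data matrices (features x samples).
        Ua, _, _ = np.linalg.svd(data_a, full_matrices=False)
        Ub, _, _ = np.linalg.svd(data_b, full_matrices=False)
        # Singular values of Ua_k^T Ub_k are the cosines of the principal angles.
        cosines = np.linalg.svd(Ua[:, :k].T @ Ub[:, :k], compute_uv=False)
        return np.arccos(np.clip(cosines, -1.0, 1.0))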

Stochastic Langevin Differential Inclusions with Applications to Machine Learning

no code implementations · 23 Jun 2022 · Fabio V. Difonzo, Vyacheslav Kungurtsev, Jakub Marecek

In this paper, we show some foundational results regarding the flow and asymptotic properties of Langevin-type Stochastic Differential Inclusions under assumptions appropriate to the machine-learning settings.

BIG-bench Machine Learning

Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD

1 code implementation · 13 Mar 2022 · Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

Our scheme is based on the following algorithmic tools and features: (a) asynchronous local gradient updates on the shared-memory of workers, (b) partial backpropagation, and (c) non-blocking in-place averaging of the local models.

Blocking · Image Classification
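
A toy sketch of feature (a), lock-free shared-memory updates in the Hogwild! style (the partial backpropagation and non-blocking in-place averaging of (b) and (c) are not reproduced here):

    import threading
    import numpy as np

    def hogwild_sgd(theta, stoch_grad, n_workers=4, steps=1000, eta=1e-2):
        # Workers write to the shared parameter vector in place, without
        # locks; benign races are tolerated by design.
        def worker():
            for _ in range(steps):
                theta[:] -= eta * stoch_grad(theta)  # unsynchronized in-place update
        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return theta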

Randomized Algorithms for Monotone Submodular Function Maximization on the Integer Lattice

no code implementations · 19 Nov 2021 · Alberto Schiabel, Vyacheslav Kungurtsev, Jakub Marecek

Optimization problems with set-submodular objective functions have many real-world applications.
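
On the integer lattice, the deterministic baseline is coordinate-wise greedy: repeatedly increment the coordinate with the largest marginal gain (a generic sketch with illustrative budget and box constraints; the paper studies randomized variants):

    import numpy as np

    def lattice_greedy(f, upper, budget):
        # Greedy maximization of a monotone lattice-submodular f over
        # integer vectors 0 <= x <= upper, using at most `budget` increments.
        x = np.zeros_like(upper)
        for _ in range(budget):
            gains = [f(x + e) - f(x) if x[i] < upper[i] else -np.inf
                     for i, e in enumerate(np.eye(len(x), dtype=x.dtype))]
            best = int(np.argmax(gains))
            if gains[best] <= 0:
                break
            x[best] += 1
        return x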

Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks

no code implementations · 3 Nov 2021 · Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli

Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning.

Decentralized Bayesian Learning with Metropolis-Adjusted Hamiltonian Monte Carlo

no code implementations · 15 Jul 2021 · Vyacheslav Kungurtsev, Adam Cobb, Tara Javidi, Brian Jalaian

Federated learning performed by decentralized networks of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices.

Federated Learning

Trilevel and Multilevel Optimization using Monotone Operator Theory

no code implementations · 19 May 2021 · Allahkaram Shafiei, Vyacheslav Kungurtsev, Jakub Marecek

We consider a rather general class of multi-level optimization problems, where a convex objective function is to be minimized subject to constraints of optimality of nested convex optimization problems.
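
Schematically, one member of this class is the nested (lexicographic-style) program below, with every f_i convex:

    \[
    \min_{x \in X_2} f_1(x), \qquad
    X_2 = \operatorname*{arg\,min}_{x \in X_3} f_2(x), \qquad
    X_3 = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} f_3(x),
    \]

so each level is constrained to the solution set of the level below it.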

Local SGD Meets Asynchrony

no code implementations · 1 Jan 2021 · Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

On the theoretical side, we show that this method guarantees ergodic convergence for non-convex objectives, and achieves the classic sublinear rate under standard assumptions.

Blocking
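
The synchronous baseline being relaxed is local SGD with periodic model averaging; a minimal synchronous sketch (the paper's contribution is running the averaging asynchronously):

    import numpy as np

    def local_sgd(theta0, worker_grads, rounds=100, local_steps=8, eta=1e-2):
        # Each worker runs `local_steps` SGD steps from the shared model,
        # then the local models are averaged (synchronous reference version).
        theta = theta0.copy()
        for _ in range(rounds):
            local_models = []
            for g in worker_grads:  # one stochastic gradient oracle per worker
                w = theta.copy()
                for _ in range(local_steps):
                    w -= eta * g(w)
                local_models.append(w)
            theta = np.mean(local_models, axis=0)  # periodic averaging
        return theta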

Stochastic Gradient Langevin with Delayed Gradients

no code implementations · 12 Jun 2020 · Vyacheslav Kungurtsev, Bapi Chatterjee, Dan Alistarh

Stochastic Gradient Langevin Dynamics (SGLD) ensures strong guarantees with regards to convergence in measure for sampling log-concave posterior distributions by adding noise to stochastic gradient iterates.

Stochastic Optimization
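
For reference, the plain SGLD iterate adds Gaussian noise scaled to the step size on top of SGD; in the delayed-gradient setting analyzed here, the gradient would be evaluated at a stale iterate (the sketch below uses the fresh one):

    import numpy as np

    def sgld_step(theta, stoch_grad, eta=1e-3, rng=None):
        # theta_{t+1} = theta_t - eta * g(theta_t) + sqrt(2 * eta) * xi,
        # with xi ~ N(0, I); the injected noise turns SGD into a sampler.
        if rng is None:
            rng = np.random.default_rng()
        noise = rng.normal(size=theta.shape)
        return theta - eta * stoch_grad(theta) + np.sqrt(2.0 * eta) * noise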

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent

no code implementations · 16 Jan 2020 · Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

Our framework, called elastic consistency, enables us to derive convergence bounds for a variety of distributed SGD methods used in practice to train large-scale machine learning models.

BIG-bench Machine Learning

Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization

no code implementations · 25 Sep 2019 · Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh

This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks.

Scheduling

Algorithms for solving optimization problems arising from deep neural net models: smooth problems

no code implementations · 30 Jun 2018 · Vyacheslav Kungurtsev, Tomas Pevny

Machine learning models incorporating multiple layered learning networks have been shown to be effective for various classification problems.

BIG-bench Machine Learning · General Classification

Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems

no code implementations · 30 Jun 2018 · Vyacheslav Kungurtsev, Tomas Pevny

Machine learning models incorporating multiple layered learning networks have been shown to be effective for various classification problems.

BIG-bench Machine Learning · General Classification
