Search Results for author: Vyacheslav Kungurtsev

Found 30 papers, 8 papers with code

Federated Sinkhorn

no code implementations10 Feb 2025 Jeremy Kulcsar, Vyacheslav Kungurtsev, Georgios Korpas, Giulio Giaconi, William Shoosmith

In this work we investigate the potential of solving the discrete Optimal Transport (OT) problem with entropy regularization in a federated learning setting.

Federated Learning
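
The entropy-regularized OT problem referenced above is classically solved with Sinkhorn scaling iterations. Below is a minimal centralized sketch in NumPy; the cost matrix, marginals, and regularization strength are illustrative, and a federated variant would distribute these scaling updates across the parties holding the marginals rather than run them on one machine.

    import numpy as np

    def sinkhorn(C, a, b, eps=0.1, n_iter=500):
        """Entropy-regularized OT via Sinkhorn scaling iterations."""
        K = np.exp(-C / eps)                      # Gibbs kernel
        u, v = np.ones_like(a), np.ones_like(b)
        for _ in range(n_iter):
            u = a / (K @ v)                       # row-scaling update
            v = b / (K.T @ u)                     # column-scaling update
        return u[:, None] * K * v[None, :]        # transport plan diag(u) K diag(v)

    # toy example: uniform marginals over 4 and 5 support points
    rng = np.random.default_rng(0)
    C = rng.random((4, 5))
    P = sinkhorn(C, np.full(4, 0.25), np.full(5, 0.2))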

"Cause" is Mechanistic Narrative within Scientific Domains: An Ordinary Language Philosophical Critique of "Causal Machine Learning"

no code implementations10 Jan 2025 Vyacheslav Kungurtsev, Leonardo Christov Moore, Gustav Sir, Martin Krutsky

Causal Learning has emerged as a major theme of research in statistics and machine learning in recent years, promising specific computational techniques to apply to datasets that reveal the true nature of cause and effect in a number of important domains.

Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration

1 code implementation27 Sep 2024 Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin

To address these challenges, we introduce TAKFL, a novel KD-based framework that treats the knowledge transfer from each device prototype's ensemble as a separate task, independently distilling each to preserve its unique contributions and avoid dilution.

Federated Learning, Knowledge Distillation +2
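
A hedged sketch of the per-prototype distillation idea described above: each device prototype's ensemble contributes its own softened-label knowledge-distillation term, and the terms are kept separate rather than merged into one averaged teacher. The softmax temperature, within-ensemble averaging, and task weights are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def kd_loss(student_logits, teacher_logits, T=2.0):
        """KL(teacher || student) on temperature-softened class distributions."""
        p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
        return float(np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)))

    def per_prototype_objective(student_logits, prototype_ensembles, weights):
        """One independent distillation term per device-prototype ensemble (hypothetical weights)."""
        losses = [kd_loss(student_logits, np.mean(ens, axis=0)) for ens in prototype_ensembles]
        return float(np.dot(weights, losses))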

Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning

no code implementations2 Sep 2024 Vyacheslav Kungurtsev, Yuanfang Peng, Jianyang Gu, Saeed Vahidian, Anthony Quinn, Fadwa Idlahcen, Yiran Chen

Dataset distillation (DD) is an increasingly important technique that focuses on constructing a synthetic dataset capable of capturing the core information in the training data, so that models trained on the synthetic dataset achieve performance comparable to models trained on the original data.

Dataset Distillation

Probabilistic Iterative Hard Thresholding for Sparse Learning

1 code implementation2 Sep 2024 Matteo Bergamaschi, Andrea Cristofari, Vyacheslav Kungurtsev, Francesco Rinaldi

For statistical modeling wherein the data regime is unfavorable in terms of dimensionality relative to the sample size, finding hidden sparsity in the ground truth can be critical in formulating an accurate statistical model.

Sparse Learning
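
For reference on the hard-thresholding family the title points to, here is a minimal sketch of stochastic iterative hard thresholding for sparse least squares; the mini-batch gradient, step size, and sparsity level k are illustrative, and the paper's probabilistic variant treats the stochasticity differently.

    import numpy as np

    def hard_threshold(x, k):
        """Keep the k largest-magnitude entries, zero the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        out[idx] = x[idx]
        return out

    def stochastic_iht(A, y, k, step=0.01, iters=200, batch=32, seed=0):
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(iters):
            i = rng.choice(n, size=min(batch, n), replace=False)
            grad = A[i].T @ (A[i] @ x - y[i]) / len(i)   # mini-batch least-squares gradient
            x = hard_threshold(x - step * grad, k)       # gradient step, then hard thresholding
        return x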

Empirical Bayes for Dynamic Bayesian Networks Using Generalized Variational Inference

no code implementations25 Jun 2024 Vyacheslav Kungurtsev, Apaar, Aarya Khandelwal, Parth Sandeep Rastogi, Bapi Chatterjee, Jakub Mareček

This approach builds on a recent development in Generalized Variational Inference and indicates the potential of sampling the uncertainty over a mixture of DAG structures as well as a parameter posterior.

Variational Inference

Learning Dynamic Bayesian Networks from Data: Foundations, First Principles and Numerical Comparisons

no code implementations25 Jun 2024 Vyacheslav Kungurtsev, Fadwa Idlahcen, Petr Rysavy, Pavel Rytir, Ales Wodecki

We present the analytical form of the models, with a comprehensive discussion on the interdependence between structure and weights in a DBN model and their implications for learning.

Form

Group Distributionally Robust Dataset Distillation with Risk Minimization

1 code implementation7 Feb 2024 Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen

The most popular methods for constructing the synthetic data rely on matching the convergence properties of training the model with the synthetic dataset and the training dataset.

Dataset Distillation, Federated Learning +2
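
One common instance of "matching the convergence properties" is gradient matching between real and synthetic batches. The sketch below assumes a linear softmax classifier and a cosine-distance matching loss; it is generic dataset-distillation machinery, not the paper's group distributionally robust formulation.

    import numpy as np

    def softmax_grad(W, X, y_onehot):
        """Gradient of softmax cross-entropy for a linear classifier W (features x classes)."""
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        return X.T @ (p - y_onehot) / len(X)

    def gradient_matching_loss(W, X_real, y_real, X_syn, y_syn):
        """1 - cosine similarity between real-data and synthetic-data gradients."""
        g_r = softmax_grad(W, X_real, y_real).ravel()
        g_s = softmax_grad(W, X_syn, y_syn).ravel()
        return 1.0 - g_r @ g_s / (np.linalg.norm(g_r) * np.linalg.norm(g_s) + 1e-12)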

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

1 code implementation3 Dec 2023 Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.

Dataset Distillation, Federated Learning

Efficient Dataset Distillation via Minimax Diffusion

1 code implementation CVPR 2024 Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen

Observing that key factors for constructing an effective surrogate dataset are representativeness and diversity, we design additional minimax criteria in the generative training to enhance these facets for the generated images of diffusion models.

Dataset Distillation, Diversity
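
As a rough illustration only, representativeness and diversity criteria can be expressed in feature space as a mean-matching term (to minimize) and a pairwise-spread term (to maximize); the functions below are expository assumptions, not the paper's actual minimax criteria for diffusion training.

    import numpy as np

    def representativeness(gen_feats, real_feats):
        """Distance between generated and real feature means (smaller = more representative)."""
        return np.linalg.norm(gen_feats.mean(axis=0) - real_feats.mean(axis=0))

    def diversity(gen_feats):
        """Mean pairwise distance among generated features (larger = more diverse)."""
        diffs = gen_feats[:, None, :] - gen_feats[None, :, :]
        return np.linalg.norm(diffs, axis=-1).mean()

    def surrogate_criterion(gen_feats, real_feats, lam=0.5):
        # pull generated features toward the real distribution while penalizing collapse
        return representativeness(gen_feats, real_feats) - lam * diversity(gen_feats)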

Quantum Solutions to the Privacy vs. Utility Tradeoff

no code implementations6 Jul 2023 Sagnik Chatterjee, Vyacheslav Kungurtsev

In this work, we propose a novel architecture (and several variants thereof) based on quantum cryptographic primitives with provable privacy and security guarantees regarding membership inference attacks on generative models.

A Stochastic-Gradient-based Interior-Point Algorithm for Solving Smooth Bound-Constrained Optimization Problems

no code implementations28 Apr 2023 Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, Qi Wang

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results.
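
A minimal sketch of one stochastic-gradient step on a log-barrier reformulation of the bound constraints l <= x <= u, with a fraction-to-the-boundary safeguard to keep iterates strictly interior; the barrier parameter, step size, and safeguard constant are illustrative, and the paper's algorithm and analysis are more involved.

    import numpy as np

    def stochastic_barrier_step(x, stoch_grad_f, l, u, mu=0.1, step=0.01, frac=0.995):
        """One stochastic-gradient step on f(x) - mu * sum(log(x - l) + log(u - x))."""
        g = stoch_grad_f(x) - mu / (x - l) + mu / (u - x)
        d = -step * g
        # fraction-to-the-boundary rule: scale the step so x + alpha * d stays interior
        alpha = 1.0
        neg, pos = d < 0, d > 0
        if neg.any():
            alpha = min(alpha, frac * np.min((l[neg] - x[neg]) / d[neg]))
        if pos.any():
            alpha = min(alpha, frac * np.min((u[pos] - x[pos]) / d[pos]))
        return x + alpha * d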

Riemannian Stochastic Approximation for Minimizing Tame Nonsmooth Objective Functions

no code implementations1 Feb 2023 Johannes Aspman, Vyacheslav Kungurtsev, Reza Roohi Seraji

In many learning applications, the model parameters are structurally constrained in a way that can be modeled as lying on a Riemannian manifold.

Riemannian optimization
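
For concreteness, a minimal sketch of one Riemannian stochastic gradient step on the unit sphere: project the stochastic gradient onto the tangent space at the current point, step, then retract back onto the manifold. The sphere, step size, and retraction are illustrative assumptions; the paper concerns nonsmooth tame objectives, where subgradients replace gradients.

    import numpy as np

    def riemannian_sgd_step_sphere(x, stoch_grad, lr=0.1):
        """One Riemannian SGD step on the unit sphere (assumes ||x|| = 1)."""
        g = stoch_grad(x)
        g_tan = g - (x @ g) * x          # projection onto the tangent space at x
        y = x - lr * g_tan               # step in the tangent direction
        return y / np.linalg.norm(y)     # retraction back onto the sphere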

When Do Curricula Work in Federated Learning?

1 code implementation ICCV 2023 Saeed Vahidian, Sreevatsank Kadaveru, Woonjoon Baek, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL.

Federated Learning

Jump-Diffusion Langevin Dynamics for Multimodal Posterior Sampling

no code implementations2 Nov 2022 Jacopo Guidolin, Vyacheslav Kungurtsev, Ondřej Kuželka

Bayesian methods of sampling from a posterior distribution are becoming increasingly popular due to their ability to precisely display the uncertainty of a model fit.

Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence

no code implementations13 Oct 2022 Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli

In this paper, we focus on neural networks with two and three layers and provide a rigorous understanding of the properties of the solutions found by SHB: \emph{(i)} stability after dropping out part of the neurons, \emph{(ii)} connectivity along a low-loss path, and \emph{(iii)} convergence to the global optimum.
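
For reference, the stochastic heavy ball (SHB) update analyzed here is the familiar momentum form of SGD; a minimal sketch follows, with the learning rate and momentum coefficient as illustrative values.

    import numpy as np

    def shb_step(w, v, stoch_grad, lr=0.01, beta=0.9):
        """One stochastic heavy ball update: momentum buffer v, parameters w."""
        v = beta * v - lr * stoch_grad(w)
        return w + v, v

    # toy usage on a quadratic, where grad(w) = w
    w, v = np.ones(3), np.zeros(3)
    for _ in range(100):
        w, v = shb_step(w, v, lambda x: x)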

Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces

1 code implementation21 Sep 2022 Saeed Vahidian, Mahdi Morafah, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin

This small set of principal vectors is provided to the server so that the server can directly identify distribution similarities among the clients to form clusters.

Federated Learning
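
A minimal sketch of the subspace comparison described above, under illustrative assumptions: each client keeps only the top singular vectors of its data, and the server computes principal angles between client subspaces (from the SVD of U1^T U2) to group clients with similar data distributions.

    import numpy as np

    def client_principal_vectors(X, r=3):
        """Top-r left singular vectors of a client's (samples x features) data, as feature-space directions."""
        U, _, _ = np.linalg.svd(X.T, full_matrices=False)
        return U[:, :r]

    def principal_angles(U1, U2):
        """Principal angles between two subspaces with orthonormal bases U1, U2."""
        s = np.linalg.svd(U1.T @ U2, compute_uv=False)
        return np.arccos(np.clip(s, -1.0, 1.0))

    # server-side similarity: smaller angles suggest more similar client data subspaces
    rng = np.random.default_rng(0)
    Xa, Xb = rng.normal(size=(100, 10)), rng.normal(size=(100, 10))
    angles = principal_angles(client_principal_vectors(Xa), client_principal_vectors(Xb))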

Stochastic Langevin Differential Inclusions with Applications to Machine Learning

no code implementations23 Jun 2022 Fabio V. Difonzo, Vyacheslav Kungurtsev, Jakub Marecek

In this paper, we show some foundational results regarding the flow and asymptotic properties of Langevin-type Stochastic Differential Inclusions under assumptions appropriate to the machine-learning settings.

BIG-bench Machine Learning

Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD

1 code implementation13 Mar 2022 Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

Our scheme is based on the following algorithmic tools and features: (a) asynchronous local gradient updates on the shared memory of workers, (b) partial backpropagation, and (c) non-blocking in-place averaging of the local models.

Blocking, Image Classification

Randomized Algorithms for Monotone Submodular Function Maximization on the Integer Lattice

no code implementations19 Nov 2021 Alberto Schiabel, Vyacheslav Kungurtsev, Jakub Marecek

Optimization problems with set submodular objective functions have many real-world applications.

Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks

no code implementations3 Nov 2021 Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli

Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning.

Decentralized Bayesian Learning with Metropolis-Adjusted Hamiltonian Monte Carlo

no code implementations15 Jul 2021 Vyacheslav Kungurtsev, Adam Cobb, Tara Javidi, Brian Jalaian

Federated learning performed by a decentralized network of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices.

Federated Learning

Trilevel and Multilevel Optimization using Monotone Operator Theory

no code implementations19 May 2021 Allahkaram Shafiei, Vyacheslav Kungurtsev, Jakub Marecek

We consider a rather general class of multi-level optimization problems, where a convex objective function is to be minimized subject to constraints of optimality of nested convex optimization problems.
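
An illustrative trilevel instance of this nested structure, written with generic objectives f_1, f_2, f_3 assumed convex in their own decision variables (the paper treats a more general multilevel class via monotone operator theory):

    \min_{x} \; f_1\bigl(x,\, y^{*}(x),\, z^{*}(x, y^{*}(x))\bigr)
    \quad \text{s.t.} \quad
    y^{*}(x) \in \operatorname*{arg\,min}_{y} \; f_2\bigl(x, y, z^{*}(x, y)\bigr),
    \qquad
    z^{*}(x, y) \in \operatorname*{arg\,min}_{z} \; f_3(x, y, z).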

Local SGD Meets Asynchrony

no code implementations1 Jan 2021 Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

On the theoretical side, we show that this method guarantees ergodic convergence for non-convex objectives, and achieves the classic sublinear rate under standard assumptions.

Blocking

Stochastic Gradient Langevin with Delayed Gradients

no code implementations12 Jun 2020 Vyacheslav Kungurtsev, Bapi Chatterjee, Dan Alistarh

Stochastic Gradient Langevin Dynamics (SGLD) ensures strong guarantees with regard to convergence in measure for sampling log-concave posterior distributions by adding noise to stochastic gradient iterates.

Stochastic Optimization
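
A minimal sketch of SGLD with a fixed gradient delay: each update is the usual Langevin step, but the gradient applied at iteration t was computed on an iterate from several steps earlier. The fixed delay and FIFO buffer are illustrative simplifications of a genuinely asynchronous system.

    import numpy as np

    def sgld_delayed(grad_fn, x0, lr=1e-3, delay=5, n_steps=1000, seed=0):
        """SGLD where stale gradients (computed `delay` iterates ago) drive the update."""
        rng = np.random.default_rng(seed)
        x = x0.copy()
        buffer = [grad_fn(x)] * delay            # warm-start the stale-gradient queue
        samples = []
        for _ in range(n_steps):
            g = buffer.pop(0)                    # oldest (delayed) gradient
            x = x - lr * g + np.sqrt(2 * lr) * rng.standard_normal(x.shape)
            buffer.append(grad_fn(x))            # fresh gradient, used `delay` steps later
            samples.append(x.copy())
        return np.array(samples)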

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent

no code implementations16 Jan 2020 Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh

Our framework, called elastic consistency, enables us to derive convergence bounds for a variety of distributed SGD methods used in practice to train large-scale machine learning models.

BIG-bench Machine Learning

Asynchronous Stochastic Subgradient Methods for General Nonsmooth Nonconvex Optimization

no code implementations25 Sep 2019 Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh

This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks.

Scheduling

Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems

no code implementations30 Jun 2018 Vyacheslav Kungurtsev, Tomas Pevny

Machine learning models incorporating multiple layered learning networks have been shown to be effective for various classification problems.

BIG-bench Machine Learning, General Classification

Algorithms for solving optimization problems arising from deep neural net models: smooth problems

no code implementations30 Jun 2018 Vyacheslav Kungurtsev, Tomas Pevny

Machine learning models incorporating multiple layered learning networks have been shown to be effective for various classification problems.

BIG-bench Machine Learning, General Classification
