no code implementations • 10 Feb 2025 • Jeremy Kulcsar, Vyacheslav Kungurtsev, Georgios Korpas, Giulio Giaconi, William Shoosmith
In this work we investigate the potential of solving the discrete Optimal Transport (OT) problem with entropy regularization in a federated learning setting.
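A minimal sketch of the centralized building block, entropy-regularized discrete OT solved by Sinkhorn iterations (this is the standard algorithm, not the paper's federated protocol; the histograms a, b, cost matrix C, and regularization eps are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropy-regularized OT between histograms a and b with cost matrix C.

    Returns the transport plan P minimizing <P, C> - eps * H(P)
    subject to the marginal constraints P 1 = a and P^T 1 = b.
    """
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # rescale columns to match marginal b
        u = a / (K @ v)                   # rescale rows to match marginal a
    return u[:, None] * K * v[None, :]    # P = diag(u) K diag(v)
```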
no code implementations • 10 Jan 2025 • Vyacheslav Kungurtsev, Leonardo Christov Moore, Gustav Sir, Martin Krutsky
Causal Learning has emerged as a major theme of research in statistics and machine learning in recent years, promising specific computational techniques that, when applied to datasets, reveal the true nature of cause and effect in a number of important domains.

1 code implementation • 27 Sep 2024 • Mahdi Morafah, Vyacheslav Kungurtsev, Hojin Chang, Chen Chen, Bill Lin
To address these challenges, we introduce TAKFL, a novel KD-based framework that treats the knowledge transfer from each device prototype's ensemble as a separate task, independently distilling each to preserve its unique contributions and avoid dilution.
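A hedged sketch of the "one distillation task per device prototype" idea, assuming the server has ensemble logits from each prototype and (hypothetical) per-task weights; it is not the TAKFL implementation itself:

```python
import torch
import torch.nn.functional as F

def per_prototype_kd_loss(student_logits, prototype_logits, weights, T=2.0):
    """Distill from each device prototype's ensemble as a separate task.

    prototype_logits: list of teacher logits, one tensor per device prototype.
    weights: per-prototype weights (hypothetical) controlling each task's share.
    """
    total = 0.0
    for w, logits_t in zip(weights, prototype_logits):
        p_teacher = F.softmax(logits_t / T, dim=-1)
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        # one KL term per prototype, kept separate so no single ensemble dominates
        total = total + w * F.kl_div(log_p_student, p_teacher,
                                     reduction="batchmean") * (T * T)
    return total
```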
no code implementations • 2 Sep 2024 • Vyacheslav Kungurtsev, Yuanfang Peng, Jianyang Gu, Saeed Vahidian, Anthony Quinn, Fadwa Idlahcen, Yiran Chen
Dataset distillation (DD) is an increasingly important technique that constructs a synthetic dataset capturing the core information of the training data, so that models trained on the synthetic set achieve performance comparable to models trained on the full data.
1 code implementation • 2 Sep 2024 • Matteo Bergamaschi, Andrea Cristofari, Vyacheslav Kungurtsev, Francesco Rinaldi
For statistical modeling wherein the data regime is unfavorable in terms of dimensionality relative to the sample size, finding hidden sparsity in the ground truth can be critical in formulating an accurate statistical model.
no code implementations • 25 Jun 2024 • Vyacheslav Kungurtsev, Apaar, Aarya Khandelwal, Parth Sandeep Rastogi, Bapi Chatterjee, Jakub Mareček
This approach uses a recent development of Generalized Variational Inference, and indicates the potential of sampling the uncertainty of a mixture of DAG structures as well as a parameter posterior.
no code implementations • 25 Jun 2024 • Vyacheslav Kungurtsev, Fadwa Idlahcen, Petr Rysavy, Pavel Rytir, Ales Wodecki
We present the analytical form of the models, with a comprehensive discussion on the interdependence between structure and weights in a DBN model and their implications for learning.
1 code implementation • 7 Feb 2024 • Saeed Vahidian, Mingyu Wang, Jianyang Gu, Vyacheslav Kungurtsev, Wei Jiang, Yiran Chen
The most popular methods for constructing the synthetic data rely on matching the convergence behavior of a model trained on the synthetic dataset with that of a model trained on the original training dataset.
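One common way to instantiate such matching is gradient matching: make the gradients induced by the synthetic batch align with those induced by real data. A minimal PyTorch sketch under that assumption (it may differ from the method actually proposed in the paper):

```python
import torch

def gradient_matching_loss(model, loss_fn, real_x, real_y, syn_x, syn_y):
    # gradients of the training loss on a real batch (treated as targets)
    g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                 model.parameters())
    # gradients on the synthetic batch, kept differentiable so the synthetic
    # images themselves can be updated by backpropagating through this loss
    g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    match = 0.0
    for gr, gs in zip(g_real, g_syn):
        gr, gs = gr.detach().flatten(), gs.flatten()
        # cosine distance between the two gradient directions, layer by layer
        match = match + 1.0 - torch.dot(gr, gs) / (gr.norm() * gs.norm() + 1e-8)
    return match
```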
1 code implementation • 3 Dec 2023 • Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen
This process allows local devices to train smaller surrogate models while enabling the training of a larger global model on the server, effectively minimizing resource utilization.
1 code implementation • CVPR 2024 • Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen
Observing that key factors for constructing an effective surrogate dataset are representativeness and diversity, we design additional minimax criteria in the generative training to enhance these facets for the generated images of diffusion models.
no code implementations • 6 Jul 2023 • Sagnik Chatterjee, Vyacheslav Kungurtsev
In this work, we propose a novel architecture (and several variants thereof) based on quantum cryptographic primitives with provable privacy and security guarantees regarding membership inference attacks on generative models.
no code implementations • 28 Apr 2023 • Frank E. Curtis, Vyacheslav Kungurtsev, Daniel P. Robinson, Qi Wang
A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results.
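A minimal sketch of the general idea, a stochastic-gradient step on a log-barrier reformulation of the bound constraints; the step rule and safeguard below are illustrative, not the algorithm analyzed in the paper:

```python
import numpy as np

def barrier_sgd_step(x, stoch_grad, lower, upper, mu=0.1, lr=1e-2):
    """One stochastic-gradient step on f(x) - mu * sum(log(x - lower) + log(upper - x)).

    stoch_grad is an unbiased estimate of the gradient of f at x; the barrier
    term keeps the iterate in the interior of the box lower < x < upper.
    """
    barrier_grad = -mu / (x - lower) + mu / (upper - x)
    x_new = x - lr * (stoch_grad + barrier_grad)
    # simple safeguard to stay strictly feasible (illustrative, not the paper's rule)
    return np.clip(x_new, lower + 1e-8, upper - 1e-8)
```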
no code implementations • 1 Feb 2023 • Johannes Aspman, Vyacheslav Kungurtsev, Reza Roohi Seraji
In many learning applications, the parameters of a model are structurally constrained in a way that can be modeled as the parameters lying on a Riemannian manifold.
1 code implementation • ICCV 2023 • Saeed Vahidian, Sreevatsank Kadaveru, Woonjoon Baek, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin
Specifically, we aim to investigate how ordered learning principles can contribute to alleviating the heterogeneity effects in FL.
no code implementations • 2 Nov 2022 • Jacopo Guidolin, Vyacheslav Kungurtsev, Ondřej Kuželka
Bayesian methods of sampling from a posterior distribution are becoming increasingly popular due to their ability to precisely display the uncertainty of a model fit.
no code implementations • 13 Oct 2022 • Diyuan Wu, Vyacheslav Kungurtsev, Marco Mondelli
In this paper, we focus on neural networks with two and three layers and provide a rigorous understanding of the properties of the solutions found by SHB: \emph{(i)} stability after dropping out part of the neurons, \emph{(ii)} connectivity along a low-loss path, and \emph{(iii)} convergence to the global optimum.
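For reference, the stochastic heavy ball (SHB) iteration the analysis concerns is the momentum update below (a standard form; the hyperparameters are illustrative):

```python
import numpy as np

def shb_step(w, v, stoch_grad, lr=0.01, beta=0.9):
    """One stochastic heavy ball step:
    v_{t+1} = beta * v_t - lr * g_t,   w_{t+1} = w_t + v_{t+1}."""
    v = beta * v - lr * stoch_grad
    w = w + v
    return w, v
```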
1 code implementation • 21 Sep 2022 • Saeed Vahidian, Mahdi Morafah, Weijia Wang, Vyacheslav Kungurtsev, Chen Chen, Mubarak Shah, Bill Lin
This small set of principal vectors is provided to the server so that the server can directly identify distribution similarities among the clients to form clusters.
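A hedged sketch of how such principal vectors could be computed client-side and compared server-side (SVD of the local data and principal-angle similarity); the clustering rule itself is omitted and the details may differ from the paper:

```python
import numpy as np

def client_principal_vectors(X, k=3):
    """Top-k right singular vectors of a client's local data matrix X (n x d)."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k]                                     # k x d, orthonormal rows

def subspace_similarity(U, V):
    """Server-side similarity between two clients' principal subspaces.

    The singular values of U V^T are the cosines of the principal angles,
    so a larger sum indicates more similar local data distributions.
    """
    return np.linalg.svd(U @ V.T, compute_uv=False).sum()
```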
no code implementations • 23 Jun 2022 • Fabio V. Difonzo, Vyacheslav Kungurtsev, Jakub Marecek
In this paper, we show some foundational results regarding the flow and asymptotic properties of Langevin-type Stochastic Differential Inclusions under assumptions appropriate to the machine-learning settings.
1 code implementation • 13 Mar 2022 • Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
Our scheme is based on the following algorithmic tools and features: (a) asynchronous local gradient updates on the shared-memory of workers, (b) partial backpropagation, and (c) non-blocking in-place averaging of the local models.
no code implementations • 19 Nov 2021 • Alberto Schiabel, Vyacheslav Kungurtsev, Jakub Marecek
Optimization problems with set submodular objective functions have many real-world applications.
no code implementations • 3 Nov 2021 • Alexander Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli
Understanding the properties of neural networks trained via stochastic gradient descent (SGD) is at the heart of the theory of deep learning.
no code implementations • 15 Jul 2021 • Vyacheslav Kungurtsev, Adam Cobb, Tara Javidi, Brian Jalaian
Federated learning performed by a decentralized network of agents is becoming increasingly important with the prevalence of embedded software on autonomous devices.
no code implementations • 19 May 2021 • Allahkaram Shafiei, Vyacheslav Kungurtsev, Jakub Marecek
We consider a rather general class of multi-level optimization problems, in which a convex objective function is to be minimized subject to constraints requiring the optimality of nested convex optimization problems.
no code implementations • 1 Jan 2021 • Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
On the theoretical side, we show that this method guarantees ergodic convergence for non-convex objectives, and achieves the classic sublinear rate under standard assumptions.
no code implementations • 12 Jun 2020 • Vyacheslav Kungurtsev, Bapi Chatterjee, Dan Alistarh
Stochastic Gradient Langevin Dynamics (SGLD) provides strong guarantees with regard to convergence in measure for sampling log-concave posterior distributions by adding noise to stochastic gradient iterates.
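The SGLD iteration referred to here is the standard one: a stochastic-gradient step plus injected Gaussian noise scaled to the step size. A minimal sketch:

```python
import numpy as np

def sgld_step(theta, stoch_grad, step_size, rng):
    """theta_{k+1} = theta_k - eta * g_k + sqrt(2 * eta) * xi,  xi ~ N(0, I),
    where g_k is a stochastic gradient of the negative log-posterior at theta_k."""
    noise = rng.standard_normal(theta.shape)
    return theta - step_size * stoch_grad + np.sqrt(2.0 * step_size) * noise
```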
no code implementations • 16 Jan 2020 • Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
Our framework, called elastic consistency, enables us to derive convergence bounds for a variety of distributed SGD methods used in practice to train large-scale machine learning models.
no code implementations • 25 Sep 2019 • Vyacheslav Kungurtsev, Malcolm Egan, Bapi Chatterjee, Dan Alistarh
This is all the more surprising since these objectives are the ones appearing in the training of deep neural networks.
no code implementations • 7 Mar 2019 • Ondrej Kuzelka, Vyacheslav Kungurtsev
We study lifted weight learning of Markov logic networks.
no code implementations • 30 Jun 2018 • Vyacheslav Kungurtsev, Tomas Pevny
Machine learning models incorporating multiple layered learning networks have been shown to be effective for various classification problems.