Search Results for author: Nikola Konstantinov

Found 10 papers, 4 papers with code

Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains

no code implementations 11 Mar 2024 Nikita Tsoy, Anna Mihalkova, Teodora Todorova, Nikola Konstantinov

In this paper, we study the question of when and how a server could design an FL protocol that is provably beneficial for all participants.

Federated Learning Stochastic Optimization

Human-Guided Fair Classification for Natural Language Processing

1 code implementation 20 Dec 2022 Florian E. Dorner, Momchil Peychev, Nikola Konstantinov, Naman Goel, Elliott Ash, Martin Vechev

While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals).

Classification Fairness +1

Data Leakage in Federated Averaging

1 code implementation 24 Jun 2022 Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev

On the popular FEMNIST dataset, we demonstrate that we successfully recover, on average, more than 45% of a client's images from realistic FedAvg updates computed over 10 local epochs of 10 batches with 5 images each, compared to less than 10% for the baseline.

Federated Learning
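The attack above targets updates produced by Federated Averaging (FedAvg), in which each client runs several local SGD epochs before the server averages the resulting weights. A minimal sketch of the protocol follows; the toy least-squares objective and all function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def local_sgd(weights, data, lr=0.1, epochs=10):
    """Run local SGD epochs on one client's batches (toy linear model).
    The least-squares loss here is a hypothetical stand-in for the
    client's real training objective."""
    w = weights.copy()
    for _ in range(epochs):
        for X, y in data:
            grad = X.T @ (X @ w - y) / len(y)  # MSE gradient
            w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: each client trains locally from the shared
    global weights, then the server averages the client models."""
    updates = [local_sgd(global_w, batches) for batches in clients]
    return np.mean(updates, axis=0)
```

The per-client weight deltas implied by such a round are exactly the "FedAvg updates" that gradient-leakage attacks attempt to invert back into training images.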

FLEA: Provably Robust Fair Multisource Learning from Unreliable Training Data

1 code implementation 22 Jun 2021 Eugenia Iofinova, Nikola Konstantinov, Christoph H. Lampert

In this work we address the problem of fair learning from unreliable training data in the robust multisource setting, where the available training data comes from multiple sources, a fraction of which might not be representative of the true data distribution.

Fairness

Fairness-Aware PAC Learning from Corrupted Data

no code implementations 11 Feb 2021 Nikola Konstantinov, Christoph H. Lampert

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems.

Fairness PAC learning

Fairness Through Regularization for Learning to Rank

no code implementations 11 Feb 2021 Nikola Konstantinov, Christoph H. Lampert

Given the abundance of ranking applications in recent years, addressing fairness concerns around automated ranking systems has become necessary for increasing trust among end-users.

Binary Classification Fairness +1

On the Sample Complexity of Adversarial Multi-Source PAC Learning

no code implementations ICML 2020 Nikola Konstantinov, Elias Frantar, Dan Alistarh, Christoph H. Lampert

We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms.

PAC learning

Robust Learning from Untrusted Sources

2 code implementations 29 Jan 2019 Nikola Konstantinov, Christoph Lampert

Modern machine learning methods often require more data for training than a single expert can provide.

Distributed Optimization Learning Theory

The Convergence of Sparsified Gradient Methods

no code implementations NeurIPS 2018 Dan Alistarh, Torsten Hoefler, Mikael Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli

Distributed training of massive machine learning models, in particular deep neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.

Quantization
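A common sparsified gradient method of the kind this paper analyzes is top-k SGD: each worker transmits only the k largest-magnitude gradient coordinates and keeps the dropped mass locally as error feedback. A minimal sketch, where the class and method names and the error-feedback variant are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries; zero out the rest."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

class SparsifiedSGDWorker:
    """Worker that sends top-k gradients with local error accumulation,
    so coordinates dropped in one step are re-tried in later steps."""
    def __init__(self, dim):
        self.residual = np.zeros(dim)  # gradient mass not yet transmitted

    def compress(self, grad, k):
        corrected = grad + self.residual       # add back past error
        sparse = top_k_sparsify(corrected, k)  # transmit only k entries
        self.residual = corrected - sparse     # remember what was dropped
        return sparse
```

Since the residual carries forward everything that was not sent, no gradient information is permanently lost, which is the intuition behind convergence guarantees for such schemes.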

The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory

no code implementations 23 Mar 2018 Dan Alistarh, Christopher De Sa, Nikola Konstantinov

Stochastic Gradient Descent (SGD) is a fundamental algorithm in machine learning, representing the optimization backbone for training several classic models, from regression to neural networks.

BIG-bench Machine Learning
