Search Results for author: Dimitrios Dimitriadis

Found 26 papers, 7 papers with code

FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations

1 code implementation • 25 Mar 2022 • Mirian Hipolito Garcia, Andre Manoel, Daniel Madrigal Diaz, FatemehSadat Mireshghallah, Robert Sim, Dimitrios Dimitriadis

We compare the platform with other state-of-the-art platforms and describe available features of FLUTE for experimentation in core areas of active research, such as optimization, privacy, and scalability.

Federated Learning • Quantization • +3

GPT-FL: Generative Pre-trained Model-Assisted Federated Learning

1 code implementation • 3 Jun 2023 • Tuo Zhang, Tiantian Feng, Samiul Alam, Dimitrios Dimitriadis, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr

Through comprehensive ablation analysis, we discover that the downstream model generated by synthetic data plays a crucial role in controlling the direction of gradient diversity during FL training, which enhances convergence speed and contributes to the notable accuracy boost observed with GPT-FL.

Federated Learning

Distribution inference risks: Identifying and mitigating sources of leakage

2 code implementations • 18 Sep 2022 • Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West

We identify three sources of leakage: (1) memorizing specific information about the $\mathbb{E}[Y|X]$ (expected label given the feature values) of interest to the adversary, (2) wrong inductive bias of the model, and (3) finiteness of the training data.

Inductive Bias

Counterfactual Augmentation for Multimodal Learning Under Presentation Bias

1 code implementation • 23 May 2023 • Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma

In real-world machine learning systems, labels are often derived from user behaviors that the system wishes to encourage.

counterfactual

Progressive Neural Networks for Transfer Learning in Emotion Recognition

1 code implementation • 10 Jun 2017 • John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Emily Mower Provost

Many paralinguistic tasks are closely related and thus representations learned in one domain can be leveraged for another.

Emotion Recognition • Transfer Learning

Low-Latency Speaker-Independent Continuous Speech Separation

no code implementations • 13 Apr 2019 • Takuya Yoshioka, Zhuo Chen, Changliang Liu, Xiong Xiao, Hakan Erdogan, Dimitrios Dimitriadis

Speaker independent continuous speech separation (SI-CSS) is a task of converting a continuous audio stream, which may contain overlapping voices of unknown speakers, into a fixed number of continuous signals each of which contains no overlapping speech segment.

speech-recognition • Speech Recognition • +1

Combining Acoustics, Content and Interaction Features to Find Hot Spots in Meetings

no code implementations • 24 Oct 2019 • Dave Makhervaks, William Hinthorn, Dimitrios Dimitriadis, Andreas Stolcke

Involvement hot spots have been proposed as a useful concept for meeting analysis and studied off and on for over 15 years.

Word Embeddings

A Review of Speaker Diarization: Recent Advances with Deep Learning

no code implementations • 24 Jan 2021 • Tae Jin Park, Naoyuki Kanda, Dimitrios Dimitriadis, Kyu J. Han, Shinji Watanabe, Shrikanth Narayanan

Speaker diarization is a task to label audio or video recordings with classes that correspond to speaker identity, or in short, a task to identify "who spoke when".

Retrieval • speaker-diarization • +3

Dynamic Gradient Aggregation for Federated Domain Adaptation

no code implementations • 14 Jun 2021 • Dimitrios Dimitriadis, Kenichi Kumatani, Robert Gmyr, Yashesh Gaur, Sefik Emre Eskimez

The proposed scheme is based on a weighted gradient aggregation using two-step optimization to offer a flexible training pipeline.

Domain Adaptation • Federated Learning • +3
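
The snippet above describes a weighted gradient aggregation with two-step optimization: clients contribute gradients that are combined under per-client weights, and the server then applies the combined gradient with its own optimizer. The paper's actual weighting rule and optimizers are not given here; this is a minimal illustrative sketch with hypothetical function names, using uniform-style normalized weights and plain server-side SGD as placeholders.

```python
import numpy as np

def weighted_gradient_aggregation(client_grads, client_weights):
    # Step 1: combine per-client gradients under normalized weights
    # (the actual weighting rule in the paper may differ).
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()  # normalize so the result is a convex combination
    return sum(wi * g for wi, g in zip(w, client_grads))

def server_step(params, agg_grad, lr=0.1):
    # Step 2: the server applies the aggregated gradient with its own
    # optimizer (plain SGD here as a stand-in).
    return params - lr * agg_grad
```

With equal weights this reduces to simple gradient averaging; non-uniform weights let the server emphasize some clients' updates over others.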

Tackling Dynamics in Federated Incremental Learning with Variational Embedding Rehearsal

no code implementations • 19 Oct 2021 • Tae Jin Park, Kenichi Kumatani, Dimitrios Dimitriadis

Federated Learning is a fast-growing area of ML where the training datasets are extremely distributed, all while dynamically changing over time.

Federated Learning • Incremental Learning

Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning

no code implementations • 27 Apr 2022 • Yae Jee Cho, Andre Manoel, Gauri Joshi, Robert Sim, Dimitrios Dimitriadis

In this work, we propose a novel ensemble knowledge transfer method named Fed-ET in which small models (different in architecture) are trained on clients, and used to train a larger model at the server.

Ensemble Learning • Federated Learning • +1
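
The Fed-ET snippet above describes architecturally different small client models whose knowledge is transferred into one larger server model. A common way to realize such ensemble knowledge transfer is distillation on soft targets averaged over the client ensemble; the sketch below shows only that generic averaging step, with hypothetical names, and is not the paper's actual Fed-ET procedure.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_soft_labels(client_logits, weights=None):
    # Average the small client models' predicted distributions into
    # soft targets that a larger server model could be distilled on.
    # Heterogeneous architectures are fine: only the output space
    # (number of classes) must match across clients.
    probs = np.stack([softmax(l) for l in client_logits])
    if weights is None:
        weights = np.full(len(client_logits), 1.0 / len(client_logits))
    return np.tensordot(np.asarray(weights), probs, axes=1)
```

The server would then minimize, e.g., a cross-entropy between its own predictions and these soft labels on some transfer dataset.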

Invariant Aggregator for Defending against Federated Backdoor Attacks

no code implementations • 4 Oct 2022 • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople

Federated learning enables training high-utility models across several clients without directly sharing their private data.

Federated Learning • Model Optimization

Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout

no code implementations • 28 Oct 2022 • Chen Dun, Mirian Hipolito, Chris Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis

Asynchronous learning protocols have regained attention lately, especially in the Federated Learning (FL) setup, where slower clients can severely impede the learning process.

Federated Learning
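
The idea behind distributed dropout in the asynchronous FL setting above is that each client trains only a random subnetwork, so a slow client's stale update touches just the coordinates it was assigned rather than the whole model. The sketch below is an illustrative toy with hypothetical names, not the paper's implementation.

```python
import numpy as np

def sample_submodel_mask(dim, keep_prob, rng):
    # Each client receives a random dropout mask over the parameter
    # vector and trains only the kept coordinates.
    return rng.random(dim) < keep_prob

def merge_async_update(server_params, client_delta, mask):
    # Merge an asynchronously arriving client update only on the
    # coordinates that client actually trained; the rest of the
    # server model is left untouched.
    out = server_params.copy()
    out[mask] += client_delta[mask]
    return out
```

Because updates from different clients tend to hit different coordinates, slow clients interfere less with fast ones than in full-model asynchronous averaging.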

Federated Multilingual Models for Medical Transcript Analysis

no code implementations • 4 Nov 2022 • Andre Manoel, Mirian Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Dan Miller, Danny Karmon, Robert Sim, Dimitrios Dimitriadis

Federated Learning (FL) is a novel machine learning approach that allows the model trainer to access more data samples by training the model across multiple decentralized data sources, even while data access constraints are in place.

Federated Learning
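
The snippet above describes the general FL training loop: the model visits the data rather than the reverse. As a generic illustration (standard FedAvg-style averaging on a toy least-squares objective, not this paper's multilingual setup), one round might look like:

```python
import numpy as np

def local_sgd(params, X, y, lr=0.1, steps=20):
    # Client-side training on private data (toy least-squares objective);
    # the raw data (X, y) never leaves the client.
    for _ in range(steps):
        grad = X.T @ (X @ params - y) / len(y)
        params = params - lr * grad
    return params

def fedavg_round(global_params, client_data):
    # One round: each client trains locally, the server only sees the
    # resulting models and averages them weighted by local dataset size.
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(global_params.copy(), X, y))
        sizes.append(len(y))
    w = np.asarray(sizes, dtype=float) / sum(sizes)
    return sum(wi * u for wi, u in zip(w, updates))
```

Repeating such rounds lets the trainer benefit from all clients' samples while each client's data stays behind its access constraints.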

Local or Global: Selective Knowledge Assimilation for Federated Learning with Limited Labels

no code implementations • ICCV 2023 • Yae Jee Cho, Gauri Joshi, Dimitrios Dimitriadis

For both cross-device and cross-silo settings, we show that FedLabel outperforms other semi-supervised FL baselines by $8$-$24\%$, and even outperforms standard fully supervised FL baselines ($100\%$ labeled data) with only $5$-$20\%$ of labeled data.

Federated Learning • Pseudo Label
