1 code implementation • 22 Apr 2024 • Enmao Diao, Qi Le, Suya Wu, Xinran Wang, Ali Anwar, Jie Ding, Vahid Tarokh
We introduce Collaborative Adaptation (ColA) with Gradient Learning (GL), a parameter-free, model-agnostic fine-tuning approach that decouples the computation of gradients with respect to hidden representations from the computation of gradients with respect to parameters.
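As a rough illustration of that decoupling (a minimal sketch, not the authors' implementation; the adapter module, step size, and fitting loss below are assumptions), one can take the gradient of the loss with respect to a hidden representation only, and then fit an auxiliary module toward the resulting functional-gradient target:

```python
# Hypothetical sketch of decoupled gradient learning (not the authors' code).
import torch
import torch.nn as nn

base = nn.Linear(16, 16)          # frozen pretrained block (assumption)
adapter = nn.Linear(16, 16)       # auxiliary module trained separately
head = nn.Linear(16, 2)
opt = torch.optim.SGD(adapter.parameters(), lr=1e-2)

x = torch.randn(8, 16)
y = torch.randint(0, 2, (8,))

with torch.no_grad():
    h_base = base(x)              # base output; no parameter gradients kept

# Step 1: gradient with respect to the hidden representation only.
h = (h_base + adapter(x).detach()).requires_grad_(True)
loss = nn.functional.cross_entropy(head(h), y)
grad_h = torch.autograd.grad(loss, h)[0]

# Step 2: fit the adapter toward the functional-gradient target.
target = (h - 1.0 * grad_h).detach()                 # step size 1.0 is an assumption
fit_loss = nn.functional.mse_loss(h_base + adapter(x), target)
opt.zero_grad(); fit_loss.backward(); opt.step()
```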
no code implementations • 27 Jan 2024 • Enmao Diao, Taposh Banerjee, Vahid Tarokh
We analyze the performance of this score-based hypothesis testing procedure and derive upper bounds on the probabilities of its Type I and II errors.
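For intuition, a score-based test can be built from the Hyvarinen score, which does not require normalizing constants; the sketch below compares two univariate Gaussian models and thresholds the cumulative score difference (the choice of score, the Gaussian models, and the threshold are illustrative assumptions, not the paper's exact statistic):

```python
# Hypothetical illustration of a score-based test with the Hyvarinen score
# for two univariate Gaussian models; constants below are assumptions.
import numpy as np

def hyvarinen_score(x, mu, sigma2):
    # S(x; p) = d^2/dx^2 log p(x) + 0.5 * (d/dx log p(x))^2
    return -1.0 / sigma2 + (x - mu) ** 2 / (2.0 * sigma2 ** 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=200)       # data drawn from the H1 model

# Cumulative score difference; large values favor H1 over H0.
stat = np.sum(hyvarinen_score(x, mu=0.0, sigma2=1.0)
              - hyvarinen_score(x, mu=1.0, sigma2=1.0))
decide_h1 = stat > 10.0                             # illustrative threshold
print(stat, decide_h1)
```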
1 code implementation • 9 May 2023 • Enmao Diao, Eric W. Tramel, Jie Ding, Tao Zhang
Keyword Spotting (KWS) is a critical aspect of audio-based applications on mobile devices and virtual assistants.
1 code implementation • ICLR 2023 • Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.
no code implementations • 1 Feb 2023 • Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh
This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.
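For reference, the classical CUSUM statistic follows the recursion W_t = max(0, W_{t-1} + l_t), where l_t is the log-likelihood ratio of the newest observation, and raises an alarm once W_t crosses a threshold; the paper's variant replaces this increment. A minimal sketch of the classical recursion (the pre/post-change densities and threshold are assumptions):

```python
# Classical CUSUM recursion (the paper's variant modifies the increment;
# densities and threshold below are illustrative assumptions).
import numpy as np
from scipy.stats import norm

def cusum_stopping_time(x, pre=norm(0, 1), post=norm(1, 1), threshold=5.0):
    w = 0.0
    for t, xt in enumerate(x, start=1):
        llr = post.logpdf(xt) - pre.logpdf(xt)   # log-likelihood ratio increment
        w = max(0.0, w + llr)                    # CUSUM recursion
        if w >= threshold:
            return t                             # declare a change at time t
    return None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(cusum_stopping_time(x))
```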
no code implementations • 17 Dec 2022 • Qi Le, Enmao Diao, Xinran Wang, Ali Anwar, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) have become increasingly important in many application domains, such as digital marketing.
1 code implementation • 10 Jan 2022 • Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg
In this study, we apply a physics-informed Deep Learning technique based on vector quantization to generate a discrete, low-dimensional representation of data from simulations of three-dimensional turbulent flows.
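The core vector-quantization step maps each latent vector produced by the encoder to its nearest codebook entry; a generic sketch of that step follows (the codebook size and latent dimension are assumptions, not the paper's configuration):

```python
# Minimal vector-quantization step (a generic sketch, not the paper's model):
# each latent vector is replaced by its nearest codebook entry.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 8))         # 64 codes of dimension 8 (assumed sizes)
latents = rng.normal(size=(1000, 8))        # encoder outputs for one flow snapshot

# Squared L2 distance from every latent to every code, then nearest-code lookup.
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d2.argmin(axis=1)                   # discrete representation (indices)
quantized = codebook[codes]                 # reconstruction fed to the decoder
print(codes.shape, quantized.shape)
```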
1 code implementation • 7 Dec 2021 • Mohammadreza Momenifar, Enmao Diao, Vahid Tarokh, Andrew D. Bragg
We take a data-driven approach to modeling a three-dimensional turbulent flow with cutting-edge Deep Learning techniques.
1 code implementation • 26 Oct 2021 • Enmao Diao, Vahid Tarokh, Jie Ding
Recommender Systems (RSs) are operated locally by different organizations in many realistic scenarios.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
However, the underlying organizations may have little interest in sharing their local data, models, and objective functions.
1 code implementation • 2 Jun 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
Most existing results on Federated Learning (FL) assume the clients have ground-truth labels.
1 code implementation • 24 Dec 2020 • Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh
We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-sample prediction loss under reasonable assumptions.
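For context, the classical Takeuchi information criterion (which the paper generalizes) penalizes the maximized log-likelihood with a trace correction:

```latex
% Classical Takeuchi information criterion (the paper proposes a generalization).
\mathrm{TIC} = -2 \sum_{i=1}^{n} \log p\!\left(x_i \mid \hat{\theta}\right)
             + 2\, \mathrm{tr}\!\left(\hat{J}^{-1} \hat{I}\right)
```

Here \hat{\theta} is the maximum-likelihood estimate, \hat{I} the empirical outer-product (Fisher-type) matrix of score vectors, and \hat{J} the negative empirical Hessian of the log-likelihood, both evaluated at \hat{\theta}.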
3 code implementations • ICLR 2021 • Enmao Diao, Jie Ding, Vahid Tarokh
In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.
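The key idea can be sketched as width scaling: a client with capability ratio r trains a channel-sliced copy of the global weights, and the server aggregates the overlapping slices. A minimal illustration (the layer shape and the slicing rule below are simplified assumptions):

```python
# Sketch of the width-scaling idea behind heterogeneous sub-models: a client
# with capability ratio r trains a channel-sliced copy of the global weights
# (layer shapes and the slicing rule here are illustrative assumptions).
import numpy as np

def slice_weights(global_w, r):
    out_dim, in_dim = global_w.shape
    return global_w[: max(1, int(r * out_dim)), : max(1, int(r * in_dim))]

global_layer = np.random.randn(128, 256)
for r in (1.0, 0.5, 0.25):                  # strong, medium, and weak clients
    print(r, slice_weights(global_layer, r).shape)
```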
1 code implementation • 7 Feb 2020 • Enmao Diao, Jie Ding, Vahid Tarokh
In the absence of the controllers, our model reduces to non-conditional generative models.
no code implementations • 23 Oct 2019 • Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh
Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.
no code implementations • 20 Oct 2019 • Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh
Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.
1 code implementation • 21 Aug 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for learning from sequential data online in many research areas, including natural language processing and speech analysis.
1 code implementation • 23 Mar 2019 • Enmao Diao, Jie Ding, Vahid Tarokh
We propose a new architecture for distributed image compression from a group of distributed data sources.
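One possible layout, offered only as a hedged sketch (the module sizes and the use of simple convolutional blocks are assumptions): each source compresses its images with its own encoder, while a single decoder shared by all sources reconstructs from the received codes:

```python
# Illustrative layout for distributed compression: one encoder per source,
# one shared decoder (module sizes here are assumptions, not the paper's).
import torch
import torch.nn as nn

encoders = nn.ModuleList([
    nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU())
    for _ in range(3)
])
decoder = nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1)

images = [torch.randn(4, 1, 32, 32) for _ in range(3)]   # one batch per source
codes = [enc(img) for enc, img in zip(encoders, images)]  # independent encoding
recons = [decoder(c) for c in codes]                      # shared decoder
print([r.shape for r in recons])                          # each (4, 1, 32, 32)
```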