no code implementations • 15 Jul 2024 • Sean Moushegian, Suya Wu, Enmao Diao, Jie Ding, Taposh Banerjee, Vahid Tarokh
This work considers the case where the pre- and post-change score functions are known only to correspond to distributions in two disjoint sets.
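A minimal sketch of this setting, assuming the two disjoint uncertainty classes are finite families of known densities (the paper's actual construction is score-based and is not reproduced here); `robust_increment` and the Gaussian families are illustrative:

```python
import numpy as np
from scipy.stats import norm

def robust_increment(x, pre_models, post_models):
    # Worst-case log-likelihood-ratio increment: fit the sample with the most
    # favorable pre-change law and the least favorable post-change law.
    log_pre = max(m.logpdf(x) for m in pre_models)
    log_post = min(m.logpdf(x) for m in post_models)
    return log_post - log_pre

# Hypothetical disjoint families of pre- and post-change distributions.
pre = [norm(0.0, 1.0), norm(0.2, 1.0)]
post = [norm(1.0, 1.0), norm(1.5, 1.0)]
z = robust_increment(0.9, pre, post)  # feed into a CUSUM-type recursion
```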
1 code implementation • 22 Apr 2024 • Enmao Diao, Qi Le, Suya Wu, Xinran Wang, Ali Anwar, Jie Ding, Vahid Tarokh
We introduce Collaborative Adaptation (ColA) with Gradient Learning (GL), a parameter-free, model-agnostic fine-tuning approach that decouples the computation of the gradient of hidden representations and parameters.
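One hypothetical way to realize this decoupling is to compute only the gradient of the loss with respect to the hidden representation on the main model, then fit a side adapter against that gradient signal in a separate step; the module sizes, losses, and frozen base below are assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn as nn

base = nn.Linear(16, 16)        # stands in for a frozen pre-trained block
head = nn.Linear(16, 1)         # frozen for simplicity
for p in list(base.parameters()) + list(head.parameters()):
    p.requires_grad_(False)
adapter = nn.Linear(16, 16)     # trainable side module
opt = torch.optim.SGD(adapter.parameters(), lr=1e-2)

x, y = torch.randn(8, 16), torch.randn(8, 1)

# Step 1: forward with the adapter output treated as a constant, and obtain
# the gradient of the loss w.r.t. the hidden representation only.
with torch.no_grad():
    offset = adapter(x)
h = (base(x) + offset).detach().requires_grad_(True)
loss = nn.functional.mse_loss(head(h), y)
grad_h, = torch.autograd.grad(loss, h)

# Step 2: update the adapter by regressing its contribution onto the
# gradient-descended target for the hidden representation.
target = (h - 1e-1 * grad_h).detach()
opt.zero_grad()
fit = nn.functional.mse_loss(base(x) + adapter(x), target)
fit.backward()
opt.step()
```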
no code implementations • 1 Feb 2023 • Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh
This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.
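For reference, a minimal sketch of the classical CUSUM recursion that the paper modifies, instantiated for a Gaussian mean shift (all parameters illustrative):

```python
import numpy as np

def cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Classical CUSUM: accumulate log-likelihood ratios between the
    post- and pre-change models, clipped at zero; alarm at the threshold."""
    s = 0.0
    for t, xt in enumerate(x):
        llr = ((xt - mu0)**2 - (xt - mu1)**2) / (2 * sigma**2)
        s = max(s + llr, 0.0)
        if s >= threshold:
            return t          # first alarm time
    return None               # no change declared
```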
no code implementations • 7 Nov 2022 • Bowen Li, Suya Wu, Erin E. Tripp, Ali Pezeshki, Vahid Tarokh
We develop a recursive least squares (RLS) type algorithm with a minimax concave penalty (MCP) for adaptive identification of a sparse tap-weight vector that represents a communication channel.
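A sketch of the general recipe, assuming a standard RLS update followed by the firm-thresholding operator associated with the MCP; the interleaving and parameter names are assumptions rather than the paper's exact algorithm:

```python
import numpy as np

def mcp_threshold(w, lam, gamma):
    """Firm-thresholding operator of the minimax concave penalty."""
    return np.where(np.abs(w) <= gamma * lam,
                    np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
                    / (1.0 - 1.0 / gamma),
                    w)

def sparse_rls(X, d, lam_forget=0.99, lam_pen=0.05, gamma=3.0):
    """RLS updates with an MCP thresholding step to sparsify tap weights."""
    n = X.shape[1]
    w = np.zeros(n)
    P = np.eye(n) * 1e3                           # inverse correlation estimate
    for x, y in zip(X, d):
        k = P @ x / (lam_forget + x @ P @ x)      # Kalman-style gain
        w = w + k * (y - x @ w)                   # innovation update
        P = (P - np.outer(k, x @ P)) / lam_forget
        w = mcp_threshold(w, lam_pen, gamma)      # enforce sparsity
    return w
```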
no code implementations • 26 Jan 2022 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.
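As an illustration of dependency-based channel scoring, the sketch below uses HSIC as a stand-in dependence measure between a layer's feature maps and the predicted outputs (the paper defines its own model-free statistic):

```python
import numpy as np

def rbf_kernel(Z, sigma=1.0):
    sq = np.sum(Z**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2 * Z @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(K, L):
    """Biased HSIC estimate between two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def channel_scores(features, outputs):
    """Score each channel by its dependence on the output; prune the rest.
    features: (n_samples, n_channels, dim); outputs: (n_samples, out_dim)."""
    L = rbf_kernel(outputs)
    return np.array([hsic(rbf_kernel(features[:, c, :]), L)
                     for c in range(features.shape[1])])
```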
no code implementations • 22 Jan 2022 • Juncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh
In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS successfully selects the transmission channel(s) and the corresponding power allocation(s) without any prior knowledge of the sender's strategy.
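A generic sketch of a learning agent of this kind, using an Exp3-style adversarial bandit over channels; the reward model and constants are purely illustrative, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp3_channels(n_channels=4, horizon=2000, eta=0.05):
    """Exp3 over channels: importance-weighted exponential updates let the
    agent concentrate on the channel where attacks succeed most often."""
    w = np.ones(n_channels)
    for _ in range(horizon):
        p = w / w.sum()
        a = rng.choice(n_channels, p=p)
        # Hypothetical reward: jamming succeeds more often on channel 2.
        r = 1.0 if rng.random() < (0.8 if a == 2 else 0.3) else 0.0
        w[a] *= np.exp(eta * r / p[a])
        w /= w.max()          # rescale to avoid overflow (probabilities unchanged)
    return w / w.sum()
```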
1 code implementation • 1 Jan 2021 • Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
We introduce a new model-free measure of the information between the feature maps and the output of the network.
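As a toy stand-in for such a measure, a histogram estimate of mutual information between a scalar feature summary and the network output (the binning and estimator are assumptions, not the paper's statistic):

```python
import numpy as np

def binned_mi(u, v, bins=16):
    """Histogram estimate of mutual information between two scalar arrays."""
    joint, _, _ = np.histogram2d(u, v, bins=bins)
    p = joint / joint.sum()
    pu = p.sum(axis=1, keepdims=True)
    pv = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pu @ pv)[nz])))
```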
no code implementations • 23 Oct 2019 • Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh
Motivated by ever-tightening constraints on communication bandwidth and power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.
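A toy sketch of the VAB idea, assuming a straight-through Bernoulli latent code clustered by a K-component Bernoulli mixture head; all layer sizes and the estimator choice are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class VAB(nn.Module):
    """Autoencoder with a binary (Bernoulli) compressed code plus a
    Bernoulli-mixture head that yields soft cluster assignments."""
    def __init__(self, d_in=784, d_code=32, k=10):
        super().__init__()
        self.enc = nn.Linear(d_in, d_code)        # Bernoulli logits
        self.dec = nn.Linear(d_code, d_in)
        self.mix_logits = nn.Parameter(torch.zeros(k, d_code))

    def forward(self, x):
        probs = torch.sigmoid(self.enc(x))
        # Straight-through Bernoulli sample: binary code, gradient via probs.
        code = (torch.rand_like(probs) < probs).float() + probs - probs.detach()
        recon = self.dec(code)
        # Log-likelihood of the code under each Bernoulli mixture component.
        comp = torch.sigmoid(self.mix_logits)                     # (k, d_code)
        ll = code @ torch.log(comp).T + (1 - code) @ torch.log(1 - comp).T
        return recon, probs, ll.softmax(dim=-1)                   # soft clusters
```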