Search Results for author: Jie Ding

Found 57 papers, 12 papers with code

ActPerFL: Active Personalized Federated Learning

no code implementations FL4NLP (ACL) 2022 Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang

Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients’ training.

Personalized Federated Learning Uncertainty Quantification

RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees

no code implementations 23 Jan 2024 Xun Xian, Ganghua Wang, Xuan Bi, Jayanth Srinivasa, Ashish Kundu, Mingyi Hong, Jie Ding

Subsequently, we employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.

Demystifying Poisoning Backdoor Attacks from a Statistical Perspective

no code implementations 16 Oct 2023 Ganghua Wang, Xun Xian, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding

The growing dependence on machine learning in real-world applications emphasizes the importance of understanding and ensuring its safety.

Backdoor Attack

A Graph Transformer-Driven Approach for Network Robustness Learning

no code implementations 12 Jun 2023 Yu Zhang, Jia Li, Jie Ding, Xiang Li

Learning and analysis of network robustness, including controllability robustness and connectivity robustness, is critical for various networked systems against attacks.

A Framework for Incentivized Collaborative Learning

no code implementations 26 May 2023 Xinran Wang, Qi Le, Ahmad Faraz Khan, Jie Ding, Ali Anwar

Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone.

Federated Learning

Semi-Supervised Federated Learning for Keyword Spotting

no code implementations 9 May 2023 Enmao Diao, Eric W. Tramel, Jie Ding, Tao Zhang

Keyword Spotting (KWS) is a critical aspect of audio-based applications on mobile devices and virtual assistants.

Federated Learning Keyword Spotting

Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO Regularization

no code implementations 7 May 2023 Gen Li, Ganghua Wang, Jie Ding

In this paper, the territory of LASSO is extended to two-layer ReLU neural networks, a fashionable and powerful class of nonlinear regression models.

regression Variable Selection +1
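For context, the standard LASSO objective that the paper extends to two-layer ReLU networks is the $\ell_1$-penalized least-squares problem (textbook form; the paper's actual objective penalizes the network's weight parameters):

```latex
\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \big( y_i - f_{\theta}(x_i) \big)^2 + \lambda \, \| \theta \|_1
```

Identifiability here means that, under suitable conditions, the penalized solution recovers the true underlying network parameters rather than an arbitrary equivalent parameterization.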

Pruning Deep Neural Networks from a Sparsity Perspective

2 code implementations ICLR 2023 Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh

Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.

Network Pruning
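The PQ Index (PQI) tracked in this paper is, as we read the abstract, a norm-ratio sparsity measure. A minimal sketch for the case $p=1$, $q=2$ follows; the functional form `1 - d^(1/q - 1/p) * ||w||_p / ||w||_q` is our reading of the paper's definition, not the authors' reference code:

```python
def pq_index(w, p=1.0, q=2.0):
    """Norm-ratio sparsity measure: 1 - d^(1/q - 1/p) * ||w||_p / ||w||_q.
    Equals 0 for a vector whose entries all have equal magnitude (fully dense)
    and approaches 1 as the mass concentrates on a few coordinates (sparse)."""
    d = len(w)
    norm_p = sum(abs(x) ** p for x in w) ** (1.0 / p)
    norm_q = sum(abs(x) ** q for x in w) ** (1.0 / q)
    return 1.0 - d ** (1.0 / q - 1.0 / p) * norm_p / norm_q

print(pq_index([1.0, 1.0, 1.0, 1.0]))  # dense vector: 0.0
print(pq_index([1.0, 0.0, 0.0, 0.0]))  # one-hot vector: 0.5
```

Monitoring such an index during pruning is what lets the paper relate a model's compressibility to the onset of underfitting.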

Quickest Change Detection for Unnormalized Statistical Models

no code implementations 1 Feb 2023 Suya Wu, Enmao Diao, Taposh Banerjee, Jie Ding, Vahid Tarokh

This paper develops a new variant of the classical Cumulative Sum (CUSUM) algorithm for quickest change detection.

Change Detection
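The classical CUSUM recursion the paper builds on can be sketched in a few lines. The version below assumes known pre- and post-change densities via a log-likelihood ratio; the paper's contribution is precisely to replace this likelihood-based statistic with one suited to unnormalized models:

```python
def cusum_detect(samples, loglr, threshold):
    """Classical CUSUM: W_t = max(0, W_{t-1} + loglr(x_t)); declare a change
    at the first t where W_t crosses `threshold`. `loglr(x)` is the
    log-likelihood ratio log(p_post(x) / p_pre(x)) for one observation.
    Returns the detection index, or None if no change is declared."""
    w = 0.0
    for t, x in enumerate(samples):
        w = max(0.0, w + loglr(x))
        if w >= threshold:
            return t
    return None

# Toy example: N(0,1) pre-change vs N(1,1) post-change; for unit-variance
# Gaussians with means 0 and 1 the log-likelihood ratio is x - 0.5.
data = [0.1, -0.2, 0.0, 1.2, 0.9, 1.1, 1.3, 0.8]
print(cusum_detect(data, lambda x: x - 0.5, threshold=2.0))  # 6
```

The threshold trades off detection delay against the false-alarm rate; for unnormalized models the log-likelihood ratio itself is intractable, which is the gap the paper addresses.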

A Framework for Understanding Model Extraction Attack and Defense

no code implementations 23 Jun 2022 Xun Xian, Mingyi Hong, Jie Ding

The privacy of machine learning models has become a significant concern in many emerging Machine-Learning-as-a-Service applications, where prediction services based on well-trained models are offered to users via pay-per-query.

Adversarial Attack BIG-bench Machine Learning +1

Self-Aware Personalized Federated Learning

no code implementations 17 Apr 2022 Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang

In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned.

Personalized Federated Learning Uncertainty Quantification

Federated Learning Challenges and Opportunities: An Outlook

no code implementations 1 Feb 2022 Jie Ding, Eric Tramel, Anit Kumar Sahu, Shuang Wu, Salman Avestimehr, Tao Zhang

Federated learning (FL) has been developed as a promising framework to leverage the resources of edge devices, enhance customers' privacy, comply with regulations, and reduce development costs.

Federated Learning

On The Energy Statistics of Feature Maps in Pruning of Neural Networks with Skip-Connections

no code implementations 26 Jan 2022 Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh

We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs.

SPIDER: Searching Personalized Neural Architecture for Federated Learning

no code implementations 27 Dec 2021 Erum Mushtaq, Chaoyang He, Jie Ding, Salman Avestimehr

However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL.

Federated Learning Neural Architecture Search

Characteristic Neural Ordinary Differential Equations

no code implementations 25 Nov 2021 Xingzi Xu, Ali Hasan, Khalil Elkhalil, Jie Ding, Vahid Tarokh

While NODEs model the evolution of latent variables as the solution to an ODE, C-NODE models the evolution of the latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves.

Computational Efficiency Density Estimation
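The reduction described in the abstract is the classical method of characteristics; in textbook form (our notation, not necessarily the paper's), for a scalar quasi-linear PDE:

```latex
u_t + a(u, x, t)\, u_x = b(u, x, t)
\qquad \Longrightarrow \qquad
\frac{dx}{ds} = a(u, x, s), \quad \frac{du}{ds} = b(u, x, s)
```

Along each characteristic curve $x(s)$ the PDE collapses to an ODE for $u$, which is why the latent dynamics can still be integrated with a standard ODE solver.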

FedNAS: Federated Deep Learning via Neural Architecture Search

no code implementations 29 Sep 2021 Chaoyang He, Erum Mushtaq, Jie Ding, Salman Avestimehr

Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment.

Federated Learning Meta-Learning +1

Provable Identifiability of ReLU Neural Networks via Lasso Regularization

no code implementations 29 Sep 2021 Gen Li, Ganghua Wang, Yuantao Gu, Jie Ding

In this paper, the territory of LASSO is extended to the neural network model, a fashionable and powerful nonlinear regression model.

regression Variable Selection

Assisted Learning for Organizations with Limited Imbalanced Data

no code implementations 20 Sep 2021 Cheng Chen, Jiaying Zhou, Jie Ding, Yi Zhou

In this work, we develop an assisted learning framework for assisting organizations to improve their learning performance.

Decision Making

Targeted Cross-Validation

no code implementations 14 Sep 2021 Jiawei Zhang, Jie Ding, Yuhong Yang

A standard approach is to find the globally best modeling method from a set of candidate methods.

The Rate of Convergence of Variation-Constrained Deep Neural Networks

no code implementations 22 Jun 2021 Gen Li, Jie Ding

To the best of our knowledge, the rate of convergence of neural networks shown by existing works is bounded by at most the order of $n^{-1/4}$ for a sample size of $n$.

GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations

1 code implementation 2 Jun 2021 Enmao Diao, Jie Ding, Vahid Tarokh

However, the underlying organizations may have little interest in sharing their local data, models, and objective functions.

Learning Time Series from Scale Information

no code implementations 18 Mar 2021 Yuan Yang, Jie Ding

Based on that, we focus on a specific but important type of scale information, the resolution/sampling rate of time series data.

Time Series Time Series Analysis

Model-Free Energy Distance for Pruning DNNs

1 code implementation 1 Jan 2021 Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh

We measure a new model-free information between the feature maps and the output of the network.

The Efficacy of $L_1$ Regularization in Neural Networks

no code implementations 1 Jan 2021 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.

On Statistical Efficiency in Learning

1 code implementation 24 Dec 2020 Jie Ding, Enmao Diao, Jiawei Zhou, Vahid Tarokh

We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-sample prediction loss under reasonable assumptions.

Model Selection

ASCII: ASsisted Classification with Ignorance Interchange

no code implementations 21 Oct 2020 Jiaying Zhou, Xun Xian, Na Li, Jie Ding

In this paper, we propose a method named ASCII for an agent to improve its classification performance through assistance from other agents.

Classification General Classification

HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients

3 code implementations ICLR 2021 Enmao Diao, Jie Ding, Vahid Tarokh

In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities.

Federated Learning
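HeteroFL's key idea is that weaker clients train a width-reduced slice of the global model. A rough sketch of that slicing for a single dense layer follows; this is our simplification for illustration (the released code handles convolutions, normalization, and server-side aggregation of the heterogeneous updates):

```python
def slice_layer(weight, bias, rate):
    """Take the leading channels of a dense layer so a weak client trains
    only a `rate`-fraction-width subnetwork of the global model.
    `weight` is an out_dim x in_dim list of lists; `bias` has out_dim entries."""
    out_k = max(1, int(len(weight) * rate))
    in_k = max(1, int(len(weight[0]) * rate))
    sub_w = [row[:in_k] for row in weight[:out_k]]
    sub_b = bias[:out_k]
    return sub_w, sub_b

# A 4x4 global layer sliced at rate 0.5 yields a 2x2 local layer.
W = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
b = [0.0, 1.0, 2.0, 3.0]
sw, sb = slice_layer(W, b, 0.5)
print(len(sw), len(sw[0]), sb)  # 2 2 [0.0, 1.0]
```

Because every client's subnetwork shares the same leading parameters, the server can average overlapping entries to update the single global model.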

The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks

no code implementations 2 Oct 2020 Gen Li, Yuantao Gu, Jie Ding

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds.

Vocal Bursts Valence Prediction

Information Laundering for Model Privacy

no code implementations ICLR 2021 Xinran Wang, Yu Xiang, Jun Gao, Jie Ding

In this work, we propose information laundering, a novel framework for enhancing model privacy.

Forecasting with Multiple Seasonality

2 code implementations 27 Aug 2020 Tianyang Xie, Jie Ding

An emerging number of modern applications involve forecasting time series data that exhibit both short-time dynamics and long-time seasonality.

Time Series Time Series Analysis

Subcarrier-wise Backscatter Communications over Ambient OFDM for Low Power IoT

no code implementations 16 Jul 2020 Mahyar Nemati, Morteza Soltani, Jie Ding, Jinho Choi

Analytical and numerical evaluations demonstrate the performance of the proposed method in terms of BER, data rate, and interference.

Fisher Auto-Encoders

no code implementations 12 Jul 2020 Khalil Elkhalil, Ali Hasan, Jie Ding, Sina Farsiu, Vahid Tarokh

It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence.

Short-Range Ambient Backscatter Communication Using Reconfigurable Intelligent Surfaces

no code implementations 14 Jun 2020 Mahyar Nemati, Jie Ding, Jinho Choi

In this paper, we propose a new AmBC model over ambient orthogonal-frequency-division-multiplexing (OFDM) subcarriers in the frequency domain in conjunction with RIS for short-range communication scenarios.

TAG

Machine Learning Enabled Preamble Collision Resolution in Distributed Massive MIMO

no code implementations 8 Jun 2020 Jie Ding, Daiming Qu, Pei Liu, Jinho Choi

Preamble collision is a bottleneck that impairs the performance of random access (RA) user equipment (UE) in grant-free RA (GFRA).

BIG-bench Machine Learning Clustering

Meta Clustering for Collaborative Learning

no code implementations 29 May 2020 Chenglong Ye, Reza Ghanadan, Jie Ding

We propose a framework named meta clustering to address the challenge.

Clustering Fairness

Model Linkage Selection for Cooperative Learning

no code implementations 15 May 2020 Jiaying Zhou, Jie Ding, Kean Ming Tan, Vahid Tarokh

The main crux is to sequentially incorporate additional learners that can enhance the prediction accuracy of an existing joint model based on user-specified parameter sharing patterns across a set of learners.

Assisted Learning: A Framework for Multi-Organization Learning

no code implementations NeurIPS 2020 Xun Xian, Xinran Wang, Jie Ding, Reza Ghanadan

In an increasing number of AI scenarios, collaborations among different organizations or agents (e.g., humans and robots, mobile units) are often essential to accomplish an organization-specific mission.

Multimodal Controller for Generative Models

1 code implementation 7 Feb 2020 Enmao Diao, Jie Ding, Vahid Tarokh

In the absence of the controllers, our model reduces to non-conditional generative models.

Gradient Information for Representation and Modeling

no code implementations NeurIPS 2019 Jie Ding, Robert Calderbank, Vahid Tarokh

Motivated by Fisher divergence, in this paper we present a new set of information quantities which we refer to as gradient information.

Is a Classification Procedure Good Enough? A Goodness-of-Fit Assessment Tool for Classification Learning

1 code implementation 8 Nov 2019 Jiawei Zhang, Jie Ding, Yuhong Yang

For testing parametric classification models, the BAGofT has a broader scope than the existing methods since it is not restricted to specific parametric models (e.g., logistic regression).

Classification General Classification

Variable Grouping Based Bayesian Additive Regression Tree

no code implementations 3 Nov 2019 Yuhao Su, Jie Ding

We propose a two-stage method named variable grouping based Bayesian additive regression tree (GBART) with a well-developed python package gbart available.

regression

Deep Clustering of Compressed Variational Embeddings

no code implementations 23 Oct 2019 Suya Wu, Enmao Diao, Jie Ding, Vahid Tarokh

Motivated by the ever-increasing demands for limited communication bandwidth and low-power consumption, we propose a new methodology, named joint Variational Autoencoders with Bernoulli mixture models (VAB), for performing clustering in the compressed data domain.

Clustering Deep Clustering

Perception-Distortion Trade-off with Restricted Boltzmann Machines

no code implementations 21 Oct 2019 Chris Cannella, Jie Ding, Mohammadreza Soltani, Vahid Tarokh

In this work, we introduce a new procedure for applying Restricted Boltzmann Machines (RBMs) to missing data inference tasks, based on linearization of the effective energy function governing the distribution of observations.

Speech Emotion Recognition with Dual-Sequence LSTM Architecture

no code implementations 20 Oct 2019 Jianyou Wang, Michael Xue, Ryan Culhane, Enmao Diao, Jie Ding, Vahid Tarokh

Speech Emotion Recognition (SER) has emerged as a critical component of the next generation human-machine interfacing technologies.

Speech Emotion Recognition

Supervised Encoding for Discrete Representation Learning

1 code implementation 15 Oct 2019 Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh

Classical supervised classification tasks search for a nonlinear mapping that maps each encoded feature directly to a probability mass over the labels.

Representation Learning Style Transfer

Restricted Recurrent Neural Networks

1 code implementation 21 Aug 2019 Enmao Diao, Jie Ding, Vahid Tarokh

Recurrent Neural Network (RNN) and its variations such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have become standard building blocks for learning online data of sequential nature in many research areas, including natural language processing and speech data analysis.

Language Modelling

DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression

1 code implementation 23 Mar 2019 Enmao Diao, Jie Ding, Vahid Tarokh

We propose a new architecture for distributed image compression from a group of distributed data sources.

Image Compression

Model Selection Techniques -- An Overview

no code implementations 22 Oct 2018 Jie Ding, Vahid Tarokh, Yuhong Yang

In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.

Epidemiology Model Selection

Learning the Number of Autoregressive Mixtures in Time Series Using the Gap Statistics

no code implementations 11 Sep 2015 Jie Ding, Mohammad Noshad, Vahid Tarokh

We define a new distance measure between stable AR filters and draw a reference curve that is used to measure how much adding a new AR filter improves the performance of the model, and then choose the number of AR filters that has the maximum gap with the reference curve.

Model Selection Time Series +1

Bridging AIC and BIC: a new criterion for autoregression

no code implementations 11 Aug 2015 Jie Ding, Vahid Tarokh, Yuhong Yang

When the data is generated from a finite order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.

Model Selection Time Series +1

Data-Driven Learning of the Number of States in Multi-State Autoregressive Models

no code implementations 6 Jun 2015 Jie Ding, Mohammad Noshad, Vahid Tarokh

In this work, we consider the class of multi-state autoregressive processes that can be used to model non-stationary time-series of interest.

Model Selection Time Series +1
