Search Results for author: Tao Lin

Found 38 papers, 12 papers with code

The Sample Complexity of Forecast Aggregation

no code implementations26 Jul 2022 YiLing Chen, Tao Lin

"samples" from the distribution, where each sample is a tuple of experts' reports (not signals) and the realization of the event.

Client Selection in Nonconvex Federated Learning: Improved Convergence Analysis for Optimal Unbiased Sampling Strategy

no code implementations27 May 2022 Lin Wang, Yongxin Guo, Tao Lin, Xiaoying Tang

Federated learning (FL) is a distributed machine learning paradigm that selects a subset of clients to participate in training to reduce communication burdens.

Federated Learning
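The entry above studies unbiased client sampling in federated learning. As an illustrative sketch (not the paper's algorithm), an importance-weighted sampler keeps the aggregated update unbiased: each sampled client's update is reweighted by the inverse of its sampling probability, so the expectation equals the full sum over clients.

```python
import random

def sample_and_aggregate(updates, probs, k, rng):
    """Sample k client indices with replacement according to probs and
    combine their updates, each reweighted by 1/(k * p_i), so the
    estimator is unbiased for the full sum of client updates."""
    n = len(updates)
    idx = rng.choices(range(n), weights=probs, k=k)
    return sum(updates[i] / (k * probs[i]) for i in idx)

rng = random.Random(0)
updates = [1.0, 2.0, 3.0, 4.0]   # toy scalar "model updates"
probs = [0.4, 0.3, 0.2, 0.1]     # non-uniform sampling probabilities
# With many samples the estimate approaches sum(updates) = 10.
est = sample_and_aggregate(updates, probs, 100000, rng)
```

Any sampling distribution with nonzero probabilities works; the reweighting is what preserves unbiasedness, while the choice of probabilities controls the variance.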

FedAug: Reducing the Local Learning Bias Improves Federated Learning on Heterogeneous Data

no code implementations26 May 2022 Yongxin Guo, Tao Lin, Xiaoying Tang

Federated Learning (FL) is a machine learning paradigm that learns from data kept locally to safeguard the privacy of clients, whereas local SGD is typically employed on the clients' devices to improve communication efficiency.

Domain Generalization Federated Learning

Test-Time Robust Personalization for Federated Learning

no code implementations22 May 2022 Liangze Jiang, Tao Lin

Personalizing the FL model additionally adapts the global model to individual clients, achieving promising results when local training and test distributions are consistent.

Federated Learning

Adversarial Training for High-Stakes Reliability

no code implementations3 May 2022 Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas

We created a series of adversarial training techniques -- including a tool that assists human adversaries -- to find and eliminate failures in a classifier that filters text completions suggested by a generator.

Text Generation

Learning Disentangled Behaviour Patterns for Wearable-based Human Activity Recognition

1 code implementation15 Feb 2022 Jie Su, Zhenyu Wen, Tao Lin, Yu Guan

To address this issue, in this work we propose a Behaviour Pattern Disentanglement (BPD) framework that disentangles behaviour patterns from irrelevant noise such as personal styles or environmental noise.

Disentanglement Human Activity Recognition

An Improved Analysis of Gradient Tracking for Decentralized Machine Learning

no code implementations NeurIPS 2021 Anastasia Koloskova, Tao Lin, Sebastian U. Stich

We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on their local data.

BIG-bench Machine Learning
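For intuition on the gradient tracking scheme analyzed above, here is a minimal sketch (assuming scalar quadratics and a ring topology, not the paper's setting): each agent mixes its iterate with neighbours and steps along a tracker `y` whose network average stays equal to the average local gradient.

```python
import numpy as np

def gradient_tracking(a, W, lr=0.1, steps=500):
    """Decentralized gradient tracking on f_i(x) = 0.5 * (x - a_i)^2.
    Agent i keeps an iterate x[i] and a tracker y[i]; the mean of y
    always equals the mean of the current local gradients."""
    x = np.zeros(len(a))
    y = x - a                       # trackers start at the local gradients
    g_old = x - a
    for _ in range(steps):
        x = W @ x - lr * y          # mix with neighbours, step along tracker
        g_new = x - a               # fresh local gradients
        y = W @ y + g_new - g_old   # update tracker with gradient difference
        g_old = g_new
    return x

a = np.array([1.0, 2.0, 3.0, 4.0])
# Doubly stochastic mixing matrix for a ring of 4 agents.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
x = gradient_tracking(a, W)
# All agents agree on the global minimizer mean(a) = 2.5.
```

Because `W` is doubly stochastic, the average iterate performs exact gradient descent on the global objective, which is the invariant the improved analysis exploits.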

Towards Federated Learning on Time-Evolving Heterogeneous Data

no code implementations25 Dec 2021 Yongxin Guo, Tao Lin, Xiaoying Tang

Federated Learning (FL) is an emerging learning paradigm that preserves privacy by ensuring client data locality on edge devices.

Federated Learning

Learning by Active Forgetting for Neural Networks

no code implementations21 Nov 2021 Jian Peng, Xian Sun, Min Deng, Chao Tao, Bo Tang, Wenbo Li, Guohua Wu, Qing Zhu, Yu Liu, Tao Lin, Haifeng Li

This paper presents a learning model based on an active forgetting mechanism for artificial neural networks.

RelaySum for Decentralized Deep Learning on Heterogeneous Data

1 code implementation NeurIPS 2021 Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

A key challenge, particularly in decentralized deep learning, remains handling the differences between the workers' local data distributions.

Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions

1 code implementation8 Oct 2021 Xiaotie Deng, Xinyan Hu, Tao Lin, Weiqiang Zheng

Specifically, the results depend on the number of bidders with the highest value: if that number is at least three, the bidding dynamics almost surely converge to a Nash equilibrium of the auction, both in time-average and in last-iterate.

online learning
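A mean-based learning algorithm, as studied above, mostly plays actions with (near-)highest empirical mean reward. The following sketch is an assumption-laden illustration, not the paper's analysis: an epsilon-greedy mean-based bidder in a repeated first-price auction against a fixed opponent bid distribution, with a hypothetical bid grid.

```python
import random

def mean_based_bidder(value, bid_grid, opp_bids, rounds, eps, rng):
    """Epsilon-greedy mean-based learner in a repeated first-price
    auction: with probability eps explore a random bid, otherwise play
    the bid with the highest empirical mean utility so far."""
    total = [0.0] * len(bid_grid)
    count = [0] * len(bid_grid)
    for _ in range(rounds):
        if rng.random() < eps or 0 in count:
            i = rng.randrange(len(bid_grid))     # explore
        else:                                    # exploit the best mean
            i = max(range(len(bid_grid)), key=lambda j: total[j] / count[j])
        opp = rng.choice(opp_bids)
        utility = (value - bid_grid[i]) if bid_grid[i] > opp else 0.0
        total[i] += utility
        count[i] += 1
    return max(range(len(bid_grid)), key=lambda j: total[j] / max(count[j], 1))

rng = random.Random(1)
grid = [0.1, 0.3, 0.6, 0.9]                      # hypothetical bid grid
best = grid[mean_based_bidder(1.0, grid, [0.0, 0.25, 0.5], 20000, 0.1, rng)]
# Against this opponent, bidding 0.3 maximizes expected utility.
```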

Representation Memorization for Fast Learning New Knowledge without Forgetting

no code implementations28 Aug 2021 Fei Mi, Tao Lin, Boi Faltings

In this paper, we consider scenarios that require learning new classes or data distributions quickly and incrementally over time, as it often occurs in real-world dynamic environments.

Image Classification Language Modelling

The Optimal Size of an Epistemic Congress

no code implementations2 Jul 2021 Manon Revel, Tao Lin, Daniel Halpern

We analyze the optimal size of a congress in a representative democracy.

Deep Learning for IoT

no code implementations12 Apr 2021 Tao Lin

In addition, this paper presents research on a data retrieval solution that prevents hacking by adversaries, in the field of adversarial machine learning.

BIG-bench Machine Learning

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data

1 code implementation9 Feb 2021 Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

In this paper, we investigate and identify the limitation of several decentralized optimization algorithms for different degrees of data heterogeneity.

Consensus Control for Decentralized Deep Learning

no code implementations9 Feb 2021 Lingjing Kong, Tao Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.

On the Effect of Consensus in Decentralized Deep Learning

no code implementations1 Jan 2021 Tao Lin, Lingjing Kong, Anastasia Koloskova, Martin Jaggi, Sebastian U Stich

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.

A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling

no code implementations NeurIPS 2020 Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei Wang, Xiang Yan

Empirical Revenue Maximization (ERM) is one of the most important price learning algorithms in auction design: as the literature shows, it can learn approximately optimal reserve prices for revenue-maximizing auctioneers in both repeated auctions and uniform-price auctions.

XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification

2 code implementations10 Sep 2020 Kevin Fauvel, Tao Lin, Véronique Masson, Élisa Fromont, Alexandre Termier

Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset, and show that XCM identifies the regions of the input data that are important for predictions more precisely than the current deep learning MTS classifier that also provides faithful explainability.

General Classification Time Series Classification

Learning Utilities and Equilibria in Non-Truthful Auctions

no code implementations NeurIPS 2020 Hu Fu, Tao Lin

In non-truthful auctions, agents' utility for a strategy depends on the strategies of the opponents and also the prior distribution over their private types; the set of Bayes Nash equilibria generally has an intricate dependence on the prior.

Ensemble Distillation for Robust Model Fusion in Federated Learning

1 code implementation NeurIPS 2020 Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi

In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the client side.

BIG-bench Machine Learning Federated Learning +1
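The parameter-averaging step that ensemble distillation improves upon can be sketched as a weighted average of client parameter vectors (a FedAvg-style baseline, not the paper's distillation method; weights are typically proportional to local dataset sizes):

```python
import numpy as np

def average_parameters(client_params, weights):
    """Fuse client models by a weighted average of their parameter
    vectors; this is the plain averaging step that model fusion via
    distillation is designed to replace under heterogeneity."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize client weights
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return (w[:, None] * stacked).sum(axis=0)

server = average_parameters([[1.0, 2.0], [3.0, 6.0]], weights=[1, 3])
# → weighted average [2.5, 5.0]
```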

Extrapolation for Large-batch Training in Deep Learning

no code implementations ICML 2020 Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi

Deep learning networks are typically trained by Stochastic Gradient Descent (SGD) methods that iteratively improve the model parameters by estimating a gradient on a very small fraction of the training data.

Masking as an Efficient Alternative to Finetuning for Pretrained Language Models

no code implementations EMNLP 2020 Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, Hinrich Schütze

We present an efficient method of utilizing pretrained language models, where we learn selective binary masks for pretrained weights in lieu of modifying them through finetuning.

Pretrained Language Models
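The masking idea above can be sketched as follows (a simplified illustration under assumed shapes, not the paper's training procedure): the pretrained weights stay frozen, and a learned real-valued score per weight is thresholded into a binary mask applied in the forward pass.

```python
import numpy as np

def masked_forward(x, W, scores, threshold=0.0):
    """Apply a binary mask to frozen pretrained weights: entries whose
    score exceeds the threshold are kept, the rest are zeroed. Only the
    scores would be trained; W itself is never modified."""
    mask = (scores > threshold).astype(W.dtype)
    return x @ (W * mask)

W = np.array([[1.0, -2.0], [3.0, 4.0]])        # frozen pretrained weights
scores = np.array([[0.5, -0.1], [-0.3, 0.8]])  # learnable mask scores
y = masked_forward(np.array([1.0, 1.0]), W, scores)
# mask = [[1,0],[0,1]] → effective weights [[1,0],[0,4]] → y = [1, 4]
```

In practice the thresholding is non-differentiable, so training the scores requires a surrogate gradient (e.g. a straight-through estimator), which this sketch omits.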

Deep Collaborative Embedding for information cascade prediction

no code implementations18 Jan 2020 Yuhui Zhao, Ning Yang, Tao Lin, Philip S. Yu

First, existing works often assume an underlying information diffusion model, which is impractical in the real world due to the complexity of information diffusion.

Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation

1 code implementation19 Dec 2019 Jian Peng, Bo Tang, Hao Jiang, Zhuo Li, Yinjie Lei, Tao Lin, Haifeng Li

This is due to two facts: first, as the model learns more tasks, the intersection of the low-error parameter subspaces for these tasks becomes smaller or may not even exist; second, when the model learns a new task, the cumulative error keeps increasing as the model tries to protect the parameter configurations of previous tasks from interference.

Image Classification

Decentralized Deep Learning with Arbitrary Communication Compression

1 code implementation ICLR 2020 Anastasia Koloskova, Tao Lin, Sebastian U. Stich, Martin Jaggi

Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters.
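One common example of the communication compression referenced above is top-k sparsification: only the k largest-magnitude entries of a message are transmitted. A minimal sketch of such a compressor (illustrative; the paper supports arbitrary contractive compressors):

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsification: keep the k largest-magnitude entries and
    zero the rest, a standard contractive compressor for communication-
    efficient decentralized training."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

g = np.array([0.1, -3.0, 0.5, 2.0])
compressed = top_k(g, 2)
# → [0., -3., 0., 2.]: only the two largest-magnitude entries survive
```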

Exploring Interpretable LSTM Neural Networks over Multi-Variable Data

3 code implementations28 May 2019 Tian Guo, Tao Lin, Nino Antulov-Fantulin

In this paper, we explore the structure of LSTM recurrent neural networks to learn variable-wise hidden states, with the aim to capture different dynamics in multi-variable time series and distinguish the contribution of variables to the prediction.

Time Series

T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction

8 code implementations12 Nov 2018 Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, Haifeng Li

However, traffic forecasting has always been considered an open scientific issue, owing to the constraints of urban road network topology and its dynamic change over time, namely spatial dependence and temporal dependence.

Management Traffic Prediction

Exploring the interpretability of LSTM neural networks over multi-variable data

no code implementations27 Sep 2018 Tian Guo, Tao Lin

In learning a predictive model over multivariate time series consisting of target and exogenous variables, the forecasting performance and interpretability of the model are both essential for deployment and uncovering knowledge behind the data.

Time Series

Don't Use Large Mini-Batches, Use Local SGD

2 code implementations ICLR 2020 Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi

Mini-batch stochastic gradient methods (SGD) are state of the art for distributed training of deep neural networks.
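Local SGD, the alternative advocated above, lets each worker take several local gradient steps before models are averaged, instead of synchronizing after every large mini-batch. A toy sketch on per-worker quadratics (illustrative, not the paper's experimental setup):

```python
import numpy as np

def local_sgd(targets, lr=0.1, local_steps=10, rounds=20):
    """Local SGD on f_i(x) = 0.5 * (x - a_i)^2 per worker: run several
    local gradient steps, then average all worker models periodically."""
    a = np.asarray(targets, dtype=float)
    x = np.zeros_like(a)             # one model copy per worker
    for _ in range(rounds):
        for _ in range(local_steps):
            x = x - lr * (x - a)     # independent local gradient steps
        x[:] = x.mean()              # periodic model averaging
    return x[0]

x_star = local_sgd([1.0, 2.0, 3.0])
# converges to the global minimizer mean(a) = 2.0
```

The communication saving comes from averaging once every `local_steps` updates rather than after every step.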

Multi-variable LSTM neural network for autoregressive exogenous model

no code implementations17 Jun 2018 Tian Guo, Tao Lin

In this paper, we propose multi-variable LSTM capable of accurate forecasting and variable importance interpretation for time series with exogenous variables.

Time Series

An interpretable LSTM neural network for autoregressive exogenous model

no code implementations14 Apr 2018 Tian Guo, Tao Lin, Yao Lu

In this paper, we propose an interpretable LSTM recurrent neural network, i.e., a multi-variable LSTM for time series with exogenous variables.

Time Series

Training DNNs with Hybrid Block Floating Point

no code implementations NeurIPS 2018 Mario Drumond, Tao Lin, Martin Jaggi, Babak Falsafi

We identify block floating point (BFP) as a promising alternative representation since it exhibits wide dynamic range and enables the majority of DNN operations to be performed with fixed-point logic.
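The block floating point idea above can be sketched numerically (a simplified illustration, not the paper's hardware format): every value in a block shares one exponent, taken from the block's largest magnitude, and keeps only a fixed-point mantissa.

```python
import math

def bfp_quantize(block, mantissa_bits=8):
    """Block floating point: all values in a block share one exponent
    (from the largest magnitude); each value keeps a fixed-point
    mantissa, so multiplies reduce to integer operations."""
    max_abs = max(abs(v) for v in block)
    if max_abs == 0.0:
        return list(block)
    shared_exp = math.frexp(max_abs)[1]           # exponent of the largest value
    scale = 2.0 ** (shared_exp - mantissa_bits)   # quantization step for the block
    return [round(v / scale) * scale for v in block]

q = bfp_quantize([1.0, 0.333, -0.0051])
```

Values close to the block maximum are represented accurately, while values much smaller than the shared exponent lose relative precision; this is the trade-off that makes BFP cheap but sensitive to block granularity.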

RubyStar: A Non-Task-Oriented Mixture Model Dialog System

no code implementations8 Nov 2017 Huiting Liu, Tao Lin, Hanfei Sun, Weijian Lin, Chih-Wei Chang, Teng Zhong, Alexander Rudnicky

RubyStar is a dialog system designed to create "human-like" conversation by combining different response generation strategies.

Question Answering Response Generation
