Search Results for author: Ji Liu

Found 124 papers, 31 papers with code

Hand-Transformer: Non-Autoregressive Structured Modeling for 3D Hand Pose Estimation

no code implementations ECCV 2020 Lin Huang, Jianchao Tan, Ji Liu, Junsong Yuan

To address this issue, we connect this structured output learning problem with the structured modeling framework in the sequence transduction field.

3D Hand Pose Estimation

Multi-Layer SIS Model with an Infrastructure Network

no code implementations 20 Sep 2021 Philip E. Paré, Axel Janson, Sebin Gracy, Ji Liu, Henrik Sandberg, Karl H. Johansson

We develop a layered networked spread model for a susceptible-infected-susceptible (SIS) pathogen-borne disease spreading over a human contact network and an infrastructure network, and refer to it as a layered networked susceptible-infected-water-susceptible (SIWS) model.
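
The layered SIWS model described above builds on standard networked SIS dynamics, in which each node's infection level decays at a recovery rate and grows through contact with infected neighbors. As a rough illustration only (the classical single-layer SIS baseline, not the paper's two-layer model), a minimal Euler-discretized simulation might look like this; the rates `beta`, `delta` and the toy 3-node network are arbitrary assumptions:

```python
import numpy as np

def sis_step(x, A, beta, delta, dt=0.01):
    """One Euler step of networked SIS dynamics:
    dx_i/dt = -delta * x_i + beta * (1 - x_i) * sum_j A_ij * x_j,
    where x_i is node i's infection probability."""
    return x + dt * (-delta * x + beta * (1 - x) * (A @ x))

# Toy fully connected 3-node contact network (illustrative values).
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
x = np.array([0.5, 0.0, 0.0])  # only node 0 starts infected
for _ in range(5000):          # simulate up to t = 50
    x = sis_step(x, A, beta=0.2, delta=0.1)
# Above the epidemic threshold the states settle at an endemic
# equilibrium; here every node approaches 1 - delta/(2*beta) = 0.75.
```

Because the infection rate times the network degree exceeds the recovery rate here, the disease persists rather than dying out, which is the regime such models are typically analyzed in.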

SpeechNAS: Towards Better Trade-off between Latency and Accuracy for Large-Scale Speaker Verification

1 code implementation 18 Sep 2021 Wentao Zhu, Tianlong Kong, Shun Lu, Jixiang Li, Dawei Zhang, Feng Deng, Xiaorui Wang, Sen Yang, Ji Liu

Recently, x-vector has been a successful and popular approach for speaker verification, which employs a time delay neural network (TDNN) and statistics pooling to extract speaker characterizing embedding from variable-length utterances.

Neural Architecture Search Speaker Recognition +2
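
Statistics pooling, mentioned in the abstract above, is the step that turns variable-length frame-level TDNN outputs into a fixed-size utterance embedding by concatenating the per-dimension mean and standard deviation over time. A minimal NumPy sketch of that pooling step (the 512-dimensional feature size is an illustrative assumption, not taken from SpeechNAS):

```python
import numpy as np

def statistics_pooling(frames, eps=1e-12):
    """Collapse frame-level features of shape (T, D) into a fixed
    utterance-level vector of shape (2*D,): per-dimension mean
    concatenated with per-dimension standard deviation."""
    mean = frames.mean(axis=0)
    std = np.sqrt(frames.var(axis=0) + eps)  # eps guards zero variance
    return np.concatenate([mean, std])

# Utterances of different lengths map to the same embedding size.
short = np.random.randn(50, 512)   # 50 frames
long_ = np.random.randn(300, 512)  # 300 frames
assert statistics_pooling(short).shape == (1024,)
assert statistics_pooling(long_).shape == (1024,)
```

This length-invariance is what lets a fixed classifier or scoring backend consume embeddings from utterances of any duration.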

GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization

no code implementations 6 Sep 2021 Yi Guo, Huan Yuan, Jianchao Tan, Zhangyang Wang, Sen Yang, Ji Liu

During the training process, the polarization effect will drive a subset of gates to smoothly decrease to exact zero, while other gates gradually stay away from zero by a large margin.

Model Compression Network Pruning

Shifted Chunk Transformer for Spatio-Temporal Representational Learning

no code implementations 26 Aug 2021 Xuefan Zha, Wentao Zhu, Tingxun Lv, Sen Yang, Ji Liu

Leveraging the recent efficient Transformer design in NLP, this shifted chunk Transformer can learn hierarchical spatio-temporal features from a local tiny patch to a global video clip.

Action Anticipation Action Recognition +4

PASTO: Strategic Parameter Optimization in Recommendation Systems -- Probabilistic is Better than Deterministic

no code implementations 20 Aug 2021 Weicong Ding, Hanlin Tang, Jingshuo Feng, Lei Yuan, Sen Yang, Guangxu Yang, Jie Zheng, Jing Wang, Qiang Su, Dong Zheng, Xuezhong Qiu, Yongqi Liu, Yuxuan Chen, Yang Liu, Chao Song, Dongying Kong, Kai Ren, Peng Jiang, Qiao Lian, Ji Liu

In this setting with multiple and constrained goals, this paper discovers that a probabilistic strategic parameter regime can achieve better value compared to the standard regime of finding a single deterministic parameter.

Recommendation Systems

POSO: Personalized Cold Start Modules for Large-scale Recommender Systems

no code implementations 10 Aug 2021 Shangfeng Dai, Haobin Lin, Zhichen Zhao, Jianying Lin, Honghuan Wu, Zhe Wang, Sen Yang, Ji Liu

Moreover, POSO can be further generalized to regular users, inactive users and returning users (+2%-3% on Watch Time), as well as item cold start (+3.8% on Watch Time).

Recommendation Systems

ChemiRise: a data-driven retrosynthesis engine

no code implementations 9 Aug 2021 Xiangyan Sun, Ke Liu, Yuquan Lin, Lingjie Wu, Haoming Xing, Minghong Gao, Ji Liu, Suocheng Tan, Zekun Ni, Qi Han, Junqiu Wu, Jie Fan

We have developed an end-to-end retrosynthesis system, named ChemiRise, that can propose complete retrosynthesis routes for organic compounds rapidly and reliably.

Hand Image Understanding via Deep Multi-Task Learning

1 code implementation 24 Jul 2021 Xiong Zhang, Hongsheng Huang, Jianchao Tan, Hongmin Xu, Cheng Yang, Guozhu Peng, Lei Wang, Ji Liu

To further improve the performance of these tasks, we propose a novel Hand Image Understanding (HIU) framework to extract comprehensive information of the hand object from a single RGB image, by jointly considering the relationships between these tasks.

3D Hand Pose Estimation Multi-Task Learning +1

MugRep: A Multi-Task Hierarchical Graph Representation Learning Framework for Real Estate Appraisal

no code implementations 12 Jul 2021 Weijia Zhang, Hao Liu, Lijun Zha, HengShu Zhu, Ji Liu, Dejing Dou, Hui Xiong

Real estate appraisal refers to the process of developing an unbiased opinion for real property's market value, which plays a vital role in decision-making for various players in the marketplace (e.g., real estate agents, appraisers, lenders, and buyers).

Decision Making Graph Representation Learning +1

RankDetNet: Delving Into Ranking Constraints for Object Detection

no code implementations CVPR 2021 Ji Liu, Dong Li, Rongzhang Zheng, Lu Tian, Yi Shan

To this end, we comprehensively investigate three types of ranking constraints, i.e., global ranking, class-specific ranking and IoU-guided ranking losses.

3D Object Detection Classification +1

DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning

2 code implementations 11 Jun 2021 Daochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, Ji Liu

Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.

Game of Poker Multi-agent Reinforcement Learning

From Distributed Machine Learning to Federated Learning: A Survey

no code implementations 29 Apr 2021 Ji Liu, Jizhou Huang, Yang Zhou, Xuhong LI, Shilei Ji, Haoyi Xiong, Dejing Dou

Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.

Federated Learning

Distributed Learning Systems with First-order Methods

no code implementations 12 Apr 2021 Ji Liu, Ce Zhang

Scalable and efficient distributed learning is one of the main driving forces behind the recent rapid advancement of machine learning and artificial intelligence.


Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond

no code implementations 19 Mar 2021 Xuhong LI, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou

Then, to understand the results of interpretation, we also survey the performance metrics for evaluating interpretation algorithms.

On a Network SIS Epidemic Model with Cooperative and Antagonistic Opinion Dynamics

no code implementations 25 Feb 2021 Baike She, Ji Liu, Shreyas Sundaram, Philip E. Paré

We propose a mathematical model to study coupled epidemic and opinion dynamics in a network of communities.

Influence of flux limitation on large time behavior in a three-dimensional chemotaxis-Stokes system modeling coral fertilization

no code implementations 24 Feb 2021 Ji Liu

In this paper, we consider the following system $$\left\{\begin{array}{ll} n_t+u\cdot\nabla n&=\Delta n-\nabla\cdot(n\mathcal{S}(|\nabla c|^2)\nabla c)-nm,\\ c_t+u\cdot\nabla c&=\Delta c-c+m,\\ m_t+u\cdot\nabla m&=\Delta m-mn,\\ u_t&=\Delta u+\nabla P+(n+m)\nabla\Phi,\qquad \nabla\cdot u=0 \end{array}\right.$$ which models the process of coral fertilization, in a smoothly three-dimensional bounded domain, where $\mathcal{S}$ is a given function fulfilling $$|\mathcal{S}(\sigma)|\leq K_{\mathcal{S}}(1+\sigma)^{-\frac{\theta}{2}},\qquad \sigma\geq 0$$ with some $K_{\mathcal{S}}>0.$ Based on conditional estimates of the quantity $c$ and the gradients thereof, a relatively compressed argument as compared to that proceeding in related precedents shows that if $$\theta>0,$$ then for any initial data with proper regularity an associated initial-boundary problem under no-flux/no-flux/no-flux/Dirichlet boundary conditions admits a unique classical solution which is globally bounded, and which also enjoys the stabilization features in the sense that $$\|n(\cdot, t)-n_{\infty}\|_{L^{\infty}(\Omega)}+\|c(\cdot, t)-m_{\infty}\|_{W^{1,\infty}(\Omega)} +\|m(\cdot, t)-m_{\infty}\|_{W^{1,\infty}(\Omega)}+\|u(\cdot, t)\|_{L^{\infty}(\Omega)}\rightarrow0 \quad\textrm{as}~t\rightarrow \infty$$ with $n_{\infty}:=\frac{1}{|\Omega|}\left\{\int_{\Omega}n_0-\int_{\Omega}m_0\right\}_{+}$ and $m_{\infty}:=\frac{1}{|\Omega|}\left\{\int_{\Omega}m_0-\int_{\Omega}n_0\right\}_{+}.$

Analysis of PDEs

Gossip over Holonomic Graphs

no code implementations 17 Feb 2021 Xudong Chen, Mohamed-Ali Belabbas, Ji Liu

A gossip process is an iterative process in a multi-agent system where only two neighboring agents communicate at each iteration and update their states.

Optimization and Control
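
For orientation, classical randomized gossip (the baseline process the abstract describes, not the paper's holonomic-graph variant) works as follows: at each iteration one edge is activated and its two endpoint agents replace their states with their pairwise average, which drives all states toward the network-wide mean. A minimal sketch on an arbitrary 4-node path graph:

```python
import random

def gossip(states, edges, iters=2000, seed=0):
    """Randomized pairwise gossip: at each step one edge (i, j) is
    activated and both agents adopt the average of their two states.
    The global sum (hence the mean) is preserved at every step."""
    rng = random.Random(seed)
    x = list(states)
    for _ in range(iters):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2
    return x

# Path graph 0-1-2-3; all states converge to the mean, 2.5 here.
x = gossip([1.0, 2.0, 3.0, 4.0], edges=[(0, 1), (1, 2), (2, 3)])
```

Because each update is a convex combination that conserves the sum, the only fixed point reachable on a connected graph is the all-equal average state.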

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed

1 code implementation 4 Feb 2021 Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He

One of the most effective methods is error-compensated compression, which offers robust convergence speed even under 1-bit compression.
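
The mechanism named here, error-compensated compression, keeps the residual that compression throws away and folds it back into the next message, so quantization errors cancel over time instead of accumulating. A minimal single-worker sketch of 1-bit sign compression with error feedback (the scaling-by-mean-magnitude choice is an illustrative assumption, not the 1-bit Adam algorithm itself):

```python
import numpy as np

class OneBitCompressor:
    """Error-compensated 1-bit compression: transmit only the sign of
    (gradient + carried-over residual), scaled by one shared magnitude;
    whatever the compression lost is stored and re-added next round."""
    def __init__(self, dim):
        self.error = np.zeros(dim)

    def compress(self, grad):
        corrected = grad + self.error        # fold in the past residual
        scale = np.abs(corrected).mean()     # one float per message
        compressed = scale * np.sign(corrected)
        self.error = corrected - compressed  # remember what was lost
        return compressed

comp = OneBitCompressor(4)
g = np.array([0.5, -1.0, 0.1, 0.2])
sent = comp.compress(g)  # 0.45 * sign pattern; residual kept locally
```

Each message is then just one float (the scale) plus one bit per coordinate, while the stored residual ensures the long-run transmitted sum tracks the true gradient sum.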

Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments

3 code implementations ICLR 2021 Daochen Zha, Wenye Ma, Lei Yuan, Xia Hu, Ji Liu

Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once.

C-Watcher: A Framework for Early Detection of High-Risk Neighborhoods Ahead of COVID-19 Outbreak

no code implementations 22 Dec 2020 Congxi Xiao, Jingbo Zhou, Jizhou Huang, An Zhuo, Ji Liu, Haoyi Xiong, Dejing Dou

Furthermore, to transfer the firsthand knowledge (witnessed in epicenters) to the target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features for precise early detection of high-risk neighborhoods, even before any confirmed cases are known, in the target city.

Relaxed Peephole Optimization: A Novel Compiler Optimization for Quantum Circuits

1 code implementation 14 Dec 2020 Ji Liu, Luciano Bello, Huiyang Zhou

In this paper, we propose a novel quantum compiler optimization, named relaxed peephole optimization (RPO) for quantum computers.

Quantum Physics Programming Languages

Federated Bandit: A Gossiping Approach

no code implementations 24 Oct 2020 Zhaowei Zhu, Jingxuan Zhu, Ji Liu, Yang Liu

Motivated by the proposal of federated learning, we aim for a solution with which agents will never share their local observations with a central entity, and will be allowed to only share a private copy of their own information with their neighbors.

Federated Learning

Ensemble Chinese End-to-End Spoken Language Understanding for Abnormal Event Detection from audio stream

no code implementations 19 Oct 2020 Haoran Wei, Fei Tao, Runze Su, Sen Yang, Ji Liu

Previous end-to-end SLU models are primarily used in English environments due to the lack of a large-scale SLU dataset in Chinese, and they use only one ASR model to extract features from speech.

automatic-speech-recognition Event Detection +3

Themes Informed Audio-visual Correspondence Learning

no code implementations 14 Sep 2020 Runze Su, Fei Tao, Xudong Liu, Hao-Ran Wei, Xiaorong Mei, Zhiyao Duan, Lei Yuan, Ji Liu, Yuying Xie

Applications of short-term user-generated video (UGV), such as Snapchat and YouTube short videos, have boomed recently, raising many multimodal machine learning tasks.

Pose-Guided High-Resolution Appearance Transfer via Progressive Training

no code implementations 27 Aug 2020 Ji Liu, Heshan Liu, Mang-Tik Chiu, Yu-Wing Tai, Chi-Keung Tang

We propose a novel pose-guided appearance transfer network for transferring a given reference appearance to a target pose in unprecedented image resolution (1024 * 1024), given respectively an image of the reference and target person.

Video Generation

APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm

no code implementations 26 Aug 2020 Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang

Adam is an important optimization algorithm for guaranteeing efficiency and accuracy when training models on many important tasks such as BERT and ImageNet.

GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework

2 code implementations ECCV 2020 Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang

Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications, and have recently started to be deployed on resource-constrained mobile devices.

Image-to-Image Translation Model distillation +2

Streaming Probabilistic Deep Tensor Factorization

no code implementations 14 Jul 2020 Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, Shandian Zhe

Our algorithm provides responsive incremental updates for the posterior of the latent factors and NN weights upon receiving new tensor entries, and meanwhile selects and inhibits redundant/useless weights.

ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting

6 code implementations 7 Jul 2020 Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding

Via training with regular SGD on the former but a novel update rule with penalty gradients on the latter, we realize structured sparsity.

On Effective Parallelization of Monte Carlo Tree Search

no code implementations 15 Jun 2020 Anji Liu, Yitao Liang, Ji Liu, Guy Van Den Broeck, Jianshu Chen

Second, and more importantly, we demonstrate how the proposed necessary conditions can be adopted to design more effective parallel MCTS algorithms.

Atari Games

Neural Network Activation Quantization with Bitwise Information Bottlenecks

no code implementations 9 Jun 2020 Xichuan Zhou, Kui Liu, Cong Shi, Haijun Liu, Ji Liu

Recent research on the information bottleneck sheds new light on the continuous attempts to open the black box of neural signal encoding.


Proximal Gradient Temporal Difference Learning: Stable Reinforcement Learning with Polynomial Sample Complexity

1 code implementation 6 Jun 2020 Bo Liu, Ian Gemp, Mohammad Ghavamzadeh, Ji Liu, Sridhar Mahadevan, Marek Petrik

In this paper, we introduce proximal gradient temporal difference learning, which provides a principled way of designing and analyzing true stochastic gradient temporal difference learning algorithms.

Finite-Sample Analysis of Proximal Gradient TD Algorithms

no code implementations 6 Jun 2020 Bo Liu, Ji Liu, Mohammad Ghavamzadeh, Sridhar Mahadevan, Marek Petrik

In this paper, we analyze the convergence rate of the gradient temporal difference learning (GTD) family of algorithms.

Regularized Off-Policy TD-Learning

no code implementations NeurIPS 2012 Bo Liu, Sridhar Mahadevan, Ji Liu

We present a novel $l_1$ regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity.

Feature Selection

Data Poisoning Attacks on Federated Machine Learning

no code implementations 19 Apr 2020 Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu

In the end, experimental results on real-world datasets show that the federated multi-task learning model is very sensitive to poisoning attacks when the attackers either directly poison the target nodes or indirectly poison the related nodes by exploiting the communication protocol.

Data Poisoning Federated Learning +1

Depth Edge Guided CNNs for Sparse Depth Upsampling

no code implementations 23 Mar 2020 Yi Guo, Ji Liu

Inspired by the normalized convolution operation, we propose a guided convolutional layer to recover dense depth from a sparse and irregular depth image with a depth edge image as guidance.

Stochastic Recursive Momentum for Policy Gradient Methods

no code implementations 9 Mar 2020 Huizhuo Yuan, Xiangru Lian, Ji Liu, Yuren Zhou

In this paper, we propose a novel algorithm named STOchastic Recursive Momentum for Policy Gradient (STORM-PG), which operates a SARAH-type stochastic recursive variance-reduced policy gradient in an exponential moving average fashion.

Policy Gradient Methods

End-to-end Robustness for Sensing-Reasoning Machine Learning Pipelines

no code implementations 28 Feb 2020 Zhuolin Yang, Zhikuan Zhao, Hengzhi Pei, Boxin Wang, Bojan Karlas, Ji Liu, Heng Guo, Bo Li, Ce Zhang

We show that for reasoning components such as MLN and a specific family of Bayesian networks it is possible to certify the robustness of the whole pipeline even with a large magnitude of perturbation which cannot be certified by existing work.

A novel tree-structured point cloud dataset for skeletonization algorithm evaluation

1 code implementation 9 Jan 2020 Yan Lin, Ji Liu, Jianlin Zhou

Since the implicit surface is sufficiently expressive to retain the edges and details of the complex branches model, we use the implicit surface to model the triangular mesh.

Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization

no code implementations 31 Dec 2019 Huizhuo Yuan, Xiangru Lian, Ji Liu

Such a complexity is known to be the best one among IFO complexity results for non-convex stochastic compositional optimization, and is believed to be optimal.

Stochastic Optimization

LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning

1 code implementation NeurIPS 2019 Yali Du, Lei Han, Meng Fang, Ji Liu, Tianhong Dai, DaCheng Tao

A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward.

Multi-agent Reinforcement Learning Starcraft +1

Hierarchical Prototype Learning for Zero-Shot Recognition

no code implementations 24 Oct 2019 Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu

Specifically, HPL is able to obtain discriminability on both seen and unseen class domains by learning visual prototypes respectively under the transductive setting.

Image Captioning Object Recognition +1

ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries

no code implementations 24 Oct 2019 Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu

In this paper, we take an initial attempt, and propose a generic formulation to provide a systematical solution (named ATZSL) for learning a robust ZSL model.

Image Captioning Object Recognition +1

Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach

1 code implementation CVPR 2020 Haichuan Yang, Shupeng Gui, Yuhao Zhu, Ji Liu

A key parameter that all existing compression techniques are sensitive to is the compression ratio (e.g., pruning sparsity, quantization bitwidth) of each layer.

Neural Network Compression Quantization

Central Server Free Federated Learning over Single-sided Trust Social Networks

1 code implementation 11 Oct 2019 Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu

However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable).

Federated Learning

An Interactive Control Approach to 3D Shape Reconstruction

no code implementations 7 Oct 2019 Bipul Islam, Ji Liu, Anthony Yezzi, Romeil Sandhu

The ability to accurately reconstruct the 3D facets of a scene is one of the key problems in robotic vision.

3D Reconstruction 3D Shape Reconstruction

Global Sparse Momentum SGD for Pruning Very Deep Neural Networks

4 code implementations NeurIPS 2019 Xiaohan Ding, Guiguang Ding, Xiangxin Zhou, Yuchen Guo, Jungong Han, Ji Liu

Deep Neural Network (DNN) is powerful but computationally expensive and memory intensive, thus impeding its practical usage on resource-constrained front-end devices.

Model Compression

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

no code implementations 23 Aug 2019 Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei

Finally, we conduct extensive experiments using a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.

$\texttt{DeepSqueeze}$: Decentralization Meets Error-Compensated Compression

no code implementations 17 Jul 2019 Hanlin Tang, Xiangru Lian, Shuang Qiu, Lei Yuan, Ce Zhang, Tong Zhang, Ji Liu

Since \emph{decentralized} training has been shown to be superior to traditional \emph{centralized} training in communication-restricted scenarios, a natural question to ask is "how can the error-compensated technology be applied to decentralized learning to further reduce the communication cost?"

A Convergence Result for Regularized Actor-Critic Methods

no code implementations 13 Jul 2019 Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Ji Liu

In this paper, we present a probability one convergence proof, under suitable conditions, of a certain class of actor-critic algorithms for finding approximate solutions to entropy-regularized MDPs using the machinery of stochastic approximation.

A Communication-Efficient Multi-Agent Actor-Critic Algorithm for Distributed Reinforcement Learning

no code implementations 6 Jul 2019 Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Başar, Romeil Sandhu, Ji Liu

This paper considers a distributed reinforcement learning problem in which a network of multiple agents aim to cooperatively maximize the globally averaged return through communication with only local neighbors.

DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression

no code implementations 15 May 2019 Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu

For example, under the popular parameter server model for distributed learning, the worker nodes need to send the compressed local gradients to the parameter server, which performs the aggregation.

A Multi-Agent Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning

1 code implementation 15 Mar 2019 Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Basar, Ji Liu

This paper extends off-policy reinforcement learning to the multi-agent case in which a set of networked agents communicating with their neighbors according to a time-varying graph collaboratively evaluates and improves a target policy while following a distinct behavior policy.

Optimal Projection Guided Transfer Hashing for Image Retrieval

1 code implementation 1 Mar 2019 Ji Liu, Lei Zhang

For most existing learning to hash methods, sufficient training images are required and used to learn precise hashing codes.

Image Retrieval Transfer Learning

SCEF: A Support-Confidence-aware Embedding Framework for Knowledge Graph Refinement

no code implementations 18 Feb 2019 Yu Zhao, Ji Liu

Knowledge graph (KG) refinement mainly aims at KG completion and correction (i.e., error detection).

Model Compression with Adversarial Robustness: A Unified Optimization Framework

1 code implementation NeurIPS 2019 Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu

Deep model compression has been extensively studied, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss.

Model Compression Quantization

Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend

no code implementations 29 Jan 2019 Yawei Zhao, Chen Yu, Peilin Zhao, Hanlin Tang, Shuang Qiu, Ji Liu

Decentralized Online Learning (online learning in decentralized networks) has attracted more and more attention, since it is believed that it can help data providers cooperatively solve their online problems better without sharing their private data with a third party or other providers.

Monocular 3D Pose Recovery via Nonconvex Sparsity with Theoretical Analysis

no code implementations 29 Dec 2018 Jianqiao Wangni, Dahua Lin, Ji Liu, Kostas Daniilidis, Jianbo Shi

For recovering 3D object poses from 2D images, a prevalent method is to pre-train an over-complete dictionary $\mathcal D=\{B_i\}_i^D$ of 3D basis poses.

ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model

2 code implementations CVPR 2019 Haichuan Yang, Yuhao Zhu, Ji Liu

The energy estimate model allows us to formulate DNN compression as a constrained optimization that minimizes the DNN loss function over the energy constraint.

Neural Network Compression

Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity

no code implementations NeurIPS 2018 Conghui Tan, Tong Zhang, Shiqian Ma, Ji Liu

The regularized empirical risk minimization problem with a linear predictor appears frequently in machine learning.

Distributed Learning of Average Belief Over Networks Using Sequential Observations

no code implementations 19 Nov 2018 Kaiqing Zhang, Yang Liu, Ji Liu, Mingyan Liu, Tamer Başar

This paper addresses the problem of distributed learning of average belief with sequential observations, in which a network of $n>1$ agents aim to reach a consensus on the average value of their beliefs, by exchanging information only with their neighbors.
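
The classical building block behind such average-consensus results is the linear iteration x ← Wx with a doubly stochastic mixing matrix W, under which every agent's belief converges to the average of the initial beliefs. A minimal sketch on an arbitrary 4-agent ring (the paper's sequential-observation setting is more general than this fixed-belief baseline):

```python
import numpy as np

# Doubly stochastic mixing matrix for a 4-agent ring: each agent keeps
# half of its own belief and takes a quarter from each ring neighbor.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.array([4.0, 8.0, 0.0, 2.0])  # initial local beliefs, mean 3.5
for _ in range(100):
    x = W @ x                       # synchronous consensus update
# All beliefs converge to the initial average: row sums of 1 keep each
# update a convex combination, column sums of 1 preserve the total.
```

Double stochasticity is the key design choice: it guarantees the fixed point is exactly the average rather than some other weighted combination of the initial beliefs.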

Dantzig Selector with an Approximately Optimal Denoising Matrix and its Application to Reinforcement Learning

no code implementations 2 Nov 2018 Bo Liu, Luwan Zhang, Ji Liu

To make the problem computationally tractable, we propose a novel algorithm, termed as Optimal Denoising Dantzig Selector (ODDS), to approximately estimate the optimal denoising matrix.

Denoising Feature Selection +1

Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search

5 code implementations ICLR 2020 Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu

Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go).

Distributed Learning over Unreliable Networks

no code implementations 17 Oct 2018 Chen Yu, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, Dan Alistarh, Ce Zhang, Ji Liu

Most of today's distributed machine learning systems assume {\em reliable networks}: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message.

Parametrized Deep Q-Networks Learning: Reinforcement Learning with Discrete-Continuous Hybrid Action Space

2 code implementations 10 Oct 2018 Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Lei Han, Yang Zheng, Haobo Fu, Tong Zhang, Ji Liu, Han Liu

Most existing deep reinforcement learning (DRL) frameworks consider either discrete action space or continuous action space solely.

Proximal Online Gradient is Optimum for Dynamic Regret

no code implementations 8 Oct 2018 Yawei Zhao, Shuang Qiu, Ji Liu

While the online gradient method has been shown to be optimal for the static regret metric, the optimal algorithm for the dynamic regret remains unknown.

Fully Implicit Online Learning

no code implementations 25 Sep 2018 Chaobing Song, Ji Liu, Han Liu, Yong Jiang, Tong Zhang

Regularized online learning is widely used in machine learning applications.

TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game

3 code implementations 19 Sep 2018 Peng Sun, Xinghai Sun, Lei Han, Jiechao Xiong, Qing Wang, Bo Li, Yang Zheng, Ji Liu, Yongsheng Liu, Han Liu, Tong Zhang

Both TStarBot1 and TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in a full game (1v1 Zerg-vs-Zerg game on the AbyssalReef map), noting that level 8, level 9, and level 10 are cheating agents with unfair advantages such as full vision on the whole map and resource harvest boosting.

Decision Making Starcraft +1

Stochastically Controlled Stochastic Gradient for the Convex and Non-convex Composition problem

no code implementations 6 Sep 2018 Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao

In this paper, we consider the convex and non-convex composition problem with the structure $\frac{1}{n}\sum_{i=1}^n F_i(G(x))$, where $G(x)=\frac{1}{n}\sum_{j=1}^n G_j(x)$ is the inner function, and $F_i(\cdot)$ is the outer function.
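
The compositional structure described above can be made concrete with a tiny instance; the particular choices of F_i and G_j below are arbitrary illustrations. By the chain rule the full gradient is the derivative of the inner average times the averaged outer derivatives evaluated at G(x), which is what makes such problems harder than plain finite sums: the inner average itself must be estimated.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # inner components: G_j(x) = a_j * x
b = np.array([1.0, 3.0])       # outer components: F_i(u) = (u - b_i)**2

def G(x):
    return np.mean(a * x)      # inner average, here G(x) = 2*x

def f(x):
    return np.mean((G(x) - b) ** 2)  # compositional objective

def grad_f(x):
    # chain rule: f'(x) = G'(x) * mean_i F_i'(G(x))
    return np.mean(a) * np.mean(2.0 * (G(x) - b))

# Finite-difference check of the analytic compositional gradient.
eps, x0 = 1e-6, 0.7
fd = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)
```

Running the check confirms the chain-rule gradient matches the numerical derivative, which is a useful sanity test before plugging such a gradient into a compositional SGD method.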

$D^2$: Decentralized Training over Decentralized Data

no code implementations ICML 2018 Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu

While training a machine learning model using multiple workers, each of which collects data from its own data source, it would be useful when the data collected from different workers are unique and different.

Image Classification Multi-view Subspace Clustering

Marginal Policy Gradients: A Unified Family of Estimators for Bounded Action Spaces with Applications

1 code implementation ICLR 2019 Carson Eisenach, Haichuan Yang, Ji Liu, Han Liu

In the former, an agent learns a policy over $\mathbb{R}^d$ and in the latter, over a discrete set of actions each of which is parametrized by a continuous parameter.

Continuous Control

Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking

1 code implementation ICLR 2019 Haichuan Yang, Yuhao Zhu, Ji Liu

Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices, while at the same time they must operate in real-time.

GESF: A Universal Discriminative Mapping Mechanism for Graph Representation Learning

no code implementations 28 May 2018 Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu

Graph embedding is a central problem in social network analysis and many other applications, aiming to learn the vector representation for each node.

Graph Embedding Graph Representation Learning

Parallel Computation of PDFs on Big Spatial Data Using Spark

no code implementations 8 May 2018 Ji Liu, Noel Moreno Lemus, Esther Pacitti, Fabio Porto, Patrick Valduriez

We consider big spatial data, which is typically produced in scientific areas such as geological or seismic interpretation.

Seismic Interpretation

Learning Simple Thresholded Features with Sparse Support Recovery

no code implementations 16 Apr 2018 Hongyu Xu, Zhangyang Wang, Haichuan Yang, Ding Liu, Ji Liu

The thresholded feature has recently emerged as an extremely efficient, yet rough empirical approximation, of the time-consuming sparse coding inference process.

Dictionary Learning

D$^2$: Decentralized Training over Decentralized Data

no code implementations 19 Mar 2018 Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu

While training a machine learning model using multiple workers, each of which collects data from their own data sources, it would be most useful when the data collected from different workers can be {\em unique} and {\em different}.

Image Classification

A Robust AUC Maximization Framework with Simultaneous Outlier Detection and Feature Selection for Positive-Unlabeled Classification

no code implementations 18 Mar 2018 Ke Ren, Haichuan Yang, Yu Zhao, Mingshan Xue, Hongyu Miao, Shuai Huang, Ji Liu

The positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples.

EEG Feature Selection +4

AutoML from Service Provider's Perspective: Multi-device, Multi-tenant Model Selection with GP-EI

no code implementations 17 Mar 2018 Chen Yu, Bojan Karlas, Jie Zhong, Ce Zhang, Ji Liu

In this paper, we focus on the AutoML problem from the \emph{service provider's perspective}, motivated by the following practical consideration: When an AutoML service needs to serve {\em multiple users} with {\em multiple devices} at the same time, how can we allocate these devices to users in an efficient way?

AutoML Model Selection

Communication Compression for Decentralized Training

no code implementations NeurIPS 2018 Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, Ji Liu

In this paper, we explore a natural question: {\em can the combination of both techniques lead to a system that is robust to both bandwidth and latency?}

Variance Reduced methods for Non-convex Composition Optimization

no code implementations13 Nov 2017 Liu Liu, Ji Liu, DaCheng Tao

In this paper, we apply the variance-reduced technique to derive two variance reduced algorithms that significantly improve the query complexity if the number of inner component functions is large.

Accelerated Method for Stochastic Composition Optimization with Nonsmooth Regularization

no code implementations10 Nov 2017 Zhouyuan Huo, Bin Gu, Ji Liu, Heng Huang

To the best of our knowledge, our method admits the fastest convergence rate for stochastic composition optimization: for strongly convex composition problem, our algorithm is proved to admit linear convergence; for general composition problem, our algorithm significantly improves the state-of-the-art convergence rate from $O(T^{-1/2})$ to $O((n_1+n_2)^{{2}/{3}}T^{-1})$.

Gradient Sparsification for Communication-Efficient Distributed Optimization

no code implementations NeurIPS 2018 Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang

Modern large scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures.

Distributed Optimization
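The core idea of gradient sparsification is to transmit only a random subset of gradient coordinates, rescaled so the compressed gradient stays unbiased. A minimal sketch of that idea, with an illustrative (not the paper's) choice of keep-probabilities:

```python
import numpy as np

def sparsify(grad, p, rng):
    """Unbiased random sparsification: keep coordinate i with probability
    p[i] and rescale by 1/p[i], so E[sparsified gradient] == grad."""
    mask = rng.random(grad.shape) < p
    out = np.zeros_like(grad)
    out[mask] = grad[mask] / p[mask]
    return out

rng = np.random.default_rng(0)
g = np.array([0.9, -0.1, 0.05, 1.2])
# heuristic: larger-magnitude entries are kept more often (assumption, not
# the paper's optimized probabilities)
p = np.clip(np.abs(g) / np.abs(g).max(), 0.1, 1.0)
est = np.mean([sparsify(g, p, rng) for _ in range(20000)], axis=0)
print(np.round(est, 2))  # averages back to g, confirming unbiasedness
```

Each transmitted vector is mostly zeros (cheap to communicate), while the rescaling keeps the stochastic gradient unbiased, so standard SGD convergence arguments still apply at the cost of extra variance.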

Duality-free Methods for Stochastic Composition Optimization

no code implementations26 Oct 2017 Liu Liu, Ji Liu, DaCheng Tao

We consider the composition optimization with two expected-value functions in the form of $\frac{1}{n}\sum\nolimits_{i = 1}^n F_i(\frac{1}{m}\sum\nolimits_{j = 1}^m G_j(x))+R(x)$, which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding.

Asynchronous Decentralized Parallel Stochastic Gradient Descent

1 code implementation ICML 2018 Xiangru Lian, Wei Zhang, Ce Zhang, Ji Liu

Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate?
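One way to picture an asynchronous decentralized SGD step: a worker averages its model with one randomly chosen peer, then applies a local stochastic gradient, with no central parameter server. The serialized toy simulation below conveys the flavor only; it is not the paper's protocol or code, and the toy quadratic objective is an assumption:

```python
import numpy as np

def gossip_sgd_step(models, i, grad, lr, rng):
    """One serialized gossip-style step for worker i: average with a random
    peer j, then take a local gradient step. A toy sketch of decentralized
    asynchronous training, not the paper's exact algorithm."""
    j = rng.choice([k for k in range(len(models)) if k != i])
    avg = (models[i] + models[j]) / 2.0
    models[j] = avg
    models[i] = avg - lr * grad(models[i])
    return models

rng = np.random.default_rng(1)
target = 3.0
grad = lambda w: 2.0 * (w - target)      # gradient of (w - target)^2
models = [np.array(v) for v in (0.0, 10.0, -4.0)]
for _ in range(400):
    models = gossip_sgd_step(models, int(rng.integers(3)), grad, lr=0.05, rng=rng)
print([float(np.round(m, 2)) for m in models])  # all workers near 3.0
```

Despite never synchronizing globally, the pairwise averaging drives the workers toward consensus while the local gradients drive that consensus toward the minimizer.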

Cascaded Region-based Densely Connected Network for Event Detection: A Seismic Application

no code implementations12 Sep 2017 Yue Wu, Youzuo Lin, Zheng Zhou, David Chas Bolton, Ji Liu, Paul Johnson

Because of the fact that some positive events are not correctly annotated, we further formulate the detection problem as a learning-from-noise problem.

2D Object Detection Abnormal Event Detection In Video +2

Towards Multi-tenant Resource Sharing for Machine Learning Workloads

no code implementations24 Aug 2017 Tian Li, Jie Zhong, Ji Liu, Wentao Wu, Ce Zhang

We ask, as a "service provider" that manages a shared cluster of machines among all our users running machine learning workloads, what is the resource allocation strategy that maximizes the global satisfaction of all our users?

Fairness Image Classification +2

ZipML: Training Linear Models with End-to-End Low Precision, and a Little Bit of Deep Learning

no code implementations ICML 2017 Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang

We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees?


An Interactive Greedy Approach to Group Sparsity in High Dimensions

1 code implementation10 Jul 2017 Wei Qian, Wending Li, Yasuhiro Sogawa, Ryohei Fujimaki, Xitong Yang, Ji Liu

Sparsity learning with known grouping structure has received considerable attention due to wide modern applications in high-dimensional data analysis.

Activity Recognition

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

1 code implementation NeurIPS 2017 Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu

On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.

Lifelong Metric Learning

no code implementations3 May 2017 Gan Sun, Yang Cong, Ji Liu, Xiaowei Xu

In this paper, we consider the lifelong learning problem of mimicking "human learning", i.e., endowing a new capability to the learned metric for a new task from new online samples while incorporating previous experiences and knowledge.

Metric Learning

Asynchronous Parallel Empirical Variance Guided Algorithms for the Thresholding Bandit Problem

no code implementations15 Apr 2017 Jie Zhong, Yijun Huang, Ji Liu

This paper proposes an asynchronous parallel thresholding algorithm and its parameter-free version to improve the efficiency and the applicability.

On The Projection Operator to A Three-view Cardinality Constrained Set

no code implementations ICML 2017 Haichuan Yang, Shupeng Gui, Chuyang Ke, Daniel Stefankovic, Ryohei Fujimaki, Ji Liu

The cardinality constraint is an intrinsic way to restrict the solution structure in many domains, for example, sparse learning, feature selection, and compressed sensing.

Feature Selection Sparse Learning
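The basic building block behind cardinality-constrained projection is hard thresholding: the Euclidean projection onto $\{z : \|z\|_0 \le k\}$ keeps the $k$ largest-magnitude entries and zeroes the rest. The paper handles a richer three-view (element/group/global) constraint; this sketch shows only the single-level case:

```python
import numpy as np

def project_cardinality(x, k):
    """Euclidean projection onto {z : ||z||_0 <= k}: keep the k
    largest-magnitude entries, zero the rest (hard thresholding)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

x = np.array([3.0, -0.5, 2.0, 0.1, -4.0])
print(project_cardinality(x, 2))
```

For a single cardinality constraint this projection is exact and costs a sort; the three-view version composes such constraints at several granularities, which is where the paper's analysis comes in.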

Negative-Unlabeled Tensor Factorization for Location Category Inference from Highly Inaccurate Mobility Data

no code implementations21 Feb 2017 Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan

In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm by effectively exploring the sparse and low-rank structure of the tensor.

Asynchronous Parallel Greedy Coordinate Descent

no code implementations NeurIPS 2016 Yang You, Xiangru Lian, Ji Liu, Hsiang-Fu Yu, Inderjit S. Dhillon, James Demmel, Cho-Jui Hsieh

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints.

GaDei: On Scale-up Training As A Service For Deep Learning

no code implementations18 Nov 2016 Wei Zhang, Minwei Feng, Yunhui Zheng, Yufei Ren, Yandong Wang, Ji Liu, Peng Liu, Bing Xiang, Li Zhang, Bo-Wen Zhou, Fei Wang

By evaluating the NLC workloads, we show that only the conservative hyper-parameter setup (e.g., small mini-batch size and small learning rate) can guarantee acceptable model accuracy for a wide range of customers.

The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning

1 code implementation16 Nov 2016 Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang

When applied to linear models together with double sampling, we save up to another 1.7x in data movement compared with uniform quantization.
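Low-precision training schemes like this rest on unbiased stochastic rounding: a value is rounded down or up on a coarse grid with probability proportional to proximity, so the quantized value is correct in expectation. A generic sketch of that ingredient (not ZipML's exact double-sampling scheme):

```python
import numpy as np

def stochastic_round(x, step, rng):
    """Unbiased stochastic rounding to a grid of spacing `step`:
    E[stochastic_round(x)] == x, unlike deterministic rounding."""
    low = np.floor(x / step) * step
    frac = (x - low) / step           # in [0, 1): chance of rounding up
    return low + step * (rng.random(x.shape) < frac)

rng = np.random.default_rng(0)
x = np.array([0.3, 0.71, -0.25])
est = np.mean([stochastic_round(x, 0.5, rng) for _ in range(40000)], axis=0)
print(np.round(est, 2))  # averages back to x despite the coarse 0.5 grid
```

Because the rounding error has zero mean, gradient computations on quantized data remain unbiased, which is what makes end-to-end low-precision training with guarantees possible.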


Prognostics of Surgical Site Infections using Dynamic Health Data

no code implementations12 Nov 2016 Chuyang Ke, Yan Jin, Heather Evans, Bill Lober, Xiaoning Qian, Ji Liu, Shuai Huang

Since existing prediction models of SSI have quite limited capacity to utilize the evolving clinical data, we develop the corresponding solution to equip these mHealth tools with decision-making capabilities for SSI prediction with a seamless assembly of several machine learning models to tackle the analytic challenges arising from the spatial-temporal data.

Decision Making Imputation +1

Efficient Estimation of Compressible State-Space Models with Application to Calcium Signal Deconvolution

no code implementations20 Oct 2016 Abbas Kazemipour, Ji Liu, Patrick Kanold, Min Wu, Behtash Babadi

In this paper, we consider linear state-space models with compressible innovations and convergent transition matrices in order to model spatiotemporally sparse transient events.

Infinite-Label Learning with Semantic Output Codes

no code implementations23 Aug 2016 Yang Zhang, Rupam Acharyya, Ji Liu, Boqing Gong

We develop a new statistical machine learning paradigm, named infinite-label learning, to annotate a data point with more than one relevant label from a candidate set, which pools both the finite labels observed at training and a potentially infinite number of previously unseen labels.

Multi-Label Learning Zero-Shot Learning

Accelerating Stochastic Composition Optimization

no code implementations NeurIPS 2016 Mengdi Wang, Ji Liu, Ethan X. Fang

The ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with nonsmooth regularization penalty.

On Benefits of Selection Diversity via Bilevel Exclusive Sparsity

no code implementations CVPR 2016 Haichuan Yang, Yijun Huang, Lam Tran, Ji Liu, Shuai Huang

In this paper, we propose a general bilevel exclusive sparsity formulation to pursue the diversity by restricting the overall sparsity and the sparsity in each group.

Feature Selection Image Classification

The Teaching Dimension of Linear Learners

no code implementations7 Dec 2015 Ji Liu, Xiaojin Zhu

Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner.

Staleness-aware Async-SGD for Distributed Deep Learning

1 code implementation18 Nov 2015 Wei Zhang, Suyog Gupta, Xiangru Lian, Ji Liu

Deep neural networks have been shown to achieve state-of-the-art performance in several machine learning tasks.

Distributed Computing Image Classification

Exclusive Sparsity Norm Minimization with Random Groups via Cone Projection

no code implementations27 Oct 2015 Yijun Huang, Ji Liu

To the best of our knowledge, this is the first such convergence rate guarantee for general exclusive sparsity norm minimization; 2) when the group information is unavailable to define the exclusive sparsity norm, we propose to use a random grouping scheme to construct groups and prove that if the number of groups is appropriately chosen, the nonzeros (true features) will be grouped in the ideal way with high probability.

Feature Selection Multi-Task Learning
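The exclusive sparsity (exclusive lasso) norm referenced in this entry and in the bilevel formulation above sums, over groups, the squared $\ell_1$ norm within each group: $\sum_g (\sum_{i \in g} |x_i|)^2$. A minimal sketch showing why it rewards spreading the nonzeros across groups (groups below are toy assumptions):

```python
import numpy as np

def exclusive_sparsity_norm(x, groups):
    """Exclusive sparsity penalty: sum over groups g of (sum_{i in g} |x_i|)^2.
    The l1 norm inside each group promotes sparsity *within* groups, so
    selected features spread across groups rather than clustering in one."""
    return sum(np.sum(np.abs(x[g])) ** 2 for g in groups)

x = np.array([1.0, 1.0, 0.0, 0.0])
spread = [[0, 2], [1, 3]]      # the two nonzeros land in different groups
clustered = [[0, 1], [2, 3]]   # the two nonzeros land in the same group
print(exclusive_sparsity_norm(x, spread),
      exclusive_sparsity_norm(x, clustered))  # 2.0 vs 4.0
```

The same vector is penalized less (2.0) when its nonzeros are spread across groups than when they cluster in one group (4.0), which is exactly the selection diversity these papers pursue.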

Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization

no code implementations NeurIPS 2015 Xiangru Lian, Yijun Huang, Yuncheng Li, Ji Liu

Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural network and received many successes in practice recently.

Proximal Reinforcement Learning: A New Theory of Sequential Decision Making in Primal-Dual Spaces

no code implementations26 May 2014 Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere, Nicholas Jacek, Ian Gemp, Ji Liu

In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms; (ii) how to guarantee that reinforcement learning satisfies pre-specified "safety" guarantees and remains in a stable region of the parameter space; (iii) how to design "off-policy" temporal difference learning algorithms in a reliable and stable manner; and (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization.

Decision Making Stochastic Optimization

Forward-Backward Greedy Algorithms for General Convex Smooth Functions over A Cardinality Constraint

no code implementations31 Dec 2013 Ji Liu, Ryohei Fujimaki, Jieping Ye

Our new bounds are consistent with the bounds of a special case (least squares) and fills a previously existing theoretical gap for general convex smooth functions; 3) We show that the restricted strong convexity condition is satisfied if the number of independent samples is more than $\bar{k}\log d$ where $\bar{k}$ is the sparsity number and $d$ is the dimension of the variable; 4) We apply FoBa-gdt (with the conditional random field objective) to the sensor selection problem for human indoor activity recognition and our results show that FoBa-gdt outperforms other methods (including the ones based on forward greedy selection and L1-regularization).

Activity Recognition Feature Selection

An Approximate, Efficient LP Solver for LP Rounding

no code implementations NeurIPS 2013 Srikrishna Sridhar, Stephen Wright, Christopher Re, Ji Liu, Victor Bittorf, Ce Zhang

Many problems in machine learning can be solved by rounding the solution of an appropriate linear program.

Dictionary LASSO: Guaranteed Sparse Recovery under Linear Transformation

no code implementations30 Apr 2013 Ji Liu, Lei Yuan, Jieping Ye

Specifically, we show 1) in the noiseless case, if the condition number of $D$ is bounded and the measurement number $n\geq \Omega(s\log(p))$ where $s$ is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the measurement increases faster than $s\log(p)$, that is, $s\log(p)=o(n)$, the estimate error converges to zero with probability 1 when $p$ and $s$ go to infinity.

Robust Dequantized Compressive Sensing

no code implementations3 Jul 2012 Ji Liu, Stephen J. Wright

We consider the reconstruction problem in compressed sensing in which the observations are recorded in a finite number of bits.

Compressive Sensing Quantization

Multi-Stage Dantzig Selector

no code implementations NeurIPS 2010 Ji Liu, Peter Wonka, Jieping Ye

We show that if $X$ obeys a certain condition, then with a large probability the difference between the solution $\hat\beta$ estimated by the proposed method and the true solution $\beta^*$ measured in terms of the $l_p$ norm ($p\geq 1$) is bounded as \begin{equation*} \|\hat\beta-\beta^*\|_p\leq \left(C(s-N)^{1/p}\sqrt{\log m}+\Delta\right)\sigma, \end{equation*} where $C$ is a constant, $s$ is the number of nonzero entries in $\beta^*$, $\Delta$ is independent of $m$ and is much smaller than the first term, and $N$ is the number of entries of $\beta^*$ larger than a certain value in the order of $\mathcal{O}(\sigma\sqrt{\log m})$.

Feature Selection
