no code implementations • ECCV 2020 • Lin Huang, Jianchao Tan, Ji Liu, Junsong Yuan
To address this issue, we connect this structured output learning problem with the structured modeling framework in the sequence transduction field.
no code implementations • 24 Nov 2022 • Ji Liu, Juncheng Jia, Beichen Ma, Chendi Zhou, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
The system model enables a parallel training process for multiple jobs, with a cost model based on data fairness and the training time of heterogeneous devices during the parallel training process.
no code implementations • 5 Nov 2022 • Shaojie Min, Ji Liu
Compared with network datasets, multi-dimensional data are much more common nowadays.
no code implementations • 3 Oct 2022 • Lili Wang, Ji Liu, A. Stephen Morse, Brian D. O. Anderson
This paper studies a distributed state estimation problem for both continuous- and discrete-time linear systems.
no code implementations • 25 Jul 2022 • Martin Figura, Yixuan Lin, Ji Liu, Vijay Gupta
In decentralized cooperative multi-agent reinforcement learning, agents can aggregate information from one another to learn policies that maximize a team-average objective function.
no code implementations • 23 Jul 2022 • Ji Liu, Dong Li, Zekun Li, Han Liu, Wenjing Ke, Lu Tian, Yi Shan
Sample assignment plays a prominent part in modern object detection approaches.
no code implementations • 23 Jul 2022 • Sebin Gracy, Philip E. Paré, Ji Liu, Henrik Sandberg, Carolyn L. Beck, Karl Henrik Johansson, Tamer Başar
We establish a sufficient condition and multiple necessary conditions for local exponential convergence to the boundary equilibrium (i.e., one virus persists and the other dies out) of each virus.
1 code implementation • 14 Jul 2022 • Ji Liu, Daxiang Dong, Xi Wang, An Qin, Xingjian Li, Patrick Valduriez, Dejing Dou, Dianhai Yu
Although more layers and more parameters generally improve the accuracy of models, such big models generally have high computational complexity and require large memory, which exceeds the capacity of small devices for inference and incurs long training times.
no code implementations • 14 Jul 2022 • Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou
The federated learning (FL) framework enables edge clients to collaboratively learn a shared inference model while keeping privacy of training data on clients.
no code implementations • 21 Jun 2022 • Guanghao Li, Yue Hu, Miao Zhang, Ji Liu, Quanjun Yin, Yong Peng, Dejing Dou
Because training efficiency in the ring topology favors devices with homogeneous resources, classifying devices by computing capacity mitigates the impact of straggler effects.
1 code implementation • 12 Jun 2022 • Lijie Xu, Shuang Qiu, Binhang Yuan, Jiawei Jiang, Cedric Renggli, Shaoduo Gan, Kaan Kara, Guoliang Li, Ji Liu, Wentao Wu, Jieping Ye, Ce Zhang
In this paper, we first conduct a systematic empirical study on existing data shuffling strategies, which reveals that all existing strategies have room for improvement -- they all suffer in terms of I/O performance or convergence rate.
1 code implementation • 5 Jun 2022 • Zhenyu Hu, Zhenyu Wu, Pengcheng Pi, Yunhe Xue, Jiayi Shen, Jianchao Tan, Xiangru Lian, Zhangyang Wang, Ji Liu
Unmanned Aerial Vehicles (UAVs) based video text spotting has been extensively used in civil and military domains.
no code implementations • 1 Jun 2022 • Yi Guo, Zhaocheng Liu, Jianchao Tan, Chao Liao, Sen Yang, Lei Yuan, Dongying Kong, Zhi Chen, Ji Liu
When training is finished, some gates are exactly zero while others are close to one; this is particularly favored by practical hot-start training in industry, since removing the features corresponding to exact-zero gates does no damage to model performance.
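As a rough illustration of the gate mechanism described above (a minimal sketch with an illustrative polarization penalty; the class name and penalty form are mine, not the paper's):

```python
import torch
import torch.nn as nn

class GatedFeatures(nn.Module):
    """Learnable per-feature gates for feature selection (illustrative)."""
    def __init__(self, num_features):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        return x * torch.sigmoid(self.logit)  # gate each feature into (0, 1)

    def polarization_penalty(self):
        # g - g^2 peaks at g = 0.5 and vanishes at g in {0, 1}, so adding
        # this term to the loss pushes every gate toward one of the extremes.
        g = torch.sigmoid(self.logit)
        return (g - g * g).sum()
```

After training, features whose gates sit at or near zero can be dropped before the hot-start retraining, which is exactly the property the abstract highlights.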
no code implementations • CVPR 2022 • Qinghang Hong, Fengming Liu, Dong Li, Ji Liu, Lu Tian, Yi Shan
Sparse R-CNN is a recent strong object detection baseline by set prediction on sparse, learnable proposal boxes and proposal features.
no code implementations • CVPR 2022 • Haowei Zhu, Wenjing Ke, Dong Li, Ji Liu, Lu Tian, Yi Shan
First, we propose global-local cross-attention (GLCA) to enhance the interactions between global images and local high-response regions, which can help reinforce the spatial-wise discriminative clues for recognition.
Ranked #4 on Fine-Grained Image Classification on CUB-200-2011
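A minimal numpy sketch of the cross-attention pattern that GLCA describes (names illustrative; how high-response regions are scored, e.g., via accumulated attention rollout, is abstracted into the `response` vector):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def global_local_cross_attention(global_tokens, response, k=8):
    """Queries come from the k highest-response local tokens; keys and
    values come from all global tokens. Shapes: global_tokens (N, d),
    response (N,)."""
    top = np.argsort(response)[-k:]          # indices of high-response regions
    q = global_tokens[top]                   # (k, d) local queries
    d = global_tokens.shape[1]
    attn = softmax(q @ global_tokens.T / np.sqrt(d))  # (k, N)
    return attn @ global_tokens              # (k, d) enhanced local features
```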
no code implementations • 25 Apr 2022 • Hong Zhang, Ji Liu, Juncheng Jia, Yang Zhou, Huaiyu Dai, Dejing Dou
Despite achieving remarkable performance, Federated Learning (FL) suffers from two critical challenges, i.e., limited computational resources and low training efficiency.
no code implementations • 20 Apr 2022 • Ji Liu, Zheng Xu, Yanmei Zhang, Wei Dai, Hao Wu, Shiping Chen
Since the emergence of blockchain technology, its application in the financial market has always been an area of focus and exploration by all parties.
no code implementations • 26 Mar 2022 • Jingxuan Zhu, Yixuan Lin, Alvaro Velasquez, Ji Liu
This paper considers a resilient high-dimensional constrained consensus problem and studies a resilient distributed algorithm for complete graphs.
1 code implementation • ICLR 2022 • Shixing Yu, Tianlong Chen, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang
Vision transformers (ViTs) have gained popularity recently.
no code implementations • 26 Feb 2022 • Wentao Zhu, Hang Shang, Tingxun Lv, Chao Liao, Sen Yang, Ji Liu
Recently, learning from vast amounts of unlabeled data, especially self-supervised learning, has been emerging and has attracted widespread attention.
1 code implementation • 1 Feb 2022 • Yu Zhao, Shaopeng Wei, Yu Guo, Qing Yang, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
This study for the first time considers both types of risk and their joint effects in bankruptcy prediction.
no code implementations • 21 Jan 2022 • Wesley A. Suttle, Alec Koppel, Ji Liu
We develop a new measure of the exploration/exploitation trade-off in infinite-horizon reinforcement learning problems, called the occupancy information ratio (OIR): the ratio between the infinite-horizon average cost of a policy and the entropy of its long-term state occupancy measure.
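In symbols (my paraphrase of the quantity as described, with $J(\pi)$ the infinite-horizon average cost of policy $\pi$ and $\rho_\pi$ its long-term state occupancy measure):
$$\mathrm{OIR}(\pi)=\frac{J(\pi)}{H(\rho_\pi)},\qquad H(\rho_\pi)=-\sum_{s}\rho_\pi(s)\log\rho_\pi(s)$$
Minimizing the OIR thus favors policies whose cost is low relative to how broadly they cover the state space.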
no code implementations • 18 Jan 2022 • Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Jixiang Li, Ji Liu, Ce Zhang, Bin Cui
The ever-growing demand and complexity of machine learning are putting pressure on hyper-parameter tuning systems: while the evaluation cost of models continues to increase, the scalability of state-of-the-art systems is becoming a crucial bottleneck.
1 code implementation • 11 Jan 2022 • Yu Zhao, Huaming Du, Ying Liu, Shaopeng Wei, Xingyan Chen, Fuzhen Zhuang, Qing Li, Ji Liu, Gang Kou
Stock Movement Prediction (SMP) aims at predicting the future price trend of listed companies' stock, which is a challenging task due to the volatile nature of financial markets.
1 code implementation • 24 Dec 2021 • Yu Zhao, Shaopeng Wei, Huaming Du, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
To address this issue, we propose a novel Dual Hierarchical Attention Networks (DHAN) based on the bi-typed multi-relational heterogeneous graphs to learn comprehensive node representations with the intra-class and inter-class attention-based encoder under a hierarchical mechanism.
no code implementations • 11 Dec 2021 • Chendi Zhou, Ji Liu, Juncheng Jia, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou
However, the scheduling of devices for multiple jobs with FL remains a critical and open problem.
no code implementations • NeurIPS 2021 • Shun Lu, Jixiang Li, Jianchao Tan, Sen Yang, Ji Liu
Predictor-based Neural Architecture Search (NAS) continues to be an important topic because it aims to mitigate the time-consuming search procedure of traditional NAS methods.
Ranked #21 on Neural Architecture Search on CIFAR-10
no code implementations • NeurIPS 2021 • Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou
Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks.
no code implementations • NeurIPS 2021 • Yixuan Lin, Vijay Gupta, Ji Liu
When the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be uni-directional, doubly stochastic matrices cannot be implemented in a distributed manner. For this setting, the paper proposes a push-sum-type distributed stochastic approximation algorithm and provides its finite-time bound for the time-varying step-size case, leveraging the analysis of the consensus-type algorithm with stochastic matrices and developing novel properties of the push-sum algorithm.
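A toy sketch of the push-sum mechanism this entry builds on (illustrative, not the paper's algorithm): a column-stochastic weight matrix is implementable over uni-directional links, and the ratio of two linear iterates recovers the straight average.

```python
import numpy as np

def push_sum_average(x0, A, iters=200):
    """Push-sum averaging over a directed graph. A is column-stochastic:
    A[i, j] is the weight agent j pushes to agent i. Each agent's ratio
    x/w converges to the network-wide average of x0."""
    x = x0.astype(float).copy()
    w = np.ones_like(x)
    for _ in range(iters):
        x = A @ x   # push values
        w = A @ w   # push weights
    return x / w    # per-agent estimate of mean(x0)

# toy 3-agent directed ring with self-loops (columns sum to 1)
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
print(push_sum_average(np.array([1.0, 2.0, 6.0]), A))  # ≈ [3. 3. 3.]
```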
no code implementations • NeurIPS 2021 • Jingxuan Zhu, Ethan Mulle, Christopher Salomon Smith, Ji Liu
This paper studies a decentralized multi-armed bandit problem in a multi-agent network.
1 code implementation • 20 Nov 2021 • Ji Liu, Zhihua Wu, Dianhai Yu, Yanjun Ma, Danlei Feng, Minxu Zhang, Xinxuan Wu, Xuefeng Yao, Dejing Dou
The training process generally exploits distributed computing resources to reduce training time.
1 code implementation • 12 Nov 2021 • Martin Figura, Yixuan Lin, Ji Liu, Vijay Gupta
We show that in the presence of Byzantine agents, whose estimation and communication strategies are completely arbitrary, the estimates of the cooperative agents converge to a bounded consensus value with probability one, provided that there are at most $H$ Byzantine agents in the neighborhood of each cooperative agent and the network is $(2H+1)$-robust.
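To convey the flavor of such resilient aggregation, here is a simplified trimming sketch (the paper's actual update and its $(2H+1)$-robustness condition are more involved; this is only the core idea of discarding extremes):

```python
import numpy as np

def trimmed_mean(neighbor_values, own_value, H):
    """Discard the H largest and H smallest neighbor values before
    averaging with one's own value, so up to H arbitrary (Byzantine)
    neighbors cannot drag the estimate unboundedly."""
    v = np.sort(np.asarray(neighbor_values, dtype=float))
    kept = v[H:len(v) - H] if len(v) > 2 * H else np.empty(0)
    return float(np.mean(np.append(kept, own_value)))
```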
1 code implementation • 10 Nov 2021 • Xiangru Lian, Binhang Yuan, XueFeng Zhu, Yulong Wang, Yongjun He, Honghuan Wu, Lei Sun, Haodong Lyu, Chengjun Liu, Xing Dong, Yiqiao Liao, Mingnan Luo, Congfei Zhang, Jingru Xie, Haonan Li, Lei Chen, Renjie Huang, Jianying Lin, Chengchun Shu, Xuezhong Qiu, Zhishan Liu, Dongying Kong, Lei Yuan, Hai Yu, Sen Yang, Ce Zhang, Ji Liu
Specifically, in order to ensure both the training efficiency and the training accuracy, we design a novel hybrid training algorithm, where the embedding layer and the dense neural network are handled by different synchronization mechanisms; then we build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm.
no code implementations • 29 Oct 2021 • Yu Zhao, Jia Song, Huali Feng, Fuzhen Zhuang, Qing Li, Xiaojie Wang, Ji Liu
Keyphrases provide accurate information about document content that is highly compact, concise, and full of meaning, and they are widely used for discourse comprehension, organization, and text retrieval.
no code implementations • 20 Oct 2021 • Yu Shen, Jian Zheng, Yang Li, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, Wentao Zhang, Bin Cui
Designing neural architectures requires immense manual efforts.
1 code implementation • 15 Oct 2021 • Tianli Zhao, Xi Sheryl Zhang, Wentao Zhu, Jiaxing Wang, Sen Yang, Ji Liu, Jian Cheng
In this paper, we present a unified framework with Joint Channel pruning and Weight pruning (JCW), which achieves a better Pareto frontier between latency and accuracy than previous model compression approaches.
no code implementations • 7 Oct 2021 • Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou
Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.
no code implementations • 20 Sep 2021 • Philip E. Paré, Axel Janson, Sebin Gracy, Ji Liu, Henrik Sandberg, Karl H. Johansson
We develop a layered networked spread model for a susceptible-infected-susceptible (SIS) pathogen-borne disease spreading over a human contact network and an infrastructure network, and refer to it as a layered networked susceptible-infected-water-susceptible (SIWS) model.
1 code implementation • 18 Sep 2021 • Wentao Zhu, Tianlong Kong, Shun Lu, Jixiang Li, Dawei Zhang, Feng Deng, Xiaorui Wang, Sen Yang, Ji Liu
Recently, x-vector has been a successful and popular approach for speaker verification, which employs a time delay neural network (TDNN) and statistics pooling to extract speaker characterizing embedding from variable-length utterances.
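The statistics pooling step mentioned here is simple to state; a minimal PyTorch sketch (function name mine):

```python
import torch

def statistics_pooling(h, eps=1e-9):
    """Collapse variable-length frame-level features h of shape
    (batch, time, dim) into a fixed-length utterance embedding by
    concatenating the per-dimension mean and standard deviation."""
    mean = h.mean(dim=1)
    std = torch.sqrt(h.var(dim=1, unbiased=False) + eps)
    return torch.cat([mean, std], dim=1)  # (batch, 2 * dim)
```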
no code implementations • ICCV 2021 • Yi Guo, Huan Yuan, Jianchao Tan, Zhangyang Wang, Sen Yang, Ji Liu
During the training process, the polarization effect will drive a subset of gates to smoothly decrease to exact zero, while other gates gradually move away from zero by a large margin.
no code implementations • NeurIPS 2021 • Xuefan Zha, Wentao Zhu, Tingxun Lv, Sen Yang, Ji Liu
However, the pure-Transformer based spatio-temporal learning can be prohibitively costly on memory and computation to extract fine-grained features from a tiny patch.
no code implementations • 20 Aug 2021 • Weicong Ding, Hanlin Tang, Jingshuo Feng, Lei Yuan, Sen Yang, Guangxu Yang, Jie Zheng, Jing Wang, Qiang Su, Dong Zheng, Xuezhong Qiu, Yongqi Liu, Yuxuan Chen, Yang Liu, Chao Song, Dongying Kong, Kai Ren, Peng Jiang, Qiao Lian, Ji Liu
In this setting with multiple and constrained goals, this paper discovers that a probabilistic strategic parameter regime can achieve better value compared to the standard regime of finding a single deterministic parameter.
no code implementations • 10 Aug 2021 • Shangfeng Dai, Haobin Lin, Zhichen Zhao, Jianying Lin, Honghuan Wu, Zhe Wang, Sen Yang, Ji Liu
Moreover, POSO can be further generalized to regular users, inactive users and returning users (+2%-3% on Watch Time), as well as item cold start (+3.8% on Watch Time).
no code implementations • 9 Aug 2021 • Xiangyan Sun, Ke Liu, Yuquan Lin, Lingjie Wu, Haoming Xing, Minghong Gao, Ji Liu, Suocheng Tan, Zekun Ni, Qi Han, Junqiu Wu, Jie Fan
We have developed an end-to-end retrosynthesis system, named ChemiRise, that can propose complete retrosynthesis routes for organic compounds rapidly and reliably.
1 code implementation • ICCV 2021 • Xiong Zhang, Hongsheng Huang, Jianchao Tan, Hongmin Xu, Cheng Yang, Guozhu Peng, Lei Wang, Ji Liu
To further improve the performance of these tasks, we propose a novel Hand Image Understanding (HIU) framework to extract comprehensive information of the hand object from a single RGB image, by jointly considering the relationships between these tasks.
no code implementations • 12 Jul 2021 • Weijia Zhang, Hao Liu, Lijun Zha, HengShu Zhu, Ji Liu, Dejing Dou, Hui Xiong
Real estate appraisal refers to the process of developing an unbiased opinion of a real property's market value, which plays a vital role in decision-making for various players in the marketplace (e.g., real estate agents, appraisers, lenders, and buyers).
1 code implementation • 3 Jul 2021 • Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, Hongmei Shi, Shengzhuo Zhang, Xianghong Li, Tengxu Sun, Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang
Recent years have witnessed a growing list of systems for distributed data-parallel training.
no code implementations • CVPR 2021 • Ji Liu, Dong Li, Rongzhang Zheng, Lu Tian, Yi Shan
To this end, we comprehensively investigate three types of ranking constraints, i.e., global ranking, class-specific ranking, and IoU-guided ranking losses.
2 code implementations • 11 Jun 2021 • Daochen Zha, Jingru Xie, Wenye Ma, Sheng Zhang, Xiangru Lian, Xia Hu, Ji Liu
Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.
no code implementations • 29 Apr 2021 • Ji Liu, Jizhou Huang, Yang Zhou, Xuhong Li, Shilei Ji, Haoyi Xiong, Dejing Dou
Because of laws or regulations, the distributed data and computing resources cannot be directly shared among different regions or organizations for machine learning tasks.
1 code implementation • 12 Apr 2021 • Ji Liu, Ce Zhang
Scalable and efficient distributed learning is one of the main driving forces behind the recent rapid advancement of machine learning and artificial intelligence.
1 code implementation • 19 Mar 2021 • Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Then, to understand the interpretation results, we also survey the performance metrics for evaluating interpretation algorithms.
no code implementations • 25 Feb 2021 • Baike She, Ji Liu, Shreyas Sundaram, Philip E. Paré
We propose a mathematical model to study coupled epidemic and opinion dynamics in a network of communities.
no code implementations • 24 Feb 2021 • Ji Liu
In this paper, we consider the following system
$$\left\{\begin{array}{ll} n_t+u\cdot\nabla n&=\Delta n-\nabla\cdot(n\mathcal{S}(|\nabla c|^2)\nabla c)-nm,\\ c_t+u\cdot\nabla c&=\Delta c-c+m,\\ m_t+u\cdot\nabla m&=\Delta m-mn,\\ u_t&=\Delta u+\nabla P+(n+m)\nabla\Phi,\qquad \nabla\cdot u=0 \end{array}\right.$$
which models the process of coral fertilization in a smoothly bounded three-dimensional domain, where $\mathcal{S}$ is a given function fulfilling
$$|\mathcal{S}(\sigma)|\leq K_{\mathcal{S}}(1+\sigma)^{-\frac{\theta}{2}},\qquad \sigma\geq 0$$
with some $K_{\mathcal{S}}>0$. Based on conditional estimates of the quantity $c$ and the gradients thereof, a relatively compressed argument, compared to those in related precedents, shows that if $\theta>0$, then for any initial data with proper regularity an associated initial-boundary problem under no-flux/no-flux/no-flux/Dirichlet boundary conditions admits a unique classical solution which is globally bounded, and which also enjoys the stabilization feature
$$\|n(\cdot, t)-n_{\infty}\|_{L^{\infty}(\Omega)}+\|c(\cdot, t)-m_{\infty}\|_{W^{1,\infty}(\Omega)} +\|m(\cdot, t)-m_{\infty}\|_{W^{1,\infty}(\Omega)}+\|u(\cdot, t)\|_{L^{\infty}(\Omega)}\rightarrow0 \quad\textrm{as}~t\rightarrow \infty$$
with $n_{\infty}:=\frac{1}{|\Omega|}\left\{\int_{\Omega}n_0-\int_{\Omega}m_0\right\}_{+}$ and $m_{\infty}:=\frac{1}{|\Omega|}\left\{\int_{\Omega}m_0-\int_{\Omega}n_0\right\}_{+}$.
Analysis of PDEs
no code implementations • 17 Feb 2021 • Xudong Chen, Mohamed-Ali Belabbas, Ji Liu
A gossip process is an iterative process in a multi-agent system where only two neighboring agents communicate at each iteration and update their states.
Optimization and Control
2 code implementations • 4 Feb 2021 • Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He
One of the most effective methods is error-compensated compression, which offers robust convergence speed even under 1-bit compression.
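A minimal numpy sketch of error-compensated 1-bit compression (illustrative; the function name and scaling choice are mine):

```python
import numpy as np

def one_bit_with_error_feedback(grad, residual):
    """Add the residual carried over from the previous step, transmit only
    the sign (scaled to preserve average magnitude), and store what was
    lost for the next step."""
    corrected = grad + residual
    scale = np.mean(np.abs(corrected))
    compressed = scale * np.sign(corrected)  # 1 bit/coordinate + one scalar
    new_residual = corrected - compressed    # error fed back next round
    return compressed, new_residual
```

The residual feeds the quantization error back into later steps, which is what preserves convergence even at 1-bit precision.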
3 code implementations • ICLR 2021 • Daochen Zha, Wenye Ma, Lei Yuan, Xia Hu, Ji Liu
Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once.
no code implementations • ICLR 2021 • Jiayi Shen, Haotao Wang, Shupeng Gui, Jianchao Tan, Zhangyang Wang, Ji Liu
The recommendation system (RS) plays an important role in the content recommendation and retrieval scenarios.
no code implementations • ICCV 2021 • Tiantian Han, Dong Li, Ji Liu, Lu Tian, Yi Shan
Such a bin regularization (BR) mechanism encourages the weight distribution of each quantization bin to be sharp and to ideally approximate a Dirac delta distribution.
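A rough PyTorch sketch of the idea (my simplification: this penalty just pulls each weight toward its nearest bin center, whereas the paper regularizes per-bin distribution statistics):

```python
import torch

def bin_regularization(weights, bin_centers):
    """Penalize each weight's distance to its nearest quantization bin
    center, encouraging a sharp (near-Dirac) per-bin distribution."""
    # pairwise distances: (num_weights, num_bins)
    d = (weights.view(-1, 1) - bin_centers.view(1, -1)).abs()
    nearest = d.min(dim=1).values          # distance to the assigned bin
    return (nearest ** 2).mean()

# usage: total_loss = task_loss + lam * bin_regularization(w, centers)
```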
no code implementations • 22 Dec 2020 • Congxi Xiao, Jingbo Zhou, Jizhou Huang, An Zhuo, Ji Liu, Haoyi Xiong, Dejing Dou
Furthermore, to transfer the firsthand knowledge (witnessed in epicenters) to the target city before local outbreaks, we adopt a novel adversarial encoder framework to learn "city-invariant" representations from the mobility-related features for precise early detection of high-risk neighborhoods in the target city, even before any confirmed cases are known.
1 code implementation • 14 Dec 2020 • Ji Liu, Luciano Bello, Huiyang Zhou
In this paper, we propose a novel quantum compiler optimization, named relaxed peephole optimization (RPO) for quantum computers.
Quantum Physics • Programming Languages
no code implementations • 24 Oct 2020 • Zhaowei Zhu, Jingxuan Zhu, Ji Liu, Yang Liu
Motivated by the proposal of federated learning, we aim for a solution with which agents will never share their local observations with a central entity, and will only be allowed to share a private copy of their own information with their neighbors.
no code implementations • 23 Oct 2020 • Yunjie Zhang, Fei Tao, Xudong Liu, Runze Su, Xiaorong Mei, Weicong Ding, Zhichen Zhao, Lei Yuan, Ji Liu
In this paper, we propose a novel end-to-end self-organizing framework for user behavior prediction.
1 code implementation • NeurIPS 2020 • Haotao Wang, Tianlong Chen, Shupeng Gui, Ting-Kuei Hu, Ji Liu, Zhangyang Wang
The trained model could be adjusted among different standard and robust accuracies "for free" at testing time.
no code implementations • 19 Oct 2020 • Haoran Wei, Fei Tao, Runze Su, Sen Yang, Ji Liu
Previous end-to-end SLU models are primarily used in English environments, due to the lack of large-scale SLU datasets in Chinese, and they use only one ASR model to extract features from speech.
no code implementations • 14 Sep 2020 • Runze Su, Fei Tao, Xudong Liu, Hao-Ran Wei, Xiaorong Mei, Zhiyao Duan, Lei Yuan, Ji Liu, Yuying Xie
Applications of short-form user-generated video (UGV), such as Snapchat and YouTube short videos, have boomed recently, raising many multimodal machine learning tasks.
no code implementations • 27 Aug 2020 • Ji Liu, Heshan Liu, Mang-Tik Chiu, Yu-Wing Tai, Chi-Keung Tang
We propose a novel pose-guided appearance transfer network for transferring a given reference appearance to a target pose in unprecedented image resolution (1024 * 1024), given respectively an image of the reference and target person.
no code implementations • 26 Aug 2020 • Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang
Adam is an important optimization algorithm for guaranteeing efficiency and accuracy when training many important tasks such as BERT and ImageNet.
2 code implementations • ECCV 2020 • Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang
Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications, and recently start to be deployed to resource-constrained mobile devices.
no code implementations • 14 Jul 2020 • Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, Shandian Zhe
Our algorithm provides responsive incremental updates for the posterior of the latent factors and NN weights upon receiving new tensor entries, and meanwhile selects and inhibits redundant/useless weights.
7 code implementations • ICCV 2021 • Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding
Via training with regular SGD on the former but a novel update rule with penalty gradients on the latter, we realize structured sparsity.
no code implementations • 15 Jun 2020 • Anji Liu, Yitao Liang, Ji Liu, Guy Van Den Broeck, Jianshu Chen
Second, and more importantly, we demonstrate how the proposed necessary conditions can be adopted to design more effective parallel MCTS algorithms.
1 code implementation • 9 Jun 2020 • Xichuan Zhou, Kui Liu, Cong Shi, Haijun Liu, Ji Liu
Recent research on the information bottleneck sheds new light on the continuing attempts to open the black box of neural signal encoding.
no code implementations • 6 Jun 2020 • Bo Liu, Ji Liu, Mohammad Ghavamzadeh, Sridhar Mahadevan, Marek Petrik
In this paper, we analyze the convergence rate of the gradient temporal difference learning (GTD) family of algorithms.
1 code implementation • 6 Jun 2020 • Bo Liu, Ian Gemp, Mohammad Ghavamzadeh, Ji Liu, Sridhar Mahadevan, Marek Petrik
In this paper, we introduce proximal gradient temporal difference learning, which provides a principled way of designing and analyzing true stochastic gradient temporal difference learning algorithms.
no code implementations • NeurIPS 2012 • Bo Liu, Sridhar Mahadevan, Ji Liu
We present a novel $l_1$ regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity.
no code implementations • 19 Apr 2020 • Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu
To this end, experimental results on real-world datasets show that the federated multi-task learning model is very sensitive to poisoning attacks, when the attackers either directly poison the target nodes or indirectly poison related nodes by exploiting the communication protocol.
no code implementations • 23 Mar 2020 • Yi Guo, Ji Liu
Inspired by the normalized convolution operation, we propose a guided convolutional layer to recover dense depth from a sparse and irregular depth image, with a depth edge image as guidance.
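For context, the normalized convolution that inspires the proposed layer can be sketched in a few lines (the guided layer itself additionally conditions on the depth edge image; names here are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

def normalized_convolution(sparse_depth, validity, kernel):
    """Convolve depth*mask and the mask itself, then divide, so that only
    valid (observed) pixels contribute to the densified output."""
    num = convolve2d(sparse_depth * validity, kernel, mode="same")
    den = convolve2d(validity, kernel, mode="same")
    out = np.where(den > 1e-8, num / np.maximum(den, 1e-8), 0.0)
    return out, np.minimum(den, 1.0)  # dense depth, crude confidence map
```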
no code implementations • 9 Mar 2020 • Huizhuo Yuan, Xiangru Lian, Ji Liu, Yuren Zhou
In this paper, we propose a novel algorithm named STOchastic Recursive Momentum for Policy Gradient (STORM-PG), which operates a SARAH-type stochastic recursive variance-reduced policy gradient in an exponential moving average fashion.
1 code implementation • CVPR 2020 • Hui Chen, Guiguang Ding, Xudong Liu, Zijia Lin, Ji Liu, Jungong Han
Existing methods leverage the attention mechanism to explore such correspondence in a fine-grained manner.
Ranked #14 on Cross-Modal Retrieval on Flickr30k
1 code implementation • 28 Feb 2020 • Zhuolin Yang, Zhikuan Zhao, Boxin Wang, Jiawei Zhang, Linyi Li, Hengzhi Pei, Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Bo Li
Intensive algorithmic efforts have recently been made to enable rapid improvements in certified robustness for complex ML models.
1 code implementation • 9 Jan 2020 • Yan Lin, Ji Liu, Jianlin Zhou
Since the implicit surface is sufficiently expressive to retain the edges and details of the complex branches model, we use the implicit surface to model the triangular mesh.
no code implementations • 31 Dec 2019 • Huizhuo Yuan, Xiangru Lian, Ji Liu
Such a complexity is known to be the best one among IFO complexity results for non-convex stochastic compositional optimization, and is believed to be optimal.
1 code implementation • NeurIPS 2019 • Yali Du, Lei Han, Meng Fang, Ji Liu, Tianhong Dai, DaCheng Tao
A great challenge in cooperative decentralized multi-agent reinforcement learning (MARL) is generating diversified behaviors for each individual agent when receiving only a team reward.
no code implementations • NeurIPS 2019 • Huizhuo Yuan, Xiangru Lian, Chris Junchi Li, Ji Liu, Wenqing Hu
Stochastic compositional optimization arises in many important machine learning tasks such as reinforcement learning and portfolio management.
no code implementations • 24 Oct 2019 • Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu
In this paper, we take an initial attempt, and propose a generic formulation to provide a systematical solution (named ATZSL) for learning a robust ZSL model.
no code implementations • 24 Oct 2019 • Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu
Specifically, HPL is able to obtain discriminability on both seen and unseen class domains by learning visual prototypes respectively under the transductive setting.
1 code implementation • CVPR 2020 • Haichuan Yang, Shupeng Gui, Yuhao Zhu, Ji Liu
A key parameter that all existing compression techniques are sensitive to is the compression ratio (e.g., pruning sparsity, quantization bitwidth) of each layer.
1 code implementation • 11 Oct 2019 • Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu
However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable).
no code implementations • 7 Oct 2019 • Bipul Islam, Ji Liu, Anthony Yezzi, Romeil Sandhu
The ability to accurately reconstruct the 3D facets of a scene is one of the key problems in robotic vision.
4 code implementations • NeurIPS 2019 • Xiaohan Ding, Guiguang Ding, Xiangxin Zhou, Yuchen Guo, Jungong Han, Ji Liu
Deep Neural Network (DNN) is powerful but computationally expensive and memory intensive, thus impeding its practical usage on resource-constrained front-end devices.
no code implementations • 25 Sep 2019 • Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao
The strategy is also accompanied by a mini-batch version of the proposed method that improves query complexity with respect to the size of the mini-batch.
no code implementations • 25 Sep 2019 • Shupeng Gui, Xiangliang Zhang, Pan Zhong, Shuang Qiu, Mingrui Wu, Jieping Ye, Zhengdao Wang, Ji Liu
The key problem in graph node embedding lies in how to define the dependence to neighbors.
no code implementations • 23 Aug 2019 • Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei
Finally, we conduct extensive experiments using a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.
no code implementations • 17 Jul 2019 • Hanlin Tang, Xiangru Lian, Shuang Qiu, Lei Yuan, Ce Zhang, Tong Zhang, Ji Liu
Since \emph{decentralized} training has been shown to be superior to traditional \emph{centralized} training in communication-restricted scenarios, a natural question to ask is "how to apply the error-compensated technology to decentralized learning to further reduce the communication cost."
no code implementations • 13 Jul 2019 • Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Ji Liu
In this paper, we present a probability one convergence proof, under suitable conditions, of a certain class of actor-critic algorithms for finding approximate solutions to entropy-regularized MDPs using the machinery of stochastic approximation.
no code implementations • 6 Jul 2019 • Yixuan Lin, Kaiqing Zhang, Zhuoran Yang, Zhaoran Wang, Tamer Başar, Romeil Sandhu, Ji Liu
This paper considers a distributed reinforcement learning problem in which a network of multiple agents aim to cooperatively maximize the globally averaged return through communication with only local neighbors.
no code implementations • 15 May 2019 • Hanlin Tang, Xiangru Lian, Chen Yu, Tong Zhang, Ji Liu
For example, under the popular parameter server model for distributed learning, the worker nodes need to send the compressed local gradients to the parameter server, which performs the aggregation.
1 code implementation • 15 Mar 2019 • Wesley Suttle, Zhuoran Yang, Kaiqing Zhang, Zhaoran Wang, Tamer Basar, Ji Liu
This paper extends off-policy reinforcement learning to the multi-agent case in which a set of networked agents communicating with their neighbors according to a time-varying graph collaboratively evaluates and improves a target policy while following a distinct behavior policy.
1 code implementation • 1 Mar 2019 • Ji Liu, Lei Zhang
For most existing learning to hash methods, sufficient training images are required and used to learn precise hashing codes.
no code implementations • 18 Feb 2019 • Yu Zhao, Ji Liu
Knowledge graph (KG) refinement mainly aims at KG completion and correction (i.e., error detection).
2 code implementations • NeurIPS 2019 • Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu
Deep model compression has been extensively studied, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss.
no code implementations • 29 Jan 2019 • Yawei Zhao, Chen Yu, Peilin Zhao, Hanlin Tang, Shuang Qiu, Ji Liu
Decentralized Online Learning (online learning in decentralized networks) has attracted more and more attention, since it is believed that Decentralized Online Learning can help data providers cooperatively better solve their online problems without sharing their private data with a third party or other providers.
no code implementations • 29 Dec 2018 • Jianqiao Wangni, Dahua Lin, Ji Liu, Kostas Daniilidis, Jianbo Shi
For recovering 3D object poses from 2D images, a prevalent method is to pre-train an over-complete dictionary $\mathcal D=\{B_i\}_i^D$ of 3D basis poses.
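The standard use of such a dictionary, paraphrased with illustrative notation (a generic formulation, not necessarily the paper's exact objective): recover an observed 2D pose $W$ as a sparse combination of projected basis poses,
$$\min_{c}\;\Big\|W-\sum_{i=1}^{D}c_i\,\Pi B_i\Big\|_F^2+\lambda\|c\|_1$$
where $\Pi$ is a camera projection and $c$ is the sparse coefficient vector.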
2 code implementations • CVPR 2019 • Haichuan Yang, Yuhao Zhu, Ji Liu
The energy estimate model allows us to formulate DNN compression as a constrained optimization that minimizes the DNN loss function over the energy constraint.
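Schematically, the formulation reads (my paraphrase; $E_l$ maps each layer's nonzero count to its estimated energy):
$$\min_{W}\;\mathcal{L}(W)\quad\text{s.t.}\quad\sum_{l=1}^{L}E_l\big(\|W_l\|_0\big)\leq E_{\text{budget}}$$
so the compression ratio of every layer is decided jointly by a single energy budget instead of being hand-tuned per layer.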
no code implementations • NeurIPS 2018 • Conghui Tan, Tong Zhang, Shiqian Ma, Ji Liu
Regularized empirical risk minimization problem with linear predictor appears frequently in machine learning.
no code implementations • 19 Nov 2018 • Kaiqing Zhang, Yang Liu, Ji Liu, Mingyan Liu, Tamer Başar
This paper addresses the problem of distributed learning of average belief with sequential observations, in which a network of $n>1$ agents aim to reach a consensus on the average value of their beliefs, by exchanging information only with their neighbors.
no code implementations • 2 Nov 2018 • Bo Liu, Luwan Zhang, Ji Liu
To make the problem computationally tractable, we propose a novel algorithm, termed as Optimal Denoising Dantzig Selector (ODDS), to approximately estimate the optimal denoising matrix.
4 code implementations • ICLR 2020 • Anji Liu, Jianshu Chen, Mingze Yu, Yu Zhai, Xuewen Zhou, Ji Liu
Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go).
no code implementations • 17 Oct 2018 • Chen Yu, Hanlin Tang, Cedric Renggli, Simon Kassing, Ankit Singla, Dan Alistarh, Ce Zhang, Ji Liu
Most of today's distributed machine learning systems assume {\em reliable networks}: whenever two machines exchange information (e.g., gradients or models), the network should guarantee the delivery of the message.
no code implementations • 15 Oct 2018 • Xiangru Lian, Ji Liu
We show when BN works and when BN does not work by analyzing the optimization problem.
4 code implementations • 10 Oct 2018 • Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Lei Han, Yang Zheng, Haobo Fu, Tong Zhang, Ji Liu, Han Liu
Most existing deep reinforcement learning (DRL) frameworks consider either a discrete action space or a continuous action space solely.
no code implementations • 8 Oct 2018 • Yawei Zhao, Shuang Qiu, Ji Liu
While the online gradient method has been shown to be optimal for the static regret metric, the optimal algorithm for the dynamic regret remains unknown.
no code implementations • 27 Sep 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Our method can 1) learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, 2) automatically decide the significance of neighbors at different distances, and 3) be applicable to both homogeneous and heterogeneous graph embedding, which may contain multiple types of nodes.
no code implementations • 25 Sep 2018 • Chaobing Song, Ji Liu, Han Liu, Yong Jiang, Tong Zhang
Regularized online learning is widely used in machine learning applications.
3 code implementations • 19 Sep 2018 • Peng Sun, Xinghai Sun, Lei Han, Jiechao Xiong, Qing Wang, Bo Li, Yang Zheng, Ji Liu, Yongsheng Liu, Han Liu, Tong Zhang
Both TStarBot1 and TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in a full game (1v1 Zerg-vs-Zerg game on the AbyssalReef map), noting that level 8, level 9, and level 10 are cheating agents with unfair advantages such as full vision on the whole map and resource harvest boosting.
no code implementations • 6 Sep 2018 • Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao
In this paper, we consider the convex and non-convex composition problem with the structure $\frac{1}{n}\sum_{i=1}^n F_i(G(x))$, where $G(x)=\frac{1}{n}\sum_{j=1}^n G_j(x)$ is the inner function and $F_i(\cdot)$ is the outer function.
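The difficulty this structure creates is visible from the full gradient; by the chain rule,
$$\nabla f(x)=\big(\partial G(x)\big)^{\top}\,\frac{1}{n}\sum_{i=1}^{n}\nabla F_i\big(G(x)\big),\qquad \partial G(x)=\frac{1}{n}\sum_{j=1}^{n}\partial G_j(x)$$
so substituting a sampled estimate of $G(x)$ inside $\nabla F_i$ yields a biased stochastic gradient, and controlling this bias is what compositional methods are designed for.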
no code implementations • ICML 2018 • Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu
While training a machine learning model using multiple workers, each of which collects data from its own data source, it would be useful when the data collected from different workers are unique and different.
Ranked #3 on Multi-view Subspace Clustering on ORL
1 code implementation • ICLR 2019 • Carson Eisenach, Haichuan Yang, Ji Liu, Han Liu
In the former, an agent learns a policy over $\mathbb{R}^d$ and in the latter, over a discrete set of actions each of which is parametrized by a continuous parameter.
1 code implementation • ICLR 2019 • Haichuan Yang, Yuhao Zhu, Ji Liu
Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices while at the same time must operate in real-time.
no code implementations • 28 May 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Graph embedding is a central problem in social network analysis and many other applications, aiming to learn the vector representation for each node.
no code implementations • 8 May 2018 • Ji Liu, Noel Moreno Lemus, Esther Pacitti, Fabio Porto, Patrick Valduriez
We consider big spatial data, which is typically produced in scientific areas such as geological or seismic interpretation.
no code implementations • 16 Apr 2018 • Hongyu Xu, Zhangyang Wang, Haichuan Yang, Ding Liu, Ji Liu
The thresholded feature has recently emerged as an extremely efficient, yet rough empirical approximation, of the time-consuming sparse coding inference process.
no code implementations • 19 Mar 2018 • Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu
While training a machine learning model using multiple workers, each of which collects data from their own data sources, it would be most useful when the data collected from different workers can be {\em unique} and {\em different}.
no code implementations • 18 Mar 2018 • Ke Ren, Haichuan Yang, Yu Zhao, Mingshan Xue, Hongyu Miao, Shuai Huang, Ji Liu
The positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples.
no code implementations • NeurIPS 2018 • Hanlin Tang, Shaoduo Gan, Ce Zhang, Tong Zhang, Ji Liu
In this paper, we explore a natural question: {\em can the combination of both techniques lead to a system that is robust to both bandwidth and latency?}
no code implementations • 17 Mar 2018 • Chen Yu, Bojan Karlas, Jie Zhong, Ce Zhang, Ji Liu
In this paper, we focus on the AutoML problem from the \emph{service provider's perspective}, motivated by the following practical consideration: When an AutoML service needs to serve {\em multiple users} with {\em multiple devices} at the same time, how can we allocate these devices to users in an efficient way?
1 code implementation • ICLR 2018 • Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Yang Zheng, Lei Han, Haobo Fu, Xiangru Lian, Carson Eisenach, Haichuan Yang, Emmanuel Ekwedike, Bei Peng, Haoyue Gao, Tong Zhang, Ji Liu, Han Liu
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous.
no code implementations • 13 Nov 2017 • Liu Liu, Ji Liu, DaCheng Tao
In this paper, we apply the variance-reduced technique to derive two variance reduced algorithms that significantly improve the query complexity if the number of inner component functions is large.
no code implementations • 10 Nov 2017 • Zhouyuan Huo, Bin Gu, Ji Liu, Heng Huang
To the best of our knowledge, our method admits the fastest convergence rate for stochastic composition optimization: for strongly convex composition problem, our algorithm is proved to admit linear convergence; for general composition problem, our algorithm significantly improves the state-of-the-art convergence rate from $O(T^{-1/2})$ to $O((n_1+n_2)^{{2}/{3}}T^{-1})$.
no code implementations • NeurIPS 2018 • Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
Modern large scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures.
no code implementations • 26 Oct 2017 • Liu Liu, Ji Liu, DaCheng Tao
We consider the composition optimization with two expected-value functions in the form of $\frac{1}{n}\sum_{i=1}^n F_i(\frac{1}{m}\sum_{j=1}^m G_j(x))+R(x)$, which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding.
2 code implementations • ICML 2018 • Xiangru Lian, Wei Zhang, Ce Zhang, Ji Liu
Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate?
no code implementations • 12 Sep 2017 • Yue Wu, Youzuo Lin, Zheng Zhou, David Chas Bolton, Ji Liu, Paul Johnson
Because some positive events are not correctly annotated, we further formulate the detection problem as a learning-from-noise problem.
no code implementations • 24 Aug 2017 • Tian Li, Jie Zhong, Ji Liu, Wentao Wu, Ce Zhang
We ask, as a "service provider" that manages a shared cluster of machines among all our users running machine learning workloads, what is the resource allocation strategy that maximizes the global satisfaction of all our users?
no code implementations • ICML 2017 • Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang
We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees?
1 code implementation • 10 Jul 2017 • Wei Qian, Wending Li, Yasuhiro Sogawa, Ryohei Fujimaki, Xitong Yang, Ji Liu
Sparsity learning with known grouping structure has received considerable attention due to wide modern applications in high-dimensional data analysis.
2 code implementations • NeurIPS 2017 • Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu
On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
no code implementations • 3 May 2017 • Gan Sun, Yang Cong, Ji Liu, Xiaowei Xu
In this paper, we consider the lifelong learning problem to mimic "human learning", i.e., endowing the learned metric with a new capability for a new task from new online samples, while incorporating previous experiences and knowledge.
no code implementations • 15 Apr 2017 • Jie Zhong, Yijun Huang, Ji Liu
This paper proposes an asynchronous parallel thresholding algorithm and its parameter-free version to improve the efficiency and the applicability.
no code implementations • ICML 2017 • Haichuan Yang, Shupeng Gui, Chuyang Ke, Daniel Stefankovic, Ryohei Fujimaki, Ji Liu
The cardinality constraint is an intrinsic way to restrict the solution structure in many domains, for example, sparse learning, feature selection, and compressed sensing.
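For instance, projection-based methods handle the cardinality constraint with a hard-thresholding step; a minimal sketch (function name mine):

```python
import numpy as np

def project_cardinality(x, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep the k entries of
    largest magnitude and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

print(project_cardinality(np.array([0.3, -2.0, 0.1, 1.5]), 2))  # [ 0. -2.  0.  1.5]
```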
no code implementations • 21 Feb 2017 • Jinfeng Yi, Qi Lei, Wesley Gifford, Ji Liu, Junchi Yan
In order to efficiently solve the proposed framework, we propose a parameter-free and scalable optimization algorithm by effectively exploring the sparse and low-rank structure of the tensor.
no code implementations • NeurIPS 2016 • Yang You, Xiangru Lian, Ji Liu, Hsiang-Fu Yu, Inderjit S. Dhillon, James Demmel, Cho-Jui Hsieh
In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints.
no code implementations • 18 Nov 2016 • Wei Zhang, Minwei Feng, Yunhui Zheng, Yufei Ren, Yandong Wang, Ji Liu, Peng Liu, Bing Xiang, Li Zhang, Bo-Wen Zhou, Fei Wang
By evaluating the NLC workloads, we show that only the conservative hyper-parameter setup (e.g., small mini-batch size and small learning rate) can guarantee acceptable model accuracy for a wide range of customers.
1 code implementation • 16 Nov 2016 • Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang
When applied to linear models together with double sampling, we save up to another 1.7x in data movement compared with uniform quantization.
no code implementations • 12 Nov 2016 • Chuyang Ke, Yan Jin, Heather Evans, Bill Lober, Xiaoning Qian, Ji Liu, Shuai Huang
Since existing SSI prediction models have quite limited capacity to utilize evolving clinical data, we develop a corresponding solution that equips these mHealth tools with decision-making capabilities for SSI prediction, through a seamless assembly of several machine learning models that tackle the analytic challenges arising from the spatial-temporal data.
no code implementations • 20 Oct 2016 • Abbas Kazemipour, Ji Liu, Patrick Kanold, Min Wu, Behtash Babadi
In this paper, we consider linear state-space models with compressible innovations and convergent transition matrices in order to model spatiotemporally sparse transient events.
no code implementations • 23 Aug 2016 • Yang Zhang, Rupam Acharyya, Ji Liu, Boqing Gong
We develop a new statistical machine learning paradigm, named infinite-label learning, to annotate a data point with more than one relevant labels from a candidate set, which pools both the finite labels observed at training and a potentially infinite number of previously unseen labels.
no code implementations • NeurIPS 2016 • Mengdi Wang, Ji Liu, Ethan X. Fang
The ASC-PG is the first proximal gradient method for the stochastic composition problem that can deal with nonsmooth regularization penalty.
no code implementations • CVPR 2016 • Haichuan Yang, Yijun Huang, Lam Tran, Ji Liu, Shuai Huang
In this paper, we propose a general bilevel exclusive sparsity formulation to pursue diversity by restricting the overall sparsity and the sparsity in each group.
no code implementations • 7 Dec 2015 • Ji Liu, Xiaojin Zhu
Teaching dimension is a learning theoretic quantity that specifies the minimum training set size to teach a target model to a learner.
1 code implementation • 18 Nov 2015 • Wei Zhang, Suyog Gupta, Xiangru Lian, Ji Liu
Deep neural networks have been shown to achieve state-of-the-art performance in several machine learning tasks.
no code implementations • 27 Oct 2015 • Yijun Huang, Ji Liu
To the best of our knowledge, this is the first time such a convergence rate has been guaranteed for general exclusive sparsity norm minimization; 2) when the group information is unavailable to define the exclusive sparsity norm, we propose a random grouping scheme to construct groups and prove that, if the number of groups is appropriately chosen, the nonzeros (true features) are grouped in the ideal way with high probability.
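For reference, the exclusive sparsity (exclusive lasso) norm over a group set $\mathcal{G}$ is commonly written as
$$\Omega_{\text{excl}}(x)=\frac{1}{2}\sum_{g\in\mathcal{G}}\Big(\sum_{j\in g}|x_j|\Big)^{2}$$
which promotes competition within each group, so the random grouping determines which features end up competing with one another.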
no code implementations • NeurIPS 2015 • Xiangru Lian, Yijun Huang, Yuncheng Li, Ji Liu
Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural networks and have achieved many successes in practice recently.
no code implementations • NeurIPS 2014 • Deguang Kong, Ryohei Fujimaki, Ji Liu, Feiping Nie, Chris Ding
Group lasso is widely used to enforce the structural sparsity, which achieves the sparsity at inter-group level.
no code implementations • 26 May 2014 • Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere, Nicholas Jacek, Ian Gemp, Ji Liu
In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified "safety" guarantees, and remains in a stable region of the parameter space (iii) how to design "off-policy" temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization.
no code implementations • 31 Dec 2013 • Ji Liu, Ryohei Fujimaki, Jieping Ye
Our new bounds are consistent with the bounds of a special case (least squares) and fills a previously existing theoretical gap for general convex smooth functions; 3) We show that the restricted strong convexity condition is satisfied if the number of independent samples is more than $\bar{k}\log d$ where $\bar{k}$ is the sparsity number and $d$ is the dimension of the variable; 4) We apply FoBa-gdt (with the conditional random field objective) to the sensor selection problem for human indoor activity recognition and our results show that FoBa-gdt outperforms other methods (including the ones based on forward greedy selection and L1-regularization).
no code implementations • NeurIPS 2013 • Srikrishna Sridhar, Stephen Wright, Christopher Re, Ji Liu, Victor Bittorf, Ce Zhang
Many problems in machine learning can be solved by rounding the solution of an appropriate linear program.
no code implementations • 30 Apr 2013 • Ji Liu, Lei Yuan, Jieping Ye
Specifically, we show 1) in the noiseless case, if the condition number of $D$ is bounded and the measurement number $n\geq \Omega(s\log(p))$ where $s$ is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the measurement increases faster than $s\log(p)$, that is, $s\log(p)=o(n)$, the estimate error converges to zero with probability 1 when $p$ and $s$ go to infinity.
no code implementations • 3 Jul 2012 • Ji Liu, Stephen J. Wright
We consider the reconstruction problem in compressed sensing in which the observations are recorded in a finite number of bits.
no code implementations • NeurIPS 2010 • Ji Liu, Peter Wonka, Jieping Ye
We show that if $X$ obeys a certain condition, then with a large probability the difference between the solution $\hat\beta$ estimated by the proposed method and the true solution $\beta^*$, measured in terms of the $l_p$ norm ($p\geq 1$), is bounded as \begin{equation*} \|\hat\beta-\beta^*\|_p\leq \left(C(s-N)^{1/p}\sqrt{\log m}+\Delta\right)\sigma, \end{equation*} where $C$ is a constant, $s$ is the number of nonzero entries in $\beta^*$, $\Delta$ is independent of $m$ and is much smaller than the first term, and $N$ is the number of entries of $\beta^*$ larger than a certain value on the order of $\mathcal{O}(\sigma\sqrt{\log m})$.