Search Results for author: Jian Peng

Found 68 papers, 21 papers with code

A Chance-Constrained Generative Framework for Sequence Optimization

no code implementations ICML 2020 Xianggen Liu, Jian Peng, Qiang Liu, Sen Song

Deep generative modeling has achieved many successes in continuous data generation, such as producing realistic images and controlling their properties (e.g., styles).

Coordinate-wise Control Variates for Deep Policy Gradients

no code implementations11 Jul 2021 Yuanyi Zhong, Yuan Zhou, Jian Peng

The control variates (CV) method is widely used in policy gradient estimation to reduce the variance of the gradient estimators in practice.

Continuous Control
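
A minimal NumPy sketch of the generic control-variates trick referenced above, shown on a toy scalar expectation rather than the paper's coordinate-wise policy-gradient estimator (all names and the test function are illustrative):

```python
# Generic control-variate demo: reduce the variance of a Monte Carlo estimate
# of E[f(X)] by subtracting a correlated baseline g(X) with known mean.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

f = np.exp(x)          # target: E[exp(X)] = exp(0.5) for X ~ N(0, 1)
g = x                  # control variate with known mean E[X] = 0

# Near-optimal scaling coefficient beta = Cov(f, g) / Var(g)
beta = np.cov(f, g)[0, 1] / np.var(g)

plain = f                      # vanilla estimator samples
cv = f - beta * (g - 0.0)      # control-variate-corrected samples

print("true value    :", np.exp(0.5))
print("plain estimate:", plain.mean(), " variance:", plain.var())
print("CV estimate   :", cv.mean(), " variance:", cv.var())
```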

Off-Policy Reinforcement Learning with Delayed Rewards

no code implementations22 Jun 2021 Beining Han, Zhizhou Ren, Zuofan Wu, Yuan Zhou, Jian Peng

We study deep reinforcement learning (RL) algorithms with delayed rewards.

DAP: Detection-Aware Pre-training with Weak Supervision

1 code implementation CVPR 2021 Yuanyi Zhong, JianFeng Wang, Lijuan Wang, Jian Peng, Yu-Xiong Wang, Lei Zhang

This paper presents a detection-aware pre-training (DAP) approach, which leverages only weakly-labeled classification-style datasets (e.g., ImageNet) for pre-training, but is specifically tailored to benefit object detection tasks.

Classification General Classification +3

Learning Neural Generative Dynamics for Molecular Conformation Generation

3 code implementations ICLR 2021 Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang

Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.

Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity

1 code implementation5 Nov 2020 Tanmay Gangwani, Jian Peng, Yuan Zhou

Quality-Diversity (QD) is a concept from Neuroevolution with some intriguing applications to Reinforcement Learning.

Off-Policy Interval Estimation with Lipschitz Value Iteration

no code implementations NeurIPS 2020 Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, Qiang Liu

Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data.

Decision Making Medical Diagnosis

Hunting for Dark Matter Subhalos in Strong Gravitational Lensing with Neural Networks

no code implementations24 Oct 2020 Joshua Yao-Yu Lin, Hang Yu, Warren Morningstar, Jian Peng, Gilbert Holder

Dark matter substructures are interesting since they can reveal the properties of dark matter.

Cosmology and Nongalactic Astrophysics Computational Physics

Learning Guidance Rewards with Trajectory-space Smoothing

1 code implementation NeurIPS 2020 Tanmay Gangwani, Yuan Zhou, Jian Peng

To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that could be used in place of the sparse or delayed environmental rewards.

Q-Learning

Efficient Competitive Self-Play Policy Optimization

no code implementations13 Sep 2020 Yuanyi Zhong, Yuan Zhou, Jian Peng

Reinforcement learning from self-play has recently reported many successes.

Pre-training of Graph Neural Network for Modeling Effects of Mutations on Protein-Protein Binding Affinity

no code implementations28 Aug 2020 Xianggen Liu, Yunan Luo, Sen Song, Jian Peng

Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design.

Boosting Weakly Supervised Object Detection with Progressive Knowledge Transfer

1 code implementation ECCV 2020 Yuanyi Zhong, Jian-Feng Wang, Jian Peng, Lei Zhang

In this paper, we propose an effective knowledge transfer framework to boost the weakly supervised object detection accuracy with the help of an external fully-annotated source dataset, whose categories may not overlap with the target domain.

Transfer Learning Weakly Supervised Object Detection

Accelerating Nonconvex Learning via Replica Exchange Langevin Diffusion

no code implementations ICLR 2019 Yi Chen, Jinglin Chen, Jing Dong, Jian Peng, Zhaoran Wang

To attain the advantages of both regimes, we propose to use replica exchange, which swaps between two Langevin diffusions with different temperatures.
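
A toy sketch of the replica-exchange mechanism described above, applied to a 1-D double-well potential (an illustration of the general idea, not the paper's exact algorithm or analysis):

```python
# Two Langevin chains at temperatures T1 < T2 explore a double-well potential;
# periodically a Metropolis-style swap lets the cold chain escape local minima.
import numpy as np

def U(x):                       # double-well potential
    return (x**2 - 1.0)**2

def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

rng = np.random.default_rng(0)
T1, T2, step, n_iters = 0.1, 1.0, 1e-3, 50_000
x1, x2 = -1.0, 1.0              # low- and high-temperature replicas

for t in range(n_iters):
    # Euler-Maruyama discretization of each Langevin diffusion
    x1 += -grad_U(x1) * step + np.sqrt(2 * step * T1) * rng.normal()
    x2 += -grad_U(x2) * step + np.sqrt(2 * step * T2) * rng.normal()

    # Periodically attempt a temperature swap (Metropolis acceptance rule)
    if t % 100 == 0:
        log_accept = (1.0 / T1 - 1.0 / T2) * (U(x1) - U(x2))
        if np.log(rng.uniform()) < log_accept:
            x1, x2 = x2, x1

print("final low-temperature state:", x1)
```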

Mutual Information Based Knowledge Transfer Under State-Action Dimension Mismatch

1 code implementation12 Jun 2020 Michael Wan, Tanmay Gangwani, Jian Peng

In this paper, we propose a new framework for transfer learning where the teacher and the student can have arbitrarily different state- and action-spaces.

Decision Making Transfer Learning

Stein Variational Inference for Discrete Distributions

no code implementations1 Mar 2020 Jun Han, Fan Ding, Xianglong Liu, Lorenzo Torresani, Jian Peng, Qiang Liu

In addition, such a transform can be straightforwardly employed in the gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) tests on discrete distributions.

Variational Inference

State-only Imitation with Transition Dynamics Mismatch

1 code implementation ICLR 2020 Tanmay Gangwani, Jian Peng

Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function.

Imitation Learning OpenAI Gym

Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning

no code implementations21 Feb 2020 Yuanyi Zhong, Alexander Schwing, Jian Peng

In many vision-based reinforcement learning (RL) problems, the agent controls a movable object in its visual field, e.g., the player's avatar in video games and the robotic arm in visual grasping and manipulation.

Atari Games Video Prediction

cube2net: Efficient Query-Specific Network Construction with Data Cube Organization

no code implementations18 Jan 2020 Carl Yang, Mengxiong Liu, Frank He, Jian Peng, Jiawei Han

With extensive experiments on two classic network mining tasks over different large real-world datasets, we show that our proposed cube2net pipeline is general, and much more effective and efficient in query-specific network construction than methods that do not leverage data cubes or reinforcement learning.

Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation

1 code implementation19 Dec 2019 Jian Peng, Bo Tang, Hao Jiang, Zhuo Li, Yinjie Lei, Tao Lin, Haifeng Li

This is due to two facts: first, as the model learns more tasks, the intersection of the low-error parameter subspaces for these tasks becomes smaller or may not even exist; second, when the model learns a new task, the cumulative error keeps increasing as the model tries to protect the parameter configurations of previous tasks from interference.

Image Classification

$\sqrt{n}$-Regret for Learning in Markov Decision Processes with Function Approximation and Low Bellman Rank

no code implementations5 Sep 2019 Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou

Our learning algorithm, Adaptive Value-function Elimination (AVE), is inspired by the policy elimination algorithm proposed in (Jiang et al., 2017), known as OLIVE.

Efficient Exploration

Characterizing Attacks on Deep Reinforcement Learning

no code implementations21 Jul 2019 Chaowei Xiao, Xinlei Pan, Warren He, Jian Peng, Ming-Jie Sun, Jin-Feng Yi, Mingyan Liu, Bo Li, Dawn Song

In addition to current observation based attacks against DRL, we propose the first targeted attacks based on action space and environment dynamics.

Autonomous Driving

Learning Belief Representations for Imitation Learning in POMDPs

1 code implementation22 Jun 2019 Tanmay Gangwani, Joel Lehman, Qiang Liu, Jian Peng

We consider the problem of imitation learning from expert demonstrations in partially observable Markov decision processes (POMDPs).

Continuous Control Imitation Learning +2

Exploration via Hindsight Goal Generation

1 code implementation NeurIPS 2019 Zhizhou Ren, Kefan Dong, Yuan Zhou, Qiang Liu, Jian Peng

Goal-oriented reinforcement learning has recently been a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space.

A gradual, semi-discrete approach to generative network training via explicit Wasserstein minimization

no code implementations8 Jun 2019 Yu-cheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng

This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport costs).

Sequence Modeling of Temporal Credit Assignment for Episodic Reinforcement Learning

1 code implementation31 May 2019 Yang Liu, Yunan Luo, Yuanyi Zhong, Xi Chen, Qiang Liu, Jian Peng

Recent advances in deep reinforcement learning algorithms have shown great potential and success for solving many challenging real-world problems, including Go game and robotic applications.

Thresholding Bandit with Optimal Aggregate Regret

no code implementations NeurIPS 2019 Chao Tao, Saùl Blanco, Jian Peng, Yuan Zhou

We consider the thresholding bandit problem, whose goal is to find arms of mean rewards above a given threshold $\theta$, with a fixed budget of $T$ trials.
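
For context, a naive uniform-allocation baseline for the thresholding-bandit setting described above (illustrative only; the paper studies adaptive algorithms with optimal aggregate regret, and the arm means, threshold, and budget below are made-up toy values):

```python
# Pull every arm equally often from a fixed budget, then report the arms
# whose empirical mean exceeds the threshold theta.
import numpy as np

def uniform_thresholding(true_means, theta, budget, rng):
    k = len(true_means)
    pulls = budget // k
    emp_means = np.array([
        rng.binomial(1, p, size=pulls).mean() for p in true_means
    ])
    return np.flatnonzero(emp_means > theta)

rng = np.random.default_rng(0)
means = [0.2, 0.45, 0.55, 0.8]
print(uniform_thresholding(means, theta=0.5, budget=4000, rng=rng))
# with high probability this returns arms 2 and 3
```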

Stochastic Variance Reduction for Deep Q-learning

no code implementations20 May 2019 Wei-Ye Zhao, Xi-Ya Guan, Yang Liu, Xiaoming Zhao, Jian Peng

Recent advances in deep reinforcement learning have achieved human-level performance on a variety of real-world applications.

Q-Learning

Knowledge Flow: Improve Upon Your Teachers

no code implementations ICLR 2019 Iou-Jen Liu, Jian Peng, Alexander G. Schwing

A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model.

Understanding the Importance of Single Directions via Representative Substitution

no code implementations20 Jan 2019 Li Chen, Hailun Ding, Qi Li, Zhuo Li, Jian Peng, Haifeng Li

Understanding the internal representations of deep neural networks (DNNs) is crucial to explain their behavior.

A Data-driven Adversarial Examples Recognition Framework via Adversarial Feature Genome

no code implementations25 Dec 2018 Li Chen, Hailun Ding, Qi Li, Jiawei Zhu, Jian Peng, Haifeng Li

Inspired by AFS, we propose a defense framework based on the Adversarial Feature Genome (AFG), which can simultaneously detect adversarial examples and correctly classify them into their original classes.

General Classification Multi-Label Classification

Overcoming Catastrophic Forgetting by Soft Parameter Pruning

1 code implementation4 Dec 2018 Jian Peng, Jiang Hao, Zhuo Li, Enqiang Guo, Xiaohong Wan, Deng Min, Qing Zhu, Haifeng Li

In this paper, we propose a Soft Parameter Pruning (SPP) strategy that balances the short-term and long-term performance of a learning model: parameters that contribute little to remembering former task knowledge are freed to learn future tasks, while memories of previous tasks are preserved through the parameters that effectively encode that knowledge.

Continual Learning

Anchor Box Optimization for Object Detection

no code implementations2 Dec 2018 Yuanyi Zhong, Jian-Feng Wang, Jian Peng, Lei Zhang

In this paper, we propose a general approach to optimize anchor boxes for object detection.

Object Detection

Understanding the Importance of Single Directions via Representative Substitution

no code implementations27 Nov 2018 Li Chen, Hailun Ding, Qi Li, Zhuo Li, Jian Peng, Haifeng Li

Understanding the internal representations of deep neural networks (DNNs) is crucial to explain their behavior.

emrQA: A Large Corpus for Question Answering on Electronic Medical Records

2 code implementations EMNLP 2018 Anusri Pampari, Preethi Raghavan, Jennifer Liang, Jian Peng

We propose a novel methodology to generate domain-specific large-scale question answering (QA) datasets by re-purposing existing annotations for other NLP tasks.

Question Answering

Learning to Explore via Meta-Policy Gradient

no code implementations ICML 2018 Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng

The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.

Continuous Control Q-Learning

Large-Margin Classification in Hyperbolic Space

2 code implementations1 Jun 2018 Hyunghoon Cho, Benjamin DeMeo, Jian Peng, Bonnie Berger

Representing data in hyperbolic space can effectively capture latent hierarchical relationships.

Classification General Classification
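
A minimal sketch of the Poincaré-ball distance, the hyperbolic metric underlying this line of work (illustrative only; the paper's large-margin classifier itself is more involved):

```python
# Geodesic distance between two points inside the unit Poincare ball.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u**2)) * (1.0 - np.sum(v**2)) + eps
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)

# Distances blow up near the boundary, which is what lets hyperbolic
# embeddings spread out tree-like hierarchies.
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))
print(poincare_distance([0.9, 0.0], [0.0, 0.9]))
```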

Learning Self-Imitating Diverse Policies

no code implementations ICLR 2019 Tanmay Gangwani, Qiang Liu, Jian Peng

Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.

Continuous Control Decision Making +3

Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling

1 code implementation EMNLP 2018 Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han

Many efforts have been made to facilitate natural language processing tasks with pre-trained language models (LMs), bringing significant improvements to various applications.

Language Modelling Named Entity Recognition

Learning to Explore with Meta-Policy Gradient

no code implementations13 Mar 2018 Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng

The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.

Q-Learning

The Importance of Norm Regularization in Linear Graph Embedding: Theoretical Analysis and Empirical Demonstration

no code implementations ICLR 2019 Yihan Gao, Chao Zhang, Jian Peng, Aditya Parameswaran

Both theoretical and empirical evidence are provided to support this argument: (a) we prove that the generalization error of these methods can be bounded by limiting the norm of vectors, regardless of the embedding dimension; (b) we show that the generalization performance of linear graph embedding methods is correlated with the norm of embedding vectors, which is small due to the early stopping of SGD and the vanishing gradients.

Graph Embedding

Policy Optimization by Genetic Distillation

no code implementations ICLR 2018 Tanmay Gangwani, Jian Peng

GPO uses imitation learning for policy crossover in the state space and applies policy gradient methods for mutation.

Imitation Learning Policy Gradient Methods

Action-dependent Control Variates for Policy Optimization via Stein's Identity

2 code implementations30 Oct 2017 Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu

Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.

Policy Gradient Methods
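
A quick numerical check of Stein's identity, the tool this paper builds its action-dependent baselines on: for x ~ N(0, 1) and a smooth test function f, E[f(x) d/dx log p(x) + f'(x)] = 0 (a generic illustration with an arbitrary f, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)

f = np.sin(x)                 # any smooth test function
f_prime = np.cos(x)
score = -x                    # d/dx log N(x; 0, 1)

# Stein's identity: this Monte Carlo average should be ~0
print(np.mean(f * score + f_prime))
```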

Efficient Localized Inference for Large Graphical Models

no code implementations28 Oct 2017 Jinglin Chen, Jian Peng, Qiang Liu

We propose a new localized inference algorithm for answering marginalization queries in large graphical models with the correlation decay property.

Stochastic Variance Reduction for Policy Gradient Estimation

no code implementations17 Oct 2017 Tianbing Xu, Qiang Liu, Jian Peng

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems.

Continuous Control Policy Gradient Methods

Energy-efficient Amortized Inference with Cascaded Deep Classifiers

no code implementations10 Oct 2017 Jiaqi Guan, Yang Liu, Qiang Liu, Jian Peng

Deep neural networks have been remarkably successful in various AI tasks but often incur high computation and energy costs, which is problematic for energy-constrained applications such as mobile sensing.

Image Classification

Empower Sequence Labeling with Task-Aware Neural Language Model

3 code implementations13 Sep 2017 Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, Jiawei Han

In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task.

Language Modelling Named Entity Recognition +4

On the Selective and Invariant Representation of DCNN for High-Resolution Remote Sensing Image Recognition

no code implementations4 Aug 2017 Jie Chen, Chao Yuan, Min Deng, Chao Tao, Jian Peng, Haifeng Li

Owing to their superior feature representations, DCNNs have exhibited remarkable performance in scene recognition of high-resolution remote sensing (HRRS) images and in the classification of hyperspectral remote sensing images.

Classification General Classification +1

RSI-CB: A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data

1 code implementation30 May 2017 Haifeng Li, Xin Dou, Chao Tao, Zhixiang Hou, Jie Chen, Jian Peng, Min Deng, Ling Zhao

In this paper, we propose a remote sensing image classification benchmark (RSI-CB) based on massive, scalable, and diverse crowdsource data.

Classification General Classification +2

What do We Learn by Semantic Scene Understanding for Remote Sensing imagery in CNN framework?

no code implementations19 May 2017 Haifeng Li, Jian Peng, Chao Tao, Jie Chen, Min Deng

Is the DCNN recognition mechanism, which is centered on object recognition, still applicable to remote sensing scene understanding?

Object Recognition Scene Recognition +1

Stein Variational Policy Gradient

no code implementations7 Apr 2017 Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng

Policy gradient methods have been successfully applied to many complex reinforcement learning problems.

Bayesian Inference Continuous Control +1
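
A toy sketch of the Stein variational gradient descent (SVGD) update that SVPG builds on, run on a 1-D Gaussian target rather than over policy parameters (names, kernel bandwidth, and step size are illustrative):

```python
# SVGD moves a set of particles toward a target distribution: each particle
# follows a kernel-smoothed score (attraction) plus a kernel-gradient term
# that keeps the particles spread out (repulsion).
import numpy as np

def rbf_kernel(x, h):
    diff = x[:, None] - x[None, :]          # diff[i, j] = x_i - x_j
    k = np.exp(-diff**2 / h)
    grad_k = 2.0 * diff / h * k             # d k(x_j, x_i) / d x_j
    return k, grad_k

def grad_log_p(x, mu=2.0, sigma=1.0):       # score of the target N(mu, sigma^2)
    return -(x - mu) / sigma**2

rng = np.random.default_rng(0)
particles = rng.normal(loc=-5.0, size=50)

for _ in range(500):
    k, grad_k = rbf_kernel(particles, h=1.0)
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=1)) / len(particles)
    particles += 0.1 * phi

print("particle mean (target mean is 2):", particles.mean())
```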

Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening

1 code implementation5 Nov 2016 Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng

We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation.

Atari Games Q-Learning

DPPred: An Effective Prediction Framework with Concise Discriminative Patterns

no code implementations31 Oct 2016 Jingbo Shang, Meng Jiang, Wenzhu Tong, Jinfeng Xiao, Jian Peng, Jiawei Han

In the literature, two families of models have been proposed to address prediction problems, including classification and regression.

On the Interpretability of Conditional Probability Estimates in the Agnostic Setting

no code implementations9 Jun 2015 Yihan Gao, Aditya Parameswaran, Jian Peng

We study the interpretability of conditional probability estimates for binary classification under the agnostic setting or scenario.

General Classification

Diffusion Component Analysis: Unraveling Functional Topology in Biological Networks

no code implementations10 Apr 2015 Hyunghoon Cho, Bonnie Berger, Jian Peng

In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network.

Dimensionality Reduction
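
A rough sketch of the diffusion step behind approaches like DCA: compute random-walk-with-restart diffusion states for every node, then take a low-dimensional decomposition. The SVD here is only a simple stand-in for the paper's own dimensionality-reduction objective, and the small adjacency matrix is made up:

```python
import numpy as np

def rwr_diffusion(adj, restart=0.5):
    """Random walk with restart; row i is the diffusion state of node i."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)        # row-stochastic transitions
    # Closed form: S = restart * (I - (1 - restart) * P)^(-1)
    return restart * np.linalg.inv(np.eye(n) - (1 - restart) * P)

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

S = rwr_diffusion(adj)
U, sing, _ = np.linalg.svd(np.log(S + 1e-12))       # low-rank node embedding
embedding = U[:, :2] * sing[:2]
print(embedding)
```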

Exact Hybrid Covariance Thresholding for Joint Graphical Lasso

no code implementations7 Mar 2015 Qingming Tang, Chao Yang, Jian Peng, Jinbo Xu

This paper proposes a novel hybrid covariance thresholding algorithm that can effectively identify zero entries in the precision matrices and split a large joint graphical lasso problem into small subproblems.

Conditional Neural Fields

no code implementations NeurIPS 2009 Jian Peng, Liefeng Bo, Jinbo Xu

To model the nonlinear relationship between input features and outputs, we propose Conditional Neural Fields (CNF), a new conditional probabilistic graphical model for sequence labeling.

Handwriting Recognition Hyperparameter Optimization +1
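
A minimal sketch of the idea behind Conditional Neural Fields: a linear-chain CRF whose per-position label scores come from a small neural network (the shapes, initialization, and functions below are illustrative, not the paper's implementation):

```python
import numpy as np

def mlp_scores(x, W1, b1, W2, b2):
    """Nonlinear emission scores: (T, d_in) features -> (T, n_labels)."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def crf_log_partition(emissions, transitions):
    """Forward algorithm: log Z of a linear-chain CRF."""
    alpha = emissions[0]
    for t in range(1, len(emissions)):
        # alpha[j] = emissions[t, j] + logsumexp_i(alpha[i] + transitions[i, j])
        scores = alpha[:, None] + transitions
        alpha = emissions[t] + np.logaddexp.reduce(scores, axis=0)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
T, d_in, d_hid, n_labels = 5, 8, 16, 3
x = rng.normal(size=(T, d_in))
W1, b1 = rng.normal(size=(d_in, d_hid)), np.zeros(d_hid)
W2, b2 = rng.normal(size=(d_hid, n_labels)), np.zeros(n_labels)
transitions = rng.normal(size=(n_labels, n_labels))

emissions = mlp_scores(x, W1, b1, W2, b2)
print("log Z:", crf_log_partition(emissions, transitions))
```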
