no code implementations • ICML 2020 • Xianggen Liu, Jian Peng, Qiang Liu, Sen Song
Deep generative modeling has achieved many successes for continuous data generation, such as producing realistic images and controlling their properties (e.g., styles).
1 code implementation • 26 Feb 2024 • Jiaqi Guan, Xiangxin Zhou, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma, Qiang Liu, Liang Wang, Quanquan Gu
Designing 3D ligands within a target binding site is a fundamental task in drug discovery.
1 code implementation • NeurIPS 2023 • Chaoran Cheng, Jian Peng
We propose a general architecture that combines the coefficient learning scheme with a residual operator layer for learning mappings between continuous functions in the 3D Euclidean space.
1 code implementation • 12 Sep 2023 • Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, Qiang Liu
Leveraging our new pipeline, we create, to the best of our knowledge, the first one-step diffusion-based text-to-image generator with SD-level image quality, achieving an FID (Fréchet Inception Distance) of $23.3$ on MS COCO 2017-5k, surpassing the previous state-of-the-art technique, progressive distillation, by a significant margin ($37.2$ $\rightarrow$ $23.3$ in FID).
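For readers unfamiliar with the metric, FID fits a Gaussian to feature activations of real and generated images and measures the Fréchet distance between the two fits. A minimal NumPy sketch of the formula (a generic illustration, not code from the paper; `real_acts` and `fake_acts` are assumed activation matrices):

```python
import numpy as np

def _sqrtm_psd(m):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(real_acts, fake_acts):
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = real_acts.mean(0), fake_acts.mean(0)
    c_r = np.cov(real_acts, rowvar=False)
    c_f = np.cov(fake_acts, rowvar=False)
    s_r = _sqrtm_psd(c_r)
    # Tr((C_r C_f)^{1/2}) computed through the symmetric form C_r^{1/2} C_f C_r^{1/2}
    tr_covmean = np.trace(_sqrtm_psd(s_r @ c_f @ s_r))
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(c_r) + np.trace(c_f) - 2.0 * tr_covmean)
```

In practice the activations come from a fixed Inception-v3 layer, which is what makes scores such as the 23.3 on MS COCO 2017-5k comparable across papers.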
2 code implementations • 6 Mar 2023 • Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su, Jian Peng, Jianzhu Ma
Rich data and powerful machine learning models allow us to design drugs for a specific protein target \textit{in silico}.
no code implementations • 5 Feb 2023 • Daniel D Kim, Rajat S Chandra, Jian Peng, Jing Wu, Xue Feng, Michael Atalay, Chetan Bettegowda, Craig Jones, Haris Sair, Wei-Hua Liao, Chengzhang Zhu, Beiji Zou, Li Yang, Anahita Fathi Kazerooni, Ali Nabavizadeh, Harrison X Bai, Zhicheng Jiao
We investigated uncertainty sampling, annotation redundancy restriction, and initial dataset selection techniques.
1 code implementation • 20 Nov 2022 • Zhizhou Ren, Anji Liu, Yitao Liang, Jian Peng, Jianzhu Ma
To bridge this gap, we study the problem of few-shot adaptation in the context of human-in-the-loop reinforcement learning.
1 code implementation • 12 Jul 2022 • Julong Young, Junhui Chen, Feihu Huang, Jian Peng
For fine-grained time series, this leads to a bottleneck in information input and prediction output, which is fatal to long-term series forecasting.
no code implementations • 10 Jun 2022 • Yuanyi Zhong, Haoran Tang, Junkun Chen, Jian Peng, Yu-Xiong Wang
Our insight has implications in improving the downstream robustness of supervised learning.
1 code implementation • Immunity 2022 • Yiquan Wang, Meng Yuan, Huibin Lv, Jian Peng, Ian A. Wilson, Nicholas C. Wu
Global research to combat the COVID-19 pandemic has led to the isolation and characterization of thousands of human antibodies to the SARS-CoV-2 spike protein, providing an unprecedented opportunity to study the antibody response to a single antigen.
3 code implementations • 15 May 2022 • Xingang Peng, Shitong Luo, Jiaqi Guan, Qi Xie, Jian Peng, Jianzhu Ma
Deep generative models have achieved tremendous success in designing novel drug molecules in recent years.
1 code implementation • ICLR 2022 • Tanmay Gangwani, Yuan Zhou, Jian Peng
In this work, we propose an algorithm that trains an intermediary policy in the learner environment and uses it as a surrogate expert for the learner.
1 code implementation • CVPR 2022 • Shitong Luo, Jiahan Li, Jiaqi Guan, Yufeng Su, Chaoran Cheng, Jian Peng, Jianzhu Ma
In this work, we propose a novel and simple framework to achieve equivariance for point cloud analysis based on the message passing (graph neural network) scheme.
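The message-passing route to equivariance can be illustrated with a toy layer (a generic E(3)-equivariant update in the same spirit, not the paper's architecture): scalar features are computed from rotation-invariant distances, and positions move only along relative direction vectors.

```python
import numpy as np

def equivariant_layer(pos, feat, step=0.1):
    """One toy E(3)-equivariant message-passing step on a fully connected graph.

    Scalar features are updated from rotation-invariant distances, and
    positions are updated only along relative direction vectors, so any
    rotation (or translation) of the input produces the same rotation
    (or translation) of the output positions.
    """
    diff = pos[:, None, :] - pos[None, :, :]        # (n, n, 3) relative vectors
    dist2 = (diff ** 2).sum(-1)                     # invariant edge scalars
    msg = np.exp(-dist2) * (feat[None, :] + feat[:, None])  # toy scalar messages
    np.fill_diagonal(msg, 0.0)                      # no self-messages
    new_feat = feat + msg.sum(1)                    # invariant feature update
    new_pos = pos + step * (msg[..., None] * diff).sum(1)   # equivariant update
    return new_pos, new_feat
```

Because every quantity is built from relative vectors and their invariant lengths, rotating the input point cloud rotates the output by exactly the same matrix.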
3 code implementations • NeurIPS 2021 • Shitong Luo, Jiaqi Guan, Jianzhu Ma, Jian Peng
In this paper, we propose a 3D generative model that generates molecules given a designated 3D protein binding site.
1 code implementation • 2 Mar 2022 • Shenggan Cheng, Xuanlei Zhao, Guangyang Lu, Jiarui Fang, Zhongming Yu, Tian Zheng, Ruidong Wu, Xiwen Zhang, Jian Peng, Yang You
In this work, we present FastFold, an efficient implementation of AlphaFold for both training and inference.
no code implementations • 4 Dec 2021 • Jian Peng, Dingqi Ye, Bo Tang, Yinjie Lei, Yu Liu, Haifeng Li
This work proposes a general framework named Cycled Memory Networks (CMN) to address the anterograde forgetting in neural networks for lifelong learning.
1 code implementation • ICLR 2022 • Zhizhou Ren, Ruihan Guo, Yuan Zhou, Jian Peng
Based on this framework, this paper proposes a novel reward redistribution algorithm, randomized return decomposition (RRD), to learn a proxy reward function for episodic reinforcement learning.
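The return-decomposition idea behind RRD can be sketched as fitting a proxy reward whose per-step sum matches the episodic return, with each episode's sum estimated from a random subset of steps. This is a simplified linear least-squares version for illustration, not the paper's algorithm; `episodes` is an assumed list of per-step feature arrays.

```python
import numpy as np

def decompose_returns(episodes, returns, k, rng):
    """Fit a linear proxy reward so its per-step sum matches episodic returns.

    Each episode's proxy-reward sum is estimated from k uniformly sampled
    steps (an unbiased estimate of the full sum, as in randomized return
    decomposition), then the weights are fit by least squares.
    """
    rows = []
    for phi in episodes:
        idx = rng.choice(len(phi), size=k, replace=False)
        rows.append(len(phi) / k * phi[idx].sum(0))  # estimates sum_t phi_t
    w, *_ = np.linalg.lstsq(np.array(rows), np.asarray(returns), rcond=None)
    return w
```

The subsampling keeps the per-update cost independent of episode length, which is the point of the randomization in long-horizon episodic tasks.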
no code implementations • 23 Nov 2021 • Yifan Chang, Wenbo Li, Jian Peng, Bo Tang, Yu Kang, Yinjie Lei, Yuanmiao Gui, Qing Zhu, Yu Liu, Haifeng Li
Different from previous reviews that mainly focus on the catastrophic forgetting phenomenon in CL, this paper surveys CL from a more macroscopic perspective based on the Stability Versus Plasticity mechanism.
no code implementations • 21 Nov 2021 • Jian Peng, Xian Sun, Min Deng, Chao Tao, Bo Tang, Wenbo Li, Guohua Wu, Qing Zhu, Yu Liu, Tao Lin, Haifeng Li
This paper presents a learning model by active forgetting mechanism with artificial neural networks.
no code implementations • ICLR 2022 • Jiaqi Guan, Wesley Wei Qian, Qiang Liu, Wei-Ying Ma, Jianzhu Ma, Jian Peng
Assuming different forms of the underlying potential energy function, we can not only reinterpret and unify many of the existing models but also derive new variants of SE(3)-equivariant neural networks in a principled manner.
1 code implementation • ICLR 2022 • Michael Wan, Jian Peng, Tanmay Gangwani
Meta-reinforcement learning (meta-RL) algorithms allow for agents to learn new behaviors from small amounts of experience, mitigating the sample inefficiency problem in RL.
no code implementations • ICCV 2021 • Yuanyi Zhong, Bodi Yuan, Hong Wu, Zhiqiang Yuan, Jian Peng, Yu-Xiong Wang
We leverage the pixel-level L2 loss and the pixel contrastive loss for the two purposes respectively.
no code implementations • 11 Jul 2021 • Yuanyi Zhong, Yuan Zhou, Jian Peng
The control variates (CV) method is widely used in policy gradient estimation to reduce the variance of the gradient estimators in practice.
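The simplest control variate is a baseline subtracted from the reward: since E[∇ log π] = 0, the estimator stays unbiased while its variance can drop sharply. A toy illustration (a generic sketch, not the paper's method; assumptions: scalar Gaussian policy, reward with a large constant offset):

```python
import numpy as np

def pg_samples(scores, rewards, baseline=0.0):
    """Per-sample REINFORCE estimates: grad log pi(a) * (reward - baseline).

    Because E[grad log pi] = 0, subtracting any action-independent baseline
    (the simplest control variate) leaves the mean unchanged but can
    shrink the variance dramatically.
    """
    return scores * (rewards - baseline)

rng = np.random.default_rng(0)
mu = 0.0                                    # Gaussian policy a ~ N(mu, 1)
actions = rng.normal(mu, 1.0, size=100_000)
scores = actions - mu                       # grad_mu log N(a; mu, 1)
rewards = 5.0 + actions                     # reward with a large constant offset
plain = pg_samples(scores, rewards)
with_cv = pg_samples(scores, rewards, baseline=rewards.mean())
```

Both estimators average to the true gradient (here 1.0), but the baseline removes the variance contributed by the constant reward offset.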
no code implementations • 22 Jun 2021 • Beining Han, Zhizhou Ren, Zuofan Wu, Yuan Zhou, Jian Peng
We study deep reinforcement learning (RL) algorithms with delayed rewards.
1 code implementation • CVPR 2021 • Yuanyi Zhong, JianFeng Wang, Lijuan Wang, Jian Peng, Yu-Xiong Wang, Lei Zhang
This paper presents a detection-aware pre-training (DAP) approach, which leverages only weakly-labeled classification-style datasets (e.g., ImageNet) for pre-training, but is specifically tailored to benefit object detection tasks.
3 code implementations • ICLR 2021 • Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang
Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.
1 code implementation • 5 Nov 2020 • Tanmay Gangwani, Jian Peng, Yuan Zhou
Quality-Diversity (QD) is a concept from Neuroevolution with some intriguing applications to Reinforcement Learning.
no code implementations • NeurIPS 2020 • Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, Qiang Liu
Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data.
no code implementations • 24 Oct 2020 • Joshua Yao-Yu Lin, Hang Yu, Warren Morningstar, Jian Peng, Gilbert Holder
Dark matter substructures are interesting since they can reveal the properties of dark matter.
Cosmology and Nongalactic Astrophysics • Computational Physics
2 code implementations • NeurIPS 2020 • Tanmay Gangwani, Yuan Zhou, Jian Peng
To make credit assignment easier, recent works have proposed algorithms to learn dense "guidance" rewards that could be used in place of the sparse or delayed environmental rewards.
no code implementations • 13 Sep 2020 • Yuanyi Zhong, Yuan Zhou, Jian Peng
Reinforcement learning from self-play has recently reported many successes.
no code implementations • 28 Aug 2020 • Xianggen Liu, Yunan Luo, Sen Song, Jian Peng
Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design.
1 code implementation • ECCV 2020 • Yuanyi Zhong, Jian-Feng Wang, Jian Peng, Lei Zhang
In this paper, we propose an effective knowledge transfer framework to boost the weakly supervised object detection accuracy with the help of an external fully-annotated source dataset, whose categories may not overlap with the target domain.
no code implementations • ICLR 2019 • Yi Chen, Jinglin Chen, Jing Dong, Jian Peng, Zhaoran Wang
To attain the advantages of both regimes, we propose to use replica exchange, which swaps between two Langevin diffusions with different temperatures.
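A minimal sketch of the scheme (toy 1-D version, assuming a user-supplied potential `u` and its gradient `grad_u`): run one Langevin chain per temperature and propose swaps with the Metropolis probability that keeps the pair reversible.

```python
import numpy as np

def replica_exchange_langevin(u, grad_u, x0, t_lo, t_hi, step, iters, rng):
    """Toy 1-D replica-exchange Langevin sampler.

    Each chain targets exp(-u(x)/t); after every pair of Langevin steps a
    swap is proposed and accepted with the Metropolis probability that
    keeps the joint chain reversible. Returns the low-temperature samples.
    """
    x_lo = x_hi = float(x0)
    samples = np.empty(iters)
    for i in range(iters):
        x_lo += -step * grad_u(x_lo) + np.sqrt(2 * step * t_lo) * rng.normal()
        x_hi += -step * grad_u(x_hi) + np.sqrt(2 * step * t_hi) * rng.normal()
        log_acc = (1.0 / t_lo - 1.0 / t_hi) * (u(x_lo) - u(x_hi))
        if np.log(rng.random()) < log_acc:   # attempt replica swap
            x_lo, x_hi = x_hi, x_lo
        samples[i] = x_lo
    return samples
```

On a double-well potential, the cold chain alone stays trapped in one mode; swaps with the hot chain let it hop between modes while still concentrating near the minima.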
1 code implementation • 12 Jun 2020 • Michael Wan, Tanmay Gangwani, Jian Peng
In this paper, we propose a new framework for transfer learning where the teacher and the student can have arbitrarily different state- and action-spaces.
2 code implementations • 7 Mar 2020 • Jie Chen, Ziyang Yuan, Jian Peng, Li Chen, Haozhe Huang, Jiawei Zhu, Yu Liu, Haifeng Li
However, the available methods focus mainly on the difference information between multitemporal remote sensing images and lack robustness to pseudo-change information.
no code implementations • 1 Mar 2020 • Jun Han, Fan Ding, Xianglong Liu, Lorenzo Torresani, Jian Peng, Qiang Liu
In addition, such transform can be straightforwardly employed in gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) test on discrete distributions.
1 code implementation • ICLR 2020 • Tanmay Gangwani, Jian Peng
Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function.
no code implementations • 21 Feb 2020 • Yuanyi Zhong, Alexander Schwing, Jian Peng
In many vision-based reinforcement learning (RL) problems, the agent controls a movable object in its visual field, e.g., the player's avatar in video games and the robotic arm in visual grasping and manipulation.
no code implementations • 27 Jan 2020 • Jie Chen, Haozhe Huang, Jian Peng, Jiawei Zhu, Li Chen, Wenbo Li, Binyu Sun, Haifeng Li
The feature-learning procedure of CNN largely depends on the architecture of CNN.
no code implementations • 18 Jan 2020 • Carl Yang, Mengxiong Liu, Frank He, Jian Peng, Jiawei Han
With extensive experiments of two classic network mining tasks on different real-world large datasets, we show that our proposed cube2net pipeline is general, and much more effective and efficient in query-specific network construction, compared with other methods without the leverage of data cube or reinforcement learning.
1 code implementation • 19 Dec 2019 • Jian Peng, Bo Tang, Hao Jiang, Zhuo Li, Yinjie Lei, Tao Lin, Haifeng Li
This is due to two facts: first, as the model learns more tasks, the intersection of the low-error parameter subspaces satisfying these tasks shrinks or may even cease to exist; second, when the model learns a new task, the cumulative error keeps increasing as the model tries to protect the parameter configurations of previous tasks from interference.
1 code implementation • 9 Nov 2019 • Ke Xu, Kaiyu Guan, Jian Peng, Yunan Luo, Sibo Wang
The average accuracy is 93.56%, compared with 85.36% from CFMask.
no code implementations • 7 Sep 2019 • Yu He, Yangqiu Song, Jian-Xin Li, Cheng Ji, Jian Peng, Hao Peng
Heterogeneous information network (HIN) embedding has gained increasing interests recently.
no code implementations • 5 Sep 2019 • Kefan Dong, Jian Peng, Yining Wang, Yuan Zhou
Our learning algorithm, Adaptive Value-function Elimination (AVE), is inspired by the policy elimination algorithm proposed in (Jiang et al., 2017), known as OLIVE.
1 code implementation • 21 Jul 2019 • Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, MingJie Sun, JinFeng Yi, Zijiang Yang, Mingyan Liu, Bo Li, Dawn Song
To the best of our knowledge, we are the first to apply adversarial attacks on DRL systems to physical robots.
1 code implementation • 22 Jun 2019 • Tanmay Gangwani, Joel Lehman, Qiang Liu, Jian Peng
We consider the problem of imitation learning from expert demonstrations in partially observable Markov decision processes (POMDPs).
1 code implementation • NeurIPS 2019 • Zhizhou Ren, Kefan Dong, Yuan Zhou, Qiang Liu, Jian Peng
Goal-oriented reinforcement learning has recently been a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space.
no code implementations • 8 Jun 2019 • Yu-cheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng
This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport costs).
1 code implementation • 31 May 2019 • Yang Liu, Yunan Luo, Yuanyi Zhong, Xi Chen, Qiang Liu, Jian Peng
Recent advances in deep reinforcement learning algorithms have shown great potential and success for solving many challenging real-world problems, including Go game and robotic applications.
no code implementations • NeurIPS 2019 • Chao Tao, Saúl Blanco, Jian Peng, Yuan Zhou
We consider the thresholding bandit problem, whose goal is to find arms of mean rewards above a given threshold $\theta$, with a fixed budget of $T$ trials.
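As a reference point, the simplest strategy for this problem allocates the budget uniformly across arms and thresholds the empirical means (the paper's contribution is smarter adaptive allocation, not this baseline; unit-variance Gaussian rewards are assumed here).

```python
import numpy as np

def threshold_bandit_uniform(means, theta, budget, rng):
    """Uniform-allocation baseline for the thresholding bandit.

    Splits the budget of pulls evenly across arms, simulates Gaussian
    rewards, and returns the indices of arms whose empirical mean
    exceeds the threshold theta.
    """
    pulls = budget // len(means)
    est = np.array([rng.normal(mu, 1.0, size=pulls).mean() for mu in means])
    return set(np.flatnonzero(est > theta))
```

With enough pulls per arm, all arms whose true means are well separated from theta are classified correctly; adaptive methods spend more of the budget on the ambiguous arms near the threshold.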
no code implementations • 20 May 2019 • Wei-Ye Zhao, Xi-Ya Guan, Yang Liu, Xiaoming Zhao, Jian Peng
Recent advances in deep reinforcement learning have achieved human-level performance on a variety of real-world applications.
no code implementations • ICLR 2019 • Iou-Jen Liu, Jian Peng, Alexander G. Schwing
A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model.
no code implementations • 20 Jan 2019 • Li Chen, Hailun Ding, Qi Li, Zhuo Li, Jian Peng, Haifeng Li
Understanding the internal representations of deep neural networks (DNNs) is crucial to explain their behavior.
1 code implementation • 4 Dec 2018 • Jian Peng, Jiang Hao, Zhuo Li, Enqiang Guo, Xiaohong Wan, Deng Min, Qing Zhu, Haifeng Li
In this paper, we propose a Soft Parameters Pruning (SPP) strategy to reach a trade-off between the short-term and long-term profit of a learning model: parameters that contribute little to remembering the former task domain are freed to learn future tasks, while parameters that effectively encode knowledge about previous tasks are preserved to protect those memories.
no code implementations • 2 Dec 2018 • Yuanyi Zhong, Jian-Feng Wang, Jian Peng, Lei Zhang
In this paper, we propose a general approach to optimize anchor boxes for object detection.
no code implementations • 27 Nov 2018 • Li Chen, Hailun Ding, Qi Li, Zhuo Li, Jian Peng, Haifeng Li
Understanding the internal representations of deep neural networks (DNNs) is crucial to explain their behavior.
no code implementations • EMNLP 2018 • Zexuan Zhong, Jiaqi Guo, Wei Yang, Jian Peng, Tao Xie, Jian-Guang Lou, Ting Liu, Dongmei Zhang
Recent research proposes syntax-based approaches to address the problem of generating programs from natural language specifications.
no code implementations • 27 Sep 2018 • Yihao Feng, Hao Liu, Jian Peng, Qiang Liu
Deep reinforcement learning has achieved remarkable successes in solving various challenging artificial intelligence tasks.
3 code implementations • EMNLP 2018 • Anusri Pampari, Preethi Raghavan, Jennifer Liang, Jian Peng
We propose a novel methodology to generate domain-specific large-scale question answering (QA) datasets by re-purposing existing annotations for other NLP tasks.
no code implementations • ICLR 2019 • Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, Jian Peng
Such an error reduction phenomenon is somewhat surprising as the estimated surrogate policy is less accurate than the given historical policy.
no code implementations • ICML 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
2 code implementations • 1 Jun 2018 • Hyunghoon Cho, Benjamin DeMeo, Jian Peng, Bonnie Berger
Representing data in hyperbolic space can effectively capture latent hierarchical relationships.
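The key property can be seen directly from the Poincaré-ball distance, which diverges as points approach the unit boundary, so a ball of fixed Euclidean radius can host trees with exponentially many leaves (a generic formula, not code from the paper):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space.

    d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    Distances blow up near the unit boundary, which is what lets a
    fixed-radius ball embed hierarchies with little distortion.
    """
    denom = (1.0 - (u ** 2).sum()) * (1.0 - (v ** 2).sum())
    return float(np.arccosh(1.0 + 2.0 * ((u - v) ** 2).sum() / denom))
```

For a point at Euclidean radius r from the origin, this reduces to 2 artanh(r), so moving the point from r = 0.5 to r = 0.99 multiplies its hyperbolic distance from the origin several times over.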
no code implementations • ICLR 2019 • Tanmay Gangwani, Qiang Liu, Jian Peng
Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.
1 code implementation • EMNLP 2018 • Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han
Many efforts have been made to facilitate natural language processing tasks with pre-trained language models (LMs), bringing significant improvements to various applications.
Ranked #47 on Named Entity Recognition (NER) on CoNLL 2003 (English)
no code implementations • 13 Mar 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
no code implementations • ICLR 2019 • Yihan Gao, Chao Zhang, Jian Peng, Aditya Parameswaran
Both theoretical and empirical evidence are provided to support this argument: (a) we prove that the generalization error of these methods can be bounded by limiting the norm of vectors, regardless of the embedding dimension; (b) we show that the generalization performance of linear graph embedding methods is correlated with the norm of embedding vectors, which is small due to the early stopping of SGD and the vanishing gradients.
no code implementations • ICLR 2018 • Hao Liu*, Yihao Feng*, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
no code implementations • ICLR 2018 • Keyi Yu, Yang Liu, Alexander G. Schwing, Jian Peng
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing.
no code implementations • ICLR 2018 • Tanmay Gangwani, Jian Peng
GPO uses imitation learning for policy crossover in the state space and applies policy gradient methods for mutation.
2 code implementations • 30 Oct 2017 • Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
no code implementations • 28 Oct 2017 • Jinglin Chen, Jian Peng, Qiang Liu
We propose a new localized inference algorithm for answering marginalization queries in large graphical models with the correlation decay property.
no code implementations • 17 Oct 2017 • Tianbing Xu, Qiang Liu, Jian Peng
Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems.
no code implementations • 10 Oct 2017 • Jiaqi Guan, Yang Liu, Qiang Liu, Jian Peng
Deep neural networks have been remarkably successful in various AI tasks but often incur high computation and energy costs in energy-constrained applications such as mobile sensing.
3 code implementations • 13 Sep 2017 • Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, Jiawei Han
In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task.
Ranked #13 on Part-Of-Speech Tagging on Penn Treebank
no code implementations • 4 Aug 2017 • Jie Chen, Chao Yuan, Min Deng, Chao Tao, Jian Peng, Haifeng Li
Owing to its superiority in feature representation, DCNN has exhibited remarkable performance in scene recognition of high-resolution remote sensing (HRRS) images and classification of hyper-spectral remote sensing images.
1 code implementation • 30 May 2017 • Haifeng Li, Xin Dou, Chao Tao, Zhixiang Hou, Jie Chen, Jian Peng, Min Deng, Ling Zhao
In this paper, we propose a remote sensing image classification benchmark (RSI-CB) based on massive, scalable, and diverse crowdsource data.
no code implementations • 19 May 2017 • Haifeng Li, Jian Peng, Chao Tao, Jie Chen, Min Deng
Is the DCNN recognition mechanism centered on object recognition still applicable to the scenarios of remote sensing scene understanding?
no code implementations • 7 Apr 2017 • Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng
Policy gradient methods have been successfully applied to many complex reinforcement learning problems.
1 code implementation • 5 Nov 2016 • Frank S. He, Yang Liu, Alexander G. Schwing, Jian Peng
We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation.
no code implementations • 31 Oct 2016 • Jingbo Shang, Meng Jiang, Wenzhu Tong, Jinfeng Xiao, Jian Peng, Jiawei Han
In the literature, two series of models have been proposed to address prediction problems including classification and regression.
1 code implementation • 31 Oct 2016 • Jingbo Shang, Meng Qu, Jialu Liu, Lance M. Kaplan, Jiawei Han, Jian Peng
It models vertices as low-dimensional vectors to explore network structure-embedded similarity.
1 code implementation • 2 Dec 2015 • Sheng Wang, Jian Peng, Jianzhu Ma, Jinbo Xu
Protein secondary structure (SS) prediction is important for studying protein structure and function.
Ranked #1 on Protein Secondary Structure Prediction on CullPDB
no code implementations • 9 Jun 2015 • Yihan Gao, Aditya Parameswaran, Jian Peng
We study the interpretability of conditional probability estimates for binary classification under the agnostic setting or scenario.
no code implementations • 10 Apr 2015 • Hyunghoon Cho, Bonnie Berger, Jian Peng
In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network.
no code implementations • 7 Mar 2015 • Qingming Tang, Chao Yang, Jian Peng, Jinbo Xu
This paper proposes a novel hybrid covariance thresholding algorithm that can effectively identify zero entries in the precision matrices and split a large joint graphical lasso problem into small subproblems.
no code implementations • NeurIPS 2012 • Qiang Liu, Jian Peng, Alexander T. Ihler
Crowdsourcing has become a popular paradigm for labeling large datasets.
no code implementations • NeurIPS 2009 • Jian Peng, Liefeng Bo, Jinbo Xu
To model the nonlinear relationship between input features and outputs we propose Conditional Neural Fields (CNF), a new conditional probabilistic graphical model for sequence labeling.