Search Results for author: Junfeng Wen

Found 7 papers, 1 paper with code

Batch Stationary Distribution Estimation

1 code implementation • ICML 2020 • Junfeng Wen, Bo Dai, Lihong Li, Dale Schuurmans

We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
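As a point of reference for the problem setting (this is a naive plug-in baseline, not the method proposed in the paper): given sampled (s, s') transition pairs, one can build an empirical transition matrix and power-iterate it to approximate the stationary distribution.

```python
import numpy as np

def empirical_stationary(transitions, n_states, iters=1000, tol=1e-10):
    """Naive plug-in estimate of the stationary distribution of an
    ergodic Markov chain from sampled (s, s') pairs: row-normalize
    transition counts, then power-iterate."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in transitions:
        counts[s, s_next] += 1
    # Tiny smoothing so rows with no observed transitions stay valid.
    P = (counts + 1e-8) / (counts + 1e-8).sum(axis=1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(iters):
        pi_new = pi @ P
        if np.abs(pi_new - pi).sum() < tol:
            break
        pi = pi_new
    return pi

# Example: a 2-state chain whose true stationary distribution is [5/6, 1/6].
rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1], [0.5, 0.5]])
s, samples = 0, []
for _ in range(20000):
    s_next = rng.choice(2, p=P_true[s])
    samples.append((s, s_next))
    s = s_next
print(empirical_stationary(samples, 2))  # close to [5/6, 1/6]
```

The plug-in estimator needs a tabular state space and enough visits per state; the paper addresses the harder batch setting where such direct counting is not feasible.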

Universal Successor Features for Transfer Reinforcement Learning

no code implementations • ICLR 2019 • Chen Ma, Dylan R. Ashley, Junfeng Wen, Yoshua Bengio

Transfer in Reinforcement Learning (RL) refers to the idea of applying knowledge gained from previous tasks to solving related tasks.

Transfer Reinforcement Learning
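To illustrate the successor-feature idea behind this line of work (a minimal tabular sketch, not the paper's universal architecture): if rewards decompose as r(s) = φ(s)·w, then the successor features ψ of a fixed policy let you evaluate that policy on any new task weight vector w without re-solving the MDP.

```python
import numpy as np

# Sketch: with r(s) = phi(s)·w, successor features satisfy
# psi(s) = phi(s) + gamma * psi(s'), so V_w(s) = psi(s)·w for ANY w.
gamma = 0.9
n = 4
# Deterministic policy cycling 0 -> 1 -> 2 -> 3 -> 0.
next_state = {0: 1, 1: 2, 2: 3, 3: 0}
phi = np.eye(n)  # one-hot state features (an assumption for this sketch)

P = np.zeros((n, n))
for s, s2 in next_state.items():
    P[s, s2] = 1.0
# Closed-form solve of psi = phi + gamma * P @ psi.
psi = np.linalg.solve(np.eye(n) - gamma * P, phi)

# Transfer: value functions for two reward tasks from the SAME psi.
w_task1 = np.array([1.0, 0.0, 0.0, 0.0])  # reward only in state 0
w_task2 = np.array([0.0, 0.0, 1.0, 0.0])  # reward only in state 2
print(psi @ w_task1)
print(psi @ w_task2)
```

Once ψ is learned, switching tasks is a dot product with a new w; the universal variants studied in these papers additionally condition on goals so that ψ itself generalizes across policies.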

Domain Aggregation Networks for Multi-Source Domain Adaptation

no code implementations • ICML 2020 • Junfeng Wen, Russell Greiner, Dale Schuurmans

In many real-world applications, we want to exploit multiple source datasets of similar tasks to learn a model for a different but related target dataset -- e.g., recognizing characters of a new font using a set of different fonts.

Domain Adaptation • Sentiment Analysis
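A generic weighted-source baseline for this setting (a sketch only, not the paper's Domain Aggregation Network): fit one model on a convex combination of the source datasets, where the weights alpha would ideally reflect each source's similarity to the target domain.

```python
import numpy as np

def weighted_least_squares(sources, alpha):
    """Fit a linear model on a convex combination of source datasets.
    sources: list of (X, y) pairs; alpha: nonnegative weights summing to 1."""
    XtX = sum(a * X.T @ X for a, (X, y) in zip(alpha, sources))
    Xty = sum(a * X.T @ y for a, (X, y) in zip(alpha, sources))
    return np.linalg.solve(XtX, Xty)

# Two source domains sharing the same underlying linear target.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
sources = []
for shift in (0.0, 0.5):  # domains differ in their input distribution
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = X @ w_true + rng.normal(0, 0.1, size=200)
    sources.append((X, y))

w_hat = weighted_least_squares(sources, alpha=[0.7, 0.3])
print(w_hat)  # close to [2, -1]
```

How to choose alpha is the crux of multi-source adaptation; the fixed weights above are a placeholder for whatever discrepancy-based weighting a real method would compute.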

Few-Shot Self Reminder to Overcome Catastrophic Forgetting

no code implementations • 3 Dec 2018 • Junfeng Wen, Yanshuai Cao, Ruitong Huang

We demonstrate the superiority of our method to the previous ones in two different continual learning settings on popular benchmarks, as well as a new continual learning problem where tasks are designed to be more dissimilar.

Continual Learning

Universal Successor Representations for Transfer Reinforcement Learning

no code implementations • 11 Apr 2018 • Chen Ma, Junfeng Wen, Yoshua Bengio

The objective of transfer reinforcement learning is to generalize from a set of previous tasks to unseen new tasks.

Transfer Reinforcement Learning

Convex Two-Layer Modeling with Latent Structure

no code implementations • NeurIPS 2016 • Vignesh Ganapathiraman, Xinhua Zhang, Yao-Liang Yu, Junfeng Wen

Unsupervised learning of structured predictors has been a long standing pursuit in machine learning.

Graph Matching
