Search Results for author: Hongyu Zang

Found 13 papers, 7 papers with code

Routing Enforced Generative Model for Recipe Generation

no code implementations • EMNLP 2020 • Zhiwei Yu, Hongyu Zang, Xiaojun Wan

One of the most challenging parts of recipe generation is dealing with the complex restrictions among the input ingredients.

Recipe Generation

Behavior Prior Representation learning for Offline Reinforcement Learning

1 code implementation • 2 Nov 2022 • Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet des Combes, Romain Laroche

Our method, Behavior Prior Representation (BPR), learns state representations with an easy-to-integrate objective based on behavior cloning of the dataset: we first learn a state representation by mimicking actions from the dataset, and then train a policy on top of the fixed representation, using any off-the-shelf Offline RL algorithm.

Offline RL • reinforcement-learning +2
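The two-stage recipe described above can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: a linear least-squares fit stands in for gradient-based behavior cloning, and the dataset, shapes, and variable names are all made up for the example.

```python
import numpy as np

# Hypothetical offline dataset: raw states and the behavior policy's actions.
rng = np.random.default_rng(0)
states = rng.normal(size=(256, 8))
true_W = rng.normal(size=(8, 2))
actions = states @ true_W

# Stage 1 (behavior-cloning objective): learn an encoder by regressing
# dataset actions from states; here a linear least-squares fit stands in
# for gradient training of phi(s) = s @ E.
E, *_ = np.linalg.lstsq(states, actions, rcond=None)
phi = states @ E  # representation, frozen from here on

# Stage 2: train any off-the-shelf policy head on top of the *fixed*
# representation (again a least-squares fit as a stand-in).
policy_W, *_ = np.linalg.lstsq(phi, actions, rcond=None)
bc_error = float(np.mean((phi @ policy_W - actions) ** 2))
print(round(bc_error, 6))  # near zero: the frozen features suffice here
```

The key design point is the separation: the encoder is fit once against the behavior-cloning objective, then held fixed while the downstream (offline RL) learner trains on top of it.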

Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning

no code implementations • 1 Nov 2022 • Riashat Islam, Hongyu Zang, Anirudh Goyal, Alex Lamb, Kenji Kawaguchi, Xin Li, Romain Laroche, Yoshua Bengio, Remi Tachet des Combes

Goal-conditioned reinforcement learning (RL) is a promising direction for training agents that are capable of solving multiple tasks and reaching a diverse set of objectives.

reinforcement-learning • Reinforcement Learning (RL)

Agent-Controller Representations: Principled Offline RL with Rich Exogenous Information

1 code implementation • 31 Oct 2022 • Riashat Islam, Manan Tomar, Alex Lamb, Yonathan Efroni, Hongyu Zang, Aniket Didolkar, Dipendra Misra, Xin Li, Harm van Seijen, Remi Tachet des Combes, John Langford

We find that contemporary representation learning techniques can fail on datasets where the noise is a complex and time-dependent process, which is prevalent in practical applications.

Offline RL • Reinforcement Learning (RL) +1

SimSR: Simple Distance-based State Representation for Deep Reinforcement Learning

2 code implementations • 31 Dec 2021 • Hongyu Zang, Xin Li, Mingzhong Wang

This work explores how to learn robust and generalizable state representation from image-based observations with deep reinforcement learning methods.

reinforcement-learning • Reinforcement Learning (RL)
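A distance-based state representation of the kind SimSR describes can be sketched with a toy bisimulation-style target. This is an assumption-laden illustration, not the paper's algorithm: it assumes a cosine distance between embeddings and a one-sample TD target (reward gap plus discounted distance between sampled next-state embeddings), and all numbers are invented for the example.

```python
import numpy as np

def cos_dist(z1, z2):
    # 1 - cosine similarity: 0 when embeddings align, up to 2 when opposed.
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return 1.0 - float(z1 @ z2)

# Illustrative quantities: embeddings of two states, their rewards,
# and embeddings of one sampled next state for each.
z1, z2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
z1_next, z2_next = np.array([1.0, 1.0]), np.array([1.0, 1.0])
r1, r2 = 1.0, 0.5
gamma = 0.99

# Bisimulation-style TD target for the representation distance:
# reward gap plus discounted distance between next-state embeddings.
current = cos_dist(z1, z2)                                   # 1.0 here
target = abs(r1 - r2) + gamma * cos_dist(z1_next, z2_next)   # 0.5 here
print(round(current, 3), round(target, 3))
```

A representation loss would then push `cos_dist(z1, z2)` toward `target`, so that distances in embedding space track differences in rewards and dynamics rather than pixel-level appearance.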

TEAC: Intergrating Trust Region and Max Entropy Actor Critic for Continuous Control

1 code implementation • 1 Jan 2021 • Hongyu Zang, Xin Li, Li Zhang, Peiyao Zhao, Mingzhong Wang

Trust region methods and maximum entropy methods are two state-of-the-art branches used in reinforcement learning (RL) for the benefits of stability and exploration in continuous environments, respectively.

Continuous Control • Reinforcement Learning (RL)

Automated Chess Commentator Powered by Neural Chess Engine

2 code implementations • ACL 2019 • Hongyu Zang, Zhiwei Yu, Xiaojun Wan

In this paper, we explore a new approach for automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning, etc.).

Text Generation

Massive Styles Transfer with Limited Labeled Data

1 code implementation • 3 Jun 2019 • Hongyu Zang, Xiaojun Wan

In this paper, we propose a multi-agent style transfer system (MAST) for addressing multiple style transfer tasks with limited labeled data, by leveraging abundant unlabeled data and the mutual benefit among the multiple styles.

Denoising • Style Transfer +1

A Semi-Supervised Approach for Low-Resourced Text Generation

1 code implementation • 3 Jun 2019 • Hongyu Zang, Xiaojun Wan

The low-resource (in labeled data) problem is quite common across text generation tasks, but unlabeled data are usually abundant.

Denoising • Language Modelling +2
