Search Results for author: Yuguang Yue

Found 7 papers, 3 papers with code

On hyperparameter tuning in general clustering problems

no code implementations · ICML 2020 · Xinjie Fan, Yuguang Yue, Purnamrita Sarkar, Y. X. Rachel Wang

Tuning hyperparameters for unsupervised learning problems is difficult in general due to the lack of ground truth for validation.

Community Detection · Model Selection

Implicit Distributional Reinforcement Learning

3 code implementations · NeurIPS 2020 · Yuguang Yue, Zhendong Wang, Mingyuan Zhou

To improve the sample efficiency of policy-gradient based reinforcement learning algorithms, we propose implicit distributional actor-critic (IDAC) that consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-implicit actor (SIA), powered by a flexible policy distribution.
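The key idea in the snippet above is that the critic does not output a single expected return but a full return distribution, represented implicitly by samples drawn from a generator network fed with random noise. A minimal numpy sketch of that sampling idea, where a single linear layer stands in for a deep generator network and all names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep generator network (DGN): a linear map over
# [state, action, noise, bias].  A real DGN would be a deep net.
W = rng.normal(size=(4,))

def dgn_return_samples(state, action, n_samples=32):
    """Draw return samples by pushing random noise through the generator,
    so the critic represents a distribution over returns, not just a mean."""
    eps = rng.normal(size=n_samples)
    feats = np.stack([np.full(n_samples, state),
                      np.full(n_samples, action),
                      eps,
                      np.ones(n_samples)], axis=1)
    return feats @ W

samples = dgn_return_samples(state=0.5, action=1.0)
print(samples.shape)  # (32,): one implicit return distribution, by samples
```

Statistics of interest (mean, quantiles, risk measures) can then be estimated from `samples` rather than parameterized directly.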

Distributional Reinforcement Learning · OpenAI Gym

Discrete Action On-Policy Learning with Action-Value Critic

1 code implementation · 10 Feb 2020 · Yuguang Yue, Yunhao Tang, Mingzhang Yin, Mingyuan Zhou

Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension, making it challenging to apply existing on-policy gradient based deep RL algorithms efficiently.
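The exponential growth mentioned above is easy to make concrete: with K discrete choices in each of D action dimensions, a policy over the joint action space must cover K**D combinations, whereas one K-way distribution per dimension needs only K*D outputs. A small worked example (the numbers are illustrative, not from the paper):

```python
# Size of a multi-dimensional discrete action space:
# K choices per dimension, D dimensions.
K, D = 10, 6

joint = K ** D     # one softmax over every joint action: 10^6 outputs
factored = K * D   # one K-way softmax per dimension: 60 outputs

print(joint, factored)  # 1000000 60
```

This gap is why naive on-policy gradient methods become hard to apply as D grows.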

OpenAI Gym

A Unified Framework for Tuning Hyperparameters in Clustering Problems

no code implementations · 17 Oct 2019 · Xinjie Fan, Yuguang Yue, Purnamrita Sarkar, Y. X. Rachel Wang

In this paper, we provide a framework with provable guarantees for selecting hyperparameters in a number of distinct models.

Community Detection · Model Selection

ARSM: Augment-REINFORCE-Swap-Merge Estimator for Gradient Backpropagation Through Categorical Variables

1 code implementation · 4 May 2019 · Mingzhang Yin, Yuguang Yue, Mingyuan Zhou

To address the challenge of backpropagating the gradient through categorical variables, we propose the augment-REINFORCE-swap-merge (ARSM) gradient estimator that is unbiased and has low variance.
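For background on the problem this estimator addresses: sampling a categorical variable is not differentiable, so the classic workaround is the REINFORCE (score-function) estimator, which ARSM extends to reduce variance. A minimal numpy sketch of plain REINFORCE for a 3-category softmax, with the logits and objective chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

phi = np.array([0.2, -0.5, 0.1])  # category logits (the parameters)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reinforce_grad(n=100_000):
    """Monte Carlo estimate of d/dphi E_{z~Cat(softmax(phi))}[f(z)],
    using f(z) * d log p(z)/d phi, where f(z) = 1 if z == 2 else 0."""
    p = softmax(phi)
    z = rng.choice(3, size=n, p=p)
    fz = (z == 2).astype(float)        # downstream objective at each sample
    score = np.eye(3)[z] - p           # d log p(z) / d phi for a softmax
    return (fz[:, None] * score).mean(axis=0)

g = reinforce_grad()
print(g)  # pushes probability toward category 2, away from the others
```

The estimator is unbiased but typically high-variance; ARSM's augment-swap-merge construction targets exactly that variance while staying unbiased.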
