Search Results for author: Ming Yin

Found 30 papers, 3 papers with code

A Study on the Diachronic Development and Social Covariance of Euphemism Based on Automatic Recognition

no code implementations • CCL 2021 • Chenlin Zhang, Mingwen Wang, Yiming Tan, Ming Yin, Xinyi Zhang

This paper takes Chinese euphemisms as its object of study. Building on large-scale manual annotation, it uses supervised machine-learning classification to achieve high-accuracy automatic recognition of euphemisms, and on that basis carries out a quantitative statistical analysis of the diachronic development of euphemisms in the People's Daily from 1946 to 2017. From the perspective of large-scale data, it examines the diachronic development of euphemisms and the covariation between euphemisms and society, verifying Gresham's law and the renewal law of language.

Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation

no code implementations • 29 Oct 2023 • Nikki Lijing Kuang, Ming Yin, Mengdi Wang, Yu-Xiang Wang, Yi-An Ma

We provide the first analysis for posterior sampling algorithms with delayed feedback in RL and show our algorithm achieves $\widetilde{O}(\sqrt{d^3H^3 T} + d^2H^2 E[\tau])$ worst-case regret in the presence of unknown stochastic delays.

Reinforcement Learning (RL)

Synthetic Data Generation with Large Language Models for Text Classification: Potential and Limitations

no code implementations • 11 Oct 2023 • Zhuoyan Li, Hangxiao Zhu, Zhuoran Lu, Ming Yin

The collection and curation of high-quality training data is crucial for developing text classification models with superior performance, but it is often associated with significant costs and time investment.

Synthetic Data Generation • text-classification • +1

Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games

no code implementations • 17 Aug 2023 • Songtao Feng, Ming Yin, Yu-Xiang Wang, Jing Yang, Yingbin Liang

In this work, we propose a model-free stage-based Q-learning algorithm and show that it achieves the same sample complexity as the best model-based algorithm, demonstrating for the first time that model-free algorithms can enjoy the same optimality in the $H$ dependence as model-based algorithms.

Multi-agent Reinforcement Learning • Q-Learning • +1

Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data

no code implementations • 24 Jun 2023 • Sunil Madhow, Dan Xiao, Ming Yin, Yu-Xiang Wang

Developing theoretical guarantees on the sample complexity of offline RL methods is an important step towards making data-hungry RL algorithms practically viable.

Offline RL • reinforcement-learning

Non-stationary Reinforcement Learning under General Function Approximation

no code implementations • 1 Jun 2023 • Songtao Feng, Ming Yin, Ruiquan Huang, Yu-Xiang Wang, Jing Yang, Yingbin Liang

To the best of our knowledge, this is the first dynamic regret analysis in non-stationary MDPs with general function approximation.

reinforcement-learning • Reinforcement Learning (RL)

TheoremQA: A Theorem-driven Question Answering dataset

2 code implementations • 21 May 2023 • Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, Tony Xia

We evaluate a wide spectrum of 16 large language and code models with different prompting strategies like Chain-of-Thoughts and Program-of-Thoughts.

Question Answering
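The two prompting strategies named in the abstract above can be sketched generically. The toy snippet below is a hypothetical illustration of Chain-of-Thoughts versus Program-of-Thoughts prompting, not TheoremQA's actual prompts or evaluation harness; the question, prompt wording, and `solve` helper are all made up for the example:

```python
# Hypothetical sketch of two prompting styles for a toy question.
question = "What is the sum of the first 10 positive integers?"

# Chain-of-Thoughts: ask the model to reason step by step in natural language.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step."
)

# Program-of-Thoughts: ask the model to answer by emitting a short program.
# Here the "model output" (everything after the header lines) is hard-coded.
pot_prompt = (
    f"Q: {question}\n"
    "# Answer by writing a Python function named solve().\n"
    "def solve():\n"
    "    return sum(range(1, 11))\n"
)

# A Program-of-Thoughts answer is obtained by executing the generated code.
namespace = {}
exec(pot_prompt.split("\n", 2)[2], namespace)  # strip the two header lines
print(namespace["solve"]())  # 1 + 2 + ... + 10 = 55
```

The design difference the abstract alludes to is that Chain-of-Thoughts answers are read off from free-form text, while Program-of-Thoughts answers are computed by running the model's emitted program.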

No-Regret Linear Bandits beyond Realizability

no code implementations • 26 Feb 2023 • Chong Liu, Ming Yin, Yu-Xiang Wang

It achieves near-optimal $\sqrt{T}$ regret on problems for which the best previously known regret is almost linear in the time horizon $T$.

Offline Reinforcement Learning with Closed-Form Policy Improvement Operators

no code implementations • 29 Nov 2022 • Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, William Yang Wang

Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning.

D4RL • Offline RL • +2

On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation

no code implementations • 23 Nov 2022 • Thanh Nguyen-Tang, Ming Yin, Sunil Gupta, Svetha Venkatesh, Raman Arora

To the best of our knowledge, these are the first $\tilde{\mathcal{O}}(\frac{1}{K})$ bound and absolute zero sub-optimality bound respectively for offline RL with linear function approximation from adaptive data with partial coverage.

Offline RL • reinforcement-learning • +1

Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient

no code implementations • 3 Oct 2022 • Ming Yin, Mengdi Wang, Yu-Xiang Wang

Offline reinforcement learning, which aims at optimizing sequential decision-making strategies with historical data, has been extensively applied in real-life applications.

Decision Making • Offline RL • +3

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks

no code implementations • 13 Jun 2022 • Kaiqi Zhang, Ming Yin, Yu-Xiang Wang

We propose a quasi neural network to approximate the distribution propagation, which is a neural network with continuous parameters and smooth activation function.


Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality

no code implementations • 10 Jun 2022 • Ming Yin, Wenjing Chen, Mengdi Wang, Yu-Xiang Wang

Goal-oriented Reinforcement Learning, where the agent needs to reach the goal state while simultaneously minimizing the cost, has received significant attention in real-world applications.

Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism

no code implementations • 11 Mar 2022 • Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang

However, a precise understanding of the statistical limits with function representations remains elusive, even when such a representation is linear.

Decision Making • reinforcement-learning • +1

Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost

no code implementations • 13 Feb 2022 • Dan Qiao, Ming Yin, Ming Min, Yu-Xiang Wang

In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of $O(HSA \log\log T)$.

reinforcement-learning • Reinforcement Learning (RL)

Towards Instance-Optimal Offline Reinforcement Learning with Pessimism

no code implementations • NeurIPS 2021 • Ming Yin, Yu-Xiang Wang

We study the offline reinforcement learning (offline RL) problem, where the goal is to learn a reward-maximizing policy in an unknown Markov Decision Process (MDP) using the data coming from a policy $\mu$.

Offline RL • reinforcement-learning • +1

Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings

no code implementations • NeurIPS 2021 • Ming Yin, Yu-Xiang Wang

This work studies the statistical limits of uniform convergence for offline policy evaluation (OPE) problems with model-based methods (for episodic MDP) and provides a unified framework towards optimal learning for several well-motivated offline tasks.

Offline RL

Near-Optimal Offline Reinforcement Learning via Double Variance Reduction

no code implementations • NeurIPS 2021 • Ming Yin, Yu Bai, Yu-Xiang Wang

Our main result shows that OPDVR provably identifies an $\epsilon$-optimal policy with $\widetilde{O}(H^2/d_m\epsilon^2)$ episodes of offline data in the finite-horizon stationary transition setting, where $H$ is the horizon length and $d_m$ is the minimal marginal state-action distribution induced by the behavior policy.

Offline RL • reinforcement-learning • +1

Near-Optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning

no code implementations • 7 Jul 2020 • Ming Yin, Yu Bai, Yu-Xiang Wang

Offline Policy Evaluation (OPE) in Reinforcement Learning (RL) is a critical step towards applying RL in real-life applications.

Offline RL • reinforcement-learning • +1

Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning

no code implementations • 29 Jan 2020 • Ming Yin, Yu-Xiang Wang

We consider the problem of off-policy evaluation for reinforcement learning, where the goal is to estimate the expected reward of a target policy $\pi$ using offline data collected by running a logging policy $\mu$.

Off-policy evaluation • reinforcement-learning
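As a point of reference for the setup described above (estimating the value of a target policy $\pi$ from data logged by a policy $\mu$), the classical importance-sampling estimator reweights each logged reward by the ratio of target to logging action probabilities. The one-step (bandit) sketch below is a textbook illustration only, not the asymptotically efficient estimator developed in this paper; the function and variable names are hypothetical:

```python
# Minimal importance-sampling off-policy evaluation in a one-step setting.
def is_estimate(logged, pi, mu):
    """Estimate E_pi[reward] from data collected under a logging policy mu.

    logged: list of (action, reward) pairs gathered by running mu
    pi, mu: dicts mapping action -> probability under target / logging policy
    """
    # Reweight each sample by pi(a) / mu(a), then average.
    weights = [pi[a] / mu[a] for a, _ in logged]
    return sum(w * r for w, (_, r) in zip(weights, logged)) / len(logged)

# Logging policy mu explores both actions; target pi always picks "a".
mu = {"a": 0.5, "b": 0.5}
pi = {"a": 1.0, "b": 0.0}
logged = [("a", 1.0), ("b", 0.0), ("a", 1.0), ("b", 0.0)]
print(is_estimate(logged, pi, mu))  # 1.0: pi's expected reward, recovered from mu's data
```

The estimator is unbiased whenever $\mu$ puts positive probability on every action $\pi$ can take, but its variance grows with the mismatch between the two policies, which is the kind of inefficiency that more refined OPE estimators aim to reduce.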

Shared Generative Latent Representation Learning for Multi-view Clustering

1 code implementation • 23 Jul 2019 • Ming Yin, Weitian Huang, Junbin Gao

Clustering multi-view data has been a fundamental research topic in the computer vision community.

Clustering • Representation Learning

Low-rank Multi-view Clustering in Third-Order Tensor Space

no code implementations • 30 Aug 2016 • Ming Yin, Junbin Gao, Shengli Xie, Yi Guo

Multi-view subspace clustering is based on the fact that the multi-view data are generated from a latent subspace.

Clustering • Multi-view Subspace Clustering

Neighborhood Preserved Sparse Representation for Robust Classification on Symmetric Positive Definite Matrices

no code implementations • 27 Jan 2016 • Ming Yin, Shengli Xie, Yi Guo, Junbin Gao, Yun Zhang

Due to its promising classification performance, the sparse representation based classification (SRC) algorithm has attracted great attention in the past few years.

Classification • General Classification • +2

Kernel Sparse Subspace Clustering on Symmetric Positive Definite Manifolds

no code implementations • CVPR 2016 • Ming Yin, Yi Guo, Junbin Gao, Zhaoshui He, Shengli Xie

Sparse subspace clustering (SSC), as one of the most successful subspace clustering methods, has achieved notable clustering accuracy in computer vision tasks.


Supervised learning of sparse context reconstruction coefficients for data representation and classification

no code implementations • 18 Aug 2015 • Xuejie Liu, Jingbin Wang, Ming Yin, Benjamin Edwards, Peijuan Xu

The context of a data point, usually defined as the other data points in a data set, has been found to play an important role in data representation and classification.

Classification • General Classification
