Search Results for author: Qingyuan Wu

Found 6 papers, 1 paper with code

Boosting Long-Delayed Reinforcement Learning with Auxiliary Short-Delayed Task

no code implementations • 5 Feb 2024 • Qingyuan Wu, Simon Sinong Zhan, YiXuan Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Chao Huang

Reinforcement learning is challenging in delayed scenarios, a common real-world situation where observations and interactions occur with delays.

reinforcement-learning

State-Wise Safe Reinforcement Learning With Pixel Observations

1 code implementation • 3 Nov 2023 • Simon Sinong Zhan, YiXuan Wang, Qingyuan Wu, Ruochen Jiao, Chao Huang, Qi Zhu

In the context of safe exploration, Reinforcement Learning (RL) has long grappled with the tradeoff between maximizing rewards and minimizing safety violations, particularly in complex environments with contact-rich or non-smooth dynamics, and when dealing with high-dimensional pixel observations.

reinforcement-learning Reinforcement Learning (RL) +2

Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

no code implementations • 11 Apr 2022 • Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen

Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s).

Representation Learning Self-Supervised Learning
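
The two-stage SSL pipeline summarized above (pretext pretraining, then transfer to a downstream task) can be illustrated with a minimal sketch. The toy encoder, the reconstruction pretext objective, and the synthetic data below are illustrative assumptions, not the selective knowledge-capturing method proposed in the paper.

```python
# Minimal sketch of a generic two-stage SSL pipeline:
# stage 1 learns representations on unlabeled data, stage 2 transfers them.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Linear(16, 32)                      # pretext head (reconstruction)
x_unlabeled = torch.randn(512, 32)               # unlabeled data for the pretext

# Stage 1: learn representations with an auto-annotated pretext task.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_unlabeled)), x_unlabeled)
    loss.backward()
    opt.step()

# Stage 2: transfer the (frozen) representations to a downstream classifier.
for p in encoder.parameters():
    p.requires_grad_(False)
head = nn.Linear(16, 3)                          # downstream task head
x_labeled, y = torch.randn(128, 32), torch.randint(0, 3, (128,))
opt_head = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(100):
    opt_head.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x_labeled)), y)
    loss.backward()
    opt_head.step()
```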

Topic Driven Adaptive Network for Cross-Domain Sentiment Classification

no code implementations • 28 Nov 2021 • Yicheng Zhu, Yiqiao Qiu, Qingyuan Wu, Fu Lee Wang, Yanghui Rao

In this vein, most approaches utilize domain adaptation, which maps data from different domains into a common feature space.

Classification Domain Adaptation +3
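
The common-feature-space idea behind domain adaptation can be sketched as a shared encoder trained with a source-domain task loss plus a penalty that pulls source and target features together. The data, model, and crude mean-feature alignment term below are illustrative assumptions, not the topic-driven adaptive network proposed in the paper.

```python
# Minimal sketch of feature-space domain adaptation: task loss on the labeled
# source domain plus an alignment penalty between source and target features.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 8))
clf = nn.Linear(8, 2)

x_src, y_src = torch.randn(256, 20) + 1.0, torch.randint(0, 2, (256,))
x_tgt = torch.randn(256, 20) - 1.0          # unlabeled target-domain data

opt = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    task_loss = nn.functional.cross_entropy(clf(f_src), y_src)
    align_loss = (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()   # crude alignment
    (task_loss + 0.1 * align_loss).backward()
    opt.step()
```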

Kuramoto model based analysis reveals oxytocin effects on brain network dynamics

no code implementations • 18 May 2021 • Shuhan Zheng, Zhichao Liang, Youzhi Qu, Qingyuan Wu, Haiyan Wu, Quanying Liu

Here, we propose a physics-based framework built on the Kuramoto model to investigate oxytocin effects on the phase dynamics of neural coupling in the DMN and FPN.
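
For reference, the standard Kuramoto phase-coupling dynamics the abstract refers to are dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i). The sketch below simulates these dynamics with an illustrative oscillator count, coupling strength, and random frequencies; these choices are assumptions, not the parameters or brain-network coupling matrices used in the paper.

```python
# Minimal sketch of the standard Kuramoto model with Euler integration.
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 20, 1.5, 0.01, 5000      # oscillators, coupling, step size, steps
omega = rng.normal(0.0, 1.0, N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)     # initial phases

for _ in range(steps):
    # pairwise phase differences theta_j - theta_i, summed over j
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Order parameter r in [0, 1] measures global phase synchrony.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.3f}")
```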

Greedy-Step Off-Policy Reinforcement Learning

no code implementations • 23 Feb 2021 • Yuhui Wang, Qingyuan Wu, Pengcheng He, Xiaoyang Tan

Most policy evaluation algorithms are based on the Bellman expectation and Bellman optimality equations, which give rise to two popular approaches: Policy Iteration (PI) and Value Iteration (VI).

Q-Learning reinforcement-learning +1
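
As background for the Bellman optimality backup mentioned above, the sketch below runs tabular Value Iteration on a tiny made-up MDP; the transition tensor, rewards, and discount factor are illustrative assumptions, not the greedy-step operator proposed in the paper.

```python
# Minimal sketch of tabular Value Iteration on a toy 2-state, 2-action MDP.
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# P[s, a, s'] = transition probability, R[s, a] = expected reward
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal values:", V, "greedy policy:", Q.argmax(axis=1))
```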
