Search Results for author: Qingyuan Wu

Found 4 papers, 0 papers with code

Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

no code implementations • 11 Apr 2022 • Jiayu Yao, Qingyuan Wu, Quan Feng, Songcan Chen

Self-supervised learning (SSL), as a newly emerging unsupervised representation learning paradigm, generally follows a two-stage learning pipeline: 1) learning invariant and discriminative representations with auto-annotation pretext(s), then 2) transferring the representations to assist downstream task(s).

Representation Learning · Self-Supervised Learning
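As an aside, the generic two-stage SSL pipeline the abstract refers to can be sketched in a few lines. The sketch below is a minimal illustration under assumed components (a rotation-prediction pretext as the auto-annotation task, hypothetical `encoder`, `rotation_head`, and `downstream_head` modules); it is not the selective knowledge-capturing method of the paper itself.

```python
# Minimal sketch of a generic two-stage SSL pipeline (illustration only).
# Stage 1: train an encoder on a pretext task; Stage 2: freeze it and fit a
# downstream head on the learned representations. All names are placeholders.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))

# --- Stage 1: pretext training (rotation prediction as an example pretext) ---
rotation_head = nn.Linear(64, 4)  # predict one of 4 rotations (0/90/180/270 deg)
opt = torch.optim.Adam(list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def pretext_step(x):
    # auto-annotation: rotate each image and use the rotation index as the label
    k = torch.randint(0, 4, (x.size(0),))
    x_rot = torch.stack([torch.rot90(xi, int(ki), dims=(-2, -1)) for xi, ki in zip(x, k)])
    loss = ce(rotation_head(encoder(x_rot)), k)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# --- Stage 2: transfer to the downstream task with the encoder frozen ---
for p in encoder.parameters():
    p.requires_grad = False
downstream_head = nn.Linear(64, 10)  # e.g., 10-way classification
head_opt = torch.optim.Adam(downstream_head.parameters(), lr=1e-3)

def downstream_step(x, y):
    with torch.no_grad():
        z = encoder(x)  # reuse the pretext representation
    loss = ce(downstream_head(z), y)
    head_opt.zero_grad(); loss.backward(); head_opt.step()
    return loss.item()

# toy usage with random 28x28 inputs
print(pretext_step(torch.randn(8, 1, 28, 28)))
print(downstream_step(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))))
```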

Topic Driven Adaptive Network for Cross-Domain Sentiment Classification

no code implementations • 28 Nov 2021 • Yicheng Zhu, Yiqiao Qiu, Qingyuan Wu, Fu Lee Wang, Yanghui Rao

In this vein, most approaches utilize domain adaptation, which maps data from different domains into a common feature space.

Classification · Domain Adaptation · +3
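The common-feature-space idea mentioned in the abstract is typically realized by adding a distribution-alignment penalty on the extracted features. The sketch below uses a maximum mean discrepancy (MMD) penalty with an RBF kernel as one illustrative choice; it is not the topic-driven adaptive network of the paper, and all module names, sizes, and weights are assumptions.

```python
# Minimal sketch of feature-space alignment for domain adaptation (illustration
# only): a classification loss on labeled source features plus an MMD penalty
# pulling source and target feature distributions together.
import torch
import torch.nn as nn

def rbf_mmd(zs, zt, sigma=1.0):
    """Maximum mean discrepancy between two feature batches, RBF kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(zs, zs).mean() + k(zt, zt).mean() - 2 * k(zs, zt).mean()

feature_net = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # shared extractor
classifier = nn.Linear(128, 2)  # e.g., binary sentiment
opt = torch.optim.Adam(list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-3)

def train_step(x_src, y_src, x_tgt, lam=0.1):
    # labeled source batch drives the task loss; unlabeled target batch only
    # contributes to the alignment term
    zs, zt = feature_net(x_src), feature_net(x_tgt)
    loss = nn.functional.cross_entropy(classifier(zs), y_src) + lam * rbf_mmd(zs, zt)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy usage: 300-d feature vectors, binary labels on the source domain only
print(train_step(torch.randn(16, 300), torch.randint(0, 2, (16,)), torch.randn(16, 300)))
```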

Kuramoto model based analysis reveals oxytocin effects on brain network dynamics

no code implementations • 18 May 2021 • Shuhan Zheng, Zhichao Liang, Youzhi Qu, Qingyuan Wu, Haiyan Wu, Quanying Liu

Here, we propose a physics-based framework built on the Kuramoto model to investigate oxytocin effects on phase-dynamic neural coupling in the DMN and FPN.
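For reference, the network Kuramoto model evolves each oscillator's phase as dθ_i/dt = ω_i + Σ_j K_ij sin(θ_j − θ_i), where ω_i is a natural frequency and K_ij a coupling weight. The sketch below integrates this equation with made-up parameters and tracks the global order parameter; it illustrates the model class only, not the paper's fitted framework.

```python
# Minimal Euler-integration sketch of a network Kuramoto model (illustrative
# parameters, not the paper's fitted values):
#   dtheta_i/dt = omega_i + sum_j K_ij * sin(theta_j - theta_i)
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 10, 0.01, 5000            # oscillators (e.g., brain regions), step size, steps
omega = rng.normal(1.0, 0.1, n)          # natural frequencies
K = 0.5 * rng.random((n, n))             # coupling matrix (e.g., structural connectivity)
theta = rng.uniform(0, 2 * np.pi, n)     # initial phases

order = []
for _ in range(steps):
    # pairwise phase differences: diff[i, j] = theta[j] - theta[i]
    diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))
    # Kuramoto order parameter r in [0, 1] measures global phase synchrony
    order.append(np.abs(np.exp(1j * theta).mean()))

print(f"mean synchrony over last 1000 steps: {np.mean(order[-1000:]):.3f}")
```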

Greedy-Step Off-Policy Reinforcement Learning

no code implementations • 23 Feb 2021 • Yuhui Wang, Qingyuan Wu, Pengcheng He, Xiaoyang Tan

Most policy evaluation algorithms are based on the Bellman Expectation and Optimality Equations, from which two popular approaches are derived: Policy Iteration (PI) and Value Iteration (VI).

Q-Learning · reinforcement-learning · +1
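Value Iteration, one of the two approaches named in the abstract, repeatedly applies the Bellman optimality backup V(s) ← max_a Σ_{s'} P(s'|s,a)[R(s,a,s') + γ V(s')]. The sketch below is the standard tabular algorithm on a made-up two-state MDP, not the paper's greedy-step variant.

```python
# Minimal tabular Value Iteration (standard textbook algorithm, shown for
# context only). P[a] is an (S, S) transition matrix with rows P(s'|s,a),
# and R[a] an (S, S) reward matrix, for each action a.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q[a, s] = sum_s' P(s'|s,a) (R + gamma V(s'))
        Q = np.array([(P[a] * (R[a] + gamma * V[None, :])).sum(axis=1)
                      for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values and greedy policy
        V = V_new

# Toy 2-state, 2-action MDP (made-up numbers, for illustration only)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([[0.1, 0.9], [0.7, 0.3]])]
R = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[0.0, 2.0], [0.5, 0.0]])]
V, pi = value_iteration(P, R)
print("V* =", V, "policy =", pi)
```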
