Search Results for author: Dongqi Wang

Found 9 papers, 5 papers with code

latent-GLAT: Glancing at Latent Variables for Parallel Text Generation

1 code implementation ACL 2022 Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI

Recently, parallel text generation has received widespread attention due to its advantage in generation efficiency.

Text Generation

Movement Enhancement toward Multi-Scale Video Feature Representation for Temporal Action Detection

no code implementations ICCV 2023 Zixuan Zhao, Dongqi Wang, Xu Zhao

First, the submergence of the movement feature, i.e., the movement information in a snippet is overwhelmed by the scene information.

Action Detection
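To make the "submergence" point concrete, here is a toy numerical illustration (not the paper's enhancement module): a snippet feature dominated by a large static scene component hides a small time-varying movement component, and subtracting the temporal mean is one simple, hypothetical way to expose it.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 16, 32                                   # snippet length, feature dim
scene = rng.normal(0.0, 5.0, size=D)            # large, static scene component
movement = rng.normal(0.0, 0.5, size=(T, D))    # small, time-varying movement
snippet_features = scene + movement             # what a detector observes

def static_to_dynamic_ratio(feats):
    """Energy of the temporally constant part vs. the time-varying part."""
    static = np.linalg.norm(feats.mean(axis=0))
    dynamic = np.linalg.norm(feats - feats.mean(axis=0))
    return static / dynamic

# Before enhancement the static scene energy dwarfs the movement energy.
print("static/dynamic ratio before:", round(static_to_dynamic_ratio(snippet_features), 3))

# Hypothetical enhancement: subtract the temporal mean, which approximates
# the static scene component, leaving mostly movement information.
movement_enhanced = snippet_features - snippet_features.mean(axis=0)
print("static/dynamic ratio after :", round(static_to_dynamic_ratio(movement_enhanced), 3))
```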

Bridging the gap between target-based and cell-based drug discovery with a graph generative multi-task model

no code implementations 9 Aug 2022 Fan Hu, Dongqi Wang, Huazhen Huang, Yishen Hu, Peng Yin

Based on these findings, we utilized a Monte Carlo-based reinforcement learning generative model to generate novel multi-property compounds with both in vitro and in vivo efficacy, thus bridging the gap between target-based and cell-based drug discovery.

Drug Discovery
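A minimal sketch of the kind of Monte Carlo reinforcement-learning generation loop the snippet mentions, assuming a toy fragment vocabulary and a placeholder multi-property reward; the vocabulary, reward, and REINFORCE-style update are illustrative stand-ins, not the authors' implementation.

```python
import math
import random

# Toy fragment vocabulary; a real system would operate on SMILES tokens or
# graph edits (illustrative assumption).
VOCAB = ["C", "N", "O", "c1ccccc1", "F", "Cl"]
MAX_LEN = 8
LEARNING_RATE = 0.5

# Tabular policy: one vector of logits per position.
logits = [[0.0] * len(VOCAB) for _ in range(MAX_LEN)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_molecule():
    """Monte Carlo rollout: sample one fragment index per position."""
    return [random.choices(range(len(VOCAB)), weights=softmax(logits[p]))[0]
            for p in range(MAX_LEN)]

def multi_property_reward(choices):
    """Placeholder multi-property score; a real reward would combine predicted
    target affinity with cell-based efficacy. Here: favour aromatic rings and
    penalise overly long strings."""
    frags = [VOCAB[i] for i in choices]
    return frags.count("c1ccccc1") - 0.1 * len("".join(frags))

for step in range(200):
    batch = [sample_molecule() for _ in range(16)]
    rewards = [multi_property_reward(m) for m in batch]
    baseline = sum(rewards) / len(rewards)      # simple variance-reducing baseline
    for choices, r in zip(batch, rewards):
        advantage = r - baseline
        for pos, chosen in enumerate(choices):
            probs = softmax(logits[pos])
            for j in range(len(VOCAB)):
                # REINFORCE gradient of log softmax with respect to the logits.
                grad = (1.0 if j == chosen else 0.0) - probs[j]
                logits[pos][j] += LEARNING_RATE * advantage * grad / len(batch)

best = max((sample_molecule() for _ in range(32)), key=multi_property_reward)
print("high-reward fragment sequence:", [VOCAB[i] for i in best])
```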

Reinforcement learning on graphs: A survey

2 code implementations 13 Apr 2022 Mingshuo Nie, Dongming Chen, Dongqi Wang

In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation.

Graph Mining reinforcement-learning +1
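As a toy illustration of casting a graph-mining task as a reinforcement-learning problem, the sketch below frames greedy node-coverage selection as an MDP with set-valued states, add-one-node actions, and marginal-coverage rewards; the environment, graph, and greedy stand-in policy are illustrative assumptions, not the survey's formal GRL definition.

```python
from dataclasses import dataclass

# Toy graph: undirected edges on five nodes.
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 4)}
NODES = set(range(5))

def neighbors(node):
    return ({v for u, v in EDGES if u == node} |
            {u for u, v in EDGES if v == node})

def coverage(nodes):
    """Number of nodes covered by a seed set (seeds plus their neighbours)."""
    covered = set(nodes)
    for n in nodes:
        covered |= neighbors(n)
    return len(covered)

@dataclass
class GraphSelectionEnv:
    """MDP view of a graph-mining task: state = nodes chosen so far,
    action = add one node, reward = marginal gain in coverage."""
    budget: int = 2
    chosen: frozenset = frozenset()

    def reset(self):
        self.chosen = frozenset()
        return self.chosen

    def step(self, action):
        reward = coverage(self.chosen | {action}) - coverage(self.chosen)
        self.chosen = self.chosen | {action}
        done = len(self.chosen) >= self.budget
        return self.chosen, reward, done

env = GraphSelectionEnv(budget=2)
state = env.reset()
total_reward = 0
while True:
    # A greedy heuristic stands in for a trained GRL agent.
    action = max(NODES - state, key=lambda a: coverage(state | {a}))
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("selected nodes:", sorted(state), "covered nodes:", total_reward)
```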

latent-GLAT: Glancing at Latent Variables for Parallel Text Generation

1 code implementation 5 Apr 2022 Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI

Recently, parallel text generation has received widespread attention due to its advantage in generation efficiency.

Text Generation

A Novel Architecture Slimming Method for Network Pruning and Knowledge Distillation

no code implementations 21 Feb 2022 Dongqi Wang, Shengyu Zhang, Zhipeng Di, Xin Lin, Weihua Zhou, Fei Wu

A common problem in both pruning and distillation is to determine the compressed architecture, i.e., the exact number of filters per layer and the layer configuration, in order to preserve most of the original model capacity.

Knowledge Distillation Model Compression +1
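One common magnitude-based heuristic for choosing the number of filters to keep per layer, shown purely as a hedged example (it is not the paper's slimming method): rank each layer's filters by the L1 norm of their weights and keep the smallest set that retains a target share of the layer's total importance.

```python
import numpy as np

def filters_to_keep(conv_weight, keep_ratio=0.9):
    """conv_weight: array of shape (out_filters, in_channels, kH, kW).
    Rank filters by L1 norm and return how many are needed to retain
    `keep_ratio` of the layer's total importance."""
    importance = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    order = np.argsort(importance)[::-1]          # most important first
    cumulative = np.cumsum(importance[order])
    target = keep_ratio * importance.sum()
    return int(np.searchsorted(cumulative, target) + 1)

# Toy stand-in for a trained network's convolutional weights.
rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(size=(64, 3, 3, 3)),
    "conv2": rng.normal(size=(128, 64, 3, 3)),
    "conv3": rng.normal(size=(256, 128, 3, 3)),
}
slim_architecture = {name: filters_to_keep(w) for name, w in layers.items()}
print(slim_architecture)   # per-layer filter counts for the compressed model
```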

Non-Parametric Online Learning from Human Feedback for Neural Machine Translation

1 code implementation 23 Sep 2021 Dongqi Wang, Haoran Wei, Zhirui Zhang, ShuJian Huang, Jun Xie, Jiajun Chen

We study the problem of online learning with human feedback in human-in-the-loop machine translation, in which human translators revise the machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system.

Machine Translation NMT +1
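A hedged sketch of one non-parametric way to use post-edits without gradient updates, loosely in the spirit of kNN-MT: cache (decoder hidden state, corrected token) pairs from revised translations and interpolate the NMT distribution with a kNN distribution over the cache at decoding time. The class, parameters, and interpolation weight below are illustrative assumptions, not necessarily the paper's exact mechanism.

```python
import numpy as np

class CorrectionCache:
    """Stores hidden-state/token pairs from human-corrected translations."""

    def __init__(self, dim, k=4, temperature=10.0):
        self.keys = np.empty((0, dim), dtype=np.float32)   # decoder hidden states
        self.values = []                                   # corrected token ids
        self.k, self.temperature = k, temperature

    def add(self, hidden_states, token_ids):
        """Online update from one post-edited sentence; no gradient step."""
        self.keys = np.vstack([self.keys, hidden_states.astype(np.float32)])
        self.values.extend(token_ids)

    def knn_distribution(self, query, vocab_size):
        """Distance-weighted distribution over tokens of the nearest cached keys."""
        probs = np.zeros(vocab_size)
        if not self.values:
            return probs
        dists = np.linalg.norm(self.keys - query, axis=1)
        nearest = np.argsort(dists)[: self.k]
        weights = np.exp(-dists[nearest] / self.temperature)
        weights /= weights.sum()
        for idx, w in zip(nearest, weights):
            probs[self.values[idx]] += w
        return probs

def combined_next_token(nmt_probs, cache, query, lam=0.5):
    """Interpolate the model distribution with the correction-cache kNN."""
    knn_probs = cache.knn_distribution(query, len(nmt_probs))
    mixed = (1 - lam) * nmt_probs + lam * knn_probs if knn_probs.sum() > 0 else nmt_probs
    return int(np.argmax(mixed))

# Usage with toy numbers: 8-word vocabulary, 4-dimensional hidden states.
cache = CorrectionCache(dim=4)
cache.add(np.random.rand(3, 4), token_ids=[2, 5, 7])   # one corrected sentence
nmt_probs = np.full(8, 1 / 8)                          # flat model distribution
print(combined_next_token(nmt_probs, cache, query=np.random.rand(4)))
```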
