2 code implementations • 18 Apr 2024 • Yanru Qu, Keyue Qiu, Yuxuan Song, Jingjing Gong, Jiawei Han, Mingyue Zheng, Hao Zhou, Wei-Ying Ma
Generative models for structure-based drug design (SBDD) have shown promising results in recent years.
1 code implementation • 17 Mar 2024 • Yuxuan Song, Jingjing Gong, Yanru Qu, Hao Zhou, Mingyue Zheng, Jingjing Liu, Wei-Ying Ma
Advanced generative models (e.g., diffusion models), derived from simplified continuity assumptions about the data distribution, have shown promising progress but are difficult to apply directly to geometry generation, because molecular geometry is multi-modal and noise-sensitive.
no code implementations • 13 Oct 2023 • Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, Weinan Zhang
Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features.
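The one-hot ID-encoding step described above can be sketched as follows; the field names, records, and vocabulary scheme are illustrative, not the paper's exact pipeline:

```python
# Hypothetical sketch: each (field, value) pair gets a unique ID slot,
# and a record becomes a multi-hot vector with exactly one 1 per field.

def build_vocab(records, fields):
    """Assign each (field, value) pair a unique ID slot."""
    vocab = {}
    for rec in records:
        for f in fields:
            key = (f, rec[f])
            if key not in vocab:
                vocab[key] = len(vocab)
    return vocab

def one_hot(rec, fields, vocab):
    """Encode one record as a concatenated one-hot vector."""
    vec = [0] * len(vocab)
    for f in fields:
        vec[vocab[(f, rec[f])]] = 1
    return vec

records = [
    {"gender": "F", "city": "NYC"},
    {"gender": "M", "city": "SF"},
]
vocab = build_vocab(records, ["gender", "city"])
print(one_hot(records[0], ["gender", "city"], vocab))  # [1, 1, 0, 0]
```

Collaborative signals are then extracted from co-occurrences of these sparse ID features; real CTR systems typically hash or embed the IDs rather than materialize dense vectors.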
1 code implementation • 3 Aug 2023 • Jianghao Lin, Yanru Qu, Wei Guo, Xinyi Dai, Ruiming Tang, Yong Yu, Weinan Zhang
The large capacity of neural models helps digest such massive amounts of data under the supervised learning paradigm, yet they fail to exploit the data to its full potential, since the 1-bit click signal is not sufficient to guide the model toward capable representations of features and instances.
1 code implementation • 16 Aug 2021 • Mingcheng Chen, Zhenghui Wang, Zhiyun Zhao, Weinan Zhang, Xiawei Guo, Jian Shen, Yanru Qu, Jieli Lu, Min Xu, Yu Xu, Tiange Wang, Mian Li, Wei-Wei Tu, Yong Yu, Yufang Bi, Weiqing Wang, Guang Ning
To tackle the above challenges, we employ gradient boosting decision trees (GBDT) to handle data heterogeneity and introduce multi-task learning (MTL) to solve data insufficiency.
1 code implementation • NAACL 2021 • Haoyang Wen, Yanru Qu, Heng Ji, Qiang Ning, Jiawei Han, Avi Sil, Hanghang Tong, Dan Roth
Grounding events into a precise timeline is important for natural language understanding but has received limited attention in recent work.
no code implementations • ICLR 2021 • Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Jiawei Han, Weizhu Chen
To verify the effectiveness of the proposed framework, we apply CoDA to Transformer-based models on a wide range of natural language understanding tasks.
1 code implementation • EMNLP 2020 • Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, Jiawei Han
Additionally, the explicit redundancy measure in MMR helps the neural representation of the summary to better capture redundancy.
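The explicit MMR redundancy measure mentioned above is the classic greedy rule: pick the candidate maximizing λ·relevance − (1−λ)·(max similarity to already-selected items). A minimal sketch, with an illustrative word-overlap similarity and λ (not the paper's learned neural scorers):

```python
def mmr_select(candidates, relevance, sim, k, lam=0.7):
    """Greedy MMR: trade off relevance against redundancy
    with respect to the already-selected set."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

sents = ["the cat sat", "the cat sat down", "dogs bark loudly"]
rel = {s: 1.0 for s in sents}  # uniform relevance, for illustration
print(mmr_select(sents, rel, jaccard, k=2))
```

With uniform relevance, the second pick skips the near-duplicate "the cat sat down" in favor of the non-redundant sentence, which is exactly the behavior the redundancy term enforces.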
2 code implementations • 29 Sep 2020 • Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, Weizhu Chen
Adversarial training has been shown to be effective at endowing the learned representations with stronger generalization ability.
Ranked #8 on Machine Translation on IWSLT2014 German-English
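The adversarial-training idea can be sketched in a toy form. This is a generic FGSM-style perturbation on a 1-D logistic loss, not the paper's actual method; the gradient is hand-derived and epsilon is an illustrative choice:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_example(x, y, w, eps=0.1):
    """FGSM-style perturbation for the logistic loss
    l(x) = -log sigmoid(y * w * x), with label y in {-1, +1}:
    step by eps in the sign of dl/dx to increase the loss."""
    grad_x = -y * w * (1.0 - sigmoid(y * w * x))  # dl/dx
    return x + eps * (1 if grad_x > 0 else -1)

x_adv = adversarial_example(0.0, 1, 1.0)
print(x_adv)  # -0.1: nudged toward the decision boundary
```

Adversarial training then minimizes the loss on such perturbed inputs, which tends to smooth the learned function around the data.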
3 code implementations • 13 Sep 2020 • Yang Yang, Jian Shen, Yanru Qu, Yunfei Liu, Kerong Wang, Yaoming Zhu, Wei-Nan Zhang, Yong Yu
With the rapid development in online education, knowledge tracing (KT) has become a fundamental problem which traces students' knowledge status and predicts their performance on new questions.
Ranked #7 on Knowledge Tracing on EdNet
1 code implementation • 12 Aug 2019 • Yanru Qu, Ting Bai, Wei-Nan Zhang, Jian-Yun Nie, Jian Tang
This paper studies graph-based recommendation, where an interaction graph is constructed from historical records and leveraged to alleviate data sparsity and cold-start problems.
Ranked #2 on Click-Through Rate Prediction on MovieLens 1M
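The interaction graph described above can be sketched as a bipartite adjacency structure; the log format and helper names below are illustrative, not from the paper:

```python
from collections import defaultdict

def build_interaction_graph(records):
    """Bipartite adjacency from (user, item) interaction logs:
    user -> set of items, item -> set of users."""
    user_items = defaultdict(set)
    item_users = defaultdict(set)
    for user, item in records:
        user_items[user].add(item)
        item_users[item].add(user)
    return user_items, item_users

logs = [("u1", "i1"), ("u1", "i2"), ("u2", "i2")]
user_items, item_users = build_interaction_graph(logs)

# Two-hop neighbors (users who co-interacted with u1's items) supply
# collaborative signal even when u1's own history is sparse:
neighbors = {u for i in user_items["u1"] for u in item_users[i]} - {"u1"}
print(neighbors)  # {'u2'}
```

Graph-based recommenders propagate information along such multi-hop paths, which is what helps with sparsity and cold start.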
1 code implementation • ACL 2019 • Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei Li, Wei-Nan Zhang, Yong Yu
However, many difficult questions require multiple pieces of supporting evidence scattered across two or more documents.
Ranked #33 on Question Answering on HotpotQA
no code implementations • 12 Sep 2018 • Liheng Chen, Yanru Qu, Zhenghui Wang, Lin Qiu, Wei-Nan Zhang, Ken Chen, Shaodian Zhang, Yong Yu
TGE-PS uses Pairs Sampling (PS) to improve the random-walk (RW) sampling strategy, reducing training samples by ~99% while preserving competitive performance.
8 code implementations • 1 Jul 2018 • Yanru Qu, Bohui Fang, Wei-Nan Zhang, Ruiming Tang, Minzhe Niu, Huifeng Guo, Yong Yu, Xiuqiang He
User response prediction is a crucial component for personalized information retrieval and filtering scenarios, such as recommender systems and web search.
no code implementations • NAACL 2018 • Zhenghui Wang, Yanru Qu, Li-Heng Chen, Jian Shen, Wei-Nan Zhang, Shaodian Zhang, Yimei Gao, Gen Gu, Ken Chen, Yong Yu
We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining.
Tasks: Medical Named Entity Recognition, Named Entity Recognition +3
1 code implementation • 10 Apr 2018 • Lin Qiu, Hao Zhou, Yanru Qu, Wei-Nan Zhang, Suoheng Li, Shu Rong, Dongyu Ru, Lihua Qian, Kewei Tu, Yong Yu
Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts.
8 code implementations • 5 Jul 2017 • Jian Shen, Yanru Qu, Wei-Nan Zhang, Yong Yu
Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL).
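WDGRL itself trains a neural critic to estimate the Wasserstein distance between source- and target-domain feature distributions. As a minimal illustration of the distance being minimized (not the paper's training procedure), the Wasserstein-1 distance between two equal-size 1-D empirical distributions has a closed form:

```python
def wasserstein_1d(xs, ys):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: mean absolute difference of sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a distribution by c moves it a Wasserstein distance of c:
print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

Driving this distance to zero aligns the two feature distributions, which is what makes the learned representation domain-invariant.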
11 code implementations • 1 Nov 2016 • Yanru Qu, Han Cai, Kan Ren, Wei-Nan Zhang, Yong Yu, Ying Wen, Jun Wang
Predicting user responses, such as clicks and conversions, is of great importance and has found its usage in many Web applications including recommender systems, web search and online advertising.
Ranked #1 on Click-Through Rate Prediction on iPinYou