no code implementations • CCL 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
In addition, we propose an auxiliary term classification task to predict the types of the matched entity names, and jointly train it with the NER model to fuse both contexts and dictionary knowledge into NER.
no code implementations • CCL 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
We compute a title-body matching score based on the representations of title and body enhanced by their interactions.
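As a minimal sketch of such a matching score, assuming the title and body representations have already been produced by some encoder, one can use cosine similarity (the paper's actual interaction-enhanced scoring function is not reproduced here; `matching_score` is a hypothetical name):

```python
import numpy as np

def matching_score(title_vec, body_vec):
    """Hypothetical title-body agreement score: cosine similarity of the
    two representation vectors. Higher means title and body agree more."""
    num = float(title_vec @ body_vec)
    den = np.linalg.norm(title_vec) * np.linalg.norm(body_vec) + 1e-9
    return num / den

title = np.array([0.2, 0.9, 0.1])
body = np.array([0.1, 0.8, 0.3])
score = matching_score(title, body)
assert -1.0 <= score <= 1.0
```

Identical representations give a score near 1, orthogonal ones near 0, which is the behavior a misleading-title detector would exploit.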
no code implementations • 15 Dec 2024 • Zhuo Wu, Qinglin Jia, Chuhan Wu, Zhaocheng Du, Shuai Wang, Zan Wang, Zhenhua Dong
More specifically, for each sample we use an LLM to generate a user profile description based on the user's behavior history or off-the-shelf profile features, which is then used to guide the LLM to play the role of this user and evaluate the relative preference between two recommendation results generated by different models.
no code implementations • 30 Nov 2024 • Tingjia Shen, Hao Wang, Chuhan Wu, Jin Yao Chin, Wei Guo, Yong Liu, Huifeng Guo, Defu Lian, Ruiming Tang, Enhong Chen
In response, we introduce the Performance Law for SR models, which aims to theoretically investigate and model the relationship between model performance and data quality.
1 code implementation • 1 Nov 2024 • Chumeng Jiang, Jiayin Wang, Weizhi Ma, Charles L. A. Clarke, Shuai Wang, Chuhan Wu, Min Zhang
We intend our evaluation framework and observations to benefit future research on the use of LLMs as recommenders.
1 code implementation • 10 Oct 2024 • Yuanqing Yu, Zhefan Wang, Weizhi Ma, Zhicheng Guo, Jingtao Zhan, Shuai Wang, Chuhan Wu, Zhiqiang Guo, Min Zhang
Despite their powerful reasoning and inference capabilities, Large Language Models (LLMs) still need external tools for real-time information retrieval or domain-specific expertise to solve complex tasks, a paradigm referred to as tool learning.
no code implementations • 7 Oct 2024 • Qiyuan Zhang, YuFei Wang, Tiezheng Yu, Yuxin Jiang, Chuhan Wu, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, Fuyuan Lyu, Chen Ma
With significant efforts in recent studies, LLM-as-a-Judge has become a cost-effective alternative to human evaluation for assessing the text generation quality in a wide range of tasks.
no code implementations • 2 Sep 2024 • Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yuxian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong Liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, Enhong Chen
Function calling significantly extends the application boundary of large language models, where high-quality and diverse training data is critical for unlocking this capability.
1 code implementation • 19 Aug 2024 • Chuhan Wu, Ruiming Tang
Based on only a few key hyperparameters of the LLM architecture and the size of the training data, we obtain quite accurate MMLU predictions for various LLMs with diverse sizes and architectures developed by different organizations in different years.
no code implementations • 14 Jul 2024 • Bo Chen, Xinyi Dai, Huifeng Guo, Wei Guo, Weiwen Liu, Yong Liu, Jiarui Qin, Ruiming Tang, Yichao Wang, Chuhan Wu, Yaxiong Wu, Hao Zhang
Recommender systems (RS) are vital for managing information overload and delivering personalized content, responding to users' diverse information needs.
3 code implementations • 9 Jul 2024 • Mingjia Yin, Chuhan Wu, YuFei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong Liu, Ruiming Tang, Defu Lian, Enhong Chen
Inspired by the information compression nature of LLMs, we uncover an "entropy law" that connects LLM performance with data compression ratio and first-epoch training loss, which reflect the information redundancy of a dataset and the mastery of inherent knowledge encoded in this dataset, respectively.
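The data compression ratio referred to here can be illustrated with a generic compressor. This is only a hedged sketch using zlib as a stand-in for whatever measurement the paper actually uses; the corpora and the `compression_ratio` helper are invented for illustration:

```python
import zlib

def compression_ratio(texts):
    """Compressed size over raw size for a text dataset.

    A lower ratio means the data is more redundant (more compressible),
    which the entropy law associates with less informative training data.
    """
    raw = "\n".join(texts).encode("utf-8")
    compressed = zlib.compress(raw, level=9)
    return len(compressed) / len(raw)

# A highly redundant corpus compresses far better than a diverse one.
redundant = ["the cat sat on the mat"] * 100
diverse = [f"sample sentence number {i} with unique content" for i in range(100)]
assert compression_ratio(redundant) < compression_ratio(diverse)
```

Under this proxy, comparing ratios across candidate datasets gives a cheap redundancy signal before any training run.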
no code implementations • 11 May 2024 • Jieming Zhu, Chuhan Wu, Rui Zhang, Zhenhua Dong
This tutorial seeks to provide a thorough exploration of the latest advancements and future trajectories in multimodal pretraining and generation techniques within the realm of recommender systems.
no code implementations • 11 May 2024 • Kexin Jiang, Chuhan Wu, Yaoran Chen
Time series prediction is a fundamental problem in scientific exploration and artificial intelligence (AI) technologies have substantially bolstered its efficiency and accuracy.
no code implementations • 12 Apr 2024 • Peijie Sun, Yifan Wang, Min Zhang, Chuhan Wu, Yan Fang, Hong Zhu, Yuan Fang, Meng Wang
In summary, our contributions underscore the importance of stable model training frameworks and the efficacy of collaborative-enhanced models in predicting user spending behavior in mobile gaming.
1 code implementation • 31 Mar 2024 • Wenlin Zhang, Chuhan Wu, Xiangyang Li, Yuhao Wang, Kuicai Dong, Yichao Wang, Xinyi Dai, Xiangyu Zhao, Huifeng Guo, Ruiming Tang
The lack of training data gives rise to the system cold-start problem in recommendation systems, making them struggle to provide effective recommendations.
no code implementations • 25 Mar 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Chuhan Wu, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
The rise of large language models (LLMs) has opened new opportunities in Recommender Systems (RSs) by enhancing user behavior modeling and content understanding.
no code implementations • 27 Feb 2024 • Yuang Zhao, Chuhan Wu, Qinglin Jia, Hong Zhu, Jia Yan, Libin Zong, Linxuan Zhang, Zhenhua Dong, Muyu Zhang
Accurately predicting the probabilities of user feedback, such as clicks and conversions, is critical for advertisement ranking and bidding.
1 code implementation • 19 Feb 2024 • Yuxin Jiang, YuFei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang
Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention.
1 code implementation • 17 Dec 2023 • Zichuan Fu, Xiangyang Li, Chuhan Wu, Yichao Wang, Kuicai Dong, Xiangyu Zhao, Mengchen Zhao, Huifeng Guo, Ruiming Tang
Click-Through Rate (CTR) prediction is a crucial task in online recommendation platforms as it involves estimating the probability of user engagement with advertisements or items by clicking on them.
no code implementations • DLP@RecSys 2023 • Qi Zhang, Chuhan Wu, Jieming Zhu, Jingjie Li, Qinglin Jia, Ruiming Tang, Rui Zhang, Liangbi Li
We then select them in a domain-aware way to promote informative features for different domains.
no code implementations • 26 Jun 2023 • Chuhan Wu, Jingjie Li, Qinglin Jia, Hong Zhu, Yuan Fang, Ruiming Tang
Accurate customer lifetime value (LTV) prediction can help service providers optimize their marketing policies in customer-centric applications.
1 code implementation • 9 Jun 2023 • Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
1 code implementation • 15 Mar 2023 • Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha
Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications.
1 code implementation • 17 Oct 2022 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Xiaolong Huang, Binxing Jiao, Guangzhong Sun, Xing Xie
In this paper, we propose an effective query-aware webpage snippet extraction method named DeepQSE, aiming to select a few sentences which can best summarize the webpage content in the context of input query.
1 code implementation • 19 Jul 2022 • Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie, Meeyoung Cha
This paper presents FedX, an unsupervised federated learning framework.
1 code implementation • 7 Jun 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Zhongliang Yang, Yongfeng Huang, Xing Xie
In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data.
1 code implementation • Nature Communications 2022 • Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Tao Qi, Yongfeng Huang, Xing Xie
Graph neural network (GNN) is effective in modeling high-order interactions and has been widely used in various personalized applications such as recommendation.
Ranked #1 on Recommendation Systems on MovieLens 100K (RMSE metric)
no code implementations • 21 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this paper, we propose a federated contrastive learning method named FedCL for privacy-preserving recommendation, which can exploit high-quality negative samples for effective model training with privacy well protected.
no code implementations • 10 Apr 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang
Existing methods for news recommendation usually model user interest from historical clicked news without the consideration of candidate news.
no code implementations • 10 Apr 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang
The core idea of FUM is to concatenate the clicked news into a long document and transform user modeling into a document modeling task with both intra-news and inter-news word-level interactions.
1 code implementation • 10 Apr 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Peijie Sun, Le Wu, Xiting Wang, Yongfeng Huang, Xing Xie
To learn provider-fair representations from biased data, we employ provider-biased representations to inherit provider bias from data.
no code implementations • 1 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Different from existing news recommendation methods that are usually based on point- or pair-wise ranking, in LeaDivRec we propose a more effective list-wise news recommendation model.
no code implementations • 1 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Since candidate news selection can be biased, we propose to use a shared candidate-aware user model to match user interest with a real displayed candidate news and a random news, respectively, to learn a candidate-aware user embedding that reflects user interest in candidate news and a candidate-invariant user embedding that indicates intrinsic user interest.
no code implementations • 1 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
In this paper, we propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder, which can reduce the dependency of adversarial fair models on data with labeled sensitive attributes.
no code implementations • 1 Apr 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
In addition, we weight the distillation loss based on the overall prediction correctness of the teacher ensemble to distill high-quality knowledge.
no code implementations • 28 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
In this paper, we propose a quality-aware news recommendation method named QualityRec that can effectively improve the quality of recommended news.
no code implementations • 28 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
They are usually learned on historical user behavior data to infer user interest and predict future user behaviors (e.g., clicks).
no code implementations • ACL 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this paper, we propose a very simple yet effective method named NoisyTune to help better finetune PLMs on downstream tasks by adding some noise to the parameters of PLMs before fine-tuning.
no code implementations • 16 Feb 2022 • Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong Chen, Xing Xie
In this way, all the clients can participate in the model learning in FL, and the final model can be big and powerful enough.
no code implementations • 10 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yanlin Wang, Yuqing Yang, Yongfeng Huang, Xing Xie
To solve the game, we propose a platform negotiation method that simulates the bargaining among platforms and locally optimizes their policies via gradient descent.
no code implementations • 10 Feb 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
However, existing general FL poisoning methods for degrading model performance are either ineffective or insufficiently concealed when poisoning federated recommender systems.
1 code implementation • 2 Dec 2021 • Yang Yu, Fangzhao Wu, Chuhan Wu, Jingwei Yi, Qi Liu
We further propose a two-stage knowledge distillation method to improve the efficiency of the large PLM-based news recommendation model while maintaining its performance.
1 code implementation • EMNLP 2021 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Ruixuan Liu, Guangzhong Sun, Xing Xie
However, the computation and communication costs of directly learning many existing news recommendation models in a federated way are unacceptable for user clients.
no code implementations • Findings (EMNLP) 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
In this paper, we propose a unified news recommendation framework, which can utilize user data locally stored in user clients to train models and serve users in a privacy-preserving way.
no code implementations • 3 Sep 2021 • Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Xing Xie
Two self-supervision tasks are incorporated in UserBERT for user model pre-training on unlabeled user behavior data to empower user modeling.
no code implementations • 30 Aug 2021 • Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie
Instead of directly communicating the large models between clients and server, we propose an adaptive mutual distillation framework to reciprocally learn a student and a teacher model on each client, where only the student model is shared by different clients and updated collaboratively to reduce the communication cost.
12 code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie
In this way, Fastformer can achieve effective context modeling with linear complexity.
Ranked #1 on News Recommendation on MIND (using extra training data)
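The linear-complexity context modeling in Fastformer rests on additive attention pooling: scoring every token with a single learnable vector and softmax-pooling the sequence into one global vector costs O(n·d) rather than the O(n²·d) of pairwise self-attention. The sketch below is a deliberately simplified, numpy-only version; the parameter names `wq`/`wk` are invented, and the real model's linear projections, multiple heads, and residual connections are omitted:

```python
import numpy as np

def additive_attention_pool(x, w):
    """Pool a sequence (n, d) into one vector with additive attention.

    Each token gets a scalar score w . x_i; a softmax over the n scores
    weights the sum -- linear in sequence length n.
    """
    scores = x @ w                       # (n,)
    scores = scores - scores.max()       # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ x                     # (d,)

def fastformer_layer(q, k, v, wq, wk):
    """Simplified Fastformer-style mixing (projections/residuals omitted).

    1. Pool queries into a global query; modulate keys with it.
    2. Pool the modulated keys into a global key; modulate values with it.
    """
    global_q = additive_attention_pool(q, wq)   # (d,)
    p = k * global_q                            # element-wise, (n, d)
    global_k = additive_attention_pool(p, wk)   # (d,)
    return v * global_k                         # (n, d)

rng = np.random.default_rng(0)
n, d = 8, 4
x = rng.normal(size=(n, d))
out = fastformer_layer(x, x, x, rng.normal(size=d), rng.normal(size=d))
assert out.shape == (n, d)
```

Every step is a matrix-vector product or an element-wise operation over the sequence, which is where the linear complexity comes from.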
no code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
News recommendation is often modeled as a sequential recommendation task, which assumes that there are rich short-term dependencies over historical clicked news.
no code implementations • 20 Aug 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Binxing Jiao, Daxin Jiang, Yongfeng Huang, Xing Xie
We then sample token pairs based on their probability scores derived from the sketched attention matrix to generate different sparse attention index matrices for different attention heads.
no code implementations • 16 Jun 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie
Instead of following the conventional taxonomy of news recommendation methods, in this paper we propose a novel perspective to understand personalized news recommendation based on its core problems and the associated techniques and challenges.
no code implementations • 11 Jun 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang
It is important to eliminate the effect of position biases on the recommendation model to accurately target user interests.
no code implementations • ACL 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, Yongfeng Huang
Instead of a single user embedding, in our method each user is represented in a hierarchical interest tree to better capture their diverse and multi-grained interest in news.
no code implementations • ACL 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang
The former is used to capture the personalized user interest in news.
no code implementations • Findings (ACL) 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang
In addition, we propose a multi-teacher hidden loss and a multi-teacher distillation loss to transfer the useful knowledge in both hidden states and soft labels from multiple teacher PLMs to the student model.
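The two losses described here can be sketched as follows. This is an illustrative, unweighted combination under assumed shapes (the paper's actual weighting of teachers and layer-matching scheme is not reproduced; `multi_teacher_distill_loss` is a hypothetical name):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_distill_loss(student_logits, teacher_logits_list,
                               student_hidden, teacher_hidden_list):
    """Average soft-label loss (cross-entropy against each teacher's
    distribution) plus average hidden loss (MSE to each teacher's
    hidden state), both taken over all teachers."""
    p_s = softmax(student_logits)
    soft = sum(-(softmax(t) * np.log(p_s + 1e-9)).sum()
               for t in teacher_logits_list) / len(teacher_logits_list)
    hidden = sum(((student_hidden - h) ** 2).mean()
                 for h in teacher_hidden_list) / len(teacher_hidden_list)
    return soft + hidden

rng = np.random.default_rng(1)
s_logits = rng.normal(size=5)
t_logits = [rng.normal(size=5) for _ in range(3)]
s_hid = rng.normal(size=8)
t_hid = [rng.normal(size=8) for _ in range(3)]
loss = multi_teacher_distill_loss(s_logits, t_logits, s_hid, t_hid)
assert loss > 0
```

Minimizing this pulls the student toward the teachers in both output space and representation space at once.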
no code implementations • ACL 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
It can effectively reduce the complexity and meanwhile capture global document context in the modeling of each sentence.
no code implementations • 27 May 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
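Selecting the sampling ratio then reduces to an argmax over candidate $K$ values. The functional form below is purely hypothetical (a diminishing-returns gain term minus a linear cost term) and stands in for the paper's actual training effectiveness function, only to show the selection procedure:

```python
import numpy as np

def training_effectiveness(k, alpha=0.1):
    """Hypothetical effectiveness: more negatives sharpen the training
    signal (log term) but add cost/noise (linear penalty). The exact
    function in the paper differs; this only illustrates the idea."""
    return np.log(1 + k) - alpha * k

# Pick the K that maximizes the effectiveness function.
candidates = range(1, 51)
k_star = max(candidates, key=training_effectiveness)
assert training_effectiveness(k_star) >= training_effectiveness(1)
```

For this toy function the maximizer is an interior point (K = 9 here), matching the intuition that both too few and too many negatives hurt.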
1 code implementation • 20 Apr 2021 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang
Our method interactively models candidate news and user interest to facilitate their accurate matching.
no code implementations • 15 Apr 2021 • Jingwei Yi, Fangzhao Wu, Chuhan Wu, Qifei Li, Guangzhong Sun, Xing Xie
The core of our method includes a bias representation module, a bias-aware user modeling module, and a bias-aware click prediction module.
1 code implementation • 15 Apr 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Our PLM-empowered news recommendation models have been deployed to the Microsoft News platform, and achieved significant gains in terms of both click and pageview in both English-speaking and global markets.
1 code implementation • 15 Apr 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Most existing news representation methods learn news representations only from news texts while ignoring the visual information in news, such as images.
no code implementations • Findings (ACL) 2022 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Recall and ranking are two critical steps in personalized news recommendation.
no code implementations • 9 Feb 2021 • Chuhan Wu, Fangzhao Wu, Yang Cao, Yongfeng Huang, Xing Xie
To incorporate high-order user-item interactions, we propose a user-item graph expansion method that can find neighboring users with co-interacted items and exchange their embeddings for expanding the local user-item graphs in a privacy-preserving way.
Ranked #2 on Recommendation Systems on MovieLens 100K (RMSE metric)
no code implementations • Findings (EMNLP) 2021 • Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Qi Liu
However, existing language models are pre-trained and distilled on general corpus like Wikipedia, which has some gaps with the news domain and may be suboptimal for news intelligence.
no code implementations • 9 Feb 2021 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Besides, the feed recommendation models trained solely on click behaviors cannot optimize other objectives such as user engagement.
no code implementations • 12 Jan 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Xing Xie
The dwell time of news reading is an important clue for user interest modeling, since short reading dwell time usually indicates low and even negative interest.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Yubo Chen, Chuhan Wu, Tao Qi, Zhigang Yuan, Yongfeng Huang
In this paper, we propose a unified framework to incorporate multi-level contexts for named entity recognition.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
We learn user representations from browsed news representations, and compute click scores based on user and candidate news representations.
no code implementations • NAACL 2021 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang
Since the raw weighted real distances may not be optimal for adjusting self-attention weights, we propose a learnable sigmoid function to map them into re-scaled coefficients that have proper ranges.
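A learnable sigmoid of this kind can be sketched in a few lines. Here `a` and `b` are fixed illustrative constants standing in for parameters that would be trained jointly with the model:

```python
import numpy as np

def rescaled_distance_coeff(dist, a, b):
    """Map raw token distances to bounded coefficients with a sigmoid:
    coeff = 1 / (1 + exp(-(a * dist + b))).
    In the method described, a and b would be learned; they are
    hand-picked here only to illustrate the mapping."""
    return 1.0 / (1.0 + np.exp(-(a * dist + b)))

dists = np.arange(0, 10)
coeffs = rescaled_distance_coeff(dists, a=-0.5, b=2.0)
# Coefficients stay in (0, 1) and, with a < 0, decay with distance.
assert np.all((coeffs > 0) & (coeffs < 1))
assert np.all(np.diff(coeffs) < 0)
```

The bounded output keeps the adjusted self-attention weights in a proper range regardless of how large the raw distances grow.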
no code implementations • 8 Oct 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
We propose a query-value interaction function which can learn query-aware attention values, and combine them with the original values and attention weights to form the final output.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Jianxun Lian, Yongfeng Huang, Xing Xie
Motivated by pre-trained language models, which are pre-trained on large-scale unlabeled corpora to empower many downstream tasks, in this paper we propose to pre-train user models from large-scale unlabeled user behavior data.
1 code implementation • 23 Jul 2020 • Chuhan Wu, Fangzhao Wu, Tao Di, Yongfeng Huang, Xing Xie
On each platform a local user model is used to learn user embeddings from the local user behaviors on that platform.
2 code implementations • ACL 2020 • Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, Ming Zhou
News recommendation is an important technique for personalized news service.
no code implementations • ACL 2020 • Chuhan Wu, Fangzhao Wu, Tao Qi, Xiaohui Cui, Yongfeng Huang
Different from existing pooling methods that use a fixed pooling norm, we propose to learn the norm in an end-to-end manner to automatically find the optimal ones for text representation in different tasks.
no code implementations • 30 Jun 2020 • Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie
In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate unfairness in news recommendation brought by the biases of sensitive user attributes.
no code implementations • 31 Mar 2020 • Suyu Ge, Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang
Existing news recommendation methods achieve personalization by building accurate news representations from news content and user representations from their direct interactions with news (e.g., click), while ignoring the high-order relatedness between users and news.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
Extensive experiments on a real-world dataset show the effectiveness of our method in news recommendation model training with privacy protection.
no code implementations • 20 Mar 2020 • Suyu Ge, Fangzhao Wu, Chuhan Wu, Tao Qi, Yongfeng Huang, Xing Xie
Since the labeled data in different platforms usually has some differences in entity type and annotation criteria, instead of constraining different platforms to share the same model, we decompose the medical NER model in each platform into a shared module and a private module.
no code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Tao Qi, Jianqiang Huang, Yongfeng Huang, Xing Xie
In the user representation module, we propose an attentive multi-view learning framework to learn unified representations of users from their heterogeneous behaviors such as search queries, clicked news and browsed webpages.
4 code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, Xing Xie
The core of our approach is a news encoder and a user encoder.
no code implementations • IJCNLP 2019 • Chuhan Wu, Fangzhao Wu, Tao Qi, Suyu Ge, Yongfeng Huang, Xing Xie
In the review content-view, we propose to use a hierarchical model to first learn sentence representations from words, then learn review representations from sentences, and finally learn user/item representations from reviews.
1 code implementation • 21 Aug 2019 • Yuan Dong, Dawei Li, Chi Zhang, Chuhan Wu, Hong Wang, Ming Xin, Jianlin Cheng, Jian Lin
A significant novelty of the proposed RGAN is that it combines the supervised and regressional convolutional neural network (CNN) with the traditional unsupervised GAN, thus overcoming the common technical barrier in the traditional GANs, which cannot generate data associated with given continuous quantitative labels.
Computational Physics · Materials Science · Applied Physics
no code implementations • WS 2019 • Suyu Ge, Tao Qi, Chuhan Wu, Yongfeng Huang
This paper describes our system for the first and second shared tasks of the fourth Social Media Mining for Health Applications (SMM4H) workshop.
no code implementations • 12 Jul 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie
Since different words and different news articles may have different informativeness for representing news and users, we propose to apply both word- and news-level attention mechanisms to help our model attend to important words and news articles.
5 code implementations • 12 Jul 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, Xing Xie
In the user encoder we learn the representations of users based on their browsed news and apply attention mechanism to select informative news for user representation learning.
Ranked #6 on News Recommendation on MIND
1 code implementation • ACL 2019 • Mingxiao An, Fangzhao Wu, Chuhan Wu, Kun Zhang, Zheng Liu, Xing Xie
In this paper, we propose a neural news recommendation approach which can learn both long- and short-term user representations.
Ranked #7 on News Recommendation on MIND
no code implementations • ACL 2019 • Chuhan Wu, Fangzhao Wu, Mingxiao An, Yongfeng Huang, Xing Xie
The core of our approach is a topic-aware news encoder and a user encoder.
no code implementations • SEMEVAL 2019 • Tao Qi, Suyu Ge, Chuhan Wu, Yubo Chen, Yongfeng Huang
Toponym resolution is an important and challenging task in the natural language processing field, and has wide applications such as emergency response and social media geographical event analysis.
no code implementations • SEMEVAL 2019 • Suyu Ge, Tao Qi, Chuhan Wu, Yongfeng Huang
With the development of the Internet, dialog systems are widely used in online platforms to provide personalized services for their users.
no code implementations • NAACL 2019 • Chuhan Wu, Fangzhao Wu, Junxin Liu, Yongfeng Huang
In this paper, we propose a hierarchical user and item representation model with three-tier attention to learn user and item representations from reviews for recommendation.
5 code implementations • 29 May 2019 • Hongtao Liu, Fangzhao Wu, Wenjun Wang, Xianchen Wang, Pengfei Jiao, Chuhan Wu, Xing Xie
In this paper we propose a neural recommendation approach with personalized attention to learn personalized representations of users and items from reviews.
1 code implementation • 26 Apr 2019 • Fangzhao Wu, Junxin Liu, Chuhan Wu, Yongfeng Huang, Xing Xie
Besides, the training data for CNER in many domains is usually insufficient, and annotating enough training data for CNER is very expensive and time-consuming.
Chinese Named Entity Recognition
no code implementations • 26 Apr 2019 • Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
Luckily, the unlabeled data is usually easy to collect and many high-quality Chinese lexicons are off-the-shelf, both of which can provide useful information for CWS.
1 code implementation • WS 2018 • Chuhan Wu, Fangzhao Wu, Junxin Liu, Sixing Wu, Yongfeng Huang, Xing Xie
This paper describes our system for the first and third shared tasks of the third Social Media Mining for Health Applications (SMM4H) workshop, which aims to detect the tweets mentioning drug names and adverse drug reactions.
no code implementations • 28 Sep 2018 • Yuan Dong, Chuhan Wu, Chi Zhang, Yingda Liu, Jianlin Cheng, Jian Lin
Moreover, given ubiquitous existence of topologies in materials, this work will stimulate widespread interests in applying deep learning algorithms to topological design of materials crossing atomic, nano-, meso-, and macro- scales.
Materials Science · Computational Physics
no code implementations • 11 Jul 2018 • Junxin Liu, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, Xing Xie
The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.
no code implementations • SEMEVAL 2018 • Chuhan Wu, Fangzhao Wu, Sixing Wu, Zhigang Yuan, Junxin Liu, Yongfeng Huang
Thus, in SemEval-2018 Task 2 an interesting and challenging task is proposed, i.e., predicting which emojis are evoked by text-based tweets.
no code implementations • SEMEVAL 2018 • Chuhan Wu, Fangzhao Wu, Sixing Wu, Zhigang Yuan, Yongfeng Huang
Thus, the aim of SemEval-2018 Task 10 is to predict whether a word is a discriminative attribute between two concepts.
1 code implementation • SEMEVAL 2018 • Chuhan Wu, Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, Yongfeng Huang
Detecting irony is an important task to mine fine-grained information from social web messages.
no code implementations • SEMEVAL 2018 • Chuhan Wu, Fangzhao Wu, Junxin Liu, Zhigang Yuan, Sixing Wu, Yongfeng Huang
In order to address this task, we propose a system based on an attention CNN-LSTM model.
no code implementations • WS 2018 • Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, Yongfeng Huang
In addition, we compare the performance of the softmax classifier and conditional random field (CRF) for sequential labeling in this task.
no code implementations • IJCNLP 2017 • Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Sixing Wu, Zhigang Yuan
Since the existing valence-arousal resources for Chinese are mainly word-level and phrase-level resources are lacking, the Dimensional Sentiment Analysis for Chinese Phrases (DSAP) task aims to predict the valence-arousal ratings for Chinese affective words and phrases automatically.