no code implementations • Findings (EMNLP) 2021 • Deyu Zhou, Yanzheng Xiang, Linhai Zhang, Chenchen Ye, Qian-Wen Zhang, Yunbo Cao
However, most existing approaches detect only a single path to obtain the answer without considering other correct paths, which might affect the final performance.
no code implementations • Findings (ACL) 2022 • Tao Wang, Linhai Zhang, Chenchen Ye, Junxi Liu, Deyu Zhou
Medical code prediction from clinical notes aims at automatically associating medical codes with the clinical notes.
no code implementations • EMNLP 2021 • Chenchen Ye, Linhai Zhang, Yulan He, Deyu Zhou, Jie Wu
The other is label heterogeneous graph, which is constructed based on both the labels’ hierarchy and their statistical dependencies.
1 code implementation • 2 Dec 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Liang Pang, Tat-Seng Chua
Temporal complex event forecasting aims to predict the future events given the observed events from history.
1 code implementation • 12 Aug 2023 • Yunshan Ma, Chenchen Ye, Zijian Wu, Xiang Wang, Yixin Cao, Tat-Seng Chua
The task of event forecasting aims to model the relational and temporal patterns based on historical events and to forecast what will happen in the future.
1 code implementation • ACM SIGIR Conference on Research and Development in Information Retrieval 2022 • Chenchen Ye, Lizi Liao, Fuli Feng, Wei Ji, Tat-Seng Chua
Existing approaches either 1) predict structured dialog acts first and then generate the natural response; or 2) map conversation context to natural responses directly in an end-to-end manner.
no code implementations • 29 Sep 2021 • Chenchen Ye, Lizi Liao, Fuli Feng, Wei Ji, Tat-Seng Chua
The core is to construct a latent content space for strategy optimization and disentangle the surface style from it.
1 code implementation • ACL 2020 • Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, Haiyang Xu
Recent years have witnessed a surge of interest in using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations required for model inference in traditional topic models such as Latent Dirichlet Allocation (LDA).
Ranked #1 on Text Clustering on 20 Newsgroups