no code implementations • EMNLP 2021 • Dianbo Sui, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Wei Bi
In this paper, we formulate end-to-end KBP as a direct set generation problem, avoiding the need to consider the order of multiple facts.
no code implementations • EMNLP 2020 • Dianbo Sui, Yubo Chen, Jun Zhao, Yantao Jia, Yuantao Xie, Weijian Sun
In this paper, we propose a privacy-preserving medical relation extraction model based on federated learning, which enables training a central model with no single piece of private local data being shared or exchanged.
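The paper's exact federated protocol is not shown here, but the core idea behind such privacy-preserving setups is federated averaging: each client (e.g., a hospital) trains on its own data and only model parameters are sent to a central server for aggregation. A minimal sketch, with all names hypothetical:

```python
def federated_average(client_params, client_sizes):
    """FedAvg-style aggregation: the server averages client model
    parameters weighted by local dataset size, so raw training data
    never leaves a client."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[j] * (n / total) for p, n in zip(client_params, client_sizes))
        for j in range(n_params)
    ]

# toy example: two hospitals, each holding a 2-parameter local model
hospital_a = [1.0, 2.0]
hospital_b = [3.0, 4.0]
global_params = federated_average([hospital_a, hospital_b], client_sizes=[1, 3])
print(global_params)  # [2.5, 3.5] — weighted toward the larger client
```

In practice the aggregated parameters are broadcast back to clients for further local training rounds; the key privacy property is that only parameters, never records, are exchanged.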
1 code implementation • ACL 2022 • Zhuoran Jin, Tianyi Men, Hongbang Yuan, Zhitao He, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Yubo Chen, Jun Zhao
CogKGE is designed to provide a unified programming framework for KGE tasks and a series of knowledge representations for downstream tasks.
no code implementations • SemEval (NAACL) 2022 • Jia Fu, Zhen Gan, Zhucong Li, Sirui Li, Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao
This paper describes our approach to develop a complex named entity recognition system in SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition, Track 9 - Chinese.
1 code implementation • 14 Oct 2024 • Haochuan Wang, Xiachong Feng, Lei Li, Zhanyue Qin, Dianbo Sui, Lingpeng Kong
The rapid advancement of large language models (LLMs) has accelerated their application in reasoning, with strategic reasoning drawing increasing attention.
no code implementations • 10 Oct 2024 • Zhanyue Qin, Haochuan Wang, Zecheng Wang, Deyuan Liu, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Dianbo Sui
The experimental results also show that, considering both the model's gender bias and its general code generation capability, MG-Editing is most effective when applied at the row and neuron levels of granularity.
1 code implementation • 10 Oct 2024 • Can Wang, Dianbo Sui, Hongliang Sun, Hao Ding, Bolin Zhang, Zhiying Tu
This paper introduces a novel method to estimate the performance of LLM services across different tasks and contexts, which can be "plug-and-play" utilizing only a few unlabeled samples like ICL.
no code implementations • 1 Aug 2024 • Bolin Zhang, Zhiwei Yi, Jiahao Wang, Dianbo Sui, Zhiying Tu, Dianhui Chu
However, concepts such as acupuncture points (acupoints) and meridians involved in TCM often arise during consultations, and these cannot be displayed intuitively in text alone.
1 code implementation • 2 Jul 2024 • Bozhong Tian, Xiaozhuan Liang, Siyuan Cheng, Qingbin Liu, Mengru Wang, Dianbo Sui, Xi Chen, Huajun Chen, Ningyu Zhang
Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material.
no code implementations • 24 Jun 2024 • Deyuan Liu, Zhanyue Qin, Hairu Wang, Zhao Yang, Zecheng Wang, Fangying Rong, Qingbin Liu, Yanchao Hao, Xi Chen, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Bo Li, Dianbo Sui
While large language models (LLMs) excel in many domains, their complexity and scale challenge deployment in resource-limited environments.
no code implementations • 24 Jun 2024 • Zhanyue Qin, Haochuan Wang, Deyuan Liu, Ziyang Song, Cunhang Fan, Zhao Lv, Jinlin Wu, Zhen Lei, Zhiying Tu, Dianhui Chu, Xiaoyan Yu, Dianbo Sui
In order to answer this question, we propose the UNO Arena based on the card game UNO to evaluate the sequential decision-making capability of LLMs and explain in detail why we choose UNO.
1 code implementation • 22 May 2024 • Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, Kevin Zhao
Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing.
no code implementations • 28 Mar 2024 • Deyuan Liu, Zecheng Wang, Bingning Wang, WeiPeng Chen, Chunshan Li, Zhiying Tu, Dianhui Chu, Bo Li, Dianbo Sui
The rapid proliferation of large language models (LLMs) such as GPT-4 and Gemini underscores the intense demand for resources during their training processes, posing significant challenges due to substantial computational and environmental costs.
1 code implementation • 24 Oct 2023 • Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, Enhong Chen
Hallucination casts a long shadow over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon in which the generated text is inconsistent with the image content.
1 code implementation • ACL 2021 • Zhuoran Jin, Yubo Chen, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Jun Zhao
CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge.
2 code implementations • ACL 2021 • Hang Yang, Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Taifeng Wang
We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document.
1 code implementation • ACL 2021 • Dianbo Sui, Zhengkun Tian, Yubo Chen, Kang Liu, Jun Zhao
In this paper, we aim to explore an uncharted territory, which is Chinese multimodal named entity recognition (NER) with both textual and acoustic contents.
no code implementations • COLING 2020 • Jian Liu, Dianbo Sui, Kang Liu, Jun Zhao
Despite many advances, existing approaches for this task did not consider dialogue structure and background knowledge (e.g., relationships between speakers).
Ranked #6 on Question Answering on FriendsQA
1 code implementation • 3 Nov 2020 • Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Xiangrong Zeng, Shengping Liu
Compared with cross-entropy loss, which heavily penalizes small shifts in triple order, the proposed bipartite matching loss is invariant to any permutation of predictions; thus, it provides the proposed networks with a more accurate training signal by ignoring triple order and focusing on relation types and entities.
Ranked #1 on Joint Entity and Relation Extraction on NYT
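The permutation invariance described above comes from matching each predicted fact to a gold fact via a minimum-cost one-to-one assignment before computing the loss. A rough sketch of that idea (brute-force over permutations for clarity; DETR-style models typically use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`; all names here are hypothetical):

```python
import itertools
import math

def bipartite_matching_loss(pred_probs, gold_ids):
    """Permutation-invariant loss: find the assignment of predictions to
    gold facts that minimizes total negative log-probability, so the
    order in which facts are predicted does not matter."""
    n = len(gold_ids)
    best_loss, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        # perm[i] is the gold fact matched to prediction i
        loss = sum(-math.log(pred_probs[i][gold_ids[perm[i]]]) for i in range(n))
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm

# two predictions emitted in the "wrong" order relative to the gold facts
pred_probs = [[0.1, 0.9],   # prediction 0 mostly believes class 1
              [0.8, 0.2]]   # prediction 1 mostly believes class 0
gold_ids = [0, 1]
loss, perm = bipartite_matching_loss(pred_probs, gold_ids)
print(perm)  # (1, 0): prediction 0 is matched to gold fact 1, and vice versa
```

A plain cross-entropy loss computed position-by-position would instead heavily penalize this swapped ordering even though both facts were predicted correctly.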
1 code implementation • Findings (EMNLP) 2021 • Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao
In this paper, we propose a federated denoising framework to suppress label noise in federated settings.
no code implementations • NAACL 2021 • Dianbo Sui, Yubo Chen, Binjie Mao, Delai Qiu, Kang Liu, Jun Zhao
This is mainly due to the fact that human beings can leverage knowledge obtained from relevant tasks.
1 code implementation • IJCNLP 2019 • Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Shengping Liu
The lack of word boundary information has been seen as one of the main obstacles to developing a high-performance Chinese named entity recognition (NER) system.
Ranked #11 on Chinese Named Entity Recognition on Weibo NER