1 code implementation • EMNLP 2021 • Pengfei Cao, Yubo Chen, Yuqing Yang, Kang Liu, Jun Zhao
Moreover, we propose an Uncertain Information Aggregation module to leverage the global structure for integrating the local information.
no code implementations • COLING 2022 • Xiusheng Huang, Hang Yang, Yubo Chen, Jun Zhao, Kang Liu, Weijian Sun, Zuyu Zhao
Document-level relation extraction aims to recognize relations among multiple entity pairs from an entire article.
1 code implementation • ACL 2022 • Zhuoran Jin, Tianyi Men, Hongbang Yuan, Zhitao He, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Yubo Chen, Jun Zhao
CogKGE is designed to provide a unified programming framework for KGE tasks and a series of knowledge representations for downstream tasks.
no code implementations • EMNLP 2021 • Yiming Ju, Yuanzhe Zhang, Zhixing Tian, Kang Liu, Xiaohuan Cao, Wenting Zhao, Jinlong Li, Jun Zhao
Multiple-choice MRC is one of the most studied tasks in MRC due to the convenience of evaluation and the flexibility of answer format.
1 code implementation • EMNLP 2021 • Cheng Yan, Yuanzhe Zhang, Kang Liu, Jun Zhao, Yafei Shi, Shengping Liu
Biomedical Concept Normalization (BCN) is widely used in biomedical text processing as a fundamental module.
1 code implementation • EMNLP 2021 • Qingbin Liu, Pengfei Cao, Cao Liu, Jiansong Chen, Xunliang Cai, Fan Yang, Shizhu He, Kang Liu, Jun Zhao
This paradigm is often impractical in real-world applications since online dialogue systems usually involve continually emerging new data and domains.
no code implementations • COLING 2022 • Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, ShiLiang Pu
To deal with this problem, this work proposes a cross-document semantic enhancement method, which consists of two modules: 1) To prevent distractions from irrelevant regions in the current document, we design a learnable attention mask mechanism, which is used to adaptively filter redundant information in the current document.
no code implementations • COLING 2022 • Bo Zhou, Yubo Chen, Kang Liu, Jun Zhao, Jiexin Xu, XiaoJian Jiang, Qiuxia Li
The other issue is that the model adopts a word-level objective to model events in texts, failing to evaluate the predicted results of the model from the perspective of event sequence.
no code implementations • COLING 2022 • Bo Zhou, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Jiexin Xu, XiaoJian Jiang, Qiuxia Li
Existing approaches model this task as a statistical induction problem, predicting a sequence of events by exploring the similarity between the given goal and known sequences of events.
1 code implementation • COLING 2022 • Yiming Ju, Weikang Wang, Yuanzhe Zhang, Suncong Zheng, Kang Liu, Jun Zhao
To bridge the gap, we propose a new task: conditional question answering with hierarchical multi-span answers, where both the hierarchical relations and the conditions need to be extracted.
no code implementations • COLING 2022 • Ran Song, Shizhu He, Suncong Zheng, Shengxiang Gao, Kang Liu, Zhengtao Yu, Jun Zhao
In fact, the semantics of a relation can be expressed by three kinds of graphs (a factual graph, an ontology graph, and a textual description graph), which can complement each other.
no code implementations • SemEval (NAACL) 2022 • Jia Fu, Zhen Gan, Zhucong Li, Sirui Li, Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao
This paper describes our approach to develop a complex named entity recognition system in SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition, Track 9 - Chinese.
1 code implementation • SemEval (NAACL) 2022 • Fei Xia, Bin Li, Yixuan Weng, Shizhu He, Bin Sun, Shutao Li, Kang Liu, Jun Zhao
For the classification sub-task, we adopt the DeBERTa-v3 pre-trained model for fine-tuning datasets of different languages.
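A minimal sketch of this kind of fine-tuning with the Hugging Face `transformers` Trainer is below; the checkpoint, dataset, and hyperparameters are illustrative assumptions, not the team's exact configuration.

```python
# Minimal sketch: fine-tuning DeBERTa-v3 for a classification sub-task.
# The dataset, label count, and hyperparameters are placeholders.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)

dataset = load_dataset("glue", "sst2")  # placeholder dataset

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```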
no code implementations • CCL 2020 • Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
Specifically, to reduce the errors of predicting entity boundaries, we propose an adaptive multi-pass memory network to exploit lexical knowledge.
no code implementations • EMNLP 2020 • Dianbo Sui, Yubo Chen, Jun Zhao, Yantao Jia, Yuantao Xie, Weijian Sun
In this paper, we propose a privacy-preserving medical relation extraction model based on federated learning, which enables training a central model with no single piece of private local data being shared or exchanged.
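The underlying idea can be illustrated with a generic federated-averaging (FedAvg) round, in which only model weights leave each site; this is an assumed, simplified aggregation scheme, not necessarily the paper's exact protocol, and `client.loader` is a hypothetical handle to each site's private data.

```python
# Generic FedAvg sketch: each site trains locally; only model weights
# (never raw records) are sent to the server for averaging.
from copy import deepcopy
import torch

def federated_round(global_model, clients, local_steps=1):
    client_states = []
    for client in clients:                  # each client holds private data
        local = deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=0.01)
        for _ in range(local_steps):
            for x, y in client.loader:      # data never leaves the client
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(local(x), y)
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # the server averages the received weights into the central model
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```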
no code implementations • EMNLP 2020 • Pengfei Cao, Yubo Chen, Jun Zhao, Taifeng Wang
However, existing incremental learning methods cannot handle semantic ambiguity and training data imbalance problems between old and new classes in the task of incremental event detection.
no code implementations • EMNLP (ACL) 2021 • Baoli Zhang, Zhucong Li, Zhen Gan, Yubo Chen, Jing Wan, Kang Liu, Jun Zhao, Shengping Liu, Yafei Shi
2) Inconsistency Detector: CroAno employs a detector to locate corpus-level label inconsistency and provides users an interface to correct inconsistent entities in batches.
no code implementations • EMNLP 2021 • Dianbo Sui, Chenhao Wang, Yubo Chen, Kang Liu, Jun Zhao, Wei Bi
In this paper, we formulate end-to-end KBP as a direct set generation problem, avoiding considering the order of multiple facts.
no code implementations • NAACL (SMM4H) 2021 • Tong Zhou, Zhucong Li, Zhen Gan, Baoli Zhang, Yubo Chen, Kun Niu, Jing Wan, Kang Liu, Jun Zhao, Yafei Shi, Weifeng Chong, Shengping Liu
This is the system description of the CASIA_Unisound team for Task 1, Task 7b, and Task 8 of the sixth Social Media Mining for Health Applications (SMM4H) shared task in 2021.
no code implementations • EMNLP 2020 • Zhixing Tian, Yuanzhe Zhang, Kang Liu, Jun Zhao, Yantao Jia, Zhicheng Sheng
Inspired by this behavior of humans, we propose a method to let the machine imagine a scene during reading narrative for better comprehension.
no code implementations • TU (COLING) 2022 • Minjun Zhu, Yixuan Weng, Bin Li, Shizhu He, Kang Liu, Jun Zhao
In this work, we propose a knowledge transfer method with visual prompts (VPTG) that fuses multi-modal data; it is a flexible module that enables a text-only seq2seq model to handle visual dialogue tasks.
no code implementations • SMM4H (COLING) 2022 • Jia Fu, Sirui Li, Hui Ming Yuan, Zhucong Li, Zhen Gan, Yubo Chen, Kang Liu, Jun Zhao, Shengping Liu
This paper presents a description of our system in SMM4H-2022, where we participated in task 1a, task 4, and task 6 to task 10.
no code implementations • ACL 2022 • Runxin Sun, Shizhu He, Chong Zhu, Yaohan He, Jinlong Li, Jun Zhao, Kang Liu
Text-to-SQL aims to parse natural language questions into SQL queries, which is valuable in providing an easy interface to access large databases.
no code implementations • Findings (ACL) 2022 • Guirong Bai, Shizhu He, Kang Liu, Jun Zhao
We first formulate incremental learning for medical intent detection.
1 code implementation • 2 Mar 2025 • Yupu Hao, Pengfei Cao, Zhuoran Jin, Huanxuan Liao, Yubo Chen, Kang Liu, Jun Zhao
Personalized tool utilization is essential for aligning large language models (LLMs) with user preferences in interaction scenarios involving various tools.
1 code implementation • 20 Feb 2025 • Jianwen Luo, Yiming Huang, Jinxiang Meng, Fangyu Lei, Shizhu He, Xiao Liu, Shanshan Jiang, Bin Dong, Jun Zhao, Kang Liu
Large Language Models (LLMs) have shown great promise in tool-making, yet existing frameworks often struggle to efficiently construct reliable toolsets and are limited to single-task settings.
no code implementations • 18 Feb 2025 • YuHeng Chen, Pengfei Cao, Kang Liu, Jun Zhao
Previous studies primarily utilize MLP neurons as units of analysis for understanding the mechanisms of factual knowledge in Language Models (LMs); however, neurons suffer from polysemanticity, leading to limited knowledge expression and poor interpretability.
no code implementations • 17 Feb 2025 • Kun Luo, Zheng Liu, Peitian Zhang, Hongjin Qian, Jun Zhao, Kang Liu
The efficient processing of long context poses a serious challenge for large language models (LLMs).
1 code implementation • 17 Feb 2025 • Huanxuan Liao, Shizhu He, Yupu Hao, Jun Zhao, Kang Liu
For new tasks, DATA dynamically adjusts the weights of adapters of different ranks based on their relevance and distinction from previous tasks, allowing the model to acquire new task-specific skills while effectively retaining previously learned knowledge.
1 code implementation • 10 Feb 2025 • Yuqi Lin, Hengjia Li, Wenqi Shao, Zheng Yang, Jun Zhao, Xiaofei He, Ping Luo, Kaipeng Zhang
In contrast to prior refinement techniques that are tailored to specific models or tasks in a closed-world manner, we propose SAMRefiner, a universal and efficient approach that adapts SAM to the mask refinement task.
no code implementations • 4 Feb 2025 • Wangtao Sun, Haotian Xu, Huanxuan Liao, Xuanqing Yu, Zhongtao Jiang, Shizhu He, Jun Zhao, Kang Liu
Through experiments, we demonstrate that VaiBot performs on par with existing baseline methods in terms of deductive capabilities while significantly surpassing them in inductive capabilities.
1 code implementation • 18 Dec 2024 • Zhuoran Jin, Hongbang Yuan, Tianyi Men, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
Despite the significant progress made by existing retrieval augmented language models (RALMs) in providing trustworthy responses and grounding in reliable sources, they often overlook effective alignment with human preferences.
no code implementations • 11 Dec 2024 • Yang Li, Xinyu Zhou, Yitong Wang, Liangxin Qian, Jun Zhao
Transformer models have revolutionized AI, enabling applications like content generation and sentiment analysis.
no code implementations • 1 Dec 2024 • Ziyang Huang, Jun Zhao, Kang Liu
A language agent can be endowed with different mechanisms for autonomous task accomplishment.
no code implementations • 26 Nov 2024 • Pengfei Cao, YuHeng Chen, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
Some researchers attempt to demystify the factual knowledge in LLMs from the perspective of knowledge neurons, and subsequently discover language-agnostic knowledge neurons that store factual knowledge in a form that transcends language barriers.
no code implementations • 25 Nov 2024 • Liangxin Qian, Jun Zhao
It uniquely alternates the optimization of key variables like user association, work offloading ratios, task-specific computing resource distribution, bandwidth allocation, user power usage ratios, and server computing resource allocation ratios.
no code implementations • 16 Nov 2024 • Yao Xu, Shizhu He, Zeng Xiangrong, Jiabei Chen, Guang Liu, Bingning Wang, Jun Zhao, Kang Liu
Specifically, we represent various types of structured data in a unified hypergraph format, use self-supervised learning to pretrain a hypergraph encoder, and use a G-Former to compress the encoded hypergraph representations with cross-attention.
1 code implementation • 14 Nov 2024 • Xuyang Cao, Guoxin Wang, Sheng Shi, Jun Zhao, Yang Yao, Jintao Fei, Minyu Gao
Specifically, in the first stage, we introduce a decoupled facial representation framework that separates dynamic facial expressions from static 3D facial representations.
1 code implementation • 14 Nov 2024 • Chenlong Zhang, Tong Zhou, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
The rapid proliferation of online news has posed significant challenges in tracking the continuous development of news topics.
1 code implementation • 31 Oct 2024 • Xiusheng Huang, Yequan Wang, Jun Zhao, Kang Liu
Knowledge editing technology is crucial for maintaining the accuracy and timeliness of large language models (LLMs).
no code implementations • 23 Oct 2024 • Feiyan Feng, Tianyu Liu, Hong Wang, Jun Zhao, Wei Li, Yanshen Sun
Therefore, this paper proposes a novel PGDiffSeg (Prior-Guided Diffusion Denoising Model with Parameter-Shared Attention) that applies diffusion denoising methods to breast cancer medical image segmentation, accurately recovering the affected areas from Gaussian noise.
no code implementations • 21 Oct 2024 • Tianyi Men, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
With the development of large language models, they are widely used as agents in various fields.
no code implementations • 12 Oct 2024 • Jiachun Li, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
Inductive reasoning is an essential capability for large language models (LLMs) to achieve higher intelligence, which requires the model to generalize rules from observed facts and then apply them to unseen examples.
1 code implementation • 12 Oct 2024 • Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Kang Liu, XiaoJian Jiang, Jiexin Xu, Jun Zhao
Within it, we design a reward model to filter out noisy knowledge and adopt a marginal consistent reasoning module to reduce invalid reasoning.
no code implementations • 9 Oct 2024 • Yiming Huang, Jianwen Luo, Yan Yu, Yitong Zhang, Fangyu Lei, Yifan Wei, Shizhu He, Lifu Huang, Xiao Liu, Jun Zhao, Kang Liu
We introduce DA-Code, a code generation benchmark specifically designed to assess LLMs on agent-based data science tasks.
no code implementations • 2 Oct 2024 • Xiang Hu, Zhihao Teng, Jun Zhao, Wei Wu, Kewei Tu
In this paper, we propose a novel attention mechanism based on dynamic context, Grouped Cross Attention (GCA), which can generalize to 1000 times the pre-training context length while maintaining the ability to access distant information with a constant attention window size.
no code implementations • 30 Sep 2024 • Chang Liu, Jun Zhao
As mobile devices increasingly become focal points for advanced applications, edge computing presents a viable solution to their inherent computational limitations, particularly in deploying large language models (LLMs).
no code implementations • 26 Sep 2024 • Chao Li, Chen Jiang, Xiaolong Liu, Jun Zhao, Guoxin Wang
In this paper, we introduce a novel approach for multilingual visual text creation, named JoyType, designed to maintain the font style of text during the image generation process.
no code implementations • 20 Sep 2024 • Sheng Shi, Xuyang Cao, Jun Zhao, Guoxin Wang
In audio-driven video generation, creating Mandarin videos presents significant challenges.
1 code implementation • 20 Sep 2024 • Yupu Hao, Pengfei Cao, Zhuoran Jin, Huanxuan Liao, Yubo Chen, Kang Liu, Jun Zhao
However, previous works predominantly focus on improving the model's tool-utilization accuracy and its ability to generalize to new, unseen tools, excessively forcing LLMs to adjust specific tool-invoking patterns without considering the harm to the model's general performance.
1 code implementation • 20 Sep 2024 • Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Jun Zhao
By decoupling general and specialized capabilities, the proposed NesyCD can achieve superior performance cost-effectively, utilizing smaller models and blending parameterized neural networks with a symbolic KB.
1 code implementation • 20 Sep 2024 • Huanxuan Liao, Shizhu He, Yupu Hao, Xiang Li, Yuanzhe Zhang, Jun Zhao, Kang Liu
By efficiently internalizing knowledge, $\textit{SKIntern}$ reduces computational overhead and speeds up the reasoning process by focusing solely on the question during inference.
1 code implementation • 14 Sep 2024 • Lei Yu, Jintao Fei, Xinyi Liu, Yang Yao, Jun Zhao, Guoxin Wang, Xin Li
This non-contact, real-time monitoring method holds great potential for home settings.
no code implementations • 1 Sep 2024 • Yifan Wei, Xiaoyan Yu, Yixuan Weng, Huanhuan Ma, Yuanzhe Zhang, Jun Zhao, Kang Liu
Contrary to prior research suggesting that knowledge is stored in MLP weights, our experiments demonstrate that relational knowledge is also significantly encoded in attention modules.
no code implementations • 22 Aug 2024 • Kun Luo, Minghao Qin, Zheng Liu, Shitao Xiao, Jun Zhao, Kang Liu
In this work, we conduct a comprehensive empirical study on a wide range of retrieval tasks, including in-domain accuracy, data efficiency, zero-shot generalization, lengthy retrieval, instruction-based retrieval, and multi-task learning.
no code implementations • 20 Aug 2024 • Hongbang Yuan, Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
In response to this vulnerability, we propose Latent Adversarial Unlearning (LAU), a universal framework that effectively enhances the robustness of the unlearning process.
1 code implementation • 14 Aug 2024 • Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
Moreover, this is the first study to investigate knowledge editing from the perspective of superposition and provides a comprehensive observation of superposition across numerous real-world language models.
1 code implementation • 31 Jul 2024 • Ming Zhang, Caishuang Huang, Yilong Wu, Shichun Liu, Huiyuan Zheng, Yurui Dong, Yujiong Shen, Shihan Dou, Jun Zhao, Junjie Ye, Qi Zhang, Tao Gui, Xuanjing Huang
Task-oriented dialogue (TOD) systems aim to efficiently handle task-oriented conversations, including information collection.
no code implementations • 11 Jul 2024 • Wangtao Sun, Chenxiang Zhang, Xueyou Zhang, Xuanqing Yu, Ziyang Huang, Pei Chen, Haotian Xu, Shizhu He, Jun Zhao, Kang Liu
The experimental results show that through IRFT, LLMs can learn abstract rule-following abilities from purely synthetic data and then generalize to RuleBench.
1 code implementation • 26 Jun 2024 • Ran Song, Shizhu He, Shengxiang Gao, Li Cai, Kang Liu, Zhengtao Yu, Jun Zhao
Multilingual Knowledge Graph Completion (mKGC) aims at solving queries like (h, r, ?)
no code implementations • 25 Jun 2024 • Fei Xia, Yixuan Weng, Shizhu He, Kang Liu, Jun Zhao
Taxonomies, which organize domain concepts into hierarchical structures, are crucial for building knowledge systems and downstream applications.
1 code implementation • 25 Jun 2024 • Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao
In this work, we introduce a collaborative augmentation framework, CogMG, leveraging knowledge graphs to address the limitations of LLMs in QA scenarios, explicitly targeting the problems of incomplete knowledge coverage and knowledge update misalignment.
no code implementations • 23 Jun 2024 • Tianyi Men, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Jun Zhao
In this work, we focus on exploring the look-ahead planning mechanism in large language models from the perspectives of information flow and internal representations.
no code implementations • 19 Jun 2024 • Xiaowei Yuan, Zhao Yang, Yequan Wang, Jun Zhao, Kang Liu
In the Retrieval-Augmented Generation (RAG) system, advanced Large Language Models (LLMs) have emerged as effective Query Likelihood Models (QLMs) in an unsupervised way, which re-rank documents based on the probability of generating the query given the content of a document.
no code implementations • 18 Jun 2024 • Hongbang Yuan, Yubo Chen, Pengfei Cao, Zhuoran Jin, Kang Liu, Jun Zhao
Extensive experiments demonstrate that APEFT improves model performance by an average of $\boldsymbol{3.45\%}$ on both ID and OOD datasets, which is highly effective.
1 code implementation • 18 Jun 2024 • Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Yanchao Hao, Shengping Liu, Kang Liu, Jun Zhao
Within this context, we introduce Task Adapters Generation from Instructions (TAGI), which automatically constructs the task-specific model in a parameter generation manner based on the given task instructions without retraining for unseen tasks.
no code implementations • 17 Jun 2024 • Jiakuan Xie, Pengfei Cao, YuHeng Chen, Yubo Chen, Kang Liu, Jun Zhao
In this paper, we focus on multilingual knowledge editing (MKE), which requires propagating updates across multiple languages.
1 code implementation • 16 Jun 2024 • Zhuoran Jin, Pengfei Cao, Chenhao Wang, Zhitao He, Hongbang Yuan, Jiachun Li, Yubo Chen, Kang Liu, Jun Zhao
Large language models (LLMs) inevitably memorize sensitive, copyrighted, and harmful knowledge from the training corpus; therefore, it is crucial to erase this knowledge from the models.
no code implementations • 29 May 2024 • Jiachun Li, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
Large language models (LLMs) suffer from serious unfaithful chain-of-thought (CoT) issues.
no code implementations • 27 May 2024 • Jianting Yang, Srećko Ðurašinović, Jean-Bernard Lasserre, Victor Magron, Jun Zhao
This paper explores methods for verifying the properties of Binary Neural Networks (BNNs), focusing on robustness against adversarial attacks.
no code implementations • 23 May 2024 • YuHeng Chen, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
This theory is based on the knowledge localization (KL) assumption, which suggests that a fact can be localized to a few knowledge storage units, namely knowledge neurons.
1 code implementation • 5 May 2024 • Jun Zhao, Jingqi Tong, Yurong Mou, Ming Zhang, Qi Zhang, Xuanjing Huang
In this work, we investigate the compositionality of large language models (LLMs) in mathematical reasoning.
1 code implementation • 23 Apr 2024 • Yao Xu, Shizhu He, Jiabei Chen, ZiHao Wang, Yangqiu Song, Hanghang Tong, Guang Liu, Kang Liu, Jun Zhao
To simulate these real-world scenarios and evaluate the ability of LLMs to integrate internal and external knowledge, we propose leveraging LLMs for QA under Incomplete Knowledge Graph (IKGQA), where the provided KG lacks some of the factual triples for each question, and construct corresponding datasets.
1 code implementation • 1 Apr 2024 • wei he, Shichun Liu, Jun Zhao, Yiwen Ding, Yi Lu, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang
The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID.
no code implementations • 30 Mar 2024 • Renyang Liu, Kwok-Yan Lam, Wei Zhou, Sixing Wu, Jun Zhao, Dongting Hu, Mingming Gong
Many attack techniques have been proposed to explore the vulnerability of DNNs and further help to improve their robustness.
1 code implementation • 26 Mar 2024 • Chenlong Zhang, Pengfei Cao, Yubo Chen, Kang Liu, Zhiqiang Zhang, Mengshu Sun, Jun Zhao
The CFED task is challenging as it involves memorizing previous event types and learning new event types with few-shot samples.
no code implementations • 25 Mar 2024 • Ziheng Deng, Hua Chen, Yongzheng Zhou, Haibo Hu, Zhiyong Xu, Jiayuan Sun, Tianling Lyu, Yan Xi, Yang Chen, Jun Zhao
We find that streak artifacts exhibit a unique rotational motion along with the patient's respiration, distinguishable from diaphragm-driven respiratory motion in the spatiotemporal domain.
1 code implementation • 22 Mar 2024 • Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Shengping Liu, Jun Zhao
Retrieval-Augmented-Generation and Generation-Augmented-Generation have been proposed to enhance the knowledge required for question answering with Large Language Models (LLMs) by leveraging richer context.
1 code implementation • 12 Mar 2024 • Rui Zhao, Jun Zhao
We believe this work demonstrates the practicality of a perennial DToU language and the potential of a paradigm shift in how users interact with data and applications in a decentralized Web, offering both improved privacy and usability.
no code implementations • 9 Mar 2024 • Wangtao Sun, Haotian Xu, Xuanqing Yu, Pei Chen, Shizhu He, Jun Zhao, Kang Liu
Although Large Language Models (LLMs) are showing impressive performance on a wide range of Natural Language Processing tasks, researchers have found that they still have limited ability to conduct induction.
no code implementations • 8 Mar 2024 • Wangtao Sun, Shizhu He, Jun Zhao, Kang Liu
With good explanatory power and controllability, rule-based methods play an important role in many tasks such as knowledge reasoning and decision support.
no code implementations • 5 Mar 2024 • Zhitao He, Pengfei Cao, Zhuoran Jin, Yubo Chen, Kang Liu, Zhiqiang Zhang, Mengshu Sun, Jun Zhao
Event Causality Identification (ECI) refers to the detection of causal relations between events in texts.
1 code implementation • 5 Mar 2024 • Zhitao He, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Jiexin Xu, Huaijun Li, XiaoJian Jiang, Kang Liu, Jun Zhao
With the development of deep learning, natural language processing technology has effectively improved the efficiency of various aspects of the traditional judicial industry.
no code implementations • 29 Feb 2024 • Hongbang Yuan, Pengfei Cao, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao
Large Language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations.
1 code implementation • 28 Feb 2024 • Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao
Large language models exhibit high-level commonsense reasoning abilities, especially with enhancement methods like Chain-of-Thought (CoT).
no code implementations • 28 Feb 2024 • Zhuoran Jin, Pengfei Cao, Hongbang Yuan, Yubo Chen, Jiexin Xu, Huaijun Li, XiaoJian Jiang, Kang Liu, Jun Zhao
Moreover, we reveal that the pivotal point at which knowledge conflicts emerge in LMs is the integration of inconsistent information flows by memory heads and context heads.
no code implementations • 22 Feb 2024 • Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, XiaoJian Jiang, Jiexin Xu, Qiuxia Li, Jun Zhao
Then, we investigate the behavior and preference of RALMs from the following two perspectives: (1) Conflicts between internal memory and external sources: We find that stronger RALMs emerge with the Dunning-Kruger effect, persistently favoring their faulty internal memory even when correct evidence is provided.
1 code implementation • 22 Feb 2024 • Zhihao Zhang, Jun Zhao, Qi Zhang, Tao Gui, Xuanjing Huang
Furthermore, this core region exhibits significant dimensional dependence, with perturbations to even a single parameter on specific dimensions leading to a loss of linguistic competence.
no code implementations • 21 Feb 2024 • YuHeng Chen, Pengfei Cao, Yubo Chen, Yining Wang, Shengping Liu, Kang Liu, Jun Zhao
Large language models (LLMs) store extensive factual knowledge, but the underlying mechanisms remain unclear.
1 code implementation • 20 Feb 2024 • Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu
Fine-tuning is often necessary to enhance the adaptability of Large Language Models (LLMs) to downstream tasks.
1 code implementation • 19 Feb 2024 • Xiaowei Yuan, Zhao Yang, Yequan Wang, Shengping Liu, Jun Zhao, Kang Liu
Large language models internalize enormous parametric knowledge during pre-training.
1 code implementation • 18 Feb 2024 • Jun Zhao, Can Zu, Hao Xu, Yi Lu, wei he, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang
Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks.
no code implementations • 18 Feb 2024 • Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang
To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations.
1 code implementation • 16 Feb 2024 • Yi Lu, Xin Zhou, wei he, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang
Instead of allowing each head to attend to the full sentence, which struggles with generalizing to longer sequences due to out-of-distribution (OOD) issues, we allow each head to process in-distribution length by selecting and attending to important context chunks.
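A toy sketch of this chunk-selection idea is given below; the mean-pooled chunk scores and top-k selection are illustrative assumptions for exposition, not the paper's actual mechanism.

```python
import torch
import torch.nn.functional as F

# Toy sketch: instead of attending over the full long sequence, score
# fixed-size chunks against the query and attend only within the top-k
# chunks, keeping the attended length in-distribution.
def chunked_attention(q, k, v, chunk_size=64, top_k=4):
    # q: (d,), k/v: (seq_len, d)
    n = k.shape[0] // chunk_size
    k_chunks = k[: n * chunk_size].view(n, chunk_size, -1)
    v_chunks = v[: n * chunk_size].view(n, chunk_size, -1)
    # one summary vector per chunk (mean pooling) to rank chunks
    scores = k_chunks.mean(dim=1) @ q
    idx = scores.topk(min(top_k, n)).indices
    k_sel = k_chunks[idx].reshape(-1, k.shape[-1])
    v_sel = v_chunks[idx].reshape(-1, v.shape[-1])
    attn = F.softmax(k_sel @ q / k.shape[-1] ** 0.5, dim=0)
    return attn @ v_sel
```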
1 code implementation • 16 Feb 2024 • Chenhui Hu, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
Knowledge editing aims to rectify inaccuracies in large language models (LLMs) without costly retraining for outdated or erroneous knowledge.
1 code implementation • 15 Feb 2024 • Yixuan Weng, Shizhu He, Kang Liu, Shengping Liu, Jun Zhao
This heightens the need to control model behaviors.
no code implementations • 15 Feb 2024 • Jiaxiang Liu, Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao
In summary, our results pave the way for enhancing LLMs by incorporating Pseudo- and Multisource-KGs, particularly in the context of open-ended questions.
no code implementations • 14 Feb 2024 • Zhao Li, Xin Wang, Jun Zhao, Wenbin Guo, JianXin Li
It is desirable and challenging for knowledge hypergraph embedding to reach a trade-off between model effectiveness and efficiency.
no code implementations • 4 Feb 2024 • Tinghao Zhang, Kwok-Yan Lam, Jun Zhao
For scalability, practical HFL schemes select a subset of IoT devices to participate in the training, hence the notion of device scheduling.
no code implementations • 3 Feb 2024 • Jianing He, Qi Zhang, Weiping Ding, Duoqian Miao, Jun Zhao, Liang Hu, Longbing Cao
DE$^3$-BERT implements a hybrid exiting strategy that supplements classic entropy-based local information with distance-based global information to enhance the estimation of prediction correctness for more reliable early exiting decisions.
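A rough sketch of such a hybrid exiting rule is below, combining an entropy signal with a distance-to-prototype signal; the scoring function, prototypes, and thresholds are assumptions for illustration, not the DE$^3$-BERT implementation.

```python
import torch
import torch.nn.functional as F

# Hybrid early-exit sketch: low entropy (local confidence) plus a clear
# separation to the nearest class prototype (global signal) triggers exit.
def should_exit(logits, hidden, class_prototypes, alpha=0.5, threshold=0.3):
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    # distance of the hidden state to each class prototype
    dists = torch.cdist(hidden.unsqueeze(0), class_prototypes).squeeze(0)
    margin = dists.topk(2, largest=False).values
    global_score = margin[1] - margin[0]   # separation of the two nearest
    score = alpha * entropy - (1 - alpha) * global_score
    return score < threshold
```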
no code implementations • 25 Jan 2024 • Mohamed R. Shoaib, Heba M. Emara, Jun Zhao, Walid El-Shafai, Naglaa F. Soliman, Ahmed S. Mubarak, Osama A. Omer, Fathi E. Abd El-Samie, Hamada Esmaiel
The InceptionResNetv2 model, incorporating transfer learning, registered an impressive 97.5% accuracy in both the training and testing phases.
no code implementations • 12 Jan 2024 • Mohamed R. Shoaib, Heba M. Emara, Jun Zhao
This survey paper explores the transformative influence of frontier AI, foundation models, and Large Language Models (LLMs) in the realm of Intelligent Transportation Systems (ITS), emphasizing their integral role in advancing transportation intelligence, optimizing traffic management, and contributing to the realization of smart cities.
1 code implementation • 11 Jan 2024 • Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data.
no code implementations • 2 Jan 2024 • Jun Zhao, Zhihao Zhang, Luhui Gao, Qi Zhang, Tao Gui, Xuanjing Huang
In recent times, substantial advancements have been witnessed in large language models (LLMs), exemplified by ChatGPT, showcasing remarkable proficiency across a range of complex tasks.
1 code implementation • 15 Dec 2023 • Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.
no code implementations • 12 Dec 2023 • Renyang Liu, Wei Zhou, Sixin Wu, Jun Zhao, Kwok-Yan Lam
Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks, which brings a huge security risk to the further application of DNNs, especially for the AI models developed in the real world.
no code implementations • 11 Dec 2023 • Wenhan Yu, Terence Jie Chua, Jun Zhao
In spite of the rapid advancements in current technologies, the computation required for a smooth, seamless, and immersive socialization experience in the Metaverse is overbearing, and accumulated user experience is essential to consider.
no code implementations • 11 Dec 2023 • Yitong Wang, Chang Liu, Jun Zhao
In pursuit of enhancing the accessibility of AIGC services, the deployment of AIGC models (e.g., diffusion models) to edge servers and local devices has become a prevailing trend.
no code implementations • 7 Dec 2023 • Yang Li, Xinyu Zhou, Jun Zhao
The secrecy rate is the communication rate at which no information is disclosed to an eavesdropper.
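For reference, the secrecy rate of a Gaussian wiretap channel is commonly computed as the positive part of the capacity gap between the legitimate link and the eavesdropper's link; a quick illustrative computation (the SNR values are made up):

```python
import math

# Secrecy rate: positive part of the capacity gap between the
# legitimate channel and the eavesdropper's channel (bits/s/Hz).
def secrecy_rate(snr_legitimate, snr_eavesdropper):
    c_main = math.log2(1 + snr_legitimate)    # legitimate capacity
    c_eave = math.log2(1 + snr_eavesdropper)  # eavesdropper capacity
    return max(c_main - c_eave, 0.0)

print(secrecy_rate(10.0, 2.0))  # ~1.87 bits/s/Hz
```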
1 code implementation • 21 Nov 2023 • Tong Zhou, Yubo Chen, Pengfei Cao, Kang Liu, Jun Zhao, Shengping Liu
To this end, we present a pretraining corpus curation and assessment platform called Oasis -- a one-stop system for data quality improvement and quantification with user-friendly interactive interfaces.
1 code implementation • 13 Nov 2023 • Wangtao Sun, Xuanqing Yu, Shizhu He, Jun Zhao, Kang Liu
Black-box Large Language Models (LLMs) have shown great power in solving various tasks and are considered general problem solvers.
no code implementations • 31 Oct 2023 • Mohamed R. Shoaib, Heba M. Emara, Jun Zhao
Food security, a global concern, necessitates precise and diverse data-driven solutions to address its multifaceted challenges.
no code implementations • 26 Oct 2023 • Wenhan Yu, Terence Jie Chua, Jun Zhao
The efficient deployment and fine-tuning of foundation models are pivotal in contemporary artificial intelligence.
no code implementations • 26 Oct 2023 • Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam
FedPEAT uses adapters, emulators, and PEFT for federated model tuning, enhancing model privacy and memory efficiency.
no code implementations • 23 Oct 2023 • Jun Zhao, Zhihao Zhang, Yide Ma, Qi Zhang, Tao Gui, Luhui Gao, Xuanjing Huang
We have discovered a core region in LLMs that corresponds to linguistic competence, accounting for approximately 1% of the total model parameters.
2 code implementations • 23 Oct 2023 • Fangyu Lei, Qian Liu, Yiming Huang, Shizhu He, Jun Zhao, Kang Liu
The rapid development of Large Language Models (LLMs) has led to great strides in model capabilities like long-context understanding and reasoning.
no code implementations • 23 Oct 2023 • Fangyu Lei, Tongxu Luo, Pengqi Yang, Weihao Liu, Hanwen Liu, Jiahe Lei, Yiming Huang, Yifan Wei, Shizhu He, Jun Zhao, Kang Liu
Table-based question answering (TableQA) is an important task in natural language processing, which requires comprehending tables and employing various reasoning methods to answer questions.
1 code implementation • 17 Oct 2023 • Yao Xu, Shizhu He, Cunguang Wang, Li Cai, Kang Liu, Jun Zhao
However, these methods train KG embeddings and neural set operators concurrently on both simple (one-hop) and complex (multi-hop and logical) queries, which causes performance degradation on simple queries and low training efficiency.
1 code implementation • 16 Oct 2023 • Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jun Zhao, Kang Liu
In this paper, we theoretically and empirically identify, for the first time, that such a paradox is mainly due to the label shift of the in-context model relative to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$.
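A back-of-the-envelope sketch of correcting such a label shift by reweighting the in-context model's predictions with a target label marginal is shown below; this generic Bayes-reweighting step is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

# Generic label-shift correction: if the model has a good conditional
# p(x|y) but a shifted marginal p(y), reweight its predictions by
# target_prior / model_prior and renormalize.
def correct_label_shift(probs, model_prior, target_prior):
    # probs: (n_examples, n_labels) in-context model predictions
    w = np.asarray(target_prior) / np.asarray(model_prior)
    adjusted = probs * w
    return adjusted / adjusted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3], [0.4, 0.6]])
print(correct_label_shift(probs, model_prior=[0.8, 0.2],
                          target_prior=[0.5, 0.5]))
```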
no code implementations • 15 Oct 2023 • Renyang Liu, Jinhong Zhang, Kwok-Yan Lam, Jun Zhao, Wei Zhou
However, the distribution of these fake data lacks diversity and cannot capture the decision boundary of the target model well, resulting in an unsatisfactory simulation effect.
no code implementations • 15 Oct 2023 • Renyang Liu, Jun Zhao, Xing Chu, Yu Liang, Wei Zhou, Jing He
With the rapid development of GPU (Graphics Processing Unit) technologies and neural networks, we can explore more appropriate data structures and algorithms.
1 code implementation • 11 Oct 2023 • Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam
Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AE) to deceive deep learning models.
no code implementations • 8 Oct 2023 • Wei Shen, Rui Zheng, WenYu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang
Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values.
1 code implementation • 8 Oct 2023 • Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu
As a result, it is natural for people to believe that LLMs have also mastered abilities such as time understanding and reasoning.
no code implementations • 22 Sep 2023 • Tongxu Luo, Fangyu Lei, Jiahe Lei, Weihao Liu, Shihu He, Jun Zhao, Kang Liu
Answering numerical questions over hybrid contents from the given tables and text (TextTableQA) is a challenging task.
no code implementations • 17 Sep 2023 • Tinghao Zhang, Kwok-Yan Lam, Jun Zhao
The large population of wireless users is a key driver of data-crowdsourced Machine Learning (ML).
no code implementations • 9 Sep 2023 • Weihao Liu, Fangyu Lei, Tongxu Luo, Jiahe Lei, Shizhu He, Jun Zhao, Kang Liu
Most importantly, we propose a Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage their powerful performance in this task.
1 code implementation • 31 Aug 2023 • Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jiansong Chen, Jun Zhao, Kang Liu
As the key to sentiment analysis, sentiment composition considers the classification of a constituent via classifications of its contained sub-constituents and rules operated on them.
no code implementations • 28 Aug 2023 • Baoli Zhang, Haining Xie, Pengfan Du, JunHao Chen, Pengfei Cao, Yubo Chen, Shengping Liu, Kang Liu, Jun Zhao
To this end, we propose the ZhuJiu benchmark, which has the following strengths: (1) Multi-dimensional ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions covering 51 tasks.
1 code implementation • 25 Aug 2023 • YuHeng Chen, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao
We design cross-lingual knowledge editing experiments, demonstrating that the PLMs can accomplish this task based on language-independent neurons; (2) The discovery of Degenerate Knowledge Neurons, a novel type of neuron showing that different knowledge neurons can store the same fact.
1 code implementation • 20 Aug 2023 • Yixuan Weng, Zhiqi Wang, Huanxuan Liao, Shizhu He, Shengping Liu, Kang Liu, Jun Zhao
With the burgeoning development in the realm of large language models (LLMs), the demand for efficient incremental training tailored to specific industries and domains continues to increase.
no code implementations • 18 Aug 2023 • Beichuan Zhang, Chenggen Sun, Jianchao Tan, Xinjun Cai, Jun Zhao, Mengqi Miao, Kang Yin, Chengru Song, Na Mou, Yang song
Increasing the size of embedding layers has been shown to be effective in improving the performance of recommendation models, yet it gradually causes their sizes to exceed terabytes in industrial recommender systems and hence increases computing and storage costs.
no code implementations • 18 Aug 2023 • Peiyuan Si, Jun Zhao, Kwok-Yan Lam, Qing Yang
In this paper, we aim to explore the use of uplink semantic communications with the assistance of a UAV in order to improve data collection efficiency for Metaverse users in remote areas.
no code implementations • 8 Aug 2023 • Wenhan Yu, Jun Zhao
Advanced video technologies are driving the development of the futuristic Metaverse, which aims to connect users from anywhere and anytime.
no code implementations • 8 Jun 2023 • Jun Zhao, Yongxin Zhang, Qi Zhang, Tao Gui, Zhongyu Wei, Minlong Peng, Mingming Sun
The key to the setting is selecting which instances to label.
1 code implementation • 8 Jun 2023 • Jun Zhao, WenYu Zhan, Xin Zhao, Qi Zhang, Tao Gui, Zhongyu Wei, Junzhe Wang, Minlong Peng, Mingming Sun
However, general matching methods lack explicit modeling of the above matching pattern.
1 code implementation • 8 Jun 2023 • Jun Zhao, Xin Zhao, WenYu Zhan, Qi Zhang, Tao Gui, Zhongyu Wei, Yunwen Chen, Xiang Gao, Xuanjing Huang
Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances and thus synthesize negative instances that are more likely to be mistaken by the model as known relations.
no code implementations • 29 May 2023 • Peiyuan Si, Liangxin Qian, Jun Zhao, Kwok-Yan Lam
Unmanned aerial vehicles (UAVs) are promising for providing communication services due to their advantages in cost and mobility, especially in the context of the emerging Metaverse and Internet of Things (IoT).
no code implementations • 23 May 2023 • Minjun Zhu, Yixuan Weng, Shizhu He, Kang Liu, Jun Zhao
In Textual question answering (TQA) systems, complex questions often require retrieving multiple textual fact chains with multiple reasoning steps.
1 code implementation • 19 May 2023 • Fangyu Lei, Xiang Li, Yifan Wei, Shizhu He, Yiming Huang, Jun Zhao, Kang Liu
In this paper, we propose a three-stage TextTableQA framework, S3HQA, which comprises a retriever, a selector, and a reasoner.
1 code implementation • 9 May 2023 • Yixuan Weng, Bin Li, Fei Xia, Minjun Zhu, Bin Sun, Shizhu He, Kang Liu, Jun Zhao
The medical conversational question answering (CQA) system aims at providing a series of professional medical services to improve the efficiency of medical care.
1 code implementation • 5 May 2023 • Yifan Wei, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu
Hybrid question answering (HybridQA) over the financial report contains both textual and tabular data, and requires the model to select the appropriate evidence for the numerical reasoning task.
3 code implementations • 4 Apr 2023 • Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Kang Liu, Jun Zhao
Our work highlights the potential of seamlessly unifying explicit rule learning via CoNNs and implicit pattern learning in LMs, paving the way for true symbolic comprehension capabilities.
no code implementations • 31 Mar 2023 • Tao Bai, Chen Chen, Lingjuan Lyu, Jun Zhao, Bihan Wen
Recent studies show that models trained by continual learning can achieve performance comparable to standard supervised learning, and the learning flexibility of continual learning models enables their wide application in the real world.
no code implementations • 18 Mar 2023 • Terence Jie Chua, Wenhan Yu, Jun Zhao
We then conduct further analyses on our choice of model priors and the adoption of Bayesian Neural Networks in different layers within our model architecture.
no code implementations • 18 Mar 2023 • Terence Jie Chua, Wenhan Yu, Jun Zhao
Nevertheless, as real-time, accurate detection of adversarial patches is compute-intensive, these physical world scenes have to be offloaded to the Metaverse Map Base Stations (MMBS) for computation.
no code implementations • 8 Mar 2023 • Wenhan Yu, Terence Jie Chua, Jun Zhao
Virtual reality (VR) technologies are the backbone for the virtual universe within the Metaverse as they enable a hyper-realistic and immersive experience, and especially so in the context of socialization.
no code implementations • 3 Feb 2023 • Wenhan Yu, Terence Jie Chua, Jun Zhao
In this paper, for a system consisting of a Metaverse server and multiple VR users, we consider two cases of (i) the server generating frames and transmitting them to users, and (ii) users generating frames locally and thus consuming device energy.
no code implementations • 7 Jan 2023 • Yinyu Lan, Shizhu He, Kang Liu, Jun Zhao
The former has high accuracy and good interpretability, but a major challenge is to obtain effective rules on large-scale KGs.
no code implementations • 4 Jan 2023 • Peiyuan Si, Wenhan Yu, Jun Zhao, Kwok-Yan Lam, Qing Yang
A huge amount of data in the physical world needs to be synchronized to the virtual world to provide an immersive experience for users, and there will be higher requirements on coverage to include more users in the Metaverse.
no code implementations • 30 Dec 2022 • Wenhan Yu, Terence Jie Chua, Jun Zhao
In the DL stage, the larger-size 3D virtual objects need to be transmitted back to the XUs.
no code implementations • 19 Dec 2022 • Terence Jie Chua, Wenhan Yu, Jun Zhao
The Metaverse can be considered the extension of the present-day web, which integrates the physical and virtual worlds, delivering hyper-realistic user experiences.
1 code implementation • 19 Dec 2022 • Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang Liu, Jun Zhao
By performing a backward verification of the answers that LLM deduced for itself, we can obtain interpretable answer validation scores to select the candidate answer with the highest score.
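A rough sketch of answer selection by backward verification is given below; `llm` is an assumed callable returning a confidence-like score, and the prompt wording is a placeholder, not the paper's actual verification prompt.

```python
# Rough sketch of self-verification: ask the model to verify each
# candidate answer and keep the one with the highest verification score.
# `llm` is an assumed callable: prompt -> confidence in [0, 1].
def select_by_backward_verification(question, candidates, llm):
    scored = []
    for answer in candidates:
        prompt = (f"Question: {question}\n"
                  f"Proposed answer: {answer}\n"
                  f"Checking backward from this answer, how confident are "
                  f"you that it is correct? Reply with a number in [0, 1].")
        scored.append((float(llm(prompt)), answer))
    return max(scored)[1]  # candidate with the highest verification score
```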
no code implementations • 16 Dec 2022 • Xinyu Zhou, Jun Zhao
The Metaverse is deemed the next evolution of the Internet and has received much attention recently.
no code implementations • 16 Nov 2022 • Xinyu Zhou, Chang Liu, Jun Zhao
The Metaverse has received much attention recently.
no code implementations • 24 Oct 2022 • Yiming Ju, Yuanzhe Zhang, Kang Liu, Jun Zhao
The opaqueness of deep NLP models has motivated the development of methods for interpreting how deep models predict.
no code implementations • 17 Oct 2022 • Minjun Zhu, Yixuan Weng, Shizhu He, Kang Liu, Jun Zhao
Recently, natural language databases (NLDBs) conduct complex QA over knowledge bases with textual evidence rather than structured representations; this task has attracted much attention because of the flexibility and richness of textual evidence.
no code implementations • 11 Oct 2022 • Tinghao Zhang, Zhijun Li, Yongrui Chen, Kwok-Yan Lam, Jun Zhao
A reinforcement learning (RL)-based DNN compression approach is used to generate the lightweight model suitable for the edge from the heavyweight model.
no code implementations • 7 Oct 2022 • Chang Liu, Terence Jie Chua, Jun Zhao
Therefore, we formulate a joint learning and communication optimization problem to minimize total model parameter communication and computation delay, by optimizing local iteration counts and edge iteration counts.
1 code implementation • 29 Sep 2022 • Qiao Han, Jun Zhao, Kwok-Yan Lam
This research aims to make metaverse characters more realistic by adding lip animations learnt from videos in the wild.
no code implementations • 29 Sep 2022 • Xinyu Zhou, Jun Zhao, Huimei Han, Claude Guet
Federated Learning (FL) is an intriguing distributed machine learning approach due to its privacy-preserving characteristics.
no code implementations • 28 Sep 2022 • Peiyuan Si, Jun Zhao, Huimei Han, Kwok-Yan Lam, Yang Liu
With the development of blockchain and communication techniques, the Metaverse is considered as a promising next-generation Internet paradigm, which enables the connection between reality and the virtual world.
no code implementations • 28 Sep 2022 • Yitong Wang, Jun Zhao
As a distributed infrastructure closer to users than cloud computing, the convergence of MEC with other emerging technologies, including the Metaverse, 6G wireless communications, artificial intelligence (AI), and blockchain, also addresses the problems of network resource allocation, increased network load, and latency requirements.
no code implementations • 27 Sep 2022 • Terence Jie Chua, Wenhan Yu, Jun Zhao
Being able to access scenes and information associated with the physical world, in the Metaverse in real-time and under mobility, is essential in developing a highly accessible, interactive and interconnective experience for all users.
1 code implementation • COLING 2022 • Fangyu Lei, Shizhu He, Xiang Li, Jun Zhao, Kang Liu
In real-world question answering scenarios, hybrid forms combining both tabular and textual contents have attracted more and more attention, among which the numerical reasoning problem is one of the most typical and challenging.
no code implementations • 26 Jul 2022 • Jiang Bian, Xuhong LI, Tao Wang, Qingzhong Wang, Jun Huang, Chen Liu, Jun Zhao, Feixiang Lu, Dejing Dou, Haoyi Xiong
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.
1 code implementation • 20 Apr 2022 • Fei Xia, Bin Li, Yixuan Weng, Shizhu He, Kang Liu, Bin Sun, Shutao Li, Jun Zhao
The medical conversational system can relieve the burden of doctors and improve the efficiency of healthcare, especially during the pandemic.
no code implementations • 16 Apr 2022 • Binjie Qin, Haohao Mao, Yiming Liu, Jun Zhao, Yisong Lv, Yueqi Zhu, Song Ding, Xu Chen
Although robust PCA has been increasingly adopted to extract vessels from X-ray coronary angiography (XCA) images, challenging problems such as inefficient vessel-sparsity modelling, noisy and dynamic background artefacts, and high computational cost still remain unsolved.
no code implementations • 19 Nov 2021 • Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Bo Li, Alex Kot
Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces generation time significantly.
no code implementations • 14 Nov 2021 • Wanting Lyu, Yue Xiu, Jun Zhao, Zhongpei Zhang
In this letter, a reconfigurable intelligent surface (RIS)-assisted simultaneous wireless information and power transfer (SWIPT) network is investigated.
no code implementations • 15 Oct 2021 • Tao Bai, Jun Zhao, Lanqing Guo, Bihan Wen
Deep learning models are vulnerable to adversarial examples and make incomprehensible mistakes, which poses a threat to their real-world deployment.
1 code implementation • EMNLP 2021 • Jun Zhao, Tao Gui, Qi Zhang, Yaqian Zhou
The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction (OpenRE).
Ranked #1 on Relation Extraction on FewRel.
no code implementations • ACL 2022 • Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, Jun Zhao
Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments.
no code implementations • 10 Aug 2021 • Qingbin Liu, Xiaoyan Yu, Shizhu He, Kang Liu, Jun Zhao
In this paper, we propose Lifelong Intent Detection (LID), which continually trains an ID model on new data to learn newly emerging intents while avoiding catastrophically forgetting old data.
no code implementations • ACL 2021 • Pengfei Cao, Xinyu Zuo, Yubo Chen, Kang Liu, Jun Zhao, Yuguang Chen, Weihua Peng
Specifically, to make use of the descriptive knowledge, we devise a Descriptive Graph Induction module to obtain and encode the graph-structured descriptive knowledge.
1 code implementation • ACL 2021 • Dianbo Sui, Zhengkun Tian, Yubo Chen, Kang Liu, Jun Zhao
In this paper, we aim to explore an uncharted territory, which is Chinese multimodal named entity recognition (NER) with both textual and acoustic contents.
1 code implementation • ACL 2021 • Zhuoran Jin, Yubo Chen, Dianbo Sui, Chenhao Wang, Zhipeng Xue, Jun Zhao
CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge.
2 code implementations • ACL 2021 • Hang Yang, Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Taifeng Wang
We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document.
1 code implementation • ACL 2021 • Tong Zhou, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Kun Niu, Weifeng Chong, Shengping Liu
The ICD coding task aims at assigning codes of the International Classification of Diseases in clinical notes.
no code implementations • ACL 2021 • Zhongtao Jiang, Yuanzhe Zhang, Zhao Yang, Jun Zhao, Kang Liu
Deep learning models have achieved great success on the task of Natural Language Inference (NLI), though only a few attempts try to explain their behaviors.
no code implementations • 5 Jul 2021 • Zhiyi Lin, Chunyue Song, Jun Zhao, Chao Yang, Huan Yin
Intra-day economic dispatch of an integrated microgrid is a fundamental requirement to integrate distributed generators.
no code implementations • 29 Jun 2021 • Tao Bai, Jinqi Luo, Jun Zhao
The patches are encouraged to be consistent with the background images with adversarial training while preserving strong attack abilities.
no code implementations • 23 Jun 2021 • Chen Liu, Bo Li, Jun Zhao, Ming Su, Xu-Dong Liu
In this paper, we propose MG-DVD, a novel detection framework based on dynamic heterogeneous graph learning, to detect malware variants in real time.
no code implementations • ACL 2021 • Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, Yuguang Chen
On the other hand, our approach employs a dual mechanism, which is a learnable augmentation framework and can interactively adjust the generation process to generate task-related sentences.
no code implementations • Findings (ACL) 2021 • Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, Yuguang Chen
Current models for event causality identification (ECI) mainly adopt a supervised framework, which heavily rely on labeled data for training.
no code implementations • 27 May 2021 • Yinyu Lan, Shizhu He, Xiangrong Zeng, Shengping Liu, Kang Liu, Jun Zhao
To address the above issues, this paper proposes two novel path-based reasoning methods to solve the sparsity issues of entity and path respectively, which adopts the textual semantic information of entities and paths for MedKGC.
no code implementations • EACL 2021 • Pei Chen, Kang Liu, Yubo Chen, Taifeng Wang, Jun Zhao
This paper proposes a new task regarding event reason extraction from document-level texts.
1 code implementation • ACL 2021 • Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
no code implementations • 3 Mar 2021 • Chenhao Wang, Yubo Chen, Zhipeng Xue, Yang Zhou, Jun Zhao
In this paper, we present CogNet, a knowledge base (KB) dedicated to integrating three types of knowledge: (1) linguistic knowledge from FrameNet, which schematically describes situations, objects and events.
no code implementations • 2 Feb 2021 • Weiheng Jiang, Yu Zhang, Jun Zhao, Zehui Xiong, Zhiguo Ding
Cognitive radio (CR) is an effective solution to improve the spectral efficiency (SE) of wireless communications by allowing the secondary users (SUs) to share spectrum with primary users (PUs).
no code implementations • 2 Feb 2021 • Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang
Adversarial training is one of the most effective approaches defending against adversarial examples for deep learning models.
no code implementations • 27 Jan 2021 • Xin Liu, Kwok-Yan Lam, Feng Li, Jun Zhao, Li Wang
ISTCN aims to provide high speed and pervasive network services by integrating broadband terrestrial mobile networks with satellite communication networks.
no code implementations • 16 Jan 2021 • Huimei Han, Wenchao Zhai, Jun Zhao
mMTC and URLLC will co-exist in MTC networks for 5G/6G-enabled smart cities.
no code implementations • 10 Jan 2021 • Quoc-Viet Pham, Thien Huynh-The, Mamoun Alazab, Jun Zhao, Won-Joo Hwang
As the integration of unmanned aerial vehicles (UAVs) into visible light communications (VLC) can offer many benefits for massive-connectivity applications and services in 5G and beyond, this work considers a UAV-assisted VLC using non-orthogonal multiple-access.
no code implementations • 1 Jan 2021 • Xuanli He, Lingjuan Lyu, Lichao Sun, Xiaojun Chang, Jun Zhao
We then demonstrate how the extracted model can be exploited to develop an effective attribute inference attack to expose sensitive information of the training data.
no code implementations • 27 Dec 2020 • Hongliang Zhang, Shoudong Han, Xiaofeng Pan, Jun Zhao
Usually, owing to domain gaps, the pre-trained source domain model cannot extract appropriate target domain features, which dramatically affects the clustering performance and the accuracy of pseudo-labels.
no code implementations • 25 Dec 2020 • Wenchao Zhai, Huimei Han, Lei Liu, Jun Zhao
In this paper, an LSTM-aided hybrid random access scheme (LSTMH-RA) is proposed to support diverse quality of service (QoS) requirements in 6G machine-type communication (MTC) networks, where massive MTC (mMTC) devices and ultra-reliable low latency communications (URLLC) devices coexist.
no code implementations • 23 Dec 2020 • Helin Yang, Zehui Xiong, Jun Zhao, Dusit Niyato, Qingqing Wu, Massimo Tornatore, Stefano Secci
Aiming to enhance the communication performance against smart jammer, an optimization problem for jointly optimizing power allocation at the base station (BS) and reflecting beamforming at the IRS is formulated.
no code implementations • 21 Dec 2020 • Yang Zhao, Wenchao Zhai, Jun Zhao, Tinghao Zhang, Sumei Sun, Dusit Niyato, Kwok-Yan Lam
First, we give an overview of 6G from perspectives of technologies, security and privacy, and applications.