no code implementations • 24 Dec 2024 • Binrui Zeng, Bin Ji, Xiaodong Liu, Jie Yu, Shasha Li, Jun Ma, Xiaopeng Li, Shangwen Wang, Xinran Hong
As large language models (LLMs) demonstrate exceptional performance across various domains, the deployment of these models on edge devices has emerged as a new trend.
1 code implementation • 11 Nov 2024 • Xiaopeng Li, Shangwen Wang, Shasha Li, Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Bin Ji, Weimin Zhang
Despite this, a comprehensive study that compares and analyzes how state-of-the-art model editing techniques adapt the knowledge within LLMs4Code across various code-related tasks is notably absent.
1 code implementation • 29 Sep 2024 • Xiaopeng Li, Shangwen Wang, Shezheng Song, Bin Ji, Huijun Liu, Shasha Li, Jun Ma, Jie Yu
However, there is a lack of effective measures to prevent the malicious misuse of this technology, which could lead to harmful edits in LLMs.
1 code implementation • 6 Jun 2024 • Xiaohu Du, Ming Wen, Jiahao Zhu, Zifan Xie, Bin Ji, Huijun Liu, Xuanhua Shi, Hai Jin
First, we utilize the vulnerability patches to construct a vulnerability localization task.
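A task of this kind can be sketched by diffing the pre- and post-patch code: lines the patch deletes or rewrites become positive localization labels. The function below is a minimal illustration using Python's `difflib`, not the paper's actual construction.

```python
import difflib

def localization_labels(vuln_lines, patched_lines):
    """Label each line of the vulnerable function with 1 if the patch
    deletes or replaces it (a likely vulnerable line), else 0."""
    sm = difflib.SequenceMatcher(a=vuln_lines, b=patched_lines)
    labels = [0] * len(vuln_lines)
    for tag, i1, i2, _, _ in sm.get_opcodes():
        if tag in ("delete", "replace"):
            for i in range(i1, i2):
                labels[i] = 1
    return labels
```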
1 code implementation • 7 Apr 2024 • Shezheng Song, Shasha Li, Shan Zhao, Xiaopeng Li, Chengyu Wang, Jie Yu, Jun Ma, Tianwei Yan, Bin Ji, Xiaoguang Mao
Multimodal entity linking (MEL) aims to utilize multimodal information (usually textual and visual information) to link ambiguous mentions to unambiguous entities in a knowledge base.
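At its simplest, linking can be cast as nearest-neighbor search over a fused mention representation; the weighted-sum fusion below is only an illustrative stand-in for the learned fusion such models use.

```python
import numpy as np

def link_mention(text_emb, image_emb, entity_embs, alpha=0.5):
    """Fuse textual and visual mention embeddings, then return the index
    of the most similar entity embedding in the knowledge base."""
    query = alpha * np.asarray(text_emb) + (1 - alpha) * np.asarray(image_emb)
    scores = np.asarray(entity_embs) @ query
    return int(np.argmax(scores))
```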
no code implementations • 2 Apr 2024 • Shuai Tan, Bin Ji, Mengxiao Bi, Ye Pan
Achieving disentangled control over multiple facial motions and accommodating diverse input modalities greatly enhances the applicability and entertainment value of talking head generation.
no code implementations • CVPR 2024 • Shuai Tan, Bin Ji, Ye Pan
Specifically, we develop a flow-based coefficient generator that encodes the dynamics of facial emotion into a multi-emotion-class latent space represented as a mixture distribution.
no code implementations • 11 Mar 2024 • Shuai Tan, Bin Ji, Yu Ding, Ye Pan
To adapt to different speaking styles, we avoid a one-size-fits-all universal network and instead explore an elaborate HyperStyle that produces style-specific weight offsets for the style branch.
no code implementations • 11 Mar 2024 • Shuai Tan, Bin Ji, Ye Pan
Although automatically animating audio-driven talking heads has recently received growing interest, previous efforts have mainly concentrated on achieving lip synchronization with the audio, neglecting two crucial elements for generating expressive videos: emotion style and art style.
1 code implementation • 12 Feb 2024 • Mingzhe Du, Anh Tuan Luu, Bin Ji, Qian Liu, See-Kiong Ng
Based on the distribution, we introduce a new metric Beyond, which computes a runtime-percentile-weighted Pass score to reflect functional correctness and code efficiency simultaneously.
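One way such a runtime-percentile-weighted Pass score could look: a failing solution scores 0, and a passing one scores the fraction of reference solutions it runs at least as fast as. This is a hedged sketch; the paper's exact weighting may differ.

```python
from bisect import bisect_left

def beyond_score(passed, runtime, reference_runtimes):
    """0.0 if the solution fails its tests; otherwise the fraction of
    reference runtimes that are >= this solution's runtime."""
    if not passed:
        return 0.0
    ranked = sorted(reference_runtimes)
    slower_or_equal = len(ranked) - bisect_left(ranked, runtime)
    return slower_or_equal / len(ranked)
```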
1 code implementation • 31 Jan 2024 • Xiaopeng Li, Shasha Li, Shezheng Song, Huijun Liu, Bin Ji, Xi Wang, Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Weimin Zhang
In particular, local editing methods, which directly update model parameters, are more suitable for updating a small amount of knowledge.
1 code implementation • 22 Oct 2023 • Mingzhe Du, Anh Tuan Luu, Bin Ji, See-Kiong Ng
The vast number of parameters in large language models (LLMs) endows them with remarkable capabilities, allowing them to excel in a variety of natural language processing tasks.
1 code implementation • 19 Oct 2023 • Yi Bin, Wenhao Shi, Bin Ji, Jipeng Zhang, Yujuan Ding, Yang Yang
Existing sentence ordering approaches generally employ encoder-decoder frameworks with a pointer net to recover coherence by recurrently predicting each sentence step by step.
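The recurrent pointer-style decoding can be sketched as a greedy loop: pick the highest-scoring first sentence, then repeatedly point to the best successor among the sentences not yet placed. The scores here are given as plain matrices; in the actual models they come from a trained encoder-decoder.

```python
def greedy_pointer_order(start_scores, trans_scores):
    """Greedy pointer decoding over n sentences.
    start_scores[j]: score for sentence j opening the paragraph.
    trans_scores[i][j]: score for sentence j following sentence i."""
    remaining = set(range(len(start_scores)))
    order = [max(remaining, key=lambda j: start_scores[j])]
    remaining.remove(order[0])
    while remaining:
        nxt = max(remaining, key=lambda j: trans_scores[order[-1]][j])
        order.append(nxt)
        remaining.remove(nxt)
    return order
```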
no code implementations • 5 May 2023 • Bin Ji
VicunaNER is a two-phase framework, where each phase leverages multi-turn dialogues with Vicuna to recognize entities from texts.
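The two-phase idea can be sketched with a placeholder dialogue function: phase one asks for entities, phase two asks the model to re-check the text for entities missed in the first turn, and the results are merged. The prompts, parsing, and `chat` interface here are all illustrative assumptions, not the framework's exact ones.

```python
def two_phase_ner(text, chat):
    """Recognize entities in two rounds of dialogue; `chat` stands in for
    a call to Vicuna and must return a comma-separated entity list."""
    first = chat(f"List the named entities in: {text}")
    second = chat(f"Besides {first}, list any other named entities in: {text}")
    merged = {e.strip() for e in f"{first},{second}".split(",") if e.strip()}
    return sorted(merged)
```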
no code implementations • 9 Mar 2023 • Jing Yang, Bin Ji, Shasha Li, Jun Ma, Long Peng, Jie Yu
Recently, many studies incorporate external knowledge into character-level feature based models to improve the performance of Chinese relation extraction.
no code implementations • ICCV 2023 • Shuai Tan, Bin Ji, Ye Pan
During training, the emotion embedding and mouth features are used as keys, and the corresponding expression features are used as values to create key-value pairs stored in the proposed Motion Memory Net.
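At inference time, such a key-value memory can be read with soft attention: the query (emotion embedding plus mouth features) is compared against the stored keys, and the matching values (expression features) are blended. A minimal sketch, with the dot-product similarity and temperature as assumptions:

```python
import numpy as np

def memory_read(query, keys, values, temperature=1.0):
    """Softmax attention over stored keys returns a weighted blend of the
    stored values (here: expression features)."""
    sims = np.asarray(keys) @ np.asarray(query) / temperature
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    return weights @ np.asarray(values)
```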
no code implementations • 23 Oct 2022 • Bin Ji, Shasha Li, Hao Xu, Jie Yu, Jun Ma, Huijun Liu, Jing Yang
On the one hand, the core architecture enables our model to learn token-level label information via the sequence tagging mechanism and then uses the information in the span-based joint extraction; on the other hand, it establishes a bi-directional information interaction between NER and RE.
Tasks: Joint Entity and Relation Extraction, Named Entity Recognition, +3
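The token-level BIO label information that such a model shares with its span-based component can be illustrated by a standard BIO-to-span decoder (a generic sketch, not the model's internal mechanism):

```python
def bio_to_spans(tags):
    """Decode token-level BIO tags into (start, end, type) spans with an
    exclusive end index; span-based extractors can reuse these labels."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue
        else:
            if start is not None:
                spans.append((start, i, etype))
            start, etype = None, None
    return spans
```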
no code implementations • 18 Aug 2022 • Bin Ji, Hao Xu, Jie Yu, Shasha Li, Jun Ma, Yuke Ji, Huijun Liu
An exhaustive study has been conducted to investigate span-based models for the joint entity and relation extraction task.
no code implementations • 17 Aug 2022 • Huijun Liu, Jie Yu, Shasha Li, Jun Ma, Bin Ji
Textual adversarial attacks expose the vulnerabilities of text classifiers and can be used to improve their robustness.
no code implementations • COLING 2022 • Bin Ji, Shasha Li, Shaoduo Gan, Jie Yu, Jun Ma, Huijun Liu
Few-shot named entity recognition (NER) enables us to build an NER system for a new domain using very few labeled examples.
no code implementations • 11 Jul 2022 • Mengxue Du, Shasha Li, Jie Yu, Jun Ma, Bin Ji, Huijun Liu, Wuhang Lin, Zibo Yi
Document retrieval enables users to find their required documents accurately and quickly.
no code implementations • 11 Jul 2022 • Wuhang Lin, Shasha Li, Chen Zhang, Bin Ji, Jie Yu, Jun Ma, Zibo Yi
However, the existing evaluation metrics for summary text are only rough proxies for summary quality, suffering from low correlation with human judgments and inhibiting summary diversity.
no code implementations • 7 Jul 2022 • Bin Ji, Shasha Li, Jie Yu, Jun Ma, Huijun Liu
Previous research has demonstrated that the two paradigms have clear complementary advantages, but, to the best of our knowledge, few models have attempted to leverage these advantages in a single NER model.
no code implementations • 21 May 2021 • Bin Ji, Shasha Li, Jie Yu, Jun Ma, Huijun Liu
To solve this problem, we propose Sequence Tagging enhanced Span-based Network (STSN), a span-based joint extraction network that is enhanced by token BIO label information derived from sequence tagging based NER.
Tasks: Joint Entity and Relation Extraction, Named Entity Recognition, +4
no code implementations • 1 Jan 2021 • LiMin Wang, Bin Ji, Zhan Tong, Gangshan Wu
To mitigate this issue, this paper presents a new video architecture, termed as Temporal Difference Network (TDN), with a focus on capturing multi-scale temporal information for efficient action recognition.
1 code implementation • CVPR 2021 • LiMin Wang, Zhan Tong, Bin Ji, Gangshan Wu
To mitigate this issue, this paper presents a new video architecture, termed as Temporal Difference Network (TDN), with a focus on capturing multi-scale temporal information for efficient action recognition.
Ranked #18 on Action Recognition on Something-Something V1
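The core temporal-difference operation can be sketched as frame differencing at several strides; TDN itself embeds such differences inside a learned two-level architecture, so this is only the arithmetic at its heart.

```python
import numpy as np

def multiscale_temporal_diff(frames, strides=(1, 2)):
    """Return frame differences at each temporal stride, a cheap motion
    representation computed from a stack of raw video frames."""
    frames = np.asarray(frames, dtype=np.float32)
    return [frames[s:] - frames[:-s] for s in strides]
```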
no code implementations • COLING 2020 • Bin Ji, Jie Yu, Shasha Li, Jun Ma, Qingbo Wu, Yusong Tan, Huijun Liu
Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction.
no code implementations • CVPR 2020 • Yan Li, Bin Ji, Xintian Shi, Jian-Guo Zhang, Bin Kang, Li-Min Wang
Temporal modeling is key for action recognition in videos.