1 code implementation • EMNLP 2021 • Rui Li, Wenlin Zhao, Cheng Yang, Sen Su
Event detection (ED) aims at identifying event instances of specified types in given texts, which has been formalized as a sequence labeling task.
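Framing ED as sequence labeling typically means tagging trigger tokens with BIO labels. A minimal sketch (the example sentence, the `Attack` event type, and the decoder are illustrative, not taken from this paper):

```python
# Minimal sketch: event detection as BIO sequence labeling.
# The tokens, trigger span, and event type ("Attack") are illustrative.
tokens = ["Troops", "shelled", "the", "town", "yesterday"]
# "shelled" triggers a hypothetical "Attack" event; all other tokens are outside.
tags = ["O", "B-Attack", "O", "O", "O"]

def decode_triggers(tokens, tags):
    """Collect (trigger_text, event_type) pairs from BIO tags."""
    triggers, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                triggers.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                triggers.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        triggers.append((" ".join(current), etype))
    return triggers
```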
1 code implementation • 18 Feb 2025 • Pengyu Zhu, Zhenhong Zhou, Yuanhe Zhang, Shilinlu Yan, Kun Wang, Sen Su
As LLM-based agents become increasingly prevalent, backdoors can be implanted into agents through user queries or environment feedback, raising critical concerns regarding safety vulnerabilities.
1 code implementation • 18 Dec 2024 • Yuanhe Zhang, Zhenhong Zhou, Wei Zhang, Xinyue Wang, Xiaojun Jia, Yang Liu, Sen Su
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks yet remain vulnerable to external threats, particularly LLM Denial-of-Service (LLM-DoS) attacks.
1 code implementation • 15 Dec 2024 • Tingfeng Hui, Lulu Zhao, Guanting Dong, Yaqi Zhang, Hua Zhou, Sen Su
In this study, we question this prevalent assumption and conduct an in-depth exploration into the potential of smaller language models (SLMs) in the context of instruction evolution.
no code implementations • 2 Oct 2024 • Tingfeng Hui, Zhenyu Zhang, Shuohuan Wang, Yu Sun, Hua Wu, Sen Su
To ensure that each specialized expert in the MoE model works as expected, we select a small amount of seed data at which each expert excels and use it to pre-optimize the router.
no code implementations • 24 Sep 2024 • Mei Wang, Weihong Deng, Jiani Hu, Sen Su
The study of oracle characters plays an important role in Chinese archaeology and philology.
1 code implementation • 14 Aug 2024 • Quan Liu, Zhenhong Zhou, Longzhu He, Yi Liu, Wei Zhang, Sen Su
Large language models are susceptible to jailbreak attacks, which can result in the generation of harmful content.
no code implementations • 27 Feb 2024 • Zhenhong Zhou, Jiuyang Xiang, Haopeng Chen, Quan Liu, Zherui Li, Sen Su
Large Language Models (LLMs) have been shown to generate illegal or unethical responses, particularly when subjected to jailbreak attacks.
no code implementations • 4 Jan 2024 • Mei Wang, Weihong Deng, Jiani Hu, Sen Su
Deep neural networks (DNNs) are prone to learning spurious correlations between target classes and bias attributes, such as gender and race, that are inherent in a major portion of the training data (bias-aligned samples), thus exhibiting unfair behavior and arousing controversy in a modern pluralistic and egalitarian society.
no code implementations • 11 Dec 2023 • Mei Wang, Weihong Deng, Sen Su
Ancient history relies on the study of ancient characters.
no code implementations • 21 Sep 2023 • Luyao He, Zhongbao Zhang, Sen Su, Yuxin Chen
To address these issues, we propose BitCoin, an innovative Bidirectional tagging and supervised Contrastive learning based joint relational triple extraction framework.
no code implementations • 30 Aug 2023 • Zhenhong Zhou, Jiuyang Xiang, Chaomeng Chen, Sen Su
Quantifying language model memorization helps evaluate potential privacy risks.
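One common way to quantify memorization is to prompt the model with a prefix from its training data and test whether its greedy continuation reproduces the source verbatim. A minimal sketch, where `generate` is an assumed model-interface stub, not this paper's actual metric:

```python
def verbatim_memorized(generate, doc, prefix_len=50, cont_len=50):
    """A common memorization probe: prompt with a training-text prefix and check
    whether the model's greedy continuation reproduces the source verbatim.
    `generate(prompt, n)` is an assumed model-interface stub that returns the
    model's n-character continuation of `prompt`."""
    prompt = doc[:prefix_len]
    target = doc[prefix_len:prefix_len + cont_len]
    return generate(prompt, cont_len) == target
```

Averaging this boolean over many training documents yields an extractable-memorization rate, one of the quantities such privacy analyses typically report.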
no code implementations • 10 Dec 2021 • Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu
Instead of working on one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces.
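In such a product space, the squared distance decomposes as the sum of the squared per-component distances. A minimal sketch, assuming one Poincaré-ball (curvature -1) component and one Euclidean component, which is not necessarily the configuration used in the paper:

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincare ball of curvature -1
    (points must have norm < 1)."""
    du = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1 + 2 * du / ((1 - nu) * (1 - nv)))

def euclid_dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def product_dist(x, y):
    """Distance in the mixed-curvature product space: combine the
    per-component distances with an l2 norm."""
    d_h = poincare_dist(x[0], y[0])  # hyperbolic component
    d_e = euclid_dist(x[1], y[1])    # Euclidean component
    return math.sqrt(d_h ** 2 + d_e ** 2)
```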
no code implementations • 24 Sep 2021 • Lei Shi, Kai Shuang, Shijie Geng, Peng Gao, Zuohui Fu, Gerard de Melo, Yunpeng Chen, Sen Su
To overcome these issues, we propose unbiased Dense Contrastive Visual-Linguistic Pretraining (DCVLP), which replaces the region regression and classification with cross-modality region contrastive learning that requires no annotations.
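Region contrastive learning of this kind is typically trained with an InfoNCE-style objective that pulls matched cross-modality regions together while pushing them away from in-batch negatives. A generic NumPy sketch of that objective, not DCVLP's exact loss:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is the same-index row of
    `positives`; every other row in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal
```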
no code implementations • 6 Apr 2021 • Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu
To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions.
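A standard way to obtain stochastic representations with hyperbolic-normal behavior is the wrapped normal: sample Gaussian noise in a tangent space and push it onto the ball with the exponential map. A minimal sketch at the origin of the Poincaré ball with curvature -1 (the paper's construction may differ):

```python
import numpy as np

def exp_map_origin(v):
    """Exponential map at the origin of the Poincare ball (curvature -1):
    exp_0(v) = tanh(||v||) * v / ||v||, so outputs always lie inside the ball."""
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    n = np.maximum(n, 1e-12)  # avoid division by zero at the origin
    return np.tanh(n) * v / n

def sample_hyperbolic(dim, sigma, size, rng):
    """Wrapped-normal-style sampling: draw Euclidean Gaussian noise in the
    tangent space at the origin and push it onto the ball."""
    v = rng.normal(scale=sigma, size=(size, dim))
    return exp_map_origin(v)
```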
no code implementations • 26 Jul 2020 • Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
We evaluate CVLP on several downstream tasks, including VQA, GQA, and NLVR2, to validate the superiority of contrastive learning for multi-modality representation learning.
no code implementations • 3 Jan 2020 • Lei Shi, Shijie Geng, Kai Shuang, Chiori Hori, Songxiang Liu, Peng Gao, Sen Su
To solve the issue for the intermediate layers, we propose an efficient Quaternion Block Network (QBN) that learns interactions not only for the last layer but also for all intermediate layers simultaneously.
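Quaternion networks build on the Hamilton product, which mixes the four components of two quaternions. A minimal sketch of that core operation (the QBN architecture itself is more involved):

```python
def hamilton_product(p, q):
    """Hamilton product of two quaternions (w, x, y, z) -- the component-mixing
    operation that quaternion layers apply to their features."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )
```

Because the product is non-commutative (i*j = k but j*i = -k), a quaternion layer captures ordered cross-component interactions with a quarter of the parameters of an equivalent real-valued layer.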
no code implementations • 25 Jul 2019 • Rui Li, Kai Shuang, Mengyu Gu, Sen Su
Because the adaptive noise improves as training progresses, its negative effects can be weakened and even transformed into a positive effect that further improves the expressiveness of the main-branch RNN.