1 code implementation • 12 Jun 2024 • Duanyu Feng, Bowen Qin, Chen Huang, Youcheng Huang, Zheng Zhang, Wenqiang Lei
By leveraging this safety direction, Legend then annotates margins automatically by measuring the semantic distances of paired responses along it.
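A minimal sketch of the idea described above (not the authors' code): project the embedding difference of a preferred/rejected response pair onto a safety direction and use the signed distance as the margin. The names `annotate_margin`, `embed`-style inputs, and `safety_direction` are assumptions standing in for whatever embedding space and direction Legend actually uses.

```python
# Sketch only: margin annotation via projection onto an assumed safety direction.
import numpy as np

def annotate_margin(chosen_emb: np.ndarray,
                    rejected_emb: np.ndarray,
                    safety_direction: np.ndarray) -> float:
    """Project the embedding difference of a response pair onto the
    (unit-normalized) safety direction; the signed distance is the margin."""
    direction = safety_direction / np.linalg.norm(safety_direction)
    return float(np.dot(chosen_emb - rejected_emb, direction))

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
dim = 768
safety_direction = rng.normal(size=dim)
chosen_emb, rejected_emb = rng.normal(size=dim), rng.normal(size=dim)
print(f"annotated margin: {annotate_margin(chosen_emb, rejected_emb, safety_direction):.3f}")
```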
no code implementations • 4 Jun 2024 • Youcheng Huang, Jingkun Tang, Duanyu Feng, Zheng Zhang, Wenqiang Lei, Jiancheng Lv, Anthony G. Cohn
We find that this also induces dishonesty in helpful and harmless alignment, where LLMs tell lies when generating harmless responses.
1 code implementation • 19 Feb 2024 • Zihan Qiu, Zeyu Huang, Youcheng Huang, Jie Fu
The feed-forward networks (FFNs) in transformers are recognized as a group of key-value neural memories that store abstract, high-level knowledge.
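For reference, a minimal sketch of the key-value-memory reading of a transformer FFN (illustrative only; the dimensions and GELU activation are assumptions): the rows of the first projection act as keys matched against the hidden state, the activations are memory coefficients, and the output is a coefficient-weighted sum of the value vectors.

```python
# Sketch: FFN(x) = activation(x W_key^T) W_value viewed as key-value memory.
import torch
import torch.nn.functional as F

d_model, d_ff = 16, 64
W_key = torch.randn(d_ff, d_model)    # each row: a "key" pattern
W_value = torch.randn(d_ff, d_model)  # each row: the "value" written out
x = torch.randn(d_model)              # hidden state of one token

coefficients = F.gelu(W_key @ x)          # keys matched against x -> (d_ff,)
ffn_output = W_value.t() @ coefficients   # weighted sum of values -> (d_model,)

# Equivalent explicit sum over memory slots:
ffn_output_explicit = sum(c * v for c, v in zip(coefficients, W_value))
assert torch.allclose(ffn_output, ffn_output_explicit, atol=1e-5)
```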
no code implementations • 15 Jan 2024 • Youcheng Huang, Wenqiang Lei, Zheng Zhang, Jiancheng Lv, Shuicheng Yan
In this paper, we empirically find that the effects of different contexts on LLMs' recall of the same knowledge follow a Gaussian-like distribution.
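A minimal sketch (not the paper's protocol) of how one could check whether per-context recall scores look Gaussian. `recall_score` is a hypothetical stand-in for scoring how well an LLM recalls a fixed fact under a given context; it is simulated with random numbers here purely so the snippet runs.

```python
# Sketch: summarize per-context recall scores and test for normality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def recall_score(context_id: int) -> float:
    # Placeholder: in practice this would prepend the context to a
    # fact-recall prompt and return e.g. the log-probability of the
    # correct answer under the LLM.
    return float(rng.normal(loc=0.0, scale=1.0))

scores = np.array([recall_score(i) for i in range(2000)])
print(f"mean={scores.mean():.3f}, std={scores.std():.3f}")

# D'Agostino-Pearson normality test: a large p-value means we cannot
# reject the hypothesis that the scores are Gaussian-distributed.
stat, p_value = stats.normaltest(scores)
print(f"normality test p-value: {p_value:.3f}")
```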
1 code implementation • 7 Nov 2022 • Youcheng Huang, Wenqiang Lei, Jie Fu, Jiancheng Lv
Combining large-scale pre-trained models with prototypical networks is a de facto paradigm in few-shot named entity recognition.
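A minimal sketch of the prototypical-network step in that paradigm (illustrative, not the paper's method): class prototypes are mean token embeddings from the support set, and query tokens take the label of the nearest prototype. Token embeddings would normally come from a large pre-trained encoder; random vectors stand in here so the snippet is self-contained.

```python
# Sketch: nearest-prototype token classification for few-shot NER.
import torch

torch.manual_seed(0)
dim, n_classes = 32, 3                       # e.g. O / PER / LOC (assumed)

# Support set: token embeddings with gold entity labels (10 tokens per class).
support_emb = torch.randn(30, dim)
support_labels = torch.arange(30) % n_classes

# Class prototypes = mean embedding of each class's support tokens.
prototypes = torch.stack([
    support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
])

# Query tokens get the label of the nearest prototype (Euclidean distance).
query_emb = torch.randn(5, dim)
distances = torch.cdist(query_emb, prototypes)   # (5, n_classes)
predicted = distances.argmin(dim=1)
print(predicted.tolist())
```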
1 code implementation • ACL 2021 • Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, Tat-Seng Chua
In this work, we extract samples from real financial reports to build TAT-QA, a new large-scale QA dataset containing both Tabular And Textual data. Answering its questions usually requires numerical reasoning such as addition, subtraction, multiplication, division, counting, comparison/sorting, and compositions of these operations.
Ranked #1 on Question Answering on TAT-QA
no code implementations • 27 Apr 2020 • Youcheng Huang, Tangchen Wei, Jundong Zhou, Chunxin Yang
In this paper, we study how to resolve these conflicts in generative models based on the conditional variational autoencoder (CVAE).
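For reference, a minimal generic CVAE sketch (not the paper's model): both the encoder and the decoder are conditioned on a label c, and training minimizes a reconstruction term plus a KL term. All layer sizes are arbitrary assumptions.

```python
# Sketch: a minimal conditional VAE forward pass and ELBO-style loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=20, c_dim=4, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))                   # q(z | x, c)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar    # p(x | z, c)

def elbo_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy usage: random data standing in for real inputs and condition labels.
x = torch.randn(16, 20)
c = F.one_hot(torch.randint(0, 4, (16,)), num_classes=4).float()
model = CVAE()
x_hat, mu, logvar = model(x, c)
print(elbo_loss(x_hat, x, mu, logvar).item())
```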