1 code implementation • 28 Oct 2024 • Jiajie Zhang, Zhongni Hou, Xin Lv, Shulin Cao, Zhenyu Hou, Yilin Niu, Lei Hou, Yuxiao Dong, Ling Feng, Juanzi Li
Although significant advances have been made in developing long-context large language models (LLMs), the compromised quality of LLM-synthesized data for supervised fine-tuning (SFT) often degrades the long-context performance of the fine-tuned models and imposes inherent limitations.
1 code implementation • 4 Sep 2024 • Jiajie Zhang, Yushi Bai, Xin Lv, Wanjun Gu, Danqing Liu, Minhao Zou, Shulin Cao, Lei Hou, Yuxiao Dong, Ling Feng, Juanzi Li
Although current long-context large language models (LLMs) have demonstrated impressive capabilities in answering user questions over extensive text, the lack of citations in their responses makes user verification difficult and, given their potential to hallucinate, raises concerns about their trustworthiness.
1 code implementation • 2 Feb 2024 • Jiajie Zhang, Shulin Cao, Linmei Hu, Ling Feng, Lei Hou, Juanzi Li
Secondly, KB-Plugin utilizes abundant annotated data from a rich-resourced KB to train another pluggable module, namely the program-induction (PI) plugin, which helps the LLM extract question-relevant schema information from the schema plugin of any KB and use that information to induce programs over that KB.
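A minimal conceptual sketch of that inference flow is below. All names here (`SchemaPlugin`, `PIPlugin`, `induce_program`, `llm.generate`) are hypothetical illustrations, not the paper's actual API, and the keyword-overlap step stands in for what is a trained module in the paper:

```python
from dataclasses import dataclass

@dataclass
class SchemaPlugin:
    """Pluggable module encoding one KB's schema; here just a relation list."""
    kb_name: str
    relations: list

class PIPlugin:
    """Program-induction plugin, trained once on a rich-resourced KB."""
    def induce_program(self, llm, question: str, schema: SchemaPlugin) -> str:
        # 1) Pull question-relevant schema items out of the schema plugin
        #    (a learned module in the paper; simple keyword overlap here).
        relevant = [r for r in schema.relations
                    if any(tok in question.lower() for tok in r.lower().split("_"))]
        # 2) Prompt the frozen LLM to compose a program over those items.
        prompt = f"Question: {question}\nRelevant schema: {relevant}\nProgram:"
        return llm.generate(prompt)

# The same PI plugin is reused with the schema plugin of *any* target KB:
# program = PIPlugin().induce_program(llm, "Who directed Inception?",
#                                     SchemaPlugin("movies", ["directed_by", "cast"]))
```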
no code implementations • 23 Nov 2023 • Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan
Specifically, the method treats fragmented student models, partitioned from the complete student model, as lower-grade models.
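One plausible reading of this setup, sketched below, takes each prefix of the student's block stack, given its own classifier head, as a lower-grade model trained alongside the complete student; the class name and the prefix-based fragmenting rule are assumptions for illustration, not the paper's design:

```python
import torch.nn as nn

class FragmentedStudent(nn.Module):
    """Complete student whose block-stack prefixes act as lower-grade models."""
    def __init__(self, dim=64, n_blocks=4, n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks)]
        )
        # One head per fragment depth (fragments use 1..n_blocks blocks).
        self.heads = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_blocks)]
        )

    def forward(self, x):
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))   # output of the fragment ending here
        return logits                # logits[-1] is the complete student
```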
1 code implementation • 20 Jan 2023 • Nicholas Chong Jia Le, Ling Feng
In this study, we find that such $1/f$ noise also arises in deep neural networks trained on natural language, resembling that of their biological counterparts.
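A minimal sketch of the kind of test this implies: estimate the power spectral density of an activation time series and fit its log-log slope, where a spectral exponent near 1 indicates $1/f$ noise. The trace below is synthetic pink noise standing in for a real activation trace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a unit's activation time series over tokens/steps:
# shape white noise into ~1/f power in the frequency domain.
white = rng.standard_normal(4096)
freqs = np.fft.rfftfreq(white.size, d=1.0)
spectrum = np.fft.rfft(white)
spectrum[1:] /= np.sqrt(freqs[1:])
signal = np.fft.irfft(spectrum, n=white.size)

# Periodogram estimate of the power spectral density.
psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

# Fit log P(f) = -alpha * log f + c; alpha close to 1 indicates 1/f noise.
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
print(f"spectral exponent alpha ~= {-slope:.2f}")   # expect ~1
```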
no code implementations • 14 Jan 2023 • Sirui Hu, Ling Feng, Xiaohan Yang, Yongchao Chen
We propose Supervised Contrastive Federated Learning, in which devices share their learned class-wise feature spaces with one another and add a supervised contrastive learning loss as a regularization term to foster feature-space learning.
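A minimal sketch of what such a local client objective could look like, assuming L2-normalized features and hypothetical hyperparameters `lam` (regularization weight) and `tau` (temperature); this is a standard supervised contrastive loss, not the paper's code:

```python
import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """Supervised contrastive loss over a batch of L2-normalized features."""
    sim = features @ features.T / tau                        # pairwise similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, float("-inf"))                # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~eye        # same-class pairs
    # Mean log-probability of positives for each anchor that has any.
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor[pos.any(1)].mean()

def client_objective(logits, features, labels, lam: float = 0.5):
    """Local loss: cross-entropy plus the contrastive regularizer."""
    features = F.normalize(features, dim=1)
    return F.cross_entropy(logits, labels) + lam * supcon_loss(features, labels)
```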
no code implementations • 17 Jan 2022 • Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu, Xin Lv, Juanzi Li, Ling Feng
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
no code implementations • 20 Jul 2021 • Lin Zhang, Ling Feng, Kan Chen, Choy Heng Lai
Motivated by the edge-of-chaos principle behind the optimal performance of neural networks, we study the role of various hyperparameters in modern neural network training algorithms in terms of the order-chaos phase diagram.
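A toy illustration (not the paper's experiments) of placing a network on that phase diagram: propagate two nearly identical hidden states through a random tanh network and check whether their distance contracts (order) or expands (chaos); for this zero-bias mean-field setup the transition is known to sit near weight scale $\sigma_w = 1$:

```python
import numpy as np

def log_growth_rate(sigma_w: float, width: int = 512, depth: int = 30, seed: int = 0):
    """Average per-layer log growth rate of a small perturbation."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(width)
    h_eps = h + 1e-8 * rng.standard_normal(width)   # tiny perturbation
    d0 = np.linalg.norm(h - h_eps)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        h, h_eps = np.tanh(W @ h), np.tanh(W @ h_eps)
    # Negative rate = ordered phase, positive rate = chaotic phase.
    return np.log(np.linalg.norm(h - h_eps) / d0) / depth

for sigma in (0.5, 1.0, 2.0):   # ordered, near-critical, chaotic
    print(f"sigma_w={sigma}: rate={log_growth_rate(sigma):+.3f}")
```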
no code implementations • 16 Dec 2020 • Lei Cao, Huijun Zhang, Ling Feng
As social media is the most popular platform for self-expression, emotional release, and personal interaction, individuals may exhibit a number of symptoms of suicidal ideation there.
no code implementations • 12 Feb 2020 • Chi Zhang, Yong Sheng Soh, Ling Feng, Tianyi Zhou, Qianxiao Li
While current machine learning models have impressive performance over a wide range of applications, their large size and complexity render them unsuitable for tasks such as remote monitoring on edge devices with limited storage and computational power.
no code implementations • IJCNLP 2019 • Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, Xiaohao He
Although detection of suicidal ideation on social media has made great progress in recent years, posts expressed implicitly or contrary to the writer's real feelings remain an obstacle, preventing detectors from achieving more satisfactory performance.
no code implementations • 11 Sep 2019 • Ling Feng, Lin Zhang, Choy Heng Lai
It has long been suggested that the biological brain operates at some critical point between two different phases, possibly order and chaos.