Search Results for author: Ling Feng

Found 10 papers, 2 papers with code

KB-Plugin: A Plug-and-play Framework for Large Language Models to Induce Programs over Low-resourced Knowledge Bases

1 code implementation • 2 Feb 2024 • Jiajie Zhang, Shulin Cao, Linmei Hu, Ling Feng, Lei Hou, Juanzi Li

Second, KB-Plugin utilizes abundant annotated data from a rich-resourced KB to train another pluggable module, the program induction (PI) plugin, which helps the LLM extract question-relevant schema information from the schema plugin of any KB and use that information to induce programs over that KB.

Program Induction • Self-Supervised Learning

Education distillation: getting student models to learn in schools

no code implementations • 23 Nov 2023 • Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan

Specifically, the method treats fragmented student models, split off from the complete student model, as lower-grade models.

Incremental Learning • Knowledge Distillation • +1
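For context, grade-to-grade teaching schemes like this one build on the standard knowledge-distillation objective: a hard-label cross-entropy combined with a temperature-softened KL term against the teacher. A minimal NumPy sketch of that standard objective (illustrative, not the paper's code; `T` and `alpha` are the conventional temperature and mixing hyperparameters):

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD loss (Hinton et al. style):
    alpha * CE(student, labels) + (1 - alpha) * T^2 * KL(teacher_T || student_T)."""
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T) + 1e-12)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - log_p_s), axis=1))
    ce = -np.mean(np.log(
        softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12))
    return alpha * ce + (1 - alpha) * T**2 * kl
```

A student whose logits match the teacher's incurs zero KL penalty, so only the hard-label term remains.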

Self-Organization Towards $1/f$ Noise in Deep Neural Networks

1 code implementation • 20 Jan 2023 • Nicholas Chong Jia Le, Ling Feng

In this study, we find that such $1/f$ noise is also found in deep neural networks trained on natural language, resembling that of their biological counterparts.

EEG • Time Series
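A 1/f signature is typically checked by estimating the log–log slope of a signal's power spectrum: a slope near −1 indicates 1/f (pink) noise, while a slope near 0 indicates white noise. A minimal sketch of such an estimator (my own illustration, not the paper's code):

```python
import numpy as np

def spectral_slope(x, fs=1.0):
    """Estimate the log-log slope of the power spectrum of a 1-D signal.
    Slope ~ -1 suggests 1/f (pink) noise; slope ~ 0 suggests white noise."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # remove DC component
    psd = np.abs(np.fft.rfft(x)) ** 2      # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = freqs > 0                       # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log10(freqs[mask]),
                          np.log10(psd[mask] + 1e-12), 1)
    return slope
```

In practice one would average the periodogram over windows (Welch's method) to reduce variance before fitting.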

FedSSC: Shared Supervised-Contrastive Federated Learning

no code implementations • 14 Jan 2023 • Sirui Hu, Ling Feng, Xiaohan Yang, Yongchao Chen

We propose Supervised-Contrastive Federated Learning, in which devices share their learned class-wise feature spaces with each other and add a supervised-contrastive loss as a regularization term to foster feature-space learning.

Contrastive Learning • Federated Learning
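The regularization term described above presumably follows the supervised contrastive (SupCon) formulation of Khosla et al., which pulls same-class embeddings together and pushes different-class embeddings apart. A minimal NumPy sketch of that loss (my own illustration, not FedSSC's implementation):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.
    features: (n, d) array; labels: (n,) integer class labels."""
    labels = np.asarray(labels)
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    logits_mask = 1.0 - np.eye(n)                    # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)       # numerical stability
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]).astype(float) * logits_mask
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                           # anchors with >= 1 positive
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In a federated setting this term would be added to each client's local objective as a regularizer alongside the usual classification loss.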

Interactive Contrastive Learning for Self-supervised Entity Alignment

no code implementations • 17 Jan 2022 • Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu, Xin Lv, Juanzi Li, Ling Feng

Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.

Contrastive Learning • Entity Alignment • +1

Edge of chaos as a guiding principle for modern neural network training

no code implementations • 20 Jul 2021 • Lin Zhang, Ling Feng, Kan Chen, Choy Heng Lai

Motivated by the edge of chaos principle behind the optimal performance of neural networks, we study the role of various hyperparameters in modern neural network training algorithms in terms of the order-chaos phase diagram.
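The order–chaos boundary referred to here can be located with the standard mean-field criterion χ = σ_w² E[φ′(h)²], evaluated at the fixed point of the preactivation variance: χ < 1 is the ordered phase, χ > 1 the chaotic phase, and χ = 1 the edge of chaos. A minimal numerical sketch (my own illustration, assuming i.i.d. Gaussian weights and tanh activations; not the paper's code):

```python
import numpy as np

def chaos_coefficient(sigma_w, sigma_b=0.0, n_samples=20000, n_iters=100, seed=0):
    """Mean-field order/chaos indicator chi for a wide random tanh network
    with weight scale sigma_w and bias scale sigma_b.
    chi < 1: ordered phase; chi > 1: chaotic phase; chi = 1: edge of chaos."""
    rng = np.random.default_rng(seed)
    q = 1.0
    # iterate the layer-to-layer variance map to its fixed point q*
    for _ in range(n_iters):
        h = rng.normal(0.0, np.sqrt(q), n_samples)
        q = sigma_w**2 * np.mean(np.tanh(h)**2) + sigma_b**2
    h = rng.normal(0.0, np.sqrt(q), n_samples)
    phi_prime = 1.0 - np.tanh(h)**2      # derivative of tanh
    return sigma_w**2 * np.mean(phi_prime**2)
```

With zero bias this places the edge of chaos at σ_w = 1, matching the classic tanh phase diagram; hyperparameter choices can then be read off as moving the network toward or away from that line.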

Building and Using Personal Knowledge Graph to Improve Suicidal Ideation Detection on Social Media

no code implementations • 16 Dec 2020 • Lei Cao, Huijun Zhang, Ling Feng

On social media, the most popular platform for self-expression, emotional release, and personal interaction, individuals may exhibit a number of symptoms of suicidal ideation.

Collaborative Inference for Efficient Remote Monitoring

no code implementations • 12 Feb 2020 • Chi Zhang, Yong Sheng Soh, Ling Feng, Tianyi Zhou, Qianxiao Li

While current machine learning models have impressive performance over a wide range of applications, their large size and complexity render them unsuitable for tasks such as remote monitoring on edge devices with limited storage and computational power.

Collaborative Inference

Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention

no code implementations • IJCNLP 2019 • Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, Xiaohao He

Although detection of suicidal ideation on social media has made great progress in recent years, posts whose feelings are expressed implicitly or contrary to the writer's real state remain an obstacle that keeps detectors from achieving higher performance.

Word Embeddings

Optimal Machine Intelligence at the Edge of Chaos

no code implementations • 11 Sep 2019 • Ling Feng, Lin Zhang, Choy Heng Lai

It has long been suggested that the biological brain operates at some critical point between two different phases, possibly order and chaos.
