Search Results for author: Qika Lin

Found 7 papers, 4 papers with code

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

no code implementations • 15 Nov 2023 • Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu

Although Large Language Models (LLMs) demonstrate a remarkable ability to process and generate human-like text, they have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language (e.g., chemical molecular formulas).

World Knowledge
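Chemical molecular formulas are a good example of the symbol-centric content this entry refers to: they follow a formal grammar rather than natural-language syntax. The toy Python sketch below (an illustration of that gap, not code from the paper) parses such a formula exactly, which is trivial for a symbolic interface but not guaranteed for a purely text-trained model.

```python
import re

def parse_formula(formula: str) -> dict:
    """Count atoms in a flat molecular formula such as 'C6H12O6'.

    A deliberately minimal parser: an element symbol (one uppercase letter,
    optional lowercase letter) followed by an optional count. It ignores
    parentheses, charges, and isotopes -- just enough to show the string
    obeys a symbolic grammar rather than natural-language syntax.
    """
    counts: dict = {}
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + int(count or 1)
    return counts

print(parse_formula("C6H12O6"))  # -> {'C': 6, 'H': 12, 'O': 6}
```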

A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics

1 code implementation • 9 Oct 2023 • Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, Erik Cambria

This shift encompasses a move from discriminative AI approaches to generative AI approaches, as well as from model-centered methodologies to data-centered methodologies.

Ethics, Fairness
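The discriminative-to-generative shift the survey describes is essentially a change of interface: a discriminative model maps clinical text to a label, while a generative model produces new text. A minimal sketch of that contrast follows; the classes and keyword rules are invented here for illustration, not taken from the survey.

```python
from abc import ABC, abstractmethod

class DiscriminativeModel(ABC):
    """Discriminative interface: map an input to a label, i.e. model p(y | x)."""
    @abstractmethod
    def classify(self, text: str) -> str: ...

class GenerativeModel(ABC):
    """Generative interface: produce new content conditioned on a prompt."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class KeywordTriage(DiscriminativeModel):
    """Toy stand-in for a discriminative triage classifier."""
    def classify(self, text: str) -> str:
        return "urgent" if "chest pain" in text.lower() else "routine"

class TemplateDrafter(GenerativeModel):
    """Toy stand-in for a generative note-drafting model."""
    def generate(self, prompt: str) -> str:
        return f"Assessment: {prompt}. Plan: follow up in two weeks."

print(KeywordTriage().classify("Patient reports chest pain at rest"))  # urgent
print(TemplateDrafter().generate("55-year-old with stable angina"))
```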

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond

1 code implementation • 16 Jun 2023 • Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, Erik Cambria

First, to offer systematic evaluations, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive, and mixed-form reasoning settings.

Benchmarking, Evidence Selection, +2
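A natural way to realize the four-way organization described in this entry is a registry keyed by reasoning setting. The sketch below assumes that structure; the dataset names are illustrative placeholders, not the paper's actual fifteen-dataset selection.

```python
# Hypothetical registry mirroring the paper's four reasoning settings.
# Dataset names are placeholders, not the paper's actual selection.
REASONING_SETTINGS = {
    "deductive": ["DatasetA", "DatasetB"],
    "inductive": ["DatasetC"],
    "abductive": ["DatasetD"],
    "mixed": ["DatasetE", "DatasetF"],
}

def datasets_for(setting: str) -> list:
    """Return the datasets registered under one reasoning setting."""
    if setting not in REASONING_SETTINGS:
        raise ValueError(
            f"unknown setting {setting!r}; expected one of {sorted(REASONING_SETTINGS)}"
        )
    return REASONING_SETTINGS[setting]

print(datasets_for("abductive"))  # -> ['DatasetD']
```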

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

1 code implementation • 2 May 2022 • Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan, Lingling Zhang

First, we introduce different extraction strategies to split the text into two sets of logical units, from which we construct the logical graph and the syntax graph, respectively.

Logical Reasoning, Machine Reading Comprehension, +1
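The two-branch design in this entry starts from two adjacency structures over the same logical units. The sketch below is a crude approximation, assuming punctuation-delimited clauses as units, shared content words as a proxy for logical edges, and textual adjacency for syntax edges; the paper's actual extraction strategies are more involved.

```python
import re
import numpy as np

def split_units(text):
    """Crude stand-in for the paper's extraction strategies:
    treat clauses delimited by punctuation as logical units."""
    return [u.strip() for u in re.split(r"[,.;]", text) if u.strip()]

def build_graphs(units):
    """Return (logical_adj, syntax_adj) adjacency matrices over the units.

    Logical edges link units sharing a word (a rough proxy for logic-aware
    construction); syntax edges link units adjacent in the text.
    """
    n = len(units)
    logical, syntax = np.zeros((n, n)), np.zeros((n, n))
    bags = [set(u.lower().split()) for u in units]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1:           # adjacent in the original text
                syntax[i, j] = syntax[j, i] = 1.0
            if bags[i] & bags[j]:    # share at least one word
                logical[i, j] = logical[j, i] = 1.0
    return logical, syntax

units = split_units("All metals conduct; iron is a metal; iron conducts.")
logical, syntax = build_graphs(units)
print(units)
print(logical)
print(syntax)
```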

Learning First-Order Rules with Relational Path Contrast for Inductive Relation Reasoning

no code implementations • 17 Oct 2021 • Yudai Pan, Jun Liu, Lingling Zhang, Xin Hu, Tianzhe Zhao, Qika Lin

Relation reasoning in knowledge graphs (KGs) aims to predict missing relations in incomplete triples. The dominant paradigm learns embeddings of relations and entities, but it is confined to the transductive setting and cannot handle unseen entities in an inductive setting.

Knowledge Graphs, Relation
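The inductive escape hatch this entry points to is that first-order rules are built from relation sequences, which mention no specific entities. The sketch below (toy KG and helper function invented here, not the paper's implementation) enumerates such relational paths between two entities; a rule body built from a relation sequence transfers to unseen entities, unlike a learned entity embedding.

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples; names are illustrative only.
TRIPLES = [
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "paris"),
    ("alice", "lives_in", "paris"),
]

def relational_paths(triples, head, tail, max_len=3):
    """Enumerate relation sequences connecting head to tail.

    A sequence such as ['works_at', 'located_in'] mentions no entities,
    so a rule body built from it applies to unseen entities -- the core
    idea behind inductive, rule-based relation reasoning.
    """
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((r, t))
    paths = []

    def dfs(node, rels, visited):
        if node == tail and rels:
            paths.append(list(rels))
            return
        if len(rels) >= max_len:
            return
        for r, nxt in adj[node]:
            if nxt not in visited:
                visited.add(nxt)
                rels.append(r)
                dfs(nxt, rels, visited)
                rels.pop()
                visited.remove(nxt)

    dfs(head, [], {head})
    return paths

# Paths supporting the rule lives_in(X, Y) <- works_at(X, Z), located_in(Z, Y)
print(relational_paths(TRIPLES, "alice", "paris"))
```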
