Search Results for author: Junhao Liu

Found 13 papers, 7 papers with code

Single-Cell Omics Arena: A Benchmark Study for Large Language Models on Cell Type Annotation Using Single-Cell Data

no code implementations • 3 Dec 2024 • Junhao Liu, Siwei Xu, Lei Zhang, Jing Zhang

To thoroughly evaluate the capability of modern instruction-tuned LLMs in automating the cell type identification process, we introduce SOAR, a comprehensive benchmarking study of LLMs for cell type annotation tasks in single-cell genomics.

Benchmarking

ConLUX: Concept-Based Local Unified Explanations

no code implementations • 16 Oct 2024 • Junhao Liu, Haonan Yu, Xin Zhang

With the rapid advancements of various machine learning models, there is a significant demand for model-agnostic explanation techniques, which can explain these models across different architectures.

Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs

1 code implementation • 26 Jun 2024 • Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Min Yang

We applied the HCP strategy in experiments with six Repo-Code LLMs, and the results demonstrate that our proposed method can significantly enhance completion accuracy while substantially reducing the length of the input.

Code Completion

One-Shot Learning as Instruction Data Prospector for Large Language Models

1 code implementation • 16 Dec 2023 • Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Ling-Hao Chen, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li

Contemporary practices in instruction tuning often hinge on scaling up data without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance.

One-Shot Learning

Marathon: A Race Through the Realm of Long Context with Large Language Models

1 code implementation • 15 Dec 2023 • Lei Zhang, Yunshui Li, Ziqiang Liu, Jiaxi Yang, Junhao Liu, Longze Chen, Run Luo, Min Yang

With the advancement of large language models (LLMs) and the expansion of their context windows, existing long-context benchmarks fall short in effectively evaluating the models' comprehension and reasoning abilities in extended texts.

Long-Context Understanding • Multiple-choice

Self-Distillation with Meta Learning for Knowledge Graph Completion

1 code implementation • Findings of the Association for Computational Linguistics: EMNLP 2022 • Yunshui Li, Junhao Liu, Chengming Li, Min Yang

In this paper, we propose a self-distillation framework with meta learning (MetaSD) for knowledge graph completion with dynamic pruning, which aims to learn compressed graph embeddings and tackle long-tail samples.

Knowledge Graph Completion • Meta-Learning • +1
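
For readers unfamiliar with the self-distillation idea named in this abstract, the sketch below shows a generic teacher-student setup over knowledge-graph triple scores. It is not MetaSD itself (the meta-learning and dynamic-pruning components are omitted), and every class, function, and hyperparameter name in it is hypothetical.

```python
import torch
import torch.nn.functional as F

class TransEScorer(torch.nn.Module):
    """Minimal TransE-style triple scorer, used only to stand in for a
    knowledge-graph embedding model (a large teacher or a compressed student)."""

    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.ent = torch.nn.Embedding(n_entities, dim)
        self.rel = torch.nn.Embedding(n_relations, dim)

    def forward(self, heads, rels, tails):
        # Higher score = more plausible triple (negative translation distance).
        return -(self.ent(heads) + self.rel(rels) - self.ent(tails)).norm(p=1, dim=-1)

def distill_scores(teacher, student, heads, rels, tails):
    """Push the compressed student's triple scores toward the teacher's."""
    with torch.no_grad():
        target = teacher(heads, rels, tails)
    return F.mse_loss(student(heads, rels, tails), target)

# Hypothetical usage: a 200-dim teacher distilled into a 50-dim student.
teacher = TransEScorer(n_entities=10_000, n_relations=200, dim=200)
student = TransEScorer(n_entities=10_000, n_relations=200, dim=50)
heads = torch.randint(0, 10_000, (32,))
rels = torch.randint(0, 200, (32,))
tails = torch.randint(0, 10_000, (32,))
loss = distill_scores(teacher, student, heads, rels, tails)
```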

ReX: A Framework for Incorporating Temporal Information in Model-Agnostic Local Explanation Techniques

no code implementations • 8 Sep 2022 • Junhao Liu, Xin Zhang

To address this limitation, we propose ReX, a general framework for adapting various explanation techniques to models that process variable-length inputs, expanding explanation coverage to data points of different lengths.

Anomaly Detection • Sentiment Analysis

Free Lunch for Co-Saliency Detection: Context Adjustment

no code implementations • 4 Aug 2021 • Lingdong Kong, Prakhar Ganesh, Tan Wang, Junhao Liu, Le Zhang, Yao Chen

We hope that the scale, diversity, and quality of our dataset can benefit researchers in this area and beyond.

counterfactual • Saliency Detection • +1

Multi-perspective Coherent Reasoning for Helpfulness Prediction of Multimodal Reviews

1 code implementation • ACL 2021 • Junhao Liu, Zhen Hai, Min Yang, Lidong Bing

In addition, we devise an intra-review coherent reasoning module to identify the coherence between the text content and images of a review, which provides strong evidence for review helpfulness prediction.

DR 21 South Filament: a Parsec-sized Dense Gas Accretion Flow onto the DR 21 Massive Young Cluster

no code implementations • 4 Dec 2020 • Bo Hu, Keping Qiu, Yue Cao, Junhao Liu, Yuwei Wang, Guangxing Li, Zhiqiang Shen, Juan Li, Junzhi Wang, Bin Li, Jian Dong

The DR 21 south filament (DR21SF) is a unique component of the giant network of filamentary molecular clouds in the northern region of the Cygnus X complex.

Astrophysics of Galaxies

Dual Dynamic Memory Network for End-to-End Multi-turn Task-oriented Dialog Systems

1 code implementation • COLING 2020 • Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, Ruifeng Xu, Min Yang

To overcome these limitations, we propose a Dual Dynamic Memory Network (DDMN) for multi-turn dialog generation, which maintains two core components: a dialog memory manager and a KB memory manager.

Cross-lingual Machine Reading Comprehension with Language Branch Knowledge Distillation

no code implementations • COLING 2020 • Junhao Liu, Linjun Shou, Jian Pei, Ming Gong, Min Yang, Daxin Jiang

Then, we devise a multilingual distillation approach to amalgamate knowledge from multiple language branch models into a single model for all target languages.

Knowledge Distillation • Machine Reading Comprehension • +1
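
As a rough illustration of the multilingual distillation described above: the generic multi-teacher objective below blends a hard-label loss with soft targets averaged over several language-branch teachers. It is a sketch under standard knowledge-distillation assumptions, not the paper's exact formulation; the function name and the `temperature`/`alpha` knobs are hypothetical.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list,
                                    labels, temperature=2.0, alpha=0.5):
    """Combine supervised cross-entropy with KL terms toward several teachers."""
    # Supervised loss on gold labels.
    ce = F.cross_entropy(student_logits, labels)

    # Average KL divergence to each (language-branch) teacher's soft targets.
    kd = 0.0
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    for teacher_logits in teacher_logits_list:
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        kd = kd + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd = kd / len(teacher_logits_list) * (temperature ** 2)

    return alpha * ce + (1.0 - alpha) * kd

# Hypothetical usage with three language-branch teachers on a 4-way task.
student_logits = torch.randn(8, 4)
teachers = [torch.randn(8, 4) for _ in range(3)]
labels = torch.randint(0, 4, (8,))
loss = multi_teacher_distillation_loss(student_logits, teachers, labels)
```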

Improving Knowledge-aware Dialogue Generation via Knowledge Base Question Answering

1 code implementation • 16 Dec 2019 • Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, Ruifeng Xu, Min Yang

In this paper, we propose a novel knowledge-aware dialogue generation model (called TransDG), which transfers question representation and knowledge matching abilities from the knowledge base question answering (KBQA) task to facilitate utterance understanding and factual knowledge selection for dialogue generation.

Dialogue Generation Knowledge Base Question Answering +1