Search Results for author: Yanzeng Li

Found 14 papers, 5 papers with code

Enhancing Chinese Pre-trained Language Model via Heterogeneous Linguistics Graph

3 code implementations · ACL 2022 · Yanzeng Li, Jiangxia Cao, Xin Cong, Zhenyu Zhang, Bowen Yu, Hongsong Zhu, Tingwen Liu

Chinese pre-trained language models usually exploit contextual character information to learn representations, while ignoring linguistic knowledge, e.g., word and sentence information.

Language Modeling, Language Modelling +1
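
The entry only hints at the mechanism, so here is a toy sketch of the underlying idea, assuming a simple containment structure (the paper's actual heterogeneous graph and how it is fused into the language model are more involved): characters, words, and the sentence become typed nodes connected by containment edges.

```python
from collections import defaultdict

def build_hlg(sentence: str, segmented_words: list[str]) -> dict:
    """Build a toy heterogeneous linguistics graph linking
    character, word, and sentence nodes by containment edges."""
    graph = defaultdict(list)  # node -> list of (edge_type, node)
    sent_node = ("sentence", sentence)
    char_idx = 0
    for word in segmented_words:
        word_node = ("word", word)
        graph[sent_node].append(("contains", word_node))
        for ch in word:
            char_node = ("char", char_idx, ch)  # index keeps repeated chars distinct
            graph[word_node].append(("contains", char_node))
            char_idx += 1
    return dict(graph)

# Example: a pre-segmented Chinese sentence.
hlg = build_hlg("北京大学在北京", ["北京大学", "在", "北京"])
for node, edges in hlg.items():
    print(node, "->", edges)
```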

DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure

no code implementations · 15 Oct 2024 · Yunfan Xiong, Ruoyu Zhang, Yanzeng Li, Tianhao Wu, Lei Zou

Under a low-temperature setting, DySpec improves throughput by up to 9.1$\times$ and reduces latency by up to 9.4$\times$ on Llama2-70B.
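
For context, below is a minimal sketch of the standard speculative-sampling acceptance test that tree-based methods build on, assuming per-token probabilities from a draft and a target model are available; DySpec's actual contribution, the dynamically grown token tree, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_draft(draft_tokens, p_draft, p_target):
    """Vanilla speculative-sampling acceptance over a linear draft
    (DySpec generalizes this to a dynamic token tree).
    p_draft/p_target: probabilities each model assigned to the
    drafted token at each position."""
    accepted = []
    for tok, q, p in zip(draft_tokens, p_draft, p_target):
        if rng.random() < min(1.0, p / q):  # accept with prob min(1, p/q)
            accepted.append(tok)
        else:
            break  # first rejection ends the accepted prefix
    return accepted

# Toy example: 4 drafted tokens with made-up model probabilities.
print(accept_draft([5, 17, 8, 42],
                   p_draft=[0.9, 0.5, 0.7, 0.4],
                   p_target=[0.8, 0.6, 0.2, 0.5]))
```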

MedDiT: A Knowledge-Controlled Diffusion Transformer Framework for Dynamic Medical Image Generation in Virtual Simulated Patient

no code implementations · 22 Aug 2024 · Yanzeng Li, Cheng Zeng, Jinchao Zhang, Jie Zhou, Lei Zou

Additionally, a well-tuned Diffusion Transformer (DiT) model is incorporated to generate medical images according to the specified patient attributes in the KG.

Hallucination, Image Generation +3
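
A minimal sketch of the control flow that sentence implies, with a hypothetical attribute schema and a placeholder `generate_image` call standing in for the paper's tuned DiT model:

```python
def kg_attributes_to_prompt(patient: dict) -> str:
    """Flatten KG-style patient attributes into a text prompt
    for a conditional image generator (hypothetical schema)."""
    parts = [f"{k}: {v}" for k, v in patient.items()]
    return "chest X-ray, " + ", ".join(parts)

patient = {"age": 67, "sex": "male", "finding": "left lower lobe pneumonia"}
prompt = kg_attributes_to_prompt(patient)
# generate_image is a placeholder for the paper's tuned DiT; not a real API.
# image = generate_image(prompt)
print(prompt)
```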

MMPKUBase: A Comprehensive and High-quality Chinese Multi-modal Knowledge Graph

no code implementations · 3 Aug 2024 · Xuan Yi, Yanzeng Li, Lei Zou

Multi-modal knowledge graphs have emerged as a powerful approach for information representation, combining data from different modalities such as text, images, and videos.

Attribute, Contrastive Learning +5

Leveraging Large Language Model as Simulated Patients for Clinical Education

no code implementations · 13 Apr 2024 · Yanzeng Li, Cheng Zeng, Jialun Zhong, Ruoyu Zhang, Minhao Zhang, Lei Zou

Simulated Patients (SPs) play a crucial role in clinical medical education by providing realistic scenarios for student practice.

Language Modeling, Language Modelling +1

LLMaAA: Making Large Language Models as Active Annotators

1 code implementation · 30 Oct 2023 · Ruoyu Zhang, Yanzeng Li, Yongliang Ma, Ming Zhou, Lei Zou

Recently, the superior few-shot performance of large language models (LLMs) has propelled the development of dataset generation, where the training data are solely synthesized from LLMs.

Active Learning, Dataset Generation +3
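
A minimal sketch of the general pool-based active-annotation recipe the title suggests, using an entropy acquisition function and a stub in place of the LLM call; the paper's actual acquisition strategies and demonstration construction are not claimed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncertainty(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy of the task model, per unlabeled example."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def active_annotation_round(pool, task_model_probs, llm_annotate, k=8):
    """One round of pool-based active annotation: pick the k most
    uncertain examples and have the LLM label them."""
    picked = np.argsort(-uncertainty(task_model_probs))[:k]
    return [(pool[i], llm_annotate(pool[i])) for i in picked]

# Toy run with a fake pool and a stub standing in for a real LLM API call.
pool = [f"example {i}" for i in range(100)]
probs = rng.dirichlet(np.ones(3), size=100)  # fake task-model confidences
fake_llm = lambda x: "LABEL_A"
labeled = active_annotation_round(pool, probs, fake_llm)
print(labeled[:3])
```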

ADMUS: A Progressive Question Answering Framework Adaptable to Multiple Knowledge Sources

no code implementations · 9 Aug 2023 · Yirui Zhan, Yanzeng Li, Minhao Zhang, Lei Zou

With the introduction of deep learning models, semantic parsing-based knowledge base question answering (KBQA) systems have achieved high performance in handling complex questions.

Knowledge Base Question Answering

VGStore: A Multimodal Extension to SPARQL for Querying RDF Scene Graph

1 code implementation · 7 Sep 2022 · Yanzeng Li, Zilong Zheng, Wenjuan Han, Lei Zou

Semantic Web technology has successfully facilitated many RDF models with rich data representation methods.

Relational Reasoning, Semantic Similarity +1
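
As a rough illustration, the relational part of a scene-graph query can already be written in standard SPARQL; the sketch below uses rdflib with a made-up namespace, and the multimodal operators that VGStore adds on top (e.g., visual-similarity filters over image regions) are only mentioned in a comment, not implemented.

```python
from rdflib import Graph, Literal, Namespace

# Build a tiny RDF scene graph in memory: two objects and a spatial relation.
SG = Namespace("http://example.org/scene#")  # hypothetical namespace
g = Graph()
g.add((SG.obj1, SG.label, Literal("dog")))
g.add((SG.obj2, SG.label, Literal("frisbee")))
g.add((SG.obj1, SG.chasing, SG.obj2))

# Standard SPARQL covers the relational part; VGStore's extension (not
# shown) adds multimodal primitives such as visual-similarity filters.
q = """
PREFIX sg: <http://example.org/scene#>
SELECT ?a ?b WHERE {
    ?x sg:chasing ?y .
    ?x sg:label ?a .
    ?y sg:label ?b .
}
"""
for row in g.query(q):
    print(row.a, "chasing", row.b)
```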

gBuilder: A Scalable Knowledge Graph Construction System for Unstructured Corpus

no code implementations · 20 Aug 2022 · Yanzeng Li, Lei Zou

Furthermore, we design a cloud-based self-adaptive task scheduling strategy for gBuilder to ensure its scalability on large-scale knowledge graph construction.

Graph Construction, Scheduling

RZCR: Zero-shot Character Recognition via Radical-based Reasoning

no code implementations · 12 Jul 2022 · Xiaolei Diao, Daqian Shi, Hao Tang, Qiang Shen, Yanzeng Li, Lei Wu, Hao Xu

The long-tail effect is a common issue that limits the performance of deep learning models on real-world datasets.

Crake: Causal-Enhanced Table-Filler for Question Answering over Large Scale Knowledge Base

1 code implementation · Findings (NAACL) 2022 · Minhao Zhang, Ruoyu Zhang, Yanzeng Li, Lei Zou

Semantic parsing solves knowledge base (KB) question answering (KBQA) by composing a KB query, which generally involves node extraction (NE) and graph composition (GC) to detect and connect related nodes in a query.

Question Answering, Relation Extraction +1
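
As a toy illustration of that NE-then-GC decomposition (not the paper's causal-enhanced table-filler), the sketch below spots entities with a hypothetical lexicon and chains them into placeholder query-graph edges.

```python
def node_extraction(question: str, entity_lexicon: dict) -> list:
    """NE stage (toy): spot KB entities mentioned in the question."""
    return [ent for surface, ent in entity_lexicon.items() if surface in question]

def graph_composition(nodes: list) -> list:
    """GC stage (toy): connect nodes into query-graph edges. A real
    system predicts relations; here we chain nodes with placeholders."""
    return [(nodes[i], f"?rel{i}", nodes[i + 1]) for i in range(len(nodes) - 1)]

lexicon = {"Einstein": "wd:Q937", "Nobel Prize": "wd:Q7191"}  # hypothetical IDs
nodes = node_extraction("Which Nobel Prize did Einstein win?", lexicon)
print(graph_composition(nodes + ["?answer"]))
```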

Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention

1 code implementation · ACL 2020 · Yanzeng Li, Bowen Yu, Mengge Xue, Tingwen Liu

Most Chinese pre-trained models take the character as the basic unit and learn representations according to the character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful utterance in Chinese.
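
A small numeric sketch of the alignment idea, assuming word spans come from an external segmenter: the attention weights of characters inside one word are pooled so the word behaves as a unit (the paper fuses this into multi-head attention with mixed pooling; only the core alignment step is shown here).

```python
import numpy as np

def word_aligned_pool(char_attn: np.ndarray, word_spans: list) -> np.ndarray:
    """Toy word-aligned attention: average character-level attention
    weights within each word span, so all characters of a word share
    one attention value, then renormalize to a distribution."""
    out = char_attn.copy()
    for start, end in word_spans:            # end is exclusive
        out[start:end] = char_attn[start:end].mean()
    return out / out.sum()

# "北京大学 / 在 / 北京" segmented into three words: spans over 7 characters.
attn = np.array([0.30, 0.05, 0.10, 0.15, 0.10, 0.20, 0.10])
spans = [(0, 4), (4, 5), (5, 7)]
print(word_aligned_pool(attn, spans))
```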
