Search Results for author: Tong Mo

Found 20 papers, 7 papers with code

KiPT: Knowledge-injected Prompt Tuning for Event Detection

no code implementations · COLING 2022 · Haochen Li, Tong Mo, Hongcheng Fan, Jingkun Wang, Jiaxi Wang, Fuhao Zhang, Weiping Li

Then, knowledge-injected prompts are constructed using external knowledge bases, and a prompt tuning strategy is leveraged to optimize the prompts.

Event Detection
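
The snippet above says KiPT builds prompts that carry external knowledge into the model. As a purely illustrative sketch (the template, the toy knowledge base, and the function name below are invented for this listing, not KiPT's actual design), injecting a knowledge-base gloss into a cloze-style event-detection prompt could look like:

```python
def build_knowledge_prompt(sentence, candidate, kb):
    """Wrap a sentence in a cloze-style prompt and inject the candidate
    trigger word's knowledge-base gloss, so the model sees external
    knowledge alongside the input text."""
    gloss = kb.get(candidate, "an event-related word")
    return (f"{sentence} Here, '{candidate}' means {gloss}. "
            f"The word '{candidate}' triggers a [MASK] event.")

# Toy knowledge base mapping a trigger word to a short definition.
kb = {"fired": "to dismiss someone from a job"}
prompt = build_knowledge_prompt("The company fired its CEO.", "fired", kb)
```

In prompt tuning, the wording of such a template (or its soft-token equivalent) is what gets optimized while the backbone model stays frozen.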

GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors

no code implementations · 17 Jun 2025 · Hengyuan Zhang, Xinrong Chen, Yingmin Qiu, Xiao Liang, Ziyue Li, Guanyu Wang, Weiping Li, Tong Mo, Wenyue Li, Hayden Kwok-Hay So, Ngai Wong

Parameter-efficient fine-tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), offer an efficient way to adapt large language models with reduced computational costs.

Bilevel Optimization · Mixture-of-Experts +1
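
The abstract snippet refers to LoRA, which adapts a frozen weight matrix with a trainable low-rank correction. A minimal NumPy sketch of the standard LoRA forward pass (the generic technique, not this paper's GuiLoMo allocation method; shapes and the `alpha` value are illustrative):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Linear layer with a LoRA update: W @ x + (alpha/r) * B @ A @ x.

    W (d_out x d_in) stays frozen; only A (r x d_in) and B (d_out x r)
    would be trained, shrinking the trainable parameter count from
    d_out * d_in down to r * (d_in + d_out)."""
    r = A.shape[0]
    scaling = alpha / r                 # standard LoRA scaling factor
    return W @ x + scaling * (B @ (A @ x))

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))                # B initialized to zero, as in LoRA,
x = rng.normal(size=d_in)               # so the adapter starts as a no-op

y = lora_forward(x, W, A, B)            # equals W @ x while B is zero
```

With r = 4 here, the adapter trains 4 * (64 + 32) = 384 parameters instead of the 2048 in W; choosing r (and, in a mixture-of-experts setting, the number of experts) per layer is exactly the allocation problem this paper studies.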

Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information

1 code implementation · 29 May 2025 · Xu Chu, Xinrong Chen, Guanyu Wang, Zhijie Tan, Kui Huang, Wenyu Lv, Tong Mo, Weiping Li

Inference time scaling drives extended reasoning to enhance the performance of Vision-Language Models (VLMs), thus forming powerful Vision-Language Reasoning Models (VLRMs).

Hallucination

Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains

no code implementations · 24 Jan 2025 · Xu Chu, Zhijie Tan, Hanlin Xue, Guanyu Wang, Tong Mo, Weiping Li

However, current LLMs for high-stakes domain tasks, such as financial investment and legal QA, typically generate brief answers without reasoning processes and explanations.

Legal Reasoning

GraphSOS: Graph Sampling and Order Selection to Help LLMs Understand Graphs Better

no code implementations · 24 Jan 2025 · Xu Chu, Hanlin Xue, Zhijie Tan, Bingce Wang, Tong Mo, Weiping Li

The success of Large Language Models (LLMs) in various domains has led researchers to apply them to graph-related problems by converting graph data into natural language text.

Graph Question Answering · Graph Sampling +3
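
The conversion step the abstract mentions, turning graph data into natural-language text an LLM can read, can be as simple as serializing an adjacency list. A minimal sketch (an invented helper for illustration, not the paper's GraphSOS sampling-and-ordering pipeline):

```python
def graph_to_text(adjacency):
    """Serialize an undirected graph, given as an adjacency-list dict,
    into a plain natural-language description for an LLM prompt."""
    lines = [f"The graph has {len(adjacency)} nodes."]
    seen = set()
    for node, neighbors in adjacency.items():
        for nb in neighbors:
            edge = tuple(sorted((node, nb)))
            if edge not in seen:        # emit each undirected edge once
                seen.add(edge)
                lines.append(f"Node {edge[0]} is connected to node {edge[1]}.")
    return " ".join(lines)

g = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
text = graph_to_text(g)
# → "The graph has 3 nodes. Node A is connected to node B. Node A is connected to node C."
```

Note that the dict iteration order fixes the edge order in the output; which edges to include and in what order is precisely the sampling and order-selection question this paper addresses.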

Mitigating Hallucinations on Object Attributes using Multiview Images and Negative Instructions

no code implementations · 17 Jan 2025 · Zhijie Tan, Yuzhi Li, Shengwei Meng, Xiang Yuan, Weiping Li, Tong Mo, Bingce Wang, Xu Chu

Consequently, we have devised Multiview Image Augmented VLM (MIAVLM), incorporating a Multiview Attributes Perceiver (MAP) submodule capable of simultaneously eliminating the influence of input image order and aligning visual information from multiview images with Large Language Models (LLMs).

3D Generation

Order Matters: Exploring Order Sensitivity in Multimodal Large Language Models

no code implementations · 22 Oct 2024 · Zhijie Tan, Xu Chu, Weiping Li, Tong Mo

Furthermore, we demonstrate that popular MLLMs pay special attention to certain multimodal context positions, particularly the beginning and end.

In-Context Learning · Question Answering +1

Aeroengine performance prediction using a physical-embedded data-driven method

no code implementations · 29 Jun 2024 · Tong Mo, Shiran Dai, An Fu, Xiaomeng Zhu, Shuxiao Li

Accurate and efficient prediction of aeroengine performance is of paramount importance for engine design, maintenance, and optimization endeavours.

Computational Efficiency

Supportiveness-based Knowledge Rewriting for Retrieval-augmented Language Modeling

no code implementations · 12 Jun 2024 · Zile Qiao, Wei Ye, Yong Jiang, Tong Mo, Pengjun Xie, Weiping Li, Fei Huang, Shikun Zhang

Retrieval-augmented language models (RALMs) have recently shown great potential in mitigating the limitations of implicit knowledge in LLMs, such as untimely updating of the latest expertise and unreliable retention of long-tail knowledge.

Language Modeling · Language Modelling +1
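
The RALM setup the abstract builds on is the basic retrieve-then-read recipe: embed the query, fetch the most similar passages, and prepend them to the prompt. A self-contained toy sketch (2-d stand-in embeddings and invented helper names; it omits this paper's supportiveness-based rewriting entirely):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, corpus, k=2):
    """Rank passages by similarity to the query and keep the top k."""
    ranked = sorted(corpus, key=lambda p: cosine(query_vec, p["vec"]), reverse=True)
    return [p["text"] for p in ranked[:k]]

def build_prompt(question, passages):
    """Prepend retrieved passages to the question before generation."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Toy corpus; real systems use learned dense embeddings, not 2-d vectors.
corpus = [
    {"text": "The Eiffel Tower is in Paris.", "vec": [1.0, 0.0]},
    {"text": "Photosynthesis occurs in chloroplasts.", "vec": [0.0, 1.0]},
]
passages = retrieve([0.9, 0.1], corpus, k=1)
prompt = build_prompt("Where is the Eiffel Tower?", passages)
```

Because retrieved passages are pasted in verbatim here, unhelpful or noisy passages reach the generator unchanged, which is the failure mode that rewriting the retrieved knowledge aims to fix.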

Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations

1 code implementation · 27 Jun 2023 · Yinyi Wei, Shuaipeng Liu, Hailei Yan, Wei Ye, Tong Mo, Guanglu Wan

Specifically, for an utterance, we generate its future context with pre-trained language models, potentially containing extra beneficial knowledge in a conversational form homogeneous with the historical ones.

Emotion Recognition

Exploiting Hybrid Semantics of Relation Paths for Multi-hop Question Answering Over Knowledge Graphs

no code implementations · COLING 2022 · Zile Qiao, Wei Ye, Tong Zhang, Tong Mo, Weiping Li, Shikun Zhang

Answering natural language questions on knowledge graphs (KGQA) remains a great challenge in terms of understanding complex questions via multi-hop reasoning.

Answer Selection · Knowledge Graphs +3

Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer

1 code implementation · 14 Jan 2022 · Yinyi Wei, Tong Mo, Yongtao Jiang, Weiping Li, Wen Zhao

The distances between the embedding at the masked position of input and prototypical embeddings are used as classification criterion.

Contrastive Learning · Language Modeling +4
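
The snippet states the classification criterion directly: distance between the embedding at the masked position and per-label prototype embeddings. A toy sketch of that nearest-prototype rule (2-d stand-in vectors and Euclidean distance for illustration; the paper's encoder and training objective are not reproduced here):

```python
import numpy as np

def classify_by_prototype(masked_embedding, prototypes):
    """Assign the label whose prototype embedding is closest
    (Euclidean distance) to the [MASK]-position embedding."""
    labels = list(prototypes)
    dists = [np.linalg.norm(masked_embedding - prototypes[lbl]) for lbl in labels]
    return labels[int(np.argmin(dists))]

# Stand-in prototype embeddings, one per class label.
prototypes = {
    "positive": np.array([1.0, 0.0]),
    "negative": np.array([-1.0, 0.0]),
}
h_mask = np.array([0.8, 0.2])   # stand-in for the encoder output at [MASK]
label = classify_by_prototype(h_mask, prototypes)   # → "positive"
```

This replaces a hand-crafted verbalizer (a mapping from labels to concrete words) with learned prototype vectors, which is where the contrastive-learning tag comes in.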

Neural Architecture Search For Keyword Spotting

no code implementations · 1 Sep 2020 · Tong Mo, Yakun Yu, Mohammad Salameh, Di Niu, Shangling Jui

Deep neural networks have recently become a popular solution to keyword spotting systems, which enable the control of smart devices via voice.

 Ranked #1 on Keyword Spotting on Google Speech Commands (Google Speech Commands V1 6 metric)

Keyword Spotting · Neural Architecture Search

Review of Deep Learning

no code implementations · 5 Apr 2018 · Rong Zhang, Weiping Li, Tong Mo

In recent years, countries such as China and the United States, as well as high-tech companies such as Google, have increased their investment in artificial intelligence.

Deep Learning

An influence-based fast preceding questionnaire model for elderly assessments

no code implementations · 22 Nov 2017 · Tong Mo, Rong Zhang, Weiping Li, Jingbo Zhang, Zhonghai Wu, Wei Tan

Practice at an elderly-care company shows that the FPQM can reduce the number of attributes by 90.56% while achieving a prediction accuracy of 98.39%.
