Search Results for author: Tong Mo

Found 11 papers, 5 papers with code

Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer

1 code implementation • 14 Jan 2022 • Yinyi Wei, Tong Mo, Yongtao Jiang, Weiping Li, Wen Zhao

The distances between the embedding at the masked position of the input and the prototypical embeddings are used as the classification criterion.

Contrastive Learning Language Modelling +3
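The distance-based criterion described in the snippet above can be illustrated with a minimal PyTorch sketch. The function and tensor names below are placeholders, and the paper's actual distance measure and training objective may differ.

```python
import torch
import torch.nn.functional as F

def classify_by_prototype_distance(mask_embedding, prototype_embeddings):
    # mask_embedding:       (hidden_dim,) PLM output at the input's [MASK] position
    # prototype_embeddings: (num_classes, hidden_dim) learned class prototypes
    # Euclidean distance from the masked-position embedding to every prototype.
    distances = torch.cdist(mask_embedding.unsqueeze(0), prototype_embeddings).squeeze(0)
    # Smaller distance means a closer prototype; negated distances give class probabilities.
    probs = F.softmax(-distances, dim=-1)
    return distances.argmin().item(), probs
```

The predicted class is simply the one whose prototype lies nearest to the masked-position embedding; the softmax over negated distances is one common way to turn the same criterion into a differentiable training signal.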

Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations

1 code implementation • 27 Jun 2023 • Yinyi Wei, Shuaipeng Liu, Hailei Yan, Wei Ye, Tong Mo, Guanglu Wan

Specifically, for an utterance, we generate its future context with pre-trained language models; this pseudo future context potentially contains extra beneficial knowledge, in a conversational form homogeneous with the historical contexts.

Emotion Recognition
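Generating a pseudo future context as described above can be sketched with the Hugging Face transformers API. The choice of gpt2 as the generator, the prompt format, and the sampling settings are assumptions for illustration only, not the paper's configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # hypothetical generator choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def generate_pseudo_future(history, num_future_turns=2, max_new_tokens=60):
    # history: list of past utterance strings; the LM continues the conversation
    prompt = "\n".join(history) + "\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, then keep the first few lines as pseudo future turns.
    continuation = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return [line for line in continuation.split("\n") if line.strip()][:num_future_turns]
```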

Review of Deep Learning

no code implementations • 5 Apr 2018 • Rong Zhang, Weiping Li, Tong Mo

In recent years, countries such as China and the United States, as well as high-tech companies such as Google, have increased their investment in artificial intelligence.

An influence-based fast preceding questionnaire model for elderly assessments

no code implementations • 22 Nov 2017 • Tong Mo, Rong Zhang, Weiping Li, Jingbo Zhang, Zhonghai Wu, Wei Tan

Practice at an elderly-care company shows that the FPQM can reduce the number of attributes by 90.56% with a prediction accuracy of 98.39%.

Neural Architecture Search For Keyword Spotting

no code implementations • 1 Sep 2020 • Tong Mo, Yakun Yu, Mohammad Salameh, Di Niu, Shangling Jui

Deep neural networks have recently become a popular solution for keyword spotting systems, which enable the control of smart devices via voice.

 Ranked #1 on Keyword Spotting on Google Speech Commands (Google Speech Commands V1 6 metric)

Keyword Spotting Neural Architecture Search

Exploiting Hybrid Semantics of Relation Paths for Multi-hop Question Answering Over Knowledge Graphs

no code implementations • COLING 2022 • Zile Qiao, Wei Ye, Tong Zhang, Tong Mo, Weiping Li, Shikun Zhang

Answering natural language questions on knowledge graphs (KGQA) remains a great challenge in terms of understanding complex questions via multi-hop reasoning.

Answer Selection Knowledge Graphs +3

KiPT: Knowledge-injected Prompt Tuning for Event Detection

no code implementations • COLING 2022 • Haochen Li, Tong Mo, Hongcheng Fan, Jingkun Wang, Jiaxi Wang, Fuhao Zhang, Weiping Li

Then, knowledge-injected prompts are constructed using external knowledge bases, and a prompt tuning strategy is leveraged to optimize the prompts.

Event Detection
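The prompt tuning step mentioned above can be illustrated with a generic soft-prompt sketch in PyTorch: trainable prompt embeddings are prepended to a frozen encoder's token embeddings, and only the prompts are optimized. How KiPT actually constructs knowledge-injected prompts from external knowledge bases is not reproduced here; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    # Generic prompt-tuning sketch around a frozen Hugging Face encoder (e.g. BertModel).
    def __init__(self, encoder, prompt_length=10):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # freeze the PLM; only prompts are tuned
        hidden = encoder.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        token_emb = self.encoder.get_input_embeddings()(input_ids)   # (B, L, H)
        batch = token_emb.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)      # (B, P, H)
        inputs_embeds = torch.cat([prompt, token_emb], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.encoder(inputs_embeds=inputs_embeds,
                            attention_mask=attention_mask)
```

In a setup like this, only `model.prompt` is passed to the optimizer, which is what makes prompt tuning lightweight compared with full fine-tuning.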
