no code implementations • EMNLP 2021 • Mieradilijiang Maimaiti, Yang Liu, Yuanhang Zheng, Gang Chen, Kaiyu Huang, Ji Zhang, Huanbo Luan, Maosong Sun
Moreover, the robustness of previous neural methods is limited by their reliance on large-scale annotated data.
no code implementations • EMNLP 2021 • Yuanhang Zheng, Zhixing Tan, Meng Zhang, Mieradilijiang Maimaiti, Huanbo Luan, Maosong Sun, Qun Liu, Yang Liu
Quality estimation (QE) of machine translation (MT) aims to evaluate the quality of machine-translated sentences without references and is important in practical applications of MT.
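As a rough illustration of reference-free QE, the sketch below scores a (source, hypothesis) pair with a tiny regressor. The hash-based tokenizer and `ToyQEModel` are placeholders of my own, not the paper's model; real QE systems build on pretrained cross-lingual encoders.

```python
# A minimal sketch of reference-free QE as sentence-pair regression.
# The toy tokenizer and model below are hypothetical stand-ins.
import torch
import torch.nn as nn

class ToyQEModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def encode(self, ids):
        # Mean-pool token embeddings into one sentence vector.
        return self.emb(ids).mean(dim=1)

    def forward(self, src_ids, mt_ids):
        # Concatenate source and hypothesis vectors; regress a quality score.
        pair = torch.cat([self.encode(src_ids), self.encode(mt_ids)], dim=-1)
        return self.scorer(pair).squeeze(-1)

def tokenize(text, vocab_size=10000):
    # Hash-based stand-in for a real subword tokenizer.
    return torch.tensor([[hash(w) % vocab_size for w in text.split()]])

model = ToyQEModel()
score = model(tokenize("das ist ein Test"), tokenize("this is a test"))
print(score.item())  # untrained, so the value is arbitrary
```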
1 code implementation • 17 Jun 2024 • Dawulie Jinensibieke, Mieradilijiang Maimaiti, Wentao Xiao, Yuanhang Zheng, Xiaobo Wang
Relation Extraction (RE) serves as a crucial technology for transforming unstructured text into structured information, especially within the framework of Knowledge Graph development.
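As a toy illustration of that text-to-structure conversion (not the paper's neural approach), the snippet below extracts (head, relation, tail) triples with hand-written patterns; the patterns and the example sentence are hypothetical.

```python
# A toy pattern-based relation extractor: unstructured text in,
# knowledge-graph triples out. Patterns here are invented examples.
import re

PATTERNS = [
    (re.compile(r"(\w[\w ]*?) was founded by ([\w ]+)"), "founded_by"),
    (re.compile(r"(\w[\w ]*?) is located in ([\w ]+)"), "located_in"),
]

def extract_triples(text):
    triples = []
    for pattern, relation in PATTERNS:
        for head, tail in pattern.findall(text):
            triples.append((head.strip(), relation, tail.strip()))
    return triples

print(extract_triples("OpenAI was founded by Sam Altman."))
# [('OpenAI', 'founded_by', 'Sam Altman')]
```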
no code implementations • 11 Mar 2024 • Yuanhang Zheng, Peng Li, Wei Liu, Yang Liu, Jian Luan, Bin Wang
Specifically, our proposed ToolRerank includes Adaptive Truncation, which truncates the retrieval results related to seen and unseen tools at different positions, and Hierarchy-Aware Reranking, which makes retrieval results more concentrated for single-tool queries and more diverse for multi-tool queries.
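A hedged Python sketch of those two ideas as described, not the authors' implementation; the `is_seen` and `group` fields, the cutoff values, and the function names are all assumptions.

```python
# Adaptive Truncation: seen-tool hits are cut off earlier than
# unseen-tool hits. Hierarchy-Aware Reranking: concentrate results
# for single-tool queries, diversify them for multi-tool queries.

def adaptive_truncate(results, seen_cutoff=5, unseen_cutoff=20):
    """Keep seen-tool hits only near the top; give unseen tools more slack."""
    kept = []
    for rank, hit in enumerate(results):
        cutoff = seen_cutoff if hit["is_seen"] else unseen_cutoff
        if rank < cutoff:
            kept.append(hit)
    return kept

def hierarchy_aware_rerank(results, multi_tool_query, k=5):
    """Single-tool query: favor the top hit's group. Multi-tool query:
    round-robin across groups so several tools are represented."""
    if not results:
        return []
    if not multi_tool_query:
        top_group = results[0]["group"]
        same = [r for r in results if r["group"] == top_group]
        rest = [r for r in results if r["group"] != top_group]
        return (same + rest)[:k]
    by_group = {}
    for r in results:
        by_group.setdefault(r["group"], []).append(r)
    out = []
    while by_group and len(out) < k:
        for g in list(by_group):
            out.append(by_group[g].pop(0))
            if not by_group[g]:
                del by_group[g]
            if len(out) == k:
                break
    return out

hits = [{"tool": "t1", "is_seen": True, "group": "email"},
        {"tool": "t2", "is_seen": False, "group": "search"},
        {"tool": "t3", "is_seen": False, "group": "email"}]
print(hierarchy_aware_rerank(adaptive_truncate(hits), multi_tool_query=True))
```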
no code implementations • 3 Mar 2024 • Mieradilijiang Maimaiti, Yuanhang Zheng, Ji Zhang, Fei Huang, Yue Zhang, Wenpei Luo, Kaiyu Huang
Semantic Retrieval (SR) has become an indispensable part of the FAQ system in the task-oriented question-answering (QA) dialogue scenario.
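A minimal sketch of embedding-based semantic retrieval over an FAQ index; the hashed bag-of-words `embed` is a stand-in for a real sentence encoder, and the FAQ entries are made up.

```python
# Encode FAQ questions once, then rank them against a user query by
# cosine similarity (vectors are unit-normalized, so a dot product).
import numpy as np

def embed(text, dim=256):
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

faq = [
    ("How do I reset my password?", "Use the 'Forgot password' link."),
    ("How can I contact support?", "Email support@example.com."),
]
index = np.stack([embed(q) for q, _ in faq])

def retrieve(query, top_k=1):
    scores = index @ embed(query)
    best = np.argsort(-scores)[:top_k]
    return [(faq[i][0], faq[i][1], float(scores[i])) for i in best]

print(retrieve("I forgot my password"))
```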
2 code implementations • 25 Feb 2024 • Yuanhang Zheng, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
Despite intensive efforts devoted to tool learning, the problem of budget-constrained tool learning, which focuses on resolving user queries within a specific budget constraint, has been widely overlooked.
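As a generic illustration of the setting (not the paper's algorithm), the sketch below greedily selects tools by expected utility per unit cost while staying under a budget; the tool names, utilities, and costs are invented.

```python
# Budget-constrained tool selection: rank candidates by
# utility-per-cost and take them while the budget allows.
def solve_within_budget(candidates, budget):
    """candidates: list of (tool_name, expected_utility, cost)."""
    plan, spent = [], 0.0
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, utility, cost in ranked:
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan, spent

tools = [("web_search", 0.9, 0.02), ("calculator", 0.3, 0.001),
         ("code_interpreter", 0.8, 0.05)]
print(solve_within_budget(tools, budget=0.03))
# picks calculator then web_search, spending ~0.021
```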
no code implementations • 4 May 2023 • Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu
Black-box prompt tuning employs derivative-free optimization algorithms to learn prompts within low-dimensional subspaces rather than back-propagating through the network of Large Language Models (LLMs).
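A minimal sketch of that idea: a low-dimensional vector z is optimized by derivative-free random search and mapped into the prompt-embedding space through a fixed random projection. Here `black_box_loss` is a stand-in for an actual LLM query, and all dimensions are illustrative.

```python
# Black-box prompt tuning in a low-dimensional subspace: no gradients
# flow through the LLM; only scalar losses come back.
import numpy as np

rng = np.random.default_rng(0)
d, D = 16, 1024                            # subspace dim << prompt dim
A = rng.normal(size=(D, d)) / np.sqrt(d)   # fixed random projection

def black_box_loss(prompt_embedding):
    # Placeholder: in practice, query the LLM with this soft prompt
    # and return the task loss; gradients are unavailable.
    target = np.ones(D) * 0.1
    return float(np.mean((prompt_embedding - target) ** 2))

z = np.zeros(d)
best = black_box_loss(A @ z)
for step in range(500):
    cand = z + 0.1 * rng.normal(size=d)    # random perturbation of z
    loss = black_box_loss(A @ cand)
    if loss < best:                        # keep only improvements
        z, best = cand, loss
print(f"final loss: {best:.4f}")
```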
no code implementations • 1 Nov 2022 • Hongyang He, Feng Ziliang, Yuanhang Zheng, Shudong Huang, HaoBing Gao
In the field of medical CT image processing, convolutional neural networks (CNNs) have been the dominant technique. Encoder-decoder CNNs exploit locality for efficiency, but they cannot properly model interactions between distant pixels. Recent research indicates that self-attention or transformer layers can be stacked to efficiently learn long-range dependencies. By constructing and processing image patches as embeddings, transformers have been applied to computer vision applications.
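A minimal PyTorch sketch of that patch-embedding step, with illustrative shapes for a single-channel CT slice; real medical-imaging transformers differ in detail.

```python
# ViT-style patch embedding: slice the image into non-overlapping
# patches and project each to a token, then stack attention layers
# over the tokens to model long-range dependencies.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch=16, in_ch=1, dim=256):
        super().__init__()
        # A strided conv both patchifies and linearly projects.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                     # x: (B, 1, H, W)
        x = self.proj(x)                      # (B, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, dim)

ct_slice = torch.randn(1, 1, 224, 224)        # dummy CT slice
tokens = PatchEmbed()(ct_slice)
print(tokens.shape)                           # torch.Size([1, 196, 256])

encoded = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2,
)(tokens)                                     # self-attention over patches
```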
no code implementations • NAACL 2022 • Jianhai Zhang, Mieradilijiang Maimaiti, Xing Gao, Yuanhang Zheng, Ji Zhang
They also ignore the importance of capturing the inter-dependency between the query and the support set in few-shot text classification.
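One generic way to capture such inter-dependency (not necessarily the paper's mechanism) is cross-attention from the query to the support set, sketched below with random stand-in encodings.

```python
# The query attends over support-set representations, so its vector is
# conditioned on the episode rather than encoded in isolation.
import torch
import torch.nn as nn

dim = 64
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

query = torch.randn(1, 1, dim)      # one episode query sentence vector
support = torch.randn(1, 5, dim)    # five support examples (5-shot)

conditioned, weights = attn(query, support, support)
print(conditioned.shape, weights.shape)  # (1, 1, 64) (1, 1, 5)
```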
no code implementations • 30 Mar 2022 • Wenshen Xu, Mieradilijiang Maimaiti, Yuanhang Zheng, Xin Tang, Ji Zhang
Unexpectedly, MLM ignores sentence-level training, and CL likewise neglects extraction of internal information from the query.
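A minimal sketch of combining the two objectives so training covers both granularities: a token-level MLM loss plus a sentence-level InfoNCE contrastive loss. All tensors below are random stand-ins for encoder outputs; this is a generic formulation, not the paper's exact objective.

```python
# Joint objective = token-level MLM loss + sentence-level CL loss.
import torch
import torch.nn.functional as F

batch, seq, vocab, dim, tau = 8, 32, 1000, 128, 0.05

# Token level: cross-entropy on masked positions only (MLM).
logits = torch.randn(batch, seq, vocab)          # LM head outputs
labels = torch.randint(0, vocab, (batch, seq))
labels[torch.rand(batch, seq) > 0.15] = -100     # ~15% of tokens masked
mlm_loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1),
                           ignore_index=-100)

# Sentence level: InfoNCE between two views of each sentence (CL).
z1 = F.normalize(torch.randn(batch, dim), dim=-1)
z2 = F.normalize(torch.randn(batch, dim), dim=-1)
sims = z1 @ z2.T / tau                           # pairwise similarities
cl_loss = F.cross_entropy(sims, torch.arange(batch))  # diagonal = positives

loss = mlm_loss + cl_loss                        # joint training signal
print(float(loss))
```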