Search Results for author: Yuanhang Zheng

Found 9 papers, 1 paper with code

Self-Supervised Quality Estimation for Machine Translation

no code implementations • EMNLP 2021 • Yuanhang Zheng, Zhixing Tan, Meng Zhang, Mieradilijiang Maimaiti, Huanbo Luan, Maosong Sun, Qun Liu, Yang Liu

Quality estimation (QE) of machine translation (MT) aims to evaluate the quality of machine-translated sentences without references and is important in practical applications of MT.

Machine Translation • Sentence • +1
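
The defining constraint in that excerpt is reference-free scoring: the model sees only the source sentence and its machine translation, never a gold reference. A minimal sketch of that interface, with toy surface features and an untrained linear regressor standing in for the paper's self-supervised model:

```python
import numpy as np

def sentence_features(source: str, translation: str) -> np.ndarray:
    """Toy surface features standing in for a learned encoder."""
    src_tokens = source.lower().split()
    mt_tokens = translation.lower().split()
    length_ratio = len(mt_tokens) / max(len(src_tokens), 1)
    overlap = len(set(src_tokens) & set(mt_tokens))
    return np.array([length_ratio, overlap, len(mt_tokens)], dtype=float)

class ReferenceFreeQE:
    """Scores (source, MT) pairs with no reference translation in sight --
    the defining constraint of quality estimation."""

    def __init__(self, dim: int = 3):
        self.w = np.zeros(dim)  # would be learned; zeros here
        self.b = 0.0

    def score(self, source: str, translation: str) -> float:
        return float(self.w @ sentence_features(source, translation) + self.b)

qe = ReferenceFreeQE()
print(qe.score("Das ist ein Test .", "This is a test ."))  # 0.0 until trained
```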

ToolRerank: Adaptive and Hierarchy-Aware Reranking for Tool Retrieval

no code implementations • 11 Mar 2024 • Yuanhang Zheng, Peng Li, Wei Liu, Yang Liu, Jian Luan, Bin Wang

Specifically, our proposed ToolRerank includes Adaptive Truncation, which truncates the retrieval results related to seen and unseen tools at different positions, and Hierarchy-Aware Reranking, which makes retrieval results more concentrated for single-tool queries and more diverse for multi-tool queries.

Retrieval
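
The excerpt names the two components directly, so a structural sketch is possible. Everything concrete below (the Hit fields, the cutoffs k_seen and k_unseen, category-based diversification) is an assumption for illustration, not the paper's actual scoring:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    tool: str       # retrieved tool id
    category: str   # hypothetical hierarchy node the tool belongs to
    score: float    # retriever similarity
    seen: bool      # did the tool appear during training?

def adaptive_truncation(hits, k_seen=3, k_unseen=8):
    """Cut the list at different depths for seen vs. unseen tools: unseen
    tools keep a deeper candidate pool (cutoffs are illustrative)."""
    seen = [h for h in hits if h.seen][:k_seen]
    unseen = [h for h in hits if not h.seen][:k_unseen]
    return sorted(seen + unseen, key=lambda h: h.score, reverse=True)

def hierarchy_aware_rerank(hits, multi_tool_query: bool):
    """Concentrate results for single-tool queries, diversify across
    hierarchy categories for multi-tool queries."""
    ranked = sorted(hits, key=lambda h: h.score, reverse=True)
    if not multi_tool_query:
        top = ranked[0].category  # pull the best hit's category forward
        return ([h for h in ranked if h.category == top]
                + [h for h in ranked if h.category != top])
    head, seen_cats = [], set()
    for h in ranked:              # one representative per category first
        if h.category not in seen_cats:
            head.append(h)
            seen_cats.add(h.category)
    return head + [h for h in ranked if h not in head]
```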

Improving Cross-lingual Representation for Semantic Retrieval with Code-switching

no code implementations • 3 Mar 2024 • Mieradilijiang Maimaiti, Yuanhang Zheng, Ji Zhang, Fei Huang, Yue Zhang, Wenpei Luo, Kaiyu Huang

Semantic Retrieval (SR) has become an indispensable part of the FAQ system in the task-oriented question-answering (QA) dialogue scenario.

Question Answering • Retrieval • +3
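
The excerpt shows only the motivation, not the method; code-switching in this cross-lingual setting typically means substituting words with bilingual-dictionary translations during training. A generic sketch of that augmentation, with a toy lexicon and an illustrative substitution rate:

```python
import random

# Toy bilingual lexicon; a real system would use a large induced dictionary.
EN_ZH = {"order": "订单", "refund": "退款", "status": "状态"}

def code_switch(sentence: str, lexicon: dict, rate: float = 0.3,
                seed: int = 0) -> str:
    """Randomly substitute words with cross-lingual translations so the
    encoder sees mixed-language contexts during training."""
    rng = random.Random(seed)
    out = [lexicon.get(tok.lower(), tok) if rng.random() < rate else tok
           for tok in sentence.split()]
    return " ".join(out)

print(code_switch("What is the status of my refund ?", EN_ZH))
```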

Budget-Constrained Tool Learning with Planning

1 code implementation • 25 Feb 2024 • Yuanhang Zheng, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu

Despite intensive efforts devoted to tool learning, the problem of budget-constrained tool learning, which focuses on resolving user queries within a specific budget constraint, has been widely overlooked.
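
The excerpt defines the constraint but not the planner. As a generic illustration of resolving a query under a spending cap, here is a greedy utility-per-cost selection; the tool names, utilities, and costs are invented, and this is not the paper's planning algorithm:

```python
def plan_under_budget(candidates, budget: float):
    """Greedily pick tool calls by expected utility per unit cost until the
    budget is exhausted.

    candidates: list of (name, expected_utility, cost) tuples.
    """
    plan, spent = [], 0.0
    for name, utility, cost in sorted(candidates,
                                      key=lambda c: c[1] / c[2],
                                      reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan, spent

tools = [("web_search", 0.9, 0.002), ("calculator", 0.4, 0.0001),
         ("code_interpreter", 0.8, 0.01)]
print(plan_under_budget(tools, budget=0.005))
# (['calculator', 'web_search'], 0.0021)
```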

Black-box Prompt Tuning with Subspace Learning

no code implementations • 4 May 2023 • Yuanhang Zheng, Zhixing Tan, Peng Li, Yang Liu

Black-box prompt tuning uses derivative-free optimization algorithms to learn prompts in low-dimensional subspaces instead of back-propagating through the network of Large Language Models (LLMs).

Meta-Learning
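
The excerpt states the mechanism precisely: optimize a prompt in a low-dimensional subspace with a derivative-free method, never backpropagating through the LLM. A self-contained sketch using a random projection and a (1+1) evolution strategy; the dimensions, the quadratic stand-in for the LLM objective, and the optimizer choice are all illustrative, and the paper's subspace-learning contribution (hence the Meta-Learning tag) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# A low-dimensional search space projected up to the
# (prompt_length x embedding_dim) space the LLM actually consumes.
D_LOW, PROMPT_LEN, EMB_DIM = 16, 8, 512
A = rng.standard_normal((PROMPT_LEN * EMB_DIM, D_LOW)) / np.sqrt(D_LOW)

def llm_loss(prompt_embeddings: np.ndarray) -> float:
    """Black-box objective: in practice, query the LLM with these soft-prompt
    embeddings and score its outputs. Here, a quadratic stand-in."""
    return float(np.sum((prompt_embeddings - 0.1) ** 2))

# (1+1) evolution strategy: derivative-free, so no backprop through the LLM.
z = np.zeros(D_LOW)
best = llm_loss((A @ z).reshape(PROMPT_LEN, EMB_DIM))
for step in range(200):
    cand = z + 0.1 * rng.standard_normal(D_LOW)
    loss = llm_loss((A @ cand).reshape(PROMPT_LEN, EMB_DIM))
    if loss < best:
        z, best = cand, loss
print("final loss:", best)
```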

MHITNet: a minimize network with a hierarchical context-attentional filter for segmenting medical ct images

no code implementations • 1 Nov 2022 • Hongyang He, Feng Ziliang, Yuanhang Zheng, Shudong Huang, HaoBing Gao

In medical CT image processing, convolutional neural networks (CNNs) have been the dominant technique. Encoder-decoder CNNs exploit locality for efficiency, but they cannot properly model interactions between distant pixels. Recent research indicates that self-attention or transformer layers can be stacked to learn long-range dependencies efficiently. By constructing and processing image patches as embeddings, transformers have been applied to computer vision tasks.
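
The last sentence of the excerpt describes the standard ViT construction: split the image into patches and project each flattened patch to an embedding. A NumPy sketch of exactly that step; the patch size, embedding width, and toy CT slice are arbitrary:

```python
import numpy as np

def image_to_patch_embeddings(image: np.ndarray, patch: int,
                              W_embed: np.ndarray) -> np.ndarray:
    """Split an HxW single-channel image into non-overlapping patches,
    flatten each, and project to embedding space -- the "patches as
    embeddings" construction the abstract refers to (ViT-style)."""
    H, W = image.shape
    assert H % patch == 0 and W % patch == 0
    patches = (image.reshape(H // patch, patch, W // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))  # (num_patches, patch*patch)
    return patches @ W_embed                      # (num_patches, embed_dim)

rng = np.random.default_rng(0)
ct_slice = rng.standard_normal((64, 64))          # toy stand-in for a CT slice
W = rng.standard_normal((16 * 16, 128)) * 0.02
tokens = image_to_patch_embeddings(ct_slice, 16, W)
print(tokens.shape)  # (16, 128): a 4x4 grid of patches, each a 128-d token
```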
