Search Results for author: Xiaohui Hu

Found 12 papers, 5 papers with code

Improve LLM-as-a-Judge Ability as a General Ability

no code implementations 17 Feb 2025 Jiachen Yu, Shaoning Sun, Xiaohui Hu, Jiaxu Yan, Kaidong Yu, Xuelong Li

Furthermore, our training method enhances the general capabilities of the model by constructing complicated judge tasks, and in our tests the judge signals provided by our model significantly improved the downstream DPO training performance of our internal models when optimizing a policy model with the judge model.
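The snippet above mentions judge signals improving downstream DPO training. As background only, here is a minimal sketch of the standard pairwise DPO loss (Rafailov et al.), not this paper's internal setup; `beta` and the sequence log-probabilities are illustrative values:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss for one preference pair, given sequence log-probs
    under the policy (pi_*) and a frozen reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log sigmoid(margin): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Here the policy favors the chosen response more than the reference does,
# so the loss falls below -log(0.5) ~= 0.693.
loss = dpo_loss(pi_chosen=-5.0, pi_rejected=-9.0,
                ref_chosen=-6.0, ref_rejected=-8.0)
```

A judge model supplies the chosen/rejected labels for such pairs; the loss itself is agnostic to where the preferences come from.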

Supporting Medical Relation Extraction via Causality-Pruned Semantic Dependency Forest

1 code implementation COLING 2022 Yifan Jin, Jiangmeng Li, Zheng Lian, Chengbo Jiao, Xiaohui Hu

However, the quality of the 1-best dependency tree that an out-of-domain parser produces for medical texts is relatively limited, so the performance of medical relation extraction methods may degrade.

Medical Relation Extraction Relation +1

Disentangle and Remerge: Interventional Knowledge Distillation for Few-Shot Object Detection from A Conditional Causal Perspective

1 code implementation 26 Aug 2022 Jiangmeng Li, Yanan Zhang, Wenwen Qiang, Lingyu Si, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Fuchun Sun

To understand the reasons behind this phenomenon, we revisit the learning paradigm of knowledge distillation on the few-shot object detection task from the causal theoretic standpoint, and accordingly, develop a Structural Causal Model.

Few-Shot Learning Few-Shot Object Detection +4
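This entry builds on knowledge distillation. For background only, a toy version of the classic temperature-scaled distillation term (Hinton et al.); the paper's causal intervention and detection-specific machinery are omitted, and the logits here are made up:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions --
    the classic distillation term that the paper revisits causally."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero divergence; mismatched logits give a positive one.
```

The paper's contribution is about *when* transferring this signal helps in few-shot detection, not the loss itself.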

MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue

no code implementations 19 Jun 2022 Pengfei Zhang, Xiaohui Hu, Kaidong Yu, Jian Wang, Song Han, Cao Liu, Chunyang Yuan

First, we build an evaluation metric called Multi-Metric Evaluation (MME), composed of 5 groups of parallel sub-metrics, to evaluate dialogue quality comprehensively.

Dialogue Evaluation MME

Multiple Fusion Adaptation: A Strong Framework for Unsupervised Semantic Segmentation Adaptation

1 code implementation 1 Dec 2021 Kai Zhang, Yifan Sun, Rui Wang, Haichang Li, Xiaohui Hu

MFA considers three parallel information fusion strategies, i.e., cross-model fusion, temporal fusion, and a novel online-offline pseudo label fusion.

Pseudo Label Segmentation +3
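The online-offline pseudo label fusion mentioned above can be illustrated with a toy rule: average two per-pixel class-probability maps and keep only confident argmax labels. This is a hedged sketch, not MFA's actual procedure; the threshold and the `IGNORE` index are invented for illustration:

```python
import numpy as np

IGNORE = 255  # hypothetical ignore index for unconfident pixels

def fuse_pseudo_labels(prob_online, prob_offline, thresh=0.6):
    """Average two (H, W, C) class-probability maps and keep only
    confident argmax labels -- a toy stand-in for pseudo label fusion."""
    fused = (prob_online + prob_offline) / 2.0
    labels = fused.argmax(axis=-1)
    conf = fused.max(axis=-1)
    labels[conf < thresh] = IGNORE  # drop pixels the fused model is unsure about
    return labels

rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(3), size=(4, 4))  # dummy "online" predictions
p2 = rng.dirichlet(np.ones(3), size=(4, 4))  # dummy "offline" predictions
labels = fuse_pseudo_labels(p1, p2)
```

Averaging before thresholding means both sources must agree somewhat for a pixel to survive, which is the intuition behind fusing complementary pseudo labels.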

Cross Modification Attention Based Deliberation Model for Image Captioning

no code implementations 17 Sep 2021 Zheng Lian, Yanan Zhang, Haichang Li, Rui Wang, Xiaohui Hu

The conventional encoder-decoder framework for image captioning generally adopts a single-pass decoding process, which predicts the target descriptive sentence word by word in temporal order.

Decoder Descriptive +2

MvSR-NAT: Multi-view Subset Regularization for Non-Autoregressive Machine Translation

no code implementations 19 Aug 2021 Pan Xie, Zexian Li, Xiaohui Hu

Conditional masked language models (CMLM) have shown impressive progress in non-autoregressive machine translation (NAT).

Machine Translation Sentence +1
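CMLMs are commonly decoded with the mask-predict procedure (Ghazvininejad et al.): fill every masked slot, then re-mask the least confident predictions and repeat. A toy sketch with a stub scorer, shown as general background rather than the MvSR-NAT training scheme itself:

```python
def mask_predict(predict_fn, length, iterations=3, mask="<mask>"):
    """Toy mask-predict loop for a CMLM: start fully masked, then
    repeatedly fill every masked slot and re-mask the least confident
    positions (linearly fewer on each iteration)."""
    tokens = [mask] * length
    for t in range(iterations):
        preds = predict_fn(tokens)  # one (token, confidence) pair per slot
        tokens = [tok for tok, _ in preds]
        n_mask = int(length * (iterations - 1 - t) / iterations)
        if n_mask == 0:
            break
        # re-mask the n_mask lowest-confidence positions
        worst = sorted(range(length), key=lambda i: preds[i][1])[:n_mask]
        for i in worst:
            tokens[i] = mask
    return tokens

# Stub "model": predicts token "w<i>" with position-dependent confidence.
def stub(tokens):
    return [(tok if tok != "<mask>" else f"w{i}", 0.5 + 0.1 * i)
            for i, tok in enumerate(tokens)]

out = mask_predict(stub, length=4)  # -> ["w0", "w1", "w2", "w3"]
```

Because all masked slots are predicted in parallel, decoding takes a fixed number of iterations regardless of sentence length, which is the appeal of non-autoregressive translation.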

PiSLTRc: Position-informed Sign Language Transformer with Content-aware Convolution

no code implementations 27 Jul 2021 Pan Xie, Mengyi Zhao, Xiaohui Hu

Owing to the Transformer's strength in learning long-term dependencies, sign language Transformer models have achieved remarkable progress in Sign Language Recognition (SLR) and Translation (SLT).

Decoder Position +2

Multi-Scale Local-Temporal Similarity Fusion for Continuous Sign Language Recognition

no code implementations 27 Jul 2021 Pan Xie, Zhi Cui, Yao Du, Mengyi Zhao, Jianwei Cui, Bin Wang, Xiaohui Hu

Continuous sign language recognition (cSLR) is a socially significant task that transcribes a sign language video into an ordered gloss sequence.

Sign Language Recognition

Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism

1 code implementation COLING 2020 Pan Xie, Zhi Cui, Xiuyin Chen, Xiaohui Hu, Jianwei Cui, Bin Wang

Concretely, we insert a left-to-right mask into the same decoder of the CMTM, and then induce it to autoregressively review whether each word generated by the CMTM should be replaced or kept.

Decoder Knowledge Distillation +1
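The self-review idea above hinges on swapping attention masks in a shared decoder: a full bidirectional mask for masked prediction, and a left-to-right (lower-triangular) mask for the autoregressive review pass. A minimal illustration of the two mask shapes only; the decoder itself and the replace-or-keep decision are omitted:

```python
def bidirectional_mask(n):
    """CMTM-style mask: every position may attend to every position."""
    return [[1] * n for _ in range(n)]

def left_to_right_mask(n):
    """Review-pass mask: position i may attend only to positions <= i,
    so the shared decoder re-reads its own draft autoregressively."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
```

Reusing one decoder with two masks means the review mechanism adds no parameters, only a second forward pass.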
