1 code implementation • 26 Jun 2024 • Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, Xiaojun Wan
The evaluation of natural language generation (NLG) tasks is a significant and longstanding research issue.
1 code implementation • 24 Jun 2024 • Xiao Liang, Xinyu Hu, Simiao Zuo, Yeyun Gong, Qiang Lou, Yi Liu, Shao-Lun Huang, Jian Jiao
On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain.
no code implementations • 19 Jun 2024 • Junzhe Zhang, Huixuan Zhang, Xunjian Yin, Baizhou Huang, Xu Zhang, Xinyu Hu, Xiaojun Wan
Our benchmark facilitates independent correction of misreading and misrecognition errors by editing the corresponding knowledge component.
2 code implementations • 19 Feb 2024 • Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, Xiaojun Wan
Some prior work has shown that LLMs perform well in NLG evaluation for different tasks.
no code implementations • 2 Feb 2024 • Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan
Evaluating natural language generation (NLG) is a vital but challenging problem in artificial intelligence.
no code implementations • 20 Oct 2023 • Xinyu Hu, Pengfei Tang, Simiao Zuo, Zihan Wang, Bowen Song, Qiang Lou, Jian Jiao, Denis Charles
In Evoke, there are two instances of the same LLM: one acts as a reviewer (LLM-Reviewer) and scores the current prompt, while the other acts as an author (LLM-Author) and edits the prompt by considering the edit history and the reviewer's feedback.
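A minimal sketch of such an author–reviewer refinement loop is shown below; the `call_llm` callable, the scoring format, and the stopping threshold are illustrative assumptions, not the paper's exact protocol.

```python
from typing import Callable

def refine_prompt(call_llm: Callable[[str], str], task: str, prompt: str,
                  max_rounds: int = 5, target_score: float = 9.0) -> str:
    """Iteratively improve `prompt` with a reviewer/author pair of LLM calls."""
    history = []  # (prompt, score, feedback) tuples
    for _ in range(max_rounds):
        # Reviewer role: score the current prompt and explain its weaknesses.
        review = call_llm(
            f"Task: {task}\nPrompt: {prompt}\n"
            "Rate this prompt from 0 to 10, then give feedback.\n"
            "Format: SCORE: <number>\nFEEDBACK: <text>"
        )
        score = float(review.split("SCORE:")[1].split("\n")[0])
        feedback = review.split("FEEDBACK:")[1].strip()
        history.append((prompt, score, feedback))
        if score >= target_score:
            break
        # Author role: rewrite the prompt given the edit history and feedback.
        history_text = "\n".join(
            f"- prompt: {p!r} | score: {s} | feedback: {f}" for p, s, f in history
        )
        prompt = call_llm(
            f"Task: {task}\nEdit history:\n{history_text}\n"
            "Rewrite the latest prompt to address the feedback; "
            "return only the improved prompt."
        )
    return prompt
```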
1 code implementation • 16 Jul 2023 • Bowen Song, Soo Min Kwon, Zecheng Zhang, Xinyu Hu, Qing Qu, Liyue Shen
However, training diffusion models in the pixel space is both data-intensive and computationally demanding, which restricts their applicability as priors for high-dimensional real-world data such as medical images.
no code implementations • 30 Jun 2023 • Simiao Zuo, Pengfei Tang, Xinyu Hu, Qiang Lou, Jian Jiao, Denis Charles
For model-free enhancement, we collect unlabeled web queries to augment domain knowledge, and we collect web search results to enrich the information in ads queries.
1 code implementation • 15 Nov 2022 • Xunjian Yin, Xinyu Hu, Jin Jiang, Xiaojun Wan
Chinese Spelling Check (CSC) aims to detect and correct error tokens in Chinese contexts, which has a wide range of applications.
no code implementations • 15 Aug 2022 • Yu Wang, Wenbin Feng, Chongchong Yu, Xinyu Hu, Yuqiu Zhang
To address the problems of low model accuracy, limited computing power, poor parallelism, and excessive power consumption when deploying RGBD-based 3D target detection models on embedded devices, this paper first proposes an improved RGBD 3D target detection model built on the ENet semantic segmentation model, which uses ENet as the semantic segmentation network and fuses RGB images with depth information to realize 3D target detection. Secondly, to deploy the model at the edge, this paper constructs a lightweight network by pruning the down-sampling stage of the ENet model. Finally, this paper uses the Xilinx ZCU104 as the hardware development kit, with the FPGA serving as the auxiliary parallel operation unit and the ARM core as the main operation unit, forming a heterogeneous computing architecture capable of handling complex operations. The architecture uses the FPGA to accelerate the model in parallel, which improves operation speed and reduces power consumption. The test results of the model on the ZCU104 are compared with other hardware. The results show that, while maintaining accuracy, the heterogeneous computing architecture used in this paper consumes 93% less power than an Intel Xeon E5-2620 v4 CPU, runs 12 times faster, and is more than 180 times faster than the ARM Cortex-A53 commonly used at the edge.
no code implementations • 5 Jun 2022 • Xinyu Hu, Tanmay Binaykiya, Eric Frank, Olcay Cirit
Estimated Time of Arrival (ETA) plays an important role in delivery and ride-hailing platforms.
no code implementations • 13 Dec 2021 • Jiafan Zhuang, Yixin Zhang, Xinyu Hu, Junjie Li, Zilei Wang
In this article, we introduce the solution we used in the VSPW 2021 Challenge.
no code implementations • 1 Oct 2021 • Jian Yang, Xinyu Hu, Gang Xiao, Yulong Shen
Pre-trained language models learn informative word representations on a large-scale text corpus through self-supervised learning, and have achieved promising performance on natural language processing (NLP) tasks after fine-tuning.
no code implementations • 16 Jul 2021 • Niel Teng Hu, Xinyu Hu, Rosanne Liu, Sara Hooker, Jason Yosinski
Each example is propagated forward and backward through the network the same number of times, independent of how much the example contributes to the learning process.
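This line of work studies spending more computation on "important" examples instead; a minimal sketch of loss-based prioritization is given below, where the top-k selection rule and keep fraction are illustrative assumptions rather than the paper's exact protocol.

```python
import torch

def prioritized_step(model, loss_fn, optimizer, inputs, targets, keep_frac=0.5):
    """One training step that backpropagates only the highest-loss examples.

    Illustrative loss-based prioritization: score every example with a
    forward pass, then update the model on the top `keep_frac` fraction.
    Expects `loss_fn` with reduction='none', e.g.
    torch.nn.CrossEntropyLoss(reduction='none').
    """
    model.train()
    with torch.no_grad():
        per_example_loss = loss_fn(model(inputs), targets)
    k = max(1, int(keep_frac * inputs.size(0)))
    top_idx = per_example_loss.topk(k).indices  # hardest examples in the batch

    optimizer.zero_grad()
    loss = loss_fn(model(inputs[top_idx]), targets[top_idx]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```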
no code implementations • 17 Jan 2019 • Xinyu Hu, Paul Szerlip, Theofanis Karaletsos, Rohit Singh
A regression-based BNN model is proposed to predict spatiotemporal quantities like hourly rider demand with calibrated uncertainties.
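As a rough illustration of regression with calibrated predictive uncertainty, the sketch below uses Monte Carlo dropout as a simple stand-in for a full Bayesian posterior; the architecture, layer sizes, and feature inputs are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class DemandRegressor(nn.Module):
    """Small MLP whose dropout layers stay active at inference (MC dropout)."""
    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples: int = 100):
    """Return the mean prediction and its standard deviation over stochastic passes."""
    model.train()  # keep dropout active to sample approximate posterior draws
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```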