no code implementations • Findings (ACL) 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin
Weighted decoding methods, which combine a pretrained language model (LM) with a controller, have achieved promising results for controllable text generation.
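The general weighted-decoding idea can be sketched as reweighting the LM's next-token distribution by a controller's attribute scores. The vocabulary, probabilities, and combination weight below are made-up placeholders for illustration, not this paper's method:

```python
import numpy as np

# Toy weighted decoding: the next-token distribution is the LM's
# distribution reweighted by a controller's per-token attribute scores.
vocab = ["the", "cat", "happy", "sad", "runs"]

lm_probs = np.array([0.4, 0.2, 0.1, 0.2, 0.1])             # p_LM(x_t | x_<t)
controller_probs = np.array([0.1, 0.1, 0.6, 0.05, 0.15])   # p_ctrl(attribute | x_t)
weight = 2.0  # how strongly the attribute steers decoding (assumed knob)

# Combine in log space: log p ∝ log p_LM + weight * log p_ctrl
logits = np.log(lm_probs) + weight * np.log(controller_probs)
combined = np.exp(logits - logits.max())
combined /= combined.sum()

next_token = vocab[int(np.argmax(combined))]
print(dict(zip(vocab, combined.round(3))), "->", next_token)
```

Raising `weight` pushes decoding harder toward the attribute at the cost of fluency, which is the central trade-off such methods negotiate.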
1 code implementation • 8 Aug 2024 • Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu, Weihong Zhong, Xiachong Feng, Weijiang Yu, Weihua Peng, Duyu Tang, Dandan Tu, Bing Qin
Despite the impressive performance on information-seeking tasks, large language models (LLMs) still struggle with hallucinations.
1 code implementation • 30 Jun 2024 • Weihong Zhong, Xiaocheng Feng, Liang Zhao, Qiming Li, Lei Huang, Yuxuan Gu, Weitao Ma, Yuan Xu, Bing Qin
To mitigate this, we further propose a training-free method called Residual Visual Decoding, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information.
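As a rough illustration of revising one output distribution with another (not necessarily the paper's exact formulation), the step can be sketched as combining two next-token logit vectors; the additive rule and the strength `alpha` below are assumptions:

```python
import numpy as np

def residual_visual_decoding(logits_full, logits_residual, alpha=0.5):
    """Shift full-context logits toward logits computed from the
    residual visual input, then renormalize (illustrative rule)."""
    revised = logits_full + alpha * logits_residual
    probs = np.exp(revised - revised.max())
    return probs / probs.sum()

logits_full = np.array([2.0, 1.0, 0.5, -1.0])      # from the full multimodal context
logits_residual = np.array([0.5, 2.5, -0.5, 0.0])  # from the residual visual input
print(residual_visual_decoding(logits_full, logits_residual))
```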
1 code implementation • 3 Jun 2024 • Kun Zhu, Xiaocheng Feng, Xiyuan Du, Yuxuan Gu, Weijiang Yu, Haotian Wang, Qianglong Chen, Zheng Chu, Jingchang Chen, Bing Qin
Retrieval-augmented generation integrates the capabilities of large language models with relevant information retrieved from an extensive corpus, yet encounters challenges when confronted with real-world noisy data.
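The retrieve-then-generate pipeline this builds on can be sketched in a few lines; the word-overlap retriever, toy corpus, and stubbed `generate()` below are placeholders, not the paper's system:

```python
# Minimal retrieval-augmented generation loop over a toy corpus.
corpus = [
    "The Eiffel Tower is in Paris.",
    "Paris is the capital of France.",
    "The Great Wall is in China.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive word overlap with the query."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def generate(prompt):
    # Stand-in for an actual LLM call.
    return f"[LLM would answer here, conditioned on]\n{prompt}"

query = "Where is the Eiffel Tower?"
context = "\n".join(retrieve(query, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```

Noisy retrieved passages enter the prompt unfiltered in this naive loop, which is exactly the failure mode the paper targets.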
no code implementations • 15 Feb 2024 • Yuxuan Gu, Yi Jin, Ben Wang, Zhixiang Wei, Xiaoxiao Ma, Pengyang Ling, Haoxuan Wang, Huaian Chen, Enhong Chen
In this work, we observe that generators pre-trained on massive collections of natural images inherently hold promising potential for superior low-light image enhancement across varying scenarios. Specifically, we embed a pre-trained generator into a Retinex model to produce reflectance maps with enhanced detail and vividness, thereby recovering features degraded by low-light conditions. Going one step further, we introduce a novel optimization strategy that backpropagates gradients to the input seeds rather than to the parameters of the low-light enhancement model, thereby keeping the generative knowledge learned from natural images intact and achieving faster convergence.
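A minimal sketch of this seed-space optimization, assuming a PyTorch-style frozen generator and a placeholder reconstruction loss (the paper's actual objective is Retinex-based, and the generator here is a stand-in module):

```python
import torch

# Optimize the generator's input seed instead of its weights.
generator = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.Tanh())
for p in generator.parameters():
    p.requires_grad_(False)  # keep the pretrained generative knowledge intact

z = torch.randn(1, 64, requires_grad=True)   # the input seed being optimized
target = torch.rand(1, 256)                  # stand-in for the enhancement target
optimizer = torch.optim.Adam([z], lr=1e-2)   # gradients flow to z only

for step in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(generator(z), target)
    loss.backward()   # backprop through the frozen generator to the seed
    optimizer.step()
```

Because only `z` is updated, the generator's weights, and hence its prior over natural images, are untouched.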
1 code implementation • 16 Dec 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.
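The first of those prior directions, optimizing a latent representation against an attribute classifier, can be sketched as follows; the classifier, latent dimension, and learning rate are toy assumptions:

```python
import torch

# Control from the latent space: move a sentence representation toward
# the region an attribute classifier scores as positive.
classifier = torch.nn.Linear(32, 1)  # toy stand-in for p(attribute | h)
for p in classifier.parameters():
    p.requires_grad_(False)

h = torch.randn(1, 32, requires_grad=True)  # latent representation to optimize
optimizer = torch.optim.SGD([h], lr=0.1)

for step in range(50):
    optimizer.zero_grad()
    # Maximize the attribute probability, i.e. minimize its negative log.
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        classifier(h), torch.ones(1, 1)
    )
    loss.backward()
    optimizer.step()
# A decoder would then map the optimized h back to text.
```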
1 code implementation • 6 Oct 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
no code implementations • 21 Jun 2022 • Yuxuan Gu, Jianxiao Wang, Yuanbo Chen, Kedi Zheng, Zhongwei Deng, Qixin Chen
Based on these sensitivity indexes, the SSO algorithm can decouple the mixed impacts of different parameters during identification.
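The sensitivity indexes underpinning that decoupling can be illustrated with a finite-difference perturbation analysis; the equivalent-circuit voltage model and parameter values below are made-up stand-ins, and the SSO search itself is not shown:

```python
import numpy as np

def battery_voltage(params, t):
    """Toy equivalent-circuit battery model (illustrative only)."""
    r0, r1, c1 = params
    return 3.7 - r0 - r1 * (1 - np.exp(-t / (r1 * c1)))

t = np.linspace(1, 100, 200)
base = np.array([0.01, 0.02, 500.0])  # assumed R0, R1, C1 values

# Perturb each parameter in turn and measure the output change; parameters
# with well-separated sensitivities can be identified one at a time.
for i, name in enumerate(["R0", "R1", "C1"]):
    perturbed = base.copy()
    perturbed[i] *= 1.01  # 1% perturbation
    dv = battery_voltage(perturbed, t) - battery_voltage(base, t)
    sensitivity = np.abs(dv).mean() / (0.01 * base[i])
    print(f"{name}: mean sensitivity = {sensitivity:.4f}")
```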
no code implementations • 5 Nov 2021 • Yuxuan Gu, Jianxiao Wang, Yuanbo Chen, Zhongwei Deng, Hongye Guo, Kedi Zheng, Qixin Chen
The penetration of lithium-ion batteries in transport, energy, and communication systems is increasing rapidly.