no code implementations • 18 Aug 2024 • Renliang Sun, Mengyuan Liu, Shiping Yang, Rui Wang, Junqing He, Jiaxing Zhang
Benefiting from diverse instruction datasets, contemporary Large Language Models (LLMs) perform effectively as AI assistants in collaborating with humans.
1 code implementation • 26 Jan 2024 • XiaoJun Wu, Dixiang Zhang, Ruyi Gan, Junyu Lu, Ziwei Wu, Renliang Sun, Jiaxing Zhang, Pingjian Zhang, Yan Song
Recent advancements in text-to-image models have significantly enhanced image generation capabilities, yet a notable gap persists among open-source models in bilingual or Chinese language support.
no code implementations • 7 Dec 2023 • Ruyi Gan, XiaoJun Wu, Junyu Lu, Yuanhe Tian, Dixiang Zhang, Ziwei Wu, Renliang Sun, Chang Liu, Jiaxing Zhang, Pingjian Zhang, Yan Song
However, there are few specialized models in certain domains, such as interior design, which is attributed to the complex textual descriptions and detailed visual elements inherent in design, alongside the necessity for adaptable resolution.
4 code implementations • CVPR 2024 • Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.
no code implementations • 6 Nov 2023 • Ruyi Gan, Ziwei Wu, Renliang Sun, Junyu Lu, XiaoJun Wu, Dixiang Zhang, Kunhao Pan, Junqing He, Yuanhe Tian, Ping Yang, Qi Yang, Hao Wang, Jiaxing Zhang, Yan Song
Although many such issues are addressed along this line of research on LLMs, an important yet practical limitation is that many studies overly pursue larger model sizes without comprehensively analyzing and optimizing the use of pre-training data, or appropriately organizing and leveraging such data to train LLMs in cost-effective settings.
1 code implementation • 10 Oct 2023 • Shiping Yang, Renliang Sun, Xiaojun Wan
In contrast to previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level rather than sentence-level detection.
1 code implementation • 7 Jun 2023 • Shiping Yang, Renliang Sun, Xiaojun Wan
Sentence simplification is a valuable technique that can greatly benefit language learners and children.
1 code implementation • 21 May 2023 • Renliang Sun, Wei Xu, Xiaojun Wan
In this paper, we propose a new continued pre-training strategy to teach the pre-trained model to generate simple texts.
1 code implementation • 5 Apr 2023 • Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan
Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory.
1 code implementation • 14 Feb 2023 • Renliang Sun, Zhixian Yang, Xiaojun Wan
One of the major problems with text simplification is the lack of high-quality data.
1 code implementation • NAACL 2022 • Zhixian Yang, Renliang Sun, Xiaojun Wan
k-nearest-neighbor machine translation (kNN-MT), proposed by Khandelwal et al. (2021), has achieved state-of-the-art results on many machine translation tasks.
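The core idea of kNN-MT is to interpolate the NMT model's next-token distribution with a distribution induced by the nearest neighbors of the current decoder context in a datastore of (context vector, target token) pairs. The sketch below illustrates that interpolation step only; function and parameter names are illustrative, not taken from the paper's code.

```python
import numpy as np

def knn_mt_probs(query, keys, values, model_probs, vocab_size,
                 k=2, temperature=10.0, lam=0.5):
    """Illustrative kNN-MT interpolation (after Khandelwal et al., 2021).

    query:        decoder context vector for the current step
    keys/values:  datastore of context vectors and their target token ids
    model_probs:  the NMT model's next-token distribution
    lam:          interpolation weight between kNN and model distributions
    """
    # Squared L2 distance from the query to every datastore key.
    d = np.sum((keys - query) ** 2, axis=1)
    nn = np.argsort(d)[:k]                      # indices of the k nearest neighbors
    # Softmax over negative (temperature-scaled) distances.
    w = np.exp(-d[nn] / temperature)
    w /= w.sum()
    # Aggregate neighbor weights onto their target tokens.
    p_knn = np.zeros(vocab_size)
    for idx, weight in zip(nn, w):
        p_knn[values[idx]] += weight
    # Interpolate the retrieval-based and model distributions.
    return lam * p_knn + (1 - lam) * model_probs
```

In practice the datastore holds millions of entries and retrieval uses an approximate-nearest-neighbor index rather than the exhaustive distance computation shown here.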
no code implementations • 16 Apr 2022 • Renliang Sun, Xiaojun Wan
We use a small-scale simple text dataset for continued pre-training and employ two methods to identify simple words from the texts.
1 code implementation • EMNLP 2021 • Renliang Sun, Hanqi Jin, Xiaojun Wan
Finally, we select several representative models as baseline models for this task and perform automatic evaluation and human evaluation.
1 code implementation • COLING 2020 • Renliang Sun, Zhe Lin, Xiaojun Wan
Our model uses neural networks to learn the different effects of the preceding sentences and the following sentences on the current sentence and applies them to the improved transformer model.