1 code implementation • ACL 2022 • Yan Liu, Sanyuan Chen, Yazheng Yang, Qi Dai
In this paper, we propose a multi-level Mutual Promotion mechanism for self-evolved Inference and sentence-level Interpretation (MPII).
no code implementations • 12 Oct 2024 • Lei LI, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong, Qi Liu
As large vision-language models (LVLMs) evolve rapidly, the demand for high-quality and diverse data to align these models becomes increasingly crucial.
Ranked #62 on Visual Question Answering on MM-Vet
no code implementations • 8 May 2024 • Yan Liu, Yazheng Yang, Xiaokang Chen
Long text understanding is important yet challenging for natural language processing.
no code implementations • 18 Apr 2024 • Aitor Ormazabal, Che Zheng, Cyprien de Masson d'Autume, Dani Yogatama, Deyu Fu, Donovan Ong, Eric Chen, Eugenie Lamprecht, Hai Pham, Isaac Ong, Kaloyan Aleksiev, Lei LI, Matthew Henderson, Max Bain, Mikel Artetxe, Nishant Relan, Piotr Padlewski, Qi Liu, Ren Chen, Samuel Phua, Yazheng Yang, Yi Tay, Yuqi Wang, Zhongkai Zhu, Zhihui Xie
On text benchmarks, Core not only performs competitively with other frontier models on a set of well-established benchmarks (e.g., MMLU, GSM8K) but also outperforms GPT4-0613 on human evaluation.
no code implementations • 29 Mar 2024 • Yazheng Yang, Yuqi Wang, Yaxuan Li, Sankalok Sen, Lei LI, Qi Liu
Despite their proficiency in comprehending natural language, LLMs fall short in dealing with structured tabular data.
no code implementations • 17 Dec 2023 • Lei LI, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, Lingpeng Kong
This paper explores preference distillation for large vision-language models (LVLMs), improving their ability to generate helpful and faithful responses anchored in the visual context.
Ranked #65 on Visual Question Answering on MM-Vet
1 code implementation • 22 Jul 2023 • Yuwei Yin, Yazheng Yang, Jian Yang, Qi Liu
To tackle these issues, we propose FinPT and FinBench: the former is a novel approach to financial risk prediction that conducts Profile Tuning on large pretrained foundation models, and the latter is a set of high-quality datasets on financial risks such as default, fraud, and churn.
no code implementations • 18 Jul 2023 • Yazheng Yang, Yuqi Wang, Guang Liu, Ledell Wu, Qi Liu
This research primarily centers on classification and regression tasks involving tabular data, and conducts rigorous experiments and analyses to validate the effectiveness of our methodology.
1 code implementation • 12 Jun 2023 • Yazheng Yang, Zhou Zhao, Qi Liu
Our proposed method addresses this issue by assigning an individual style vector to each token in a text, allowing fine-grained control and manipulation of the style strength.
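As a rough illustration of the per-token idea described above — not the paper's actual implementation, and with all names hypothetical — each token embedding can receive a shared style vector scaled by its own strength coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_token_styles(token_embs, style_vec, strengths):
    """Add a shared style vector to every token embedding,
    scaled by a per-token strength in [0, 1]."""
    strengths = np.asarray(strengths)[:, None]  # (seq_len, 1)
    return token_embs + strengths * style_vec   # broadcast over embedding dim

seq_len, dim = 4, 8
token_embs = rng.normal(size=(seq_len, dim))  # stand-in token embeddings
style_vec = rng.normal(size=dim)              # stand-in style vector

# Strength 0.0 leaves a token unstyled; 1.0 applies the full style shift.
styled = apply_token_styles(token_embs, style_vec, [0.0, 0.3, 0.7, 1.0])
```

Because each token carries its own scalar, style intensity can vary word by word rather than being fixed for the whole sentence.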
no code implementations • 7 Jun 2023 • Lei LI, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu sun, Lingpeng Kong, Qi Liu
To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M$^3$IT) dataset, designed to optimize VLM alignment with human instructions.
1 code implementation • 21 Feb 2022 • Sihao Hu, Yi Cao, Yu Gong, Zhao Li, Yazheng Yang, Qingwen Liu, Shouling Ji
Specifically, we establish a heterogeneous graph that contains physical and semantic linkages to guide the feature transfer process from warmed-up videos to cold-start videos.
no code implementations • 14 Dec 2021 • Yazheng Yang, Boyuan Pan, Deng Cai, Huan Sun
In particular, instead of directly generating a story, we first learn to map the short text input to a low-dimensional topic distribution (which is pre-assigned by a topic model).
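The first stage described above — mapping a short input text to a low-dimensional topic distribution — can be sketched with a toy stand-in for the pre-assigned topic model (word-topic table, topic count, and function names here are all hypothetical):

```python
import numpy as np

# Toy stand-in for a pretrained topic model's word-topic assignments;
# the paper pre-assigns topics with a real topic model instead.
WORD_TOPICS = {"dragon": 0, "castle": 0, "robot": 1, "laser": 1, "ocean": 2}
NUM_TOPICS = 3

def text_to_topic_distribution(text, smoothing=0.1):
    """Stage 1: map a short text to a normalized topic distribution."""
    counts = np.full(NUM_TOPICS, smoothing)  # smoothing avoids zero topics
    for word in text.lower().split():
        if word in WORD_TOPICS:
            counts[WORD_TOPICS[word]] += 1.0
    return counts / counts.sum()

dist = text_to_topic_distribution("a robot fired a laser at the castle")
# Stage 2 (not shown) would condition the story generator on `dist`.
```

The point of the intermediate distribution is that generation is then conditioned on a compact, interpretable topic signal rather than directly on the raw short text.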
no code implementations • 10 Oct 2021 • Yan Liu, Yazheng Yang
Long text understanding is important yet challenging in natural language processing.
1 code implementation • ICML 2020 • Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li
Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation.
no code implementations • 14 Jan 2020 • Boyuan Pan, Yazheng Yang, Zhou Zhao, Yueting Zhuang, Deng Cai
Neural Machine Translation (NMT) has become a popular technology in recent years, and the encoder-decoder framework is the mainstream approach among existing methods.
1 code implementation • ACL 2018 • Boyuan Pan, Yazheng Yang, Zhou Zhao, Yueting Zhuang, Deng Cai, Xiaofei He
We observe that people usually use some discourse markers such as "so" or "but" to represent the logical relationship between two sentences.
Ranked #16 on Natural Language Inference on SNLI
no code implementations • NeurIPS 2018 • Boyuan Pan, Yazheng Yang, Hao Li, Zhou Zhao, Yueting Zhuang, Deng Cai, Xiaofei He
In this paper, we transfer knowledge learned from machine comprehension to the sequence-to-sequence tasks to deepen the understanding of the text.