no code implementations • Findings (EMNLP) 2021 • Siyu Lai, Hui Huang, Dong Jing, Yufeng Chen, Jinan Xu, Jian Liu
Recent multilingual pre-trained models, like XLM-RoBERTa (XLM-R), have been demonstrated to be effective in many cross-lingual tasks.
Tasks: Cross-Lingual Sentiment Classification, Dialogue State Tracking, +4
no code implementations • 7 Mar 2024 • Yanqi Dai, Dong Jing, Nanyi Fei, Zhiwu Lu
To mitigate this issue, we propose a novel Comprehensive Task Balancing (CoTBal) algorithm for multi-task visual instruction tuning of LMMs.
no code implementations • 26 May 2022 • Dong Jing, Shuo Zhang, Song Chang, Youfang Lin
In this paper, we propose a novel LFRR network that directly exploits the complementary pixel information from raindrop-free areas of the input raindrop light field (LF); the network consists of a re-sampling module and a refinement module.