1 code implementation • 15 Feb 2024 • Jiaxin Zhang, Zhongzhi Li, Mingliang Zhang, Fei Yin, Cheng-Lin Liu, Yashar Moshfeghi
Yet, their proficiency in tackling geometry math problems, which necessitates an integrated understanding of both textual and visual information, has not been thoroughly evaluated.
no code implementations • 11 May 2023 • Mingliang Zhang, Zhen Cao, Juntao Liu, LiQiang Niu, Fandong Meng, Jie Zhou
Our approach effectively demonstrates the benefits of combining query-based and anchor-free models for achieving robust layout segmentation in corporate documents.
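As an illustration only (not the paper's implementation): one simple way to combine a query-based head with an anchor-free head is to fuse their box predictions at inference time. The `fuse_predictions` helper and all tensor shapes below are assumptions for this sketch.

```python
# Illustrative sketch only (not the paper's method): fuse box predictions
# from a query-based head and an anchor-free head via NMS.
import torch
from torchvision.ops import nms

def fuse_predictions(query_boxes, query_scores,
                     anchor_free_boxes, anchor_free_scores,
                     iou_threshold=0.5):
    """Merge two detectors' outputs and suppress duplicate boxes.

    Boxes are (N, 4) tensors in (x1, y1, x2, y2) format; scores are (N,).
    """
    boxes = torch.cat([query_boxes, anchor_free_boxes], dim=0)
    scores = torch.cat([query_scores, anchor_free_scores], dim=0)
    keep = nms(boxes, scores, iou_threshold)  # indices kept after suppression
    return boxes[keep], scores[keep]
```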
1 code implementation • 16 Dec 2022 • Yujing Wang, Yaming Yang, Zhuo Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong
To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps.
1 code implementation • 20 May 2022 • Yihan Hao, Mingliang Zhang, Fei Yin, Linlin Huang
An appropriate dataset is critical for research on plane geometry diagram parsing (PGDP).
no code implementations • Findings (EMNLP) 2021 • Mingliang Zhang, Fandong Meng, Yunhai Tong, Jie Zhou
Therefore, we focus on balancing the learning competencies of different languages and propose Competence-based Curriculum Learning for Multilingual Machine Translation (CCL-M).
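The sketch below illustrates the general idea of competence-based sampling, not the CCL-M algorithm itself: languages the model is less competent at are sampled more often. The scoring scheme, the `temperature` parameter, and the example values are all assumptions.

```python
# Illustrative sketch of competence-based sampling (not CCL-M itself).
import numpy as np

def sampling_distribution(competences, temperature=1.0):
    """Map per-language competence scores in [0, 1] to sampling probabilities.

    Lower competence -> higher weight, so under-learned languages
    receive more training examples.
    """
    weights = np.exp(-np.asarray(competences, dtype=float) / temperature)
    return weights / weights.sum()

# Example: 'de' is already well learned, 'ne' is not.
competences = {"de": 0.9, "ro": 0.5, "ne": 0.2}
probs = sampling_distribution(list(competences.values()))
for lang, p in zip(competences, probs):
    print(f"{lang}: sampled with probability {p:.2f}")
```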
2 code implementations • 20 Feb 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong
In this paper, we propose a novel and generic mechanism based on evolving attention to improve the performance of transformers.
no code implementations • 1 Jan 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Yunhai Tong
Instead, we model their dependencies via a chain of prediction models that take previous attention maps as input to predict the attention maps of a new layer through convolutional neural networks.
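A minimal PyTorch sketch of this chained-prediction idea follows, assuming a residual mix between the convolved previous-layer attention maps and the current layer's dot-product attention. The module name, the 3x3 kernel, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvolvingAttention(nn.Module):
    """Sketch: refine the previous layer's attention maps with a convolution
    (heads as channels) and mix them with the current dot-product attention.
    """

    def __init__(self, dim, num_heads, alpha=0.5):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # 3x3 convolution over (seq_len x seq_len) attention maps.
        self.conv = nn.Conv2d(num_heads, num_heads, kernel_size=3, padding=1)
        self.alpha = alpha  # assumed fixed mixing weight

    def forward(self, x, prev_attn=None):
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        if prev_attn is not None:
            # Evolve the previous layer's maps and mix with the new ones.
            evolved = F.softmax(self.conv(prev_attn), dim=-1)
            attn = self.alpha * evolved + (1 - self.alpha) * attn
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out), attn  # attention maps feed the next layer
```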