Search Results for author: Mingliang Zhang

Found 7 papers, 4 papers with code

GeoEval: Benchmark for Evaluating LLMs and Multi-Modal Models on Geometry Problem-Solving

1 code implementation • 15 Feb 2024 Jiaxin Zhang, Zhongzhi Li, Mingliang Zhang, Fei Yin, Cheng-Lin Liu, Yashar Moshfeghi

Yet, their proficiency in tackling geometry math problems, which necessitates an integrated understanding of both textual and visual information, has not been thoroughly evaluated.

Geometry Problem Solving · Math

WeLayout: WeChat Layout Analysis System for the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents

no code implementations • 11 May 2023 Mingliang Zhang, Zhen Cao, Juntao Liu, Liqiang Niu, Fandong Meng, Jie Zhou

Our approach effectively demonstrates the benefits of combining query-based and anchor-free models for achieving robust layout segmentation in corporate documents.

Bayesian Optimization · Segmentation

Convolution-enhanced Evolving Attention Networks

1 code implementation • 16 Dec 2022 Yujing Wang, Yaming Yang, Zhuo Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong

To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps.

Image Classification · Machine Translation · +3

Competence-based Curriculum Learning for Multilingual Machine Translation

no code implementations • Findings (EMNLP) 2021 Mingliang Zhang, Fandong Meng, Yunhai Tong, Jie Zhou

Therefore, we focus on balancing the learning competencies of different languages and propose Competence-based Curriculum Learning for Multilingual Machine Translation, named CCL-M.

Machine Translation · Translation

Evolving Attention with Residual Convolutions

2 code implementations • 20 Feb 2021 Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong

In this paper, we propose a novel and generic mechanism based on evolving attention to improve the performance of transformers.

Image Classification · Machine Translation · +2

Predictive Attention Transformer: Improving Transformer with Attention Map Prediction

no code implementations • 1 Jan 2021 Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Yunhai Tong

Instead, we model their dependencies via a chain of prediction models that take previous attention maps as input to predict the attention maps of a new layer through convolutional neural networks.

Machine Translation
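The Predictive Attention Transformer abstract above describes its core idea: the attention map of a new layer is predicted from the previous layer's map with a convolutional network. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' released code; the module name ConvolutionalAttentionPrior, the 3x3 kernel, and the mixing weight alpha are assumptions made here for illustration.

```python
# Hedged sketch: attention maps of shape (batch, heads, seq, seq) are treated
# as multi-channel images; a small convolution over the previous layer's map
# produces a prior that is mixed residually with the current layer's scores.
import torch
import torch.nn as nn


class ConvolutionalAttentionPrior(nn.Module):  # hypothetical name
    def __init__(self, num_heads: int, alpha: float = 0.5):
        super().__init__()
        # Convolution mixing information across heads and nearby positions.
        self.conv = nn.Conv2d(num_heads, num_heads, kernel_size=3, padding=1)
        self.alpha = alpha  # assumed residual mixing weight

    def forward(self, prev_attn_logits: torch.Tensor,
                curr_attn_logits: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, heads, seq, seq) pre-softmax attention scores.
        predicted = self.conv(prev_attn_logits)                 # prior from previous layer
        mixed = self.alpha * predicted + (1 - self.alpha) * curr_attn_logits
        return torch.softmax(mixed, dim=-1)                     # evolved attention weights
```

In this reading, each self-attention layer would pass its own pre-softmax scores together with the previous layer's scores through such a module, and use the returned weights to aggregate the value vectors.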
