Search Results for author: Cha Zhang

Found 18 papers, 10 papers with code

XFUND: A Benchmark Dataset for Multilingual Visually Rich Form Understanding

no code implementations · Findings (ACL) 2022 · Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei

Multimodal pre-training with text, layout, and image has recently achieved SOTA performance on visually rich document understanding tasks, demonstrating the great potential of joint learning across different modalities.

DiT: Self-supervised Pre-training for Document Image Transformer

2 code implementations · 4 Mar 2022 · Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei

Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques.

Document AI · Document Image Classification · +3
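
For readers who want to try the released model, here is a minimal feature-extraction sketch, assuming the Hugging Face hub checkpoint "microsoft/dit-base" hosts the released weights (the image path is a hypothetical placeholder):

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Load a self-supervised DiT backbone (assumes the "microsoft/dit-base"
# checkpoint on the Hugging Face hub).
processor = AutoImageProcessor.from_pretrained("microsoft/dit-base")
model = AutoModel.from_pretrained("microsoft/dit-base")

page = Image.open("document_page.png").convert("RGB")  # hypothetical scan
inputs = processor(images=page, return_tensors="pt")
features = model(**inputs).last_hidden_state  # patch embeddings for the page
```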

Improving Structured Text Recognition with Regular Expression Biasing

no code implementations · 10 Nov 2021 · Baoguang Shi, WenFeng Cheng, Yijuan Lu, Cha Zhang, Dinei Florencio

We study the problem of recognizing structured text, i.e., text that follows certain formats, and propose to improve the recognition accuracy of structured text by specifying regular expressions (regexes) for biasing.
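
The paper biases the recognizer's decoding toward strings that match user-supplied regexes; its exact mechanism is not reproduced here. As a loose illustration of the idea only, the toy sketch below rescores an n-best list with a regex bonus (the function names, bonus weight, and example pattern are all made up):

```python
import re

def rescore_with_regex(nbest, pattern, bonus=2.0):
    """Add a log-score bonus to hypotheses that fully match a regex.

    nbest: list of (text, log_score) pairs from a recognizer.
    pattern: regex describing the expected format (hypothetical example).
    """
    compiled = re.compile(pattern)
    rescored = [(text, score + (bonus if compiled.fullmatch(text) else 0.0))
                for text, score in nbest]
    return sorted(rescored, key=lambda p: p[1], reverse=True)

# Example: bias toward US-style phone numbers ("7O6" contains a letter O,
# so the regex steers the output to the correctly formatted hypothesis).
nbest = [("425-7O6-1234", -1.2), ("425-706-1234", -1.5)]
print(rescore_with_regex(nbest, r"\d{3}-\d{3}-\d{4}")[0][0])  # 425-706-1234
```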

TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models

2 code implementations · 21 Sep 2021 · Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei

Existing approaches to text recognition are usually built on a CNN for image understanding and an RNN for character-level text generation.

Handwritten Text Recognition · Language Modelling · +2
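
The released checkpoints are usable through Hugging Face transformers; a minimal inference sketch, assuming the "microsoft/trocr-base-handwritten" checkpoint and a hypothetical image file:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# TrOCR pairs an image-Transformer encoder with a text-Transformer decoder.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("handwritten_line.png").convert("RGB")  # hypothetical input
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```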

LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding

4 code implementations · 18 Apr 2021 · Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei

In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding.

Document Image Classification

LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding

4 code implementations · ACL 2021 · Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou

Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the availability of large-scale unlabeled scanned/digital-born documents.

Document Image Classification · Document Layout Analysis · +4

TAP: Text-Aware Pre-training for Text-VQA and Text-Caption

1 code implementation · CVPR 2021 · Zhengyuan Yang, Yijuan Lu, JianFeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, Jiebo Luo

Due to this aligned representation learning, even when pre-trained on the same downstream task dataset, TAP already boosts the absolute accuracy on the TextVQA dataset by +5.4% compared with a non-TAP baseline.

Language Modelling · Masked Language Modeling · +5

Multimodal active speaker detection and virtual cinematography for video conferencing

no code implementations · 10 Feb 2020 · Ross Cutler, Ramin Mehran, Sam Johnson, Cha Zhang, Adam Kirk, Oliver Whyte, Adarsh Kowdle

Active speaker detection (ASD) and virtual cinematography (VC) can significantly improve the remote user experience of a video conference by automatically panning, tilting, and zooming a video-conferencing camera: users subjectively rate an expert video cinematographer's footage significantly higher than unedited video.

Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness

1 code implementation · 7 Feb 2020 · Ting-Wu Chin, Cha Zhang, Diana Marculescu

Fine-tuning through knowledge transfer from a model pre-trained on a large-scale dataset is a widespread approach to effectively building models on small-scale datasets.

Adversarial Attack · Adversarial Robustness · +1
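
For context, this is the standard fine-tuning baseline the snippet refers to, not the paper's Renofeation method itself; a minimal PyTorch sketch with a hypothetical 10-class target task:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and swap the classification head for the
# small-scale target task (10 classes is a hypothetical choice).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step: all layers are updated, not just the head."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```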

RePr: Improved Training of Convolutional Filters

1 code implementation · CVPR 2019 · Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang

We show that by temporarily pruning and then restoring a subset of the model's filters, and repeating this process cyclically, overlap in the learned features is reduced, producing improved generalization.
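
A loose sketch of the cyclic prune-and-restore idea: rank filters by redundancy, temporarily zero out the most redundant ones, then restore them and continue training. The paper ranks filters via inter-filter orthogonality; the cosine-overlap score below is a simplified stand-in, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def redundancy_scores(conv_weight):
    # Approximate "overlap in learned features" by pairwise cosine
    # similarity of flattened filters (simplified stand-in for the
    # paper's inter-filter orthogonality ranking).
    w = F.normalize(conv_weight.flatten(1), dim=1)
    sim = (w @ w.t()).abs()
    sim.fill_diagonal_(0)
    return sim.sum(dim=1)  # higher = more redundant

def prune_mask(conv_weight, frac=0.3):
    scores = redundancy_scores(conv_weight)
    k = int(frac * scores.numel())
    mask = torch.ones(scores.numel(), dtype=torch.bool)
    mask[scores.topk(k).indices] = False  # False = temporarily pruned
    return mask

# During the pruned sub-cycle, multiply the conv weights (and gradients)
# by the mask; afterwards, reinitialize the pruned filters and resume
# full training, repeating the cycle.
```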

Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks

1 code implementation · 1 Oct 2018 · Ting-Wu Chin, Cha Zhang, Diana Marculescu

Resource-efficient convolutional neural networks enable not only on-device intelligence at the edge but also opportunities in system-level optimization such as scheduling.

Meta-Learning

Orthogonal and Idempotent Transformations for Learning Deep Neural Networks

no code implementations · 19 Jul 2017 · Jingdong Wang, Yajie Xing, Kexin Zhang, Cha Zhang

Identity transformations, used as skip-connections in residual networks, directly connect convolutional layers close to the input and those close to the output in deep neural networks, improving information flow and thus easing training.
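
For concreteness, the identity skip-connection the snippet describes (output = x + F(x)); this is the standard residual block, not the paper's orthogonal/idempotent construction:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Standard residual block: the identity transformation carries x
    around the convolutional body, so gradients flow directly."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # identity skip-connection
```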

Precision Enhancement of 3D Surfaces from Multiple Compressed Depth Maps

no code implementations · 25 Feb 2014 · Pengfei Wan, Gene Cheung, Philip A. Chou, Dinei Florencio, Cha Zhang, Oscar C. Au

In texture-plus-depth representation of a 3D scene, depth maps from different camera viewpoints are typically lossily compressed via the classical transform coding/coefficient quantization paradigm.

Quantization
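
A toy illustration of the classical transform-coding/quantization pipeline the snippet refers to (not the paper's enhancement method); assumes SciPy for the DCT, with a made-up quantization step:

```python
import numpy as np
from scipy.fft import dctn, idctn

def lossy_block_code(block, step=16.0):
    """Transform-code one 8x8 block: DCT, uniform coefficient
    quantization (the lossy step), then inverse DCT to decode."""
    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / step)           # lossy quantization
    return idctn(quantized * step, norm="ortho")  # decoded block

block = np.linspace(100, 131, 64).reshape(8, 8)   # synthetic depth patch
decoded = lossy_block_code(block)
print(float(np.abs(decoded - block).max()))       # quantization error
```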

Wide-Baseline Hair Capture Using Strand-Based Refinement

no code implementations · CVPR 2013 · Linjie Luo, Cha Zhang, Zhengyou Zhang, Szymon Rusinkiewicz

We propose a novel algorithm to reconstruct the 3D geometry of human hairs in wide-baseline setups using strand-based refinement.

Multiple-Instance Pruning For Learning Efficient Cascade Detectors

no code implementations · NeurIPS 2007 · Cha Zhang, Paul A. Viola

Cascade detectors have been shown to operate extremely rapidly, with high accuracy, and have important applications such as face detection.

Face Detection · Multiple Instance Learning
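
For concreteness, a generic early-reject cascade of the kind the snippet describes; the paper's actual contribution (setting intermediate thresholds via multiple-instance pruning) is not shown, and the scorers and thresholds below are made up:

```python
def cascade_detect(window, stages):
    """Generic early-reject cascade: `stages` is a list of
    (scorer, threshold) pairs; a window must pass every stage."""
    for scorer, threshold in stages:
        if scorer(window) < threshold:
            return False  # most windows exit cheaply at early stages
    return True

# Toy usage with made-up scorers and thresholds.
stages = [(sum, 1.0), (max, 0.5)]
print(cascade_detect([0.5, 0.6], stages))  # True: passes both stages
```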
