Search Results for author: Hyunsuk Chung

Found 6 papers, 2 papers with code

PEACH: Pretrained-embedding Explanation Across Contextual and Hierarchical Structure

1 code implementation · 21 Apr 2024 · Feiqi Cao, Caren Han, Hyunsuk Chung

In this work, we propose a novel tree-based explanation technique, PEACH (Pretrained-embedding Explanation Across Contextual and Hierarchical Structure), that can explain how text-based documents are classified by using any pretrained contextual embeddings in a tree-based human-interpretable manner.

Attribute · Feature Selection · +2

PDFVQA: A New Dataset for Real-World VQA on PDF Documents

no code implementations · 13 Apr 2023 · Yihao Ding, Siwen Luo, Hyunsuk Chung, Soyeon Caren Han

Document-based Visual Question Answering examines the understanding of document images conditioned on natural language questions.

Document Understanding · Key Information Extraction · +2

Form-NLU: Dataset for the Form Natural Language Understanding

1 code implementation · 4 Apr 2023 · Yihao Ding, Siqu Long, Jiabin Huang, Kaixuan Ren, Xingxiang Luo, Hyunsuk Chung, Soyeon Caren Han

Compared to general document analysis tasks, understanding and retrieving the structure of form documents are challenging.

4k · Key Information Extraction · +4

PiggyBack: Pretrained Visual Question Answering Environment for Backing up Non-deep Learning Professionals

no code implementations · 29 Nov 2022 · Zhihao Zhang, Siwen Luo, Junyi Chen, Sijia Lai, Siqu Long, Hyunsuk Chung, Soyeon Caren Han

We propose PiggyBack, a Visual Question Answering platform that allows users to easily apply state-of-the-art visual-language pretrained models.

Question Answering · Visual Question Answering

V-Doc: Visual questions answers with Documents

no code implementations · 27 May 2022 · Yihao Ding, Zhe Huang, Runlin Wang, Yanhang Zhang, Xianru Chen, Yuzhong Ma, Hyunsuk Chung, Soyeon Caren Han

We propose V-Doc, a question-answering tool that uses document images and PDFs, aimed mainly at researchers and general non-deep-learning experts looking to generate, process, and understand document visual question answering tasks.

Question Answering · Question Generation · +2

V-Doc: Visual Questions Answers With Documents

no code implementations · CVPR 2022 · Yihao Ding, Zhe Huang, Runlin Wang, Yanhang Zhang, Xianru Chen, Yuzhong Ma, Hyunsuk Chung, Soyeon Caren Han

We propose V-Doc, a question-answering tool that uses document images and PDFs, aimed mainly at researchers and general non-deep-learning experts looking to generate, process, and understand document visual question answering tasks.

Question Answering · Question Generation · +2