Search Results for author: Woo Suk Choi

Found 6 papers, 1 paper with code

Scene Graph Parsing via Abstract Meaning Representation in Pre-trained Language Models

no code implementations • NAACL (DLG4NLP) 2022 • Woo Suk Choi, Yu-Jung Heo, Dharani Punithan, Byoung-Tak Zhang

In this work, we propose the application of abstract meaning representation (AMR) based semantic parsing models to parse textual descriptions of a visual scene into scene graphs, which, to the best of our knowledge, is the first work to do so.

AMR Parsing • Dependency Parsing
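The entry above proposes parsing a textual scene description into an AMR graph as the basis for scene graph construction. Below is a minimal sketch of that Text-to-AMR step using the off-the-shelf amrlib parser; amrlib is a stand-in choice for illustration, and the paper does not state which parser the authors used.

```python
# Hedged sketch: parse an image caption into an AMR graph with amrlib.
# amrlib is an illustrative stand-in, not necessarily the parser used
# in the paper.
import amrlib

# Load a pretrained sentence-to-graph (StoG) model; a parse model must be
# installed first, per amrlib's documentation.
stog = amrlib.load_stog_model()

caption = "A man is riding a brown horse on the beach."
graphs = stog.parse_sents([caption])

# Each result is an AMR graph in PENMAN notation,
# e.g. (r / ride-01 :ARG0 (m / man) :ARG1 (h / horse :mod (b / brown)) ...)
print(graphs[0])
```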

SGRAM: Improving Scene Graph Parsing via Abstract Meaning Representation

no code implementations • 17 Oct 2022 • Woo Suk Choi, Yu-Jung Heo, Byoung-Tak Zhang

To this end, we design a simple yet effective two-stage scene graph parsing framework utilizing abstract meaning representation, SGRAM (Scene GRaph parsing via Abstract Meaning representation): 1) transforming a textual description of an image into an AMR graph (Text-to-AMR) and 2) encoding the AMR graph into a Transformer-based language model to generate a scene graph (AMR-to-SG).

Dependency Parsing • Graph Generation • +5
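As a rough illustration of SGRAM's second stage (AMR-to-SG) as described in the abstract above, the sketch below feeds a linearized AMR graph into a Transformer-based sequence-to-sequence model that generates a scene graph as text. The t5-small checkpoint, the task prefix, and the output triple format are assumptions made for illustration; the entry lists no code implementation, so no released SGRAM model is assumed.

```python
# Hedged sketch of the AMR-to-SG stage: a seq2seq Transformer maps a
# linearized AMR graph to a textual scene graph. t5-small is a generic
# stand-in model, not a released SGRAM checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Hand-written linearized AMR for "A man is riding a brown horse."
amr = "(r / ride-01 :ARG0 (m / man) :ARG1 (h / horse :mod (b / brown)))"

# The task prefix is an assumption; a real system would fine-tune the model
# on (AMR, scene graph) pairs before this prompt produces meaningful output.
inputs = tokenizer("parse AMR to scene graph: " + amr, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# With a fine-tuned model, the decoded string would be scene-graph triples,
# e.g. "man - ride - horse ; horse - attribute - brown".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```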

Hypergraph Transformer: Weakly-supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering

1 code implementation • ACL 2022 • Yu-Jung Heo, Eun-Sol Kim, Woo Suk Choi, Byoung-Tak Zhang

Knowledge-based visual question answering (QA) aims to answer a question which requires visually-grounded external knowledge beyond image content itself.

Question Answering • Visual Question Answering

Toward a Human-Level Video Understanding Intelligence

no code implementations • 8 Oct 2021 • Yu-Jung Heo, Minsu Lee, SeongHo Choi, Woo Suk Choi, Minjung Shin, Minjoon Jung, Jeh-Kwang Ryu, Byoung-Tak Zhang

In this paper, we propose the Video Turing Test to provide effective and practical assessments of video understanding intelligence as well as human-likeness evaluation of AI agents.

Video Understanding
