1 code implementation • NAACL 2022 • Jae Sung Park, Sheng Shen, Ali Farhadi, Trevor Darrell, Yejin Choi, Anna Rohrbach
We test the robustness of recent methods on the proposed automatically generated contrast sets and compare them to additionally collected human-generated counterparts to assess their effectiveness.
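Contrast sets are typically built by minimally perturbing an example so the correct answer changes, then checking whether the model's prediction tracks that change. A toy sketch of this check, with a stand-in model (the `is_robust` helper and `toy_model` stub are hypothetical illustrations, not the paper's pipeline):

```python
# Toy contrast pair for a caption-matching model: swap one key word so the
# caption no longer matches the video, and check the prediction flips.
def is_robust(model, video, original, perturbed):
    """Model should accept the original caption and reject the perturbed one."""
    return model(video, original) and not model(video, perturbed)

# Hypothetical model stub: matches any caption that mentions "dog".
toy_model = lambda video, caption: "dog" in caption
print(is_robust(toy_model, video="clip_01",
                original="a dog catches the frisbee",
                perturbed="a cat catches the frisbee"))  # True
```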
no code implementations • 10 Feb 2022 • Jack Hessel, Jena D. Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, Yejin Choi
We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents.
1 code implementation • NeurIPS 2021 • Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi
As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future.
1 code implementation • NeurIPS 2021 • Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
We further quantitatively measure the quality of our codes by applying them to efficient image retrieval as well as out-of-distribution (OOD) detection problems.
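Learned low-dimensional binary codes like these are typically evaluated via Hamming-distance lookups. A minimal retrieval sketch under that assumption (the random code matrix and query are stand-ins, not the paper's learned codes):

```python
import numpy as np

def hamming_retrieve(query_code, database_codes, k=5):
    """Return indices of the k database codes closest to the query in Hamming distance."""
    # Count the bits where each database code differs from the query.
    distances = np.count_nonzero(database_codes != query_code, axis=1)
    return np.argsort(distances)[:k]

# Toy example: 1000 database items with 32-bit codes, one query.
rng = np.random.default_rng(0)
database_codes = rng.integers(0, 2, size=(1000, 32), dtype=np.uint8)
query_code = rng.integers(0, 2, size=32, dtype=np.uint8)
print(hamming_retrieve(query_code, database_codes))
```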
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.
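The "lower-level explanations based on gradients" contrasted here are typically input-saliency maps. A minimal PyTorch sketch of that baseline (the two-layer model is a placeholder, not from the paper):

```python
import torch
import torch.nn as nn

# Placeholder classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(1, 10, requires_grad=True)  # input we want to explain
score = model(x)[0, 1]                      # logit of the class of interest
score.backward()

# Saliency: magnitude of the gradient of the class score w.r.t. each input feature.
saliency = x.grad.abs().squeeze()
print(saliency)
```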
1 code implementation • ECCV 2020 • Jae Sung Park, Trevor Darrell, Anna Rohrbach
This auxiliary task allows us to propose a two-stage approach to Identity-Aware Video Description.
no code implementations • 5 Jun 2020 • Yue Tan, Chunjing Hu, Kuan Zhang, Kan Zheng, Ethan A. Davis, Jae Sung Park
Anomaly detection for non-linear dynamical systems plays an important role in ensuring system stability.
no code implementations • ECCV 2020 • Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, Yejin Choi
In addition, we provide person-grounding (i.e., co-reference links) between people appearing in the image and people mentioned in the textual commonsense descriptions, allowing for tighter integration between images and text.
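Such co-reference links can be represented as a mapping from textual person mentions to image regions. A hypothetical sketch of one annotation record (field names and values are illustrative, not the corpus schema):

```python
# Hypothetical person-grounding record: each person tag used in the text
# points to the bounding box of that person in the image.
annotation = {
    "image_id": "img_0001",
    "description": "[person1] hands a coffee cup to [person2]",
    "groundings": {
        "person1": {"bbox": [34, 50, 120, 310]},   # [x, y, width, height]
        "person2": {"bbox": [210, 48, 118, 305]},
    },
}

# Resolve a textual mention back to its image region.
print(annotation["groundings"]["person1"]["bbox"])
```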
1 code implementation • 2 May 2019 • Jae Woong Soh, Jae Sung Park, Nam Ik Cho
This paper presents a new framework for jointly enhancing the resolution and the dynamic range of an image, i.e., simultaneous super-resolution (SR) and high dynamic range imaging (HDRI), based on a convolutional neural network (CNN).
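A minimal sketch of the core idea, a single CNN mapping a low-resolution LDR input to a higher-resolution HDR output (the layer sizes and the pixel-shuffle upsampler are illustrative choices, not the paper's architecture):

```python
import torch
import torch.nn as nn

class JointSRHDRNet(nn.Module):
    """Toy CNN that upscales 2x and regresses HDR values in one pass."""
    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Pixel shuffle rearranges channels into a (scale x scale) larger grid.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, ldr_low_res):
        return self.upsample(self.body(ldr_low_res))

hdr_high_res = JointSRHDRNet()(torch.rand(1, 3, 64, 64))
print(hdr_high_res.shape)  # torch.Size([1, 3, 128, 128])
```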
1 code implementation • CVPR 2019 • Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video.
1 code implementation • 2 Aug 2017 • Jae Sung Park, Nam Ik Cho
This paper presents an algorithm that enhances undesirably illuminated images by generating and fusing multi-level illuminations from a single image. The input image is first decomposed into illumination and reflectance components by using an edge-preserving smoothing filter.
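A minimal sketch of this decompose-then-fuse idea, using a bilateral filter as the edge-preserving smoother and fixed gamma values to generate the multi-level illuminations (both are assumptions; the paper's filter and fusion scheme may differ):

```python
import cv2
import numpy as np

img = cv2.imread("input.jpg").astype(np.float32) / 255.0  # placeholder path

# Edge-preserving smoothing approximates the illumination component;
# reflectance is what remains after dividing it out.
illumination = cv2.bilateralFilter(img, 9, 0.1, 16)  # diameter, sigmaColor, sigmaSpace
reflectance = img / np.maximum(illumination, 1e-4)

# Generate multiple illumination levels via gamma correction and fuse by averaging.
levels = [np.power(illumination, g) for g in (0.4, 0.7, 1.0)]
fused_illumination = np.mean(levels, axis=0)

enhanced = np.clip(reflectance * fused_illumination, 0.0, 1.0)
cv2.imwrite("enhanced.jpg", (enhanced * 255).astype(np.uint8))
```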
no code implementations • 8 Jul 2017 • Jae Sung Park, Biao Jia, Mohit Bansal, Dinesh Manocha
We generate a factor graph from natural language instructions called the Dynamic Grounding Graph (DGG), which takes latent parameters into account.
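A factor graph in this setting is a bipartite structure over variables and factors scoring their joint assignments. A generic sketch of the data structure (not the authors' DGG construction; the toy variables and scoring rule are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """A factor scores a joint assignment of the variables it touches."""
    variables: tuple
    score: callable

@dataclass
class FactorGraph:
    variables: dict = field(default_factory=dict)  # name -> current value
    factors: list = field(default_factory=list)

    def total_score(self):
        # Sum of log-potentials over all factors (unnormalized).
        return sum(f.score(*(self.variables[v] for v in f.variables))
                   for f in self.factors)

# Toy instance: one latent speed parameter tied to an instruction word.
g = FactorGraph(variables={"instruction": "slowly", "speed": 0.2})
g.factors.append(Factor(("instruction", "speed"),
                        lambda w, s: 1.0 if (w == "slowly") == (s < 0.5) else -1.0))
print(g.total_score())  # 1.0
```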