Search Results for author: Apoorv Khandelwal

Found 5 papers, 3 papers with code

A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge

1 code implementation • 3 Jun 2022 • Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, Roozbeh Mottaghi

In contrast to the existing knowledge-based VQA datasets, the questions generally cannot be answered by simply querying a knowledge base, and instead require some form of commonsense reasoning about the scene depicted in the image.

Question Answering · Visual Question Answering

Simple but Effective: CLIP Embeddings for Embodied AI

2 code implementations • CVPR 2022 • Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi

Contrastive language image pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks from classification and detection to captioning and image manipulation.

Image Manipulation · Navigate

Who's Waldo? Linking People Across Text and Images

1 code implementation • ICCV 2021 • Claire Yuqing Cui, Apoorv Khandelwal, Yoav Artzi, Noah Snavely, Hadar Averbuch-Elor

We present a task and benchmark dataset for person-centric visual grounding, the problem of linking between people named in a caption and people pictured in an image.

Ranked #1 on Person-centric Visual Grounding on Who's Waldo (using extra training data)

Person-centric Visual Grounding

An Ethical Highlighter for People-Centric Dataset Creation

no code implementations • 27 Nov 2020 • Margot Hanley, Apoorv Khandelwal, Hadar Averbuch-Elor, Noah Snavely, Helen Nissenbaum

Important ethical concerns arising from computer vision datasets of people have been receiving significant attention, and a number of datasets have been withdrawn as a result.
