Search Results for author: Apoorv Khandelwal

Found 3 papers, 2 papers with code

Simple but Effective: CLIP Embeddings for Embodied AI

2 code implementations • 18 Nov 2021 • Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi

Contrastive language image pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks from classification and detection to captioning and image manipulation.

Image Manipulation
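The CLIP approach described above scores an image against candidate texts by L2-normalizing both embeddings and comparing them with cosine similarity. A minimal pure-Python sketch of that scoring step, using made-up 4-dimensional vectors in place of real CLIP embeddings (which are typically 512- or 768-dimensional and produced by the pretrained encoders):

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product of L2-normalized vectors,
    the comparison CLIP uses between image and text embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Synthetic embeddings for illustration only -- not actual CLIP outputs.
image_emb = [1.0, 0.0, 0.5, 0.0]
text_embs = [
    [0.9, 0.1, 0.4, 0.0],  # e.g. "a photo of a kitchen"
    [0.0, 1.0, 0.0, 0.3],  # e.g. "a photo of a bathroom"
]

scores = [cosine(image_emb, t) for t in text_embs]
best = scores.index(max(scores))  # index of the best-matching text
```

In an embodied-AI setting, the frozen image encoder would supply `image_emb` for each visual observation; the sketch only illustrates the similarity computation itself.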

Who's Waldo? Linking People Across Text and Images

1 code implementation • ICCV 2021 • Claire Yuqing Cui, Apoorv Khandelwal, Yoav Artzi, Noah Snavely, Hadar Averbuch-Elor

We present a task and benchmark dataset for person-centric visual grounding, the problem of linking between people named in a caption and people pictured in an image.

Ranked #1 on Person-centric Visual Grounding on Who's Waldo (using extra training data)

Person-centric Visual Grounding
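The linking problem described above can be cast, in toy form, as a one-to-one assignment between names in a caption and detected people in an image. The brute-force sketch below is illustrative only and is not the paper's method; the `sim` scores are invented stand-ins for whatever name-to-detection scores a grounding model would produce, and it assumes equal numbers of names and detections:

```python
from itertools import permutations

def best_linking(sim):
    """Exhaustively search one-to-one assignments of caption names
    (rows) to detected people (columns), maximizing total score.
    Assumes a square score matrix; feasible only for small n."""
    n = len(sim)
    best_perm, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(sim[i][perm[i]] for i in range(n))
        if score > best_score:
            best_perm, best_score = list(perm), score
    return best_perm, best_score

# Toy scores: 2 caption names x 2 person detections.
sim = [[0.9, 0.2],
       [0.1, 0.8]]
assignment, total = best_linking(sim)
```

For larger instances a polynomial-time matching algorithm (e.g. the Hungarian method) would replace the factorial search; the toy version just makes the assignment structure of the task concrete.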

An Ethical Highlighter for People-Centric Dataset Creation

no code implementations • 27 Nov 2020 • Margot Hanley, Apoorv Khandelwal, Hadar Averbuch-Elor, Noah Snavely, Helen Nissenbaum

Important ethical concerns arising from computer vision datasets of people have been receiving significant attention, and a number of datasets have been withdrawn as a result.
