Search Results for author: Ankur Sikarwar

Found 6 papers, 3 papers with code

Reason from Context with Self-supervised Learning

no code implementations • 23 Nov 2022 • Xiao Liu, Ankur Sikarwar, Gabriel Kreiman, Zenglin Shi, Mengmi Zhang

To better accommodate the object-centric nature of current downstream tasks such as object recognition and detection, various methods have been proposed to suppress contextual biases or disentangle objects from contexts.

Object Recognition +2

Can Machines Imitate Humans? Integrative Turing Tests for Vision and Language Demonstrate a Narrowing Gap

no code implementations • 23 Nov 2022 • Mengmi Zhang, Giorgia Dellaferrera, Ankur Sikarwar, Caishun Chen, Marcelo Armendariz, Noga Mudrik, Prachi Agrawal, Spandan Madan, Mranmay Shetty, Andrei Barbu, Haochen Yang, Tanishq Kumar, Shui'Er Han, Aman RAJ Singh, Meghna Sadwani, Stella Dellaferrera, Michele Pizzochero, Brandon Tang, Yew Soon Ong, Hanspeter Pfister, Gabriel Kreiman

To address this question, we turn to the Turing test and systematically benchmark current AIs in their abilities to imitate humans in three language tasks (Image captioning, Word association, and Conversation) and three vision tasks (Object detection, Color estimation, and Attention prediction).

Image Captioning Object Detection +1

When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks

1 code implementation • 23 Oct 2022 • Ankur Sikarwar, Arkil Patel, Navin Goyal

On analyzing the task, we find that identifying the target location in the grid world is the main challenge for the models.

On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering

no code implementations • 11 Jan 2022 • Ankur Sikarwar, Gabriel Kreiman

In recent years, multi-modal transformers have shown significant progress in Vision-Language tasks, such as Visual Question Answering (VQA), outperforming previous architectures by a considerable margin.

POS Question Answering +1
