Search Results for author: Satwik Kottur

Found 23 papers, 18 papers with code

DialogStitch: Synthetic Deeper and Multi-Context Task-Oriented Dialogs

1 code implementation SIGDIAL (ACL) 2021 Satwik Kottur, Chinnadhurai Sankar, Zhou Yu, Alborz Geramifard

Real-world conversational agents must effectively handle long conversations that span multiple contexts.

An Analysis of State-of-the-Art Models for Situated Interactive MultiModal Conversations (SIMMC)

no code implementations SIGDIAL (ACL) 2021 Satwik Kottur, Paul Crook, Seungwhan Moon, Ahmad Beirami, Eunjoon Cho, Rajen Subba, Alborz Geramifard

There is a growing interest in virtual assistants with multimodal capabilities, e.g., inferring the context of a conversation through scene understanding.

Scene Understanding

Continual Dialogue State Tracking via Example-Guided Question Answering

1 code implementation23 May 2023 Hyundong Cho, Andrea Madotto, Zhaojiang Lin, Khyathi Raghavi Chandu, Satwik Kottur, Jing Xu, Jonathan May, Chinnadhurai Sankar

Dialogue systems are frequently updated to accommodate new services, but naively updating them by continually training with data for new services results in diminishing performance on previously learnt services.

Continual Learning Dialogue State Tracking +3

Hierarchical Video-Moment Retrieval and Step-Captioning

1 code implementation CVPR 2023 Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oğuz, Yashar Mehdad, Mohit Bansal

Our hierarchical benchmark consists of video retrieval, moment retrieval, and two novel moment segmentation and step captioning tasks.

Information Retrieval Moment Retrieval +4

Navigating Connected Memories with a Task-oriented Dialog System

1 code implementation15 Nov 2022 Seungwhan Moon, Satwik Kottur, Alborz Geramifard, Babak Damavandi

Recent years have seen an increasing trend in the volume of personal media captured by users, thanks to the advent of smartphones and smart glasses, resulting in large media collections.

Retrieval

Tell Your Story: Task-Oriented Dialogs for Interactive Content Creation

no code implementations8 Nov 2022 Satwik Kottur, Seungwhan Moon, Aram H. Markosyan, Hardik Shah, Babak Damavandi, Alborz Geramifard

We collect a new dataset C3 (Conversational Content Creation), comprising 10k dialogs conditioned on media montages simulated from a large media collection.

Benchmarking Retrieval

Database Search Results Disambiguation for Task-Oriented Dialog Systems

no code implementations NAACL 2022 Kun Qian, Ahmad Beirami, Satwik Kottur, Shahin Shayandeh, Paul Crook, Alborz Geramifard, Zhou Yu, Chinnadhurai Sankar

We find that training on our augmented dialog data improves the model's ability to deal with ambiguous scenarios, without sacrificing performance on unmodified turns.

Multi-Task Learning

Robustness through Data Augmentation Loss Consistency

1 code implementation21 Oct 2021 Tianjian Huang, Shaunak Halbe, Chinnadhurai Sankar, Pooyan Amini, Satwik Kottur, Alborz Geramifard, Meisam Razaviyayn, Ahmad Beirami

Our experiments show that DAIR consistently outperforms ERM and DA-ERM with little marginal computational cost and sets new state-of-the-art results in several benchmarks involving covariant data augmentation.

Multi-domain Dialogue State Tracking Visual Question Answering
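
For context, the consistency idea is to penalize the gap between a model's loss on an example and on its augmented counterpart, on top of the usual task loss. The sketch below is a hedged approximation in PyTorch: the squared difference of root losses and the weight lam are assumptions for illustration and may not match the exact DAIR regularizer.

    # Rough sketch of a data-augmentation loss-consistency objective.
    # The sqrt-loss difference penalty and lam are illustrative assumptions.
    import torch.nn.functional as F

    def consistency_step(model, x, x_aug, y, lam=1.0):
        loss_clean = F.cross_entropy(model(x), y, reduction="none")        # per-example loss on originals
        loss_aug = F.cross_entropy(model(x_aug), y, reduction="none")      # per-example loss on augmentations
        task_loss = 0.5 * (loss_clean + loss_aug).mean()                   # DA-ERM style supervised term
        consistency = ((loss_clean.sqrt() - loss_aug.sqrt()) ** 2).mean()  # penalize loss mismatch
        return task_loss + lam * consistency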

Ego4D: Around the World in 3,000 Hours of Egocentric Video

5 code implementations CVPR 2022 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification Ethics

SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations

1 code implementation EMNLP 2021 Satwik Kottur, Seungwhan Moon, Alborz Geramifard, Babak Damavandi

Next generation task-oriented dialog systems need to understand conversational contexts with their perceived surroundings, to effectively help users in the real-world multimodal environment.

Language Modelling

DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue

1 code implementation ACL 2021 Hung Le, Chinnadhurai Sankar, Seungwhan Moon, Ahmad Beirami, Alborz Geramifard, Satwik Kottur

A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations.

Object Tracking Visual Reasoning

Situated and Interactive Multimodal Conversations

2 code implementations COLING 2020 Seungwhan Moon, Satwik Kottur, Paul A. Crook, Ankita De, Shivani Poddar, Theodore Levin, David Whitney, Daniel Difranco, Ahmad Beirami, Eunjoon Cho, Rajen Subba, Alborz Geramifard

Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, in addition to the user's utterances), and perform multimodal actions (e.g., displaying a route in addition to generating the system's utterance).

Response Generation

On Emergent Communication in Competitive Multi-Agent Teams

1 code implementation4 Mar 2020 Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur

Several recent works have found the emergence of grounded compositional language in the communication protocols developed by mostly cooperative multi-agent systems when learned end-to-end to maximize performance on a downstream task.

Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog

1 code implementation EMNLP 2017 Satwik Kottur, José Moura, Stefan Lee, Dhruv Batra

A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision!

Slot Filling

Canopy – Fast Sampling with Cover Trees

no code implementations ICML 2017 Manzil Zaheer, Satwik Kottur, Amr Ahmed, José Moura, Alex Smola

In this work, we propose Canopy, a sampler based on Cover Trees that is exact, has guaranteed runtime logarithmic in the number of atoms, and is provably polynomial in the inherent dimensionality of the underlying parameter space.

Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog

3 code implementations26 Jun 2017 Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra

A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision!

Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning

7 code implementations ICCV 2017 Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra

Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images.

reinforcement-learning Reinforcement Learning (RL) +2
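
As a rough illustration of the game's structure only, here is a toy Python loop with placeholder agents; the attribute-vector "images", the fixed three-turn dialog, and the random guess policy are all assumptions and stand in for the paper's learned Qbot and Abot.

    # Toy skeleton of the cooperative image-guessing game: Qbot asks,
    # Abot answers about a hidden target, and Qbot guesses from a lineup.
    import random

    def qbot_ask(turn):
        return turn % 3                           # placeholder: ask about attribute turn mod 3

    def abot_answer(question, target):
        return target[question]                   # oracle-style reply about the hidden image

    def qbot_guess(dialog, lineup):
        return random.randrange(len(lineup))      # placeholder guess policy

    lineup = [[random.randint(0, 9) for _ in range(3)] for _ in range(5)]  # five toy "images"
    target_idx = random.randrange(len(lineup))
    dialog = []
    for turn in range(3):                         # fixed-length dialog
        q = qbot_ask(turn)
        dialog.append((q, abot_answer(q, lineup[target_idx])))
    reward = 1.0 if qbot_guess(dialog, lineup) == target_idx else 0.0      # guessing reward for RL
    print(dialog, reward)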

Deep Sets

5 code implementations NeurIPS 2017 Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola

Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.

Anomaly Detection Outlier Detection +1
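
The characterization above is commonly realized as the sum-decomposition f(X) = rho(sum over x of phi(x)): embed each element, pool by summation, then post-process. Below is a minimal PyTorch sketch of such a permutation-invariant set function; the layer sizes, input dimension, and toy check are illustrative assumptions, not the paper's configuration.

    # Minimal sketch of a Deep Sets style permutation-invariant function:
    # f(X) = rho(sum_x phi(x)). Dimensions are assumptions for illustration.
    import torch
    import torch.nn as nn

    class DeepSet(nn.Module):
        def __init__(self, in_dim=3, hidden=64, out_dim=1):
            super().__init__()
            self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
            self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, out_dim))

        def forward(self, x):                # x: (batch, set_size, in_dim)
            pooled = self.phi(x).sum(dim=1)  # summing over elements gives permutation invariance
            return self.rho(pooled)

    model = DeepSet()
    x = torch.randn(2, 10, 3)
    x_perm = x[:, torch.randperm(10), :]     # shuffle the set elements
    print(torch.allclose(model(x), model(x_perm), atol=1e-5))  # True: output ignores element order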

Visual Dialog

11 code implementations CVPR 2017 Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra

We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content.

Chatbot Retrieval +1

Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes

1 code implementation CVPR 2016 Satwik Kottur, Ramakrishna Vedantam, José M. F. Moura, Devi Parikh

While word embeddings trained using text have been extremely successful, they cannot uncover notions of semantic relatedness implicit in our visual world.

Common Sense Reasoning Image Retrieval +3
