Search Results for author: Abhay Zala

Found 10 papers, 4 papers with code

Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model

no code implementations • 15 Apr 2024 • Han Lin, Jaemin Cho, Abhay Zala, Mohit Bansal

Ctrl-Adapter provides diverse capabilities including image control, video control, video control with sparse frames, multi-condition control, compatibility with different backbones, adaptation to unseen control conditions, and video editing.

Image Generation, Video Editing, +1

EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents

no code implementations • 18 Mar 2024 • Abhay Zala, Jaemin Cho, Han Lin, Jaehong Yoon, Mohit Bansal

Instead of directly employing LLMs as agents, can we use LLMs' reasoning capabilities to adaptively create training environments to help smaller embodied RL agents learn useful skills that they are weak at?

Reinforcement Learning (RL), World Knowledge
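
As a rough illustration of the adaptive loop this question describes, here is a minimal Python sketch; `llm_propose_env` and `train_agent` are hypothetical placeholders, not part of EnvGen's released code:

```python
# Illustrative LLM-in-the-loop environment-generation cycle.
# Both helpers are hypothetical stand-ins, not EnvGen's actual API.

def llm_propose_env(skill_report):
    """Stand-in for an LLM call: propose an environment targeting the weakest skill."""
    weakest = min(skill_report, key=skill_report.get)
    return {"focus_skill": weakest, "difficulty": 1.0 - skill_report[weakest]}

def train_agent(agent, env_config):
    """Stand-in for an RL training run; improves the focused skill."""
    skill = env_config["focus_skill"]
    agent["skills"][skill] = min(1.0, agent["skills"][skill] + 0.2)
    return agent

agent = {"skills": {"collect_wood": 0.9, "craft_tool": 0.3, "defeat_zombie": 0.5}}
for cycle in range(3):
    env_config = llm_propose_env(agent["skills"])  # LLM adapts the curriculum
    agent = train_agent(agent, env_config)         # small RL agent trains in it
    print(cycle, env_config["focus_skill"], agent["skills"])
```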

DiagrammerGPT: Generating Open-Domain, Open-Platform Diagrams via LLM Planning

no code implementations • 18 Oct 2023 • Abhay Zala, Han Lin, Jaemin Cho, Mohit Bansal

In the first stage, we use LLMs to generate and iteratively refine 'diagram plans' (in a planner-auditor feedback loop) which describe all the entities (objects and text labels), their relationships (arrows or lines), and their bounding box layouts.
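
The planner-auditor feedback loop can be pictured with a toy sketch; the `planner` and `auditor` functions below are illustrative stand-ins for the LLM calls, and the plan schema is an assumption rather than the paper's actual format:

```python
# Toy planner-auditor refinement loop for 'diagram plans'.
# Both functions mimic LLM calls; the plan schema is assumed for illustration.

def planner(prompt, feedback=None):
    """Emit entities, relationships (arrows/lines), and bounding-box layouts."""
    plan = {"entities": ["sun", "earth"],
            "relationships": [("sun", "arrow", "earth")],
            "layout": {"sun": (0, 0, 20, 20), "earth": (60, 0, 20, 20)}}
    if feedback:  # pretend the planner fixes what the auditor flagged
        plan["layout"]["earth"] = (60, 40, 20, 20)
    return plan

def auditor(plan):
    """Return feedback on a flawed plan, or None if the plan passes."""
    y_coords = [box[1] for box in plan["layout"].values()]
    if len(set(y_coords)) < len(y_coords):  # toy check: boxes share a row
        return "entities overlap; separate the bounding boxes"
    return None

plan = planner("diagram of the sun warming the earth")
while (feedback := auditor(plan)) is not None:
    plan = planner("diagram of the sun warming the earth", feedback)  # refine
print(plan)
```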

VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning

no code implementations • 26 Sep 2023 • Han Lin, Abhay Zala, Jaemin Cho, Mohit Bansal

Our experiments demonstrate that the VideoDirectorGPT framework substantially improves layout and movement control in both single- and multi-scene video generation, and that it can generate multi-scene videos with visual consistency across scenes while achieving performance competitive with SOTAs in open-domain single-scene T2V generation.

Image Generation, Video Generation

Visual Programming for Text-to-Image Generation and Evaluation

no code implementations • 24 May 2023 • Jaemin Cho, Abhay Zala, Mohit Bansal

First, we introduce VPGen, an interpretable step-by-step T2I generation framework that decomposes T2I generation into three steps: object/count generation, layout generation, and image generation.

Text-to-Image Generation, World Knowledge
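
A minimal sketch of the three-step decomposition follows; the functions are placeholders for illustration, not VPGen's actual interface:

```python
# Illustrative three-step T2I decomposition: objects/counts -> layout -> image.
# All functions are hypothetical stand-ins, not VPGen's code.

def generate_objects(prompt):
    """Step 1: infer objects and their counts from the text prompt."""
    return {"dog": 2, "ball": 1}  # toy parse of the example prompt below

def generate_layout(objects):
    """Step 2: assign a bounding box to each object instance."""
    return [(name, (i * 30, 10, 25, 25))
            for name, count in objects.items() for i in range(count)]

def generate_image(layout):
    """Step 3: stand-in for layout-conditioned image generation."""
    return f"<image with {len(layout)} boxes: {[name for name, _ in layout]}>"

prompt = "two dogs playing with a ball"
print(generate_image(generate_layout(generate_objects(prompt))))
```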

Hierarchical Video-Moment Retrieval and Step-Captioning

1 code implementation • CVPR 2023 • Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oğuz, Yashar Mehdad, Mohit Bansal

Our hierarchical benchmark consists of video retrieval, moment retrieval, and two novel moment segmentation and step captioning tasks.

Information Retrieval, Moment Retrieval, +4
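
One way to picture the hierarchy is as a coarse-to-fine pipeline; the helpers below are toy stand-ins, not the benchmark's code:

```python
# Toy coarse-to-fine chain: video retrieval -> moment retrieval
# -> moment segmentation -> step captioning. All helpers are illustrative.

def retrieve_video(query, corpus):
    """Level 1: pick the most relevant video for the query (toy word overlap)."""
    return max(corpus, key=lambda v: sum(w in v["title"] for w in query.split()))

def retrieve_moment(video, query):
    """Level 2: locate the relevant time span inside the video."""
    return video["moments"].get(query, (0, video["length"]))

def segment_and_caption(span):
    """Levels 3-4: split the moment into steps and caption each step."""
    start, end = span
    mid = (start + end) // 2
    return [((start, mid), "step 1: prepare"), ((mid, end), "step 2: finish")]

corpus = [{"title": "how to make pasta", "length": 120,
           "moments": {"boil pasta": (30, 70)}},
          {"title": "bike repair basics", "length": 90, "moments": {}}]
video = retrieve_video("make pasta", corpus)
print(segment_and_caption(retrieve_moment(video, "boil pasta")))
```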

CoSIm: Commonsense Reasoning for Counterfactual Scene Imagination

1 code implementation • NAACL 2022 • Hyounghun Kim, Abhay Zala, Mohit Bansal

Next, a counterfactual imagined scene change (in textual form) is applied, and the model has to predict the new response to the initial question based on this scene change.

counterfactual
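
A toy instance of the task format described above; all field names and values are illustrative, not the dataset's actual schema:

```python
# Illustrative instance of the counterfactual scene-imagination task.
example = {
    "image": "kitchen_scene.jpg",
    "initial_question": "What will the person do next?",
    "initial_response": "Chop the vegetables with the knife.",
    "counterfactual_change": "The knife is replaced with a blender.",
    "new_response": "Blend the vegetables.",  # what the model must predict
}
print(example["counterfactual_change"], "->", example["new_response"])
```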

DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models

2 code implementations • ICCV 2023 • Jaemin Cho, Abhay Zala, Mohit Bansal

In this work, we investigate the visual reasoning capabilities and social biases of different text-to-image models, covering both multimodal transformer language models and diffusion models.

Image Captioning, Image Classification, +9

FixMyPose: Pose Correctional Captioning and Retrieval

1 code implementation • 4 Apr 2021 • Hyounghun Kim, Abhay Zala, Graham Burri, Mohit Bansal

During the correctional-captioning task, models must generate descriptions of how to move from the current to target pose image, whereas in the retrieval task, models should select the correct target pose given the initial pose and correctional description.

Pose Retrieval, Retrieval
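
A toy illustration of the two task formats; the field names are hypothetical, not the dataset's schema:

```python
# Illustrative instances of the two FixMyPose tasks.
correctional_captioning = {
    "current_pose": "pose_a.jpg",
    "target_pose": "pose_b.jpg",
    "caption": "Raise your left arm until it is parallel to the floor.",  # output
}
pose_retrieval = {
    "initial_pose": "pose_a.jpg",
    "correction": "Raise your left arm until it is parallel to the floor.",
    "candidates": ["pose_b.jpg", "pose_c.jpg", "pose_d.jpg"],
    "target": "pose_b.jpg",  # the model must select this candidate
}
print(pose_retrieval["target"] in pose_retrieval["candidates"])
```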

ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments

no code implementations • Findings of the Association for Computational Linguistics 2020 • Hyounghun Kim, Abhay Zala, Graham Burri, Hao Tan, Mohit Bansal

During this task, the agent (similar to a PokéMON GO player) is asked to find and collect different target objects one-by-one by navigating based on natural language instructions in a complex, realistic outdoor environment, but then also ARRAnge the collected objects part-by-part in an egocentric grid-layout environment.

Referring Expression, Referring Expression Comprehension, +1
