Search Results for author: Artemis Panagopoulou

Found 7 papers, 5 papers with code

ViUniT: Visual Unit Tests for More Robust Visual Programming

no code implementations • 12 Dec 2024 • Artemis Panagopoulou, Honglu Zhou, Silvio Savarese, Caiming Xiong, Chris Callison-Burch, Mark Yatskar, Juan Carlos Niebles

In our framework, a unit test is represented as a pair of a novel image and an expected answer, meant to verify the logical correctness of a program produced for a given query.

Image Generation • Image-text matching • +4
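
A minimal sketch of the idea in Python: the toy program and the dict-based image below are hypothetical stand-ins for illustration, not the paper's interfaces.

    # Hypothetical sketch of a ViUniT-style visual unit test. `program`
    # stands in for a synthesized visual program answering "is there a dog?"
    def program(image):
        return "yes" if "dog" in image["objects"] else "no"

    def run_unit_test(program, test_image, expected_answer):
        """A unit test is an (image, answer) pair; the program passes if its
        output on the test image matches the expected answer."""
        return program(test_image) == expected_answer

    # A generated test image is represented here as a dict for illustration.
    test_image = {"objects": ["dog", "ball"]}
    assert run_unit_test(program, test_image, expected_answer="yes")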

Evaluating Vision-Language Models on Bistable Images

1 code implementation • 29 May 2024 • Artemis Panagopoulou, Coby Melkin, Chris Callison-Burch

Bistable images, also known as ambiguous or reversible images, are visual stimuli that admit two distinct interpretations, though an observer cannot perceive both simultaneously.
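
A minimal sketch of what such an evaluation could look like; vlm_answer is a stub for a real vision-language model call, and the duck/rabbit example is an illustrative assumption, not the paper's protocol.

    # Hypothetical sketch: show a bistable image and record which of its two
    # valid readings the model reports.
    def vlm_answer(image_path, question):
        # Placeholder for a real vision-language model call.
        return "a duck"

    INTERPRETATIONS = ("a duck", "a rabbit")

    def classify_reading(image_path):
        answer = vlm_answer(image_path, "What does this image depict?")
        # Record which of the two valid readings (if either) the model chose.
        return answer if answer in INTERPRETATIONS else "other"

    print(classify_reading("duck_rabbit.png"))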

X-InstructBLIP: A Framework for aligning X-Modal instruction-aware representations to LLMs and Emergent Cross-modal Reasoning

2 code implementations • 30 Nov 2023 • Artemis Panagopoulou, Le Xue, Ning Yu, Junnan Li, Dongxu Li, Shafiq Joty, Ran Xu, Silvio Savarese, Caiming Xiong, Juan Carlos Niebles

To enable this framework, we devise a scalable pipeline that automatically generates high-quality instruction-tuning datasets from readily available captioning data across different modalities, contributing 24K QA pairs for audio and 250K QA pairs for 3D.

Visual Reasoning
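
A minimal sketch of one step of such a caption-to-QA pipeline; the paper drives generation with a larger pipeline, stubbed here with a template, and caption_to_qa and its output format are hypothetical.

    # Hypothetical sketch: turn a caption from any modality (audio, 3D, ...)
    # into an instruction-tuning QA pair.
    def caption_to_qa(caption, modality):
        question = f"What does this {modality} sample contain?"
        return {"question": question, "answer": caption, "modality": modality}

    qa = caption_to_qa("a dog barking twice", modality="audio")
    print(qa)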

I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors

1 code implementation • 24 May 2023 • Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan

We propose to solve the task through collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor, containing the implicit meaning and relevant objects, which is then used as input to diffusion-based text-to-image models. Using a human-AI collaboration framework, in which humans interact with both the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.

Visual Entailment
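
A minimal sketch of the two-stage pipeline; llm_elaborate and text_to_image are hypothetical stubs for the LLM and diffusion-model calls, not the paper's code.

    # Hypothetical sketch: an LLM elaborates the metaphor, then the
    # elaboration is used as a text-to-image prompt.
    def llm_elaborate(metaphor):
        # Stand-in for chain-of-thought prompting of an LLM.
        return f"A literal scene conveying: {metaphor}, with its implicit objects made explicit"

    def text_to_image(prompt):
        # Stand-in for a diffusion model call; returns a path for illustration.
        return f"generated/{abs(hash(prompt)) % 10000}.png"

    elaboration = llm_elaborate("My lawyer is a shark")
    image_path = text_to_image(elaboration)
    print(elaboration, "->", image_path)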

Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction

1 code implementation • 24 Oct 2022 • Yue Yang, Artemis Panagopoulou, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch

We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models.

Property Prediction
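
A minimal sketch of a concreteness-weighted ensemble in this spirit; the weighting scheme, toy scores, and concreteness table are illustrative assumptions, not the paper's model.

    # Hypothetical sketch: the more concrete a property, the more weight its
    # image-derived score gets; abstract properties lean on the language model.
    CONCRETENESS = {"green": 0.9, "edible": 0.4}  # toy values in [0, 1]

    def ensemble_score(lm_score, image_score, prop):
        w = CONCRETENESS.get(prop, 0.5)
        return w * image_score + (1 - w) * lm_score

    print(ensemble_score(lm_score=0.3, image_score=0.8, prop="green"))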

Induce, Edit, Retrieve: Language Grounded Multimodal Schema for Instructional Video Retrieval

no code implementations • 17 Nov 2021 • Yue Yang, Joongwon Kim, Artemis Panagopoulou, Mark Yatskar, Chris Callison-Burch

Schemata are structured representations of complex tasks that can aid artificial intelligence by allowing models to break down complex tasks into intermediate steps.

Retrieval • Video Retrieval
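
A minimal sketch of schema-grounded retrieval; the schema format and the keyword-overlap matcher are illustrative assumptions, not the paper's method.

    # Hypothetical sketch: a schema is an ordered list of steps for a task,
    # and videos are ranked by how many schema steps they match.
    schema = {"task": "make tea", "steps": ["boil water", "steep tea", "pour tea"]}

    def step_matches(step, video_caption):
        return all(word in video_caption for word in step.split())

    def score_video(schema, video_caption):
        return sum(step_matches(s, video_caption) for s in schema["steps"])

    videos = ["we boil water then steep tea slowly", "how to fix a bike"]
    print(max(videos, key=lambda v: score_video(schema, v)))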

Visual Goal-Step Inference using wikiHow

1 code implementation • EMNLP 2021 • Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch

Understanding what sequence of steps is needed to complete a goal can help artificial intelligence systems reason about human activities.

Multimodal Reasoning • VGSI
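
A minimal sketch of the multiple-choice task; image_text_similarity is a toy stand-in for a real image-text model, and the word-overlap proxy is an illustrative assumption.

    # Hypothetical sketch: given a wikiHow goal, pick the candidate step image
    # (represented by its caption here) that best matches the goal.
    def image_text_similarity(image_caption, goal):
        # Toy proxy: count shared words between the goal and the caption.
        return len(set(image_caption.split()) & set(goal.split()))

    def infer_step(goal, candidate_captions):
        return max(candidate_captions, key=lambda c: image_text_similarity(c, goal))

    goal = "plant a tree"
    candidates = ["person digging a hole for a tree", "person washing a car"]
    print(infer_step(goal, candidates))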
