Search Results for author: Timothy Ossowski

Found 4 papers, 4 papers with code

How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes

1 code implementation • 4 Apr 2024 • Harmon Bhasin, Timothy Ossowski, Yiqiao Zhong, Junjie Hu

Large language models (LLMs) have recently shown an extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL).

In-Context Learning • Multi-Task Learning
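The function-class setup this paper investigates can be made concrete with a toy example. The sketch below is not the paper's code; the linear function class, dimensions, and sample counts are illustrative assumptions. It shows how few-shot (x, f(x)) pairs form the context from which a model must infer the task at inference time.

```python
# Toy sketch of in-context learning over a function class (illustrative,
# not the paper's code): the model sees (x_i, f(x_i)) pairs in its
# context and must predict f on a new query point, with no weight updates.
import numpy as np

rng = np.random.default_rng(0)

def sample_linear_task(dim=5):
    """Sample one function from a linear function class f(x) = w . x."""
    w = rng.normal(size=dim)
    return lambda x: x @ w

def build_context(f, n_examples=10, dim=5):
    """Generate the in-context examples: pairs (x_i, f(x_i))."""
    xs = rng.normal(size=(n_examples, dim))
    ys = np.array([f(x) for x in xs])
    return xs, ys

f = sample_linear_task()
xs, ys = build_context(f)
x_query = rng.normal(size=5)
# A transformer trained across many such sampled tasks would receive
# (x_1, y_1, ..., x_n, y_n, x_query) as one sequence and predict
# f(x_query) in-context.
print("ground-truth target:", f(x_query))
```

Multi-task training in this setting means sampling tasks from several different function classes during training, then measuring how that mixture affects in-context prediction on each class.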

Prompting Large Vision-Language Models for Compositional Reasoning

1 code implementation • 20 Jan 2024 • Timothy Ossowski, Ming Jiang, Junjie Hu

Vision-language models such as CLIP have shown impressive capabilities in encoding texts and images into aligned embeddings, enabling the retrieval of multimodal data in a shared embedding space.

Retrieval • Visual Reasoning
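As a concrete illustration of retrieval in CLIP's shared embedding space, here is a minimal sketch using the Hugging Face transformers CLIP API. This is not the paper's code; the checkpoint name and image file paths are illustrative placeholders.

```python
# Minimal sketch of text-to-image retrieval with CLIP embeddings
# (illustrative, not the paper's code).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical image paths; substitute your own files.
images = [Image.open(p) for p in ["cat.jpg", "dog.jpg"]]
query = "a photo of a dog"

with torch.no_grad():
    img_inputs = processor(images=images, return_tensors="pt")
    img_emb = model.get_image_features(**img_inputs)
    txt_inputs = processor(text=[query], return_tensors="pt", padding=True)
    txt_emb = model.get_text_features(**txt_inputs)

# Normalize, then rank images by cosine similarity to the text query.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
scores = (txt_emb @ img_emb.T).squeeze(0)
print("best match index:", scores.argmax().item())
```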

Multimodal Prompt Retrieval for Generative Visual Question Answering

1 code implementation • 30 Jun 2023 • Timothy Ossowski, Junjie Hu

Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA).

Domain Adaptation • Generative Visual Question Answering • +3
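The general idea of prompt retrieval can be sketched as follows. This is a toy illustration of retrieval-augmented prompting, not the paper's method: the word-overlap similarity and the small example pool stand in for a trained multimodal encoder and a real training set.

```python
# Toy sketch of prompt retrieval for generative VQA (illustrative, not
# the paper's implementation): retrieve training QA pairs similar to the
# test question and prepend them to the generative model's prompt.

def similarity(a: str, b: str) -> float:
    """Word-overlap similarity; a real system would compare dense
    embeddings from a text or multimodal encoder instead."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# Hypothetical retrieval pool of (question, answer) pairs.
train_pool = [
    ("What color is the bus?", "yellow"),
    ("How many dogs are in the picture?", "two"),
    ("What is the man holding?", "a guitar"),
]

def build_prompt(question: str, k: int = 2) -> str:
    """Select the k most similar training pairs and format a prompt."""
    ranked = sorted(train_pool, key=lambda qa: -similarity(question, qa[0]))
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in ranked[:k])
    return f"{shots}\nQ: {question}\nA:"

# The resulting prompt, together with image features, would be fed to a
# generative vision-language model to produce the answer.
print(build_prompt("What color is the car?"))
```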
