Search Results for author: Noa Garcia

Found 27 papers, 10 with code

Can multiple-choice questions really be useful in detecting the abilities of LLMs?

1 code implementation · 26 Mar 2024 · Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia

Additionally, we propose two methods to quantify the consistency and confidence of LLMs' output, which can be generalized to other QA evaluation benchmarks.

Multiple-choice Question Answering
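
The snippet above mentions two methods for quantifying consistency and confidence but does not spell them out. Below is a minimal sketch of one plausible reading (an assumption, not the authors' exact metrics): consistency as agreement among repeated samples of the same question, confidence as the average probability mass the model places on its chosen option.

```python
from collections import Counter

def consistency(sampled_answers):
    """Fraction of repeated samples agreeing with the modal answer.
    `sampled_answers` holds option letters from re-asking one question."""
    _, top_count = Counter(sampled_answers).most_common(1)[0]
    return top_count / len(sampled_answers)

def confidence(option_probs, chosen):
    """Average probability assigned to the chosen option across samples.
    `option_probs` is a list of {option: probability} dicts, e.g. from
    normalized logits of the option tokens."""
    return sum(p[chosen] for p in option_probs) / len(option_probs)

# Hypothetical samples for a single multiple-choice question
answers = ["A", "A", "B", "A"]
probs = [{"A": 0.7, "B": 0.3}, {"A": 0.6, "B": 0.4},
         {"A": 0.45, "B": 0.55}, {"A": 0.8, "B": 0.2}]
print(consistency(answers))    # 0.75
print(confidence(probs, "A"))  # ~0.64
```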

Stable Diffusion Exposed: Gender Bias from Prompt to Image

no code implementations · 5 Dec 2023 · Yankun Wu, Yuta Nakashima, Noa Garcia

Recent studies have highlighted biases in generative models, shedding light on their predisposition towards gender-based stereotypes and imbalances.

CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care

1 code implementation · NeurIPS 2023 · Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, Noa Garcia

In an effort to minimize the reliance on human resources for performance evaluation, we offer off-the-shelf judgment models for automatically assessing the long-form (LF) output of LLMs given benchmark questions.

Misinformation

Model-Agnostic Gender Debiased Image Captioning

1 code implementation · CVPR 2023 · Yusuke Hirota, Yuta Nakashima, Noa Garcia

From this observation, we hypothesize that there are two types of gender bias affecting image captioning models: 1) bias that exploits context to predict gender, and 2) bias in the probability of generating certain (often stereotypical) words because of gender.

Image Captioning
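
The two bias types hypothesized above can be made concrete with simple corpus statistics. The sketch below is an illustration under assumed inputs (predicted vs. annotated gender words, and captions grouped by gender), not the paper's actual measurement protocol:

```python
def context_bias_rate(pred_genders, true_genders):
    """Type 1 (sketch): how often the model emits a gender word that
    contradicts the annotation, suggesting gender was inferred from
    context (e.g. "kitchen" -> "woman") rather than from the person."""
    pairs = list(zip(pred_genders, true_genders))
    return sum(p != t for p, t in pairs) / len(pairs)

def generation_bias_ratio(captions_by_gender, word):
    """Type 2 (sketch): how much more often a (possibly stereotypical)
    word appears in captions of images of women than of men."""
    def rate(captions):
        return sum(word in c.lower() for c in captions) / max(len(captions), 1)
    return rate(captions_by_gender["woman"]) / max(rate(captions_by_gender["man"]), 1e-9)

# Hypothetical toy data
print(context_bias_rate(["woman", "man", "woman"], ["man", "man", "woman"]))  # ~0.33
print(generation_bias_ratio(
    {"woman": ["a woman holding an umbrella"], "man": ["a man on a bench"]},
    "umbrella"))  # word appears only for "woman", so the ratio is very large
```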

Uncurated Image-Text Datasets: Shedding Light on Demographic Bias

1 code implementation · CVPR 2023 · Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima

The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations.

Image Captioning · Text-to-Image Generation

Gender and Racial Bias in Visual Question Answering Datasets

no code implementations · 17 May 2022 · Yusuke Hirota, Yuta Nakashima, Noa Garcia

Our findings suggest that there are dangers associated with using VQA datasets without considering and dealing with their potentially harmful stereotypes.

Question Answering · Visual Question Answering

The Met Dataset: Instance-level Recognition for Artworks

no code implementations · 3 Feb 2022 · Nikolaos-Antonios Ypsilantis, Noa Garcia, Guangxing Han, Sarah Ibrahimi, Nanne van Noord, Giorgos Tolias

Testing is primarily performed on photos taken by museum guests depicting exhibits, which introduces a distribution shift between training and testing.

Contrastive Learning · Out-of-Distribution Detection

Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers

no code implementations · ACL 2021 · Jules Samaran, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima

The impressive performance of pre-trained visually grounded language models has motivated a growing body of research investigating what is learned during pre-training.

Language Modelling · Visual Grounding

A Picture May Be Worth a Hundred Words for Visual Question Answering

no code implementations · 25 Jun 2021 · Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye

This paper delves into the effectiveness of textual representations for image understanding in the specific context of VQA.

Data Augmentation · Descriptive +2
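
The idea of answering visual questions from text alone can be sketched with off-the-shelf components. The pipeline below is a stand-in using public Hugging Face models (an assumption for illustration; the paper's own captioning and QA models differ): describe the image in words, then answer from the words only.

```python
from transformers import pipeline

# Stand-in models, not the paper's: any captioner + extractive QA reader works.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
reader = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def vqa_from_text(image_path: str, question: str) -> str:
    # 1) Replace the image with a textual representation (a caption).
    caption = captioner(image_path)[0]["generated_text"]
    # 2) Answer the question from the text alone, never touching pixels again.
    return reader(question=question, context=caption)["answer"]

# Example: vqa_from_text("photo.jpg", "What is the person holding?")
```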

Understanding the Role of Scene Graphs in Visual Question Answering

no code implementations · 14 Jan 2021 · Vinay Damodaran, Sharanya Chakravarthy, Akshay Kumar, Anjana Umapathy, Teruko Mitamura, Yuta Nakashima, Noa Garcia, Chenhui Chu

Visual Question Answering (VQA) is of tremendous interest to the research community with important applications such as aiding visually impaired users and image-based search.

Graph Generation · Question Answering +2

Demographic Influences on Contemporary Art with Unsupervised Style Embeddings

no code implementations · 30 Sep 2020 · Nikolai Huckle, Noa Garcia, Yuta Nakashima

Art produced today, on the other hand, is abundant and easily accessible through the internet and social networks, which professional and amateur artists alike use to display their work.

Art Analysis

Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions

1 code implementation · ECCV 2020 · Noa Garcia, Yuta Nakashima

To understand movies, humans constantly reason over the dialogues and actions shown in specific scenes and relate them to the storyline seen so far.

Question Answering · Video Question Answering +1

Knowledge-Based Visual Question Answering in Videos

no code implementations · 17 Apr 2020 · Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima

We propose a novel video understanding task by fusing knowledge-based and video question answering.

Question Answering · Video Question Answering +2

Understanding Art through Multi-Modal Retrieval in Paintings

no code implementations · 24 Apr 2019 · Noa Garcia, Benjamin Renoust, Yuta Nakashima

In computer vision, visual arts are often studied from a purely aesthetics perspective, mostly by analysing the visual appearance of an artistic reproduction to infer its style, its author, or its representative features.

Art Analysis · Retrieval

Context-Aware Embeddings for Automatic Art Analysis

1 code implementation · 10 Apr 2019 · Noa Garcia, Benjamin Renoust, Yuta Nakashima

Whereas visual representations are able to capture information about the content and the style of an artwork, our proposed context-aware embeddings additionally encode relationships between different artistic attributes, such as author, school, or historical period.

Art Analysis · Cross-Modal Retrieval +3
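
One plausible realization of such context-aware embeddings (an assumed design for illustration, with hypothetical layer sizes and class counts, not the paper's exact architecture) is a shared embedding trained jointly with one classification head per artistic attribute, so that author, school, and period supervision all shape the same space:

```python
import torch
import torch.nn as nn

class ContextAwareEmbedder(nn.Module):
    """Shared art embedding with per-attribute heads (sketch)."""
    def __init__(self, feat_dim=2048, emb_dim=128,
                 n_authors=350, n_schools=25, n_periods=10):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "author": nn.Linear(emb_dim, n_authors),
            "school": nn.Linear(emb_dim, n_schools),
            "period": nn.Linear(emb_dim, n_periods),
        })

    def forward(self, visual_feats):
        z = self.embed(visual_feats)                      # context-aware embedding
        return z, {k: h(z) for k, h in self.heads.items()}

# Joint training signal: sum of attribute losses (toy labels below)
model = ContextAwareEmbedder()
z, logits = model(torch.randn(4, 2048))
labels = {"author": torch.randint(0, 350, (4,)),
          "school": torch.randint(0, 25, (4,)),
          "period": torch.randint(0, 10, (4,))}
loss = sum(nn.functional.cross_entropy(logits[k], labels[k]) for k in logits)
```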

How to Read Paintings: Semantic Art Understanding with Multi-Modal Retrieval

no code implementations · 23 Oct 2018 · Noa Garcia, George Vogiatzis

Automatic art analysis has been mostly focused on classifying artworks into different artistic styles.

Art Analysis · Retrieval

Dress like a Star: Retrieving Fashion Products from Videos

no code implementations · 19 Oct 2017 · Noa Garcia, George Vogiatzis

This work proposes a system for retrieving clothing and fashion products from video content.

Retrieval

Learning Non-Metric Visual Similarity for Image Retrieval

no code implementations · ICLR 2018 · Noa Garcia, George Vogiatzis

Theoretically, non-metric distances are able to generate a more complex and accurate similarity model than metric distances, provided that the non-linear data distribution is precisely captured by the system.

Content-Based Image Retrieval · Instance Search +1
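
The core idea, a similarity function learned by a network rather than a fixed metric, can be sketched in a few lines (a minimal illustration, not the paper's architecture): an MLP scores a descriptor pair directly, and nothing constrains the score to be symmetric or to satisfy the triangle inequality.

```python
import torch
import torch.nn as nn

class NonMetricSimilarity(nn.Module):
    """Learned pairwise similarity (sketch): an MLP over a concatenated
    descriptor pair, unconstrained by metric axioms."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, query, candidate):
        pair = torch.cat([query, candidate], dim=-1)  # order matters: not symmetric
        return self.net(pair).squeeze(-1)

sim = NonMetricSimilarity()
q, c = torch.randn(1, 512), torch.randn(1, 512)
print(sim(q, c).item(), sim(c, q).item())  # generally two different scores
```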
