no code implementations • 11 May 2022 • Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan
Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.
1 code implementation • 19 Oct 2020 • Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar
GNNs and chemical fingerprints are the predominant approaches to representing molecules for property prediction.
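The fingerprint idea mentioned here can be illustrated with a toy sketch: hash small fragments of a molecule's SMILES string into a fixed-size bit vector and compare molecules by Tanimoto similarity. This is a simplified stand-in for real circular fingerprints such as ECFP (which hash atom neighborhoods, not string substrings); every function name below is hypothetical.

```python
import hashlib

def toy_fingerprint(smiles: str, n_bits: int = 256, max_len: int = 3) -> list[int]:
    """Toy substructure fingerprint: hash each short SMILES substring into a
    fixed-size bit vector. A crude stand-in for circular fingerprints like
    ECFP, which hash atom neighborhoods rather than raw string fragments."""
    bits = [0] * n_bits
    for length in range(1, max_len + 1):
        for i in range(len(smiles) - length + 1):
            fragment = smiles[i:i + length]
            h = int(hashlib.md5(fragment.encode()).hexdigest(), 16)
            bits[h % n_bits] = 1
    return bits

def tanimoto(a: list[int], b: list[int]) -> float:
    """Tanimoto similarity: shared on-bits divided by the union of on-bits."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

ethanol = toy_fingerprint("CCO")
propanol = toy_fingerprint("CCCO")
benzene = toy_fingerprint("c1ccccc1")
# The two structurally similar alcohols share far more fragments
# than an alcohol and benzene do.
```

Real pipelines would use a cheminformatics library (e.g. RDKit) for this; the point here is only the shape of the representation that fingerprint-based property predictors consume.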
1 code implementation • NAACL 2019 • Gabriel Grand, Yonatan Belinkov
Visual question answering (VQA) models have been shown to over-rely on linguistic biases in VQA datasets, answering questions "blindly" without considering visual context.
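The "blind" failure mode can be made concrete with a hypothetical question-only baseline: memorize the most frequent training answer for each question prefix and never look at the image. This is an illustrative sketch of the language-prior effect, not the paper's model; all names and the toy data are invented.

```python
from collections import Counter, defaultdict

def train_blind_baseline(qa_pairs):
    """For each question prefix (first two words), memorize the most common
    training answer. The image is never consulted -- this is exactly the
    linguistic bias a VQA model can fall back on."""
    by_prefix = defaultdict(Counter)
    for question, answer in qa_pairs:
        prefix = " ".join(question.lower().split()[:2])
        by_prefix[prefix][answer] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_prefix.items()}

def answer_blindly(model, question, image=None):
    # `image` is deliberately ignored: the prediction uses text alone.
    prefix = " ".join(question.lower().split()[:2])
    return model.get(prefix, "yes")  # fallback to a dataset-wide prior

train = [
    ("What color is the banana?", "yellow"),
    ("What color is the sky?", "blue"),
    ("What color is the taxi?", "yellow"),
    ("Is there a dog?", "yes"),
]
model = train_blind_baseline(train)
# answer_blindly(model, "What color is the lemon?") answers from the
# "what color" prior without ever seeing a lemon.
```

Baselines of this kind score surprisingly well on biased datasets, which is why question-only performance is a standard diagnostic in the VQA literature.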
no code implementations • 3 Jun 2018 • Gabriel Grand, Aron Szanto, Yoon Kim, Alexander Rush
Visual question answering (VQA) models respond to open-ended natural language questions about images.
no code implementations • 5 Feb 2018 • Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko
Because related words appear in similar contexts, such spaces - called "word embeddings" - can be learned from patterns of lexical co-occurrences in natural language.
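The claim that embeddings can be learned from lexical co-occurrence can be sketched with a minimal count-based example: represent each word by how often other words appear near it, then compare words by cosine similarity. This is a toy illustration of the general idea (real embeddings are typically learned with methods like word2vec or GloVe); the corpus and function names are invented.

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """Count-based word vectors: each word is represented by how often
    every vocabulary word appears within `window` positions of it."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][index[sent[j]]] += 1.0
    return vecs

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "the cat ate the fish".split(),
    "the dog ate the bone".split(),
    "stocks fell on the market".split(),
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts (chased, ate), so their
# vectors align more closely than those of "cat" and "stocks".
```

Even at this tiny scale, distributionally similar words end up with similar vectors, which is the property the abstract refers to.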