Search Results for author: Vanessa D'Amario

Found 6 papers, 4 papers with code

D3: Data Diversity Design for Systematic Generalization in Visual Question Answering

1 code implementation • 15 Sep 2023 • Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix

We demonstrate that this result is independent of the similarity between the training and testing data and applies to well-known families of neural network architectures for VQA (i.e., monolithic architectures and neural module networks).

Question Answering • Systematic Generalization • +1

Transformer Module Networks for Systematic Generalization in Visual Question Answering

1 code implementation • 27 Jan 2022 • Moyuru Yamada, Vanessa D'Amario, Kentaro Takemoto, Xavier Boix, Tomotake Sasaki

We reveal that Neural Module Networks (NMNs), i.e., question-specific compositions of modules that each tackle a sub-task, achieve systematic generalization performance better than or similar to that of conventional Transformers, even though NMNs' modules are CNN-based.

Question Answering • Systematic Generalization • +1

The Foes of Neural Network's Data Efficiency Among Unnecessary Input Dimensions

no code implementations • 13 Jul 2021 • Vanessa D'Amario, Sanjana Srivastava, Tomotake Sasaki, Xavier Boix

Datasets often contain input dimensions that are unnecessary to predict the output label, e.g., the background in object recognition, which lead to more trainable parameters.
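To make the abstract's point concrete (an illustrative sketch, not code from the paper), the trainable-parameter count of a fully connected layer grows linearly with input dimensionality, so unnecessary input dimensions directly inflate the parameter count:

```python
# Illustrative sketch (not from the paper): parameters of a single
# fully connected layer grow linearly with the input dimensionality.
def dense_param_count(input_dim: int, output_dim: int) -> int:
    # weight matrix plus one bias per output unit
    return input_dim * output_dim + output_dim

# 100 task-relevant dimensions vs. 900 extra "background" dimensions
print(dense_param_count(100, 10))   # 1010 trainable parameters
print(dense_param_count(1000, 10))  # 10010 trainable parameters
```

The hypothetical dimension counts are chosen only to show the scaling; the paper's actual experiments use image datasets.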

Object Recognition

The Foes of Neural Network's Data Efficiency Among Unnecessary Input Dimensions

no code implementations • 1 Jan 2021 • Vanessa D'Amario, Sanjana Srivastava, Tomotake Sasaki, Xavier Boix

In this paper, we investigate the impact of unnecessary input dimensions on one of the central issues of machine learning: the number of training examples needed to achieve high generalization performance, which we refer to as the network's data efficiency.

Foveation • Image Classification • +3

Frivolous Units: Wider Networks Are Not Really That Wide

1 code implementation • 10 Dec 2019 • Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman

We identify two distinct types of "frivolous" units that proliferate when the network's width is increased: prunable units which can be dropped out of the network without significant change to the output and redundant units whose activities can be expressed as a linear combination of others.
