Search Results for author: Michael Fiman

Found 1 paper, 1 paper with code

Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding

1 code implementation • CVPR 2023 • Morris Alper, Michael Fiman, Hadar Averbuch-Elor

We show that SOTA multimodally trained text encoders outperform unimodally trained text encoders on VLU tasks but underperform them on NLU tasks, lending new context to previously mixed results regarding the NLU capabilities of multimodal models.

Tasks: Knowledge Probing, Language Modelling, +2
