Search Results for author: Hassan Shahmohammadi

Found 5 papers, 3 papers with code

Visual Grounding of Inter-lingual Word-Embeddings

no code implementations • 8 Sep 2022 • Wafaa Mohammed, Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen

We obtained visually grounded vector representations for these languages and studied whether visual grounding on one or multiple languages improved the performance of embeddings on word similarity and categorization benchmarks.

Tasks: Visual Grounding, Word Embeddings, +1
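
The word similarity benchmarks mentioned in the abstract are typically scored by correlating model similarities with human ratings. The sketch below illustrates that evaluation procedure in general terms; the embedding dictionary and pair format are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a word-similarity evaluation (hypothetical data format,
# not the paper's implementation).
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_similarity(embeddings, pairs):
    """embeddings: dict word -> vector; pairs: list of (w1, w2, human_score)."""
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in embeddings and w2 in embeddings:
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(gold)
    # Spearman rank correlation between model similarities and human ratings.
    return spearmanr(model_scores, human_scores).correlation
```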

Language with Vision: a Study on Grounded Word and Sentence Embeddings

1 code implementation • 17 Jun 2022 • Hassan Shahmohammadi, Maria Heitmeier, Elnaz Shafaei-Bajestan, Hendrik P. A. Lensch, Harald Baayen

Our model effectively balances the interplay between language and vision by aligning textual embeddings with visual information while simultaneously preserving the distributional statistics that characterize word usage in text corpora.

Tasks: Sentence, Sentence Embeddings, +3
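
The abstract describes a trade-off: aligning textual embeddings with visual information while preserving the distributional statistics of word usage. A minimal PyTorch sketch of such a combined objective is given below; the mapping network, loss weights, and variable names are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a grounding objective that balances cross-modal alignment
# against preservation of the original (distributional) text vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroundingMapper(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, grounded_dim=300):
        super().__init__()
        self.to_grounded = nn.Linear(text_dim, grounded_dim)
        self.to_image = nn.Linear(grounded_dim, image_dim)

    def forward(self, text_vecs):
        grounded = self.to_grounded(text_vecs)
        return grounded, self.to_image(grounded)

def grounding_loss(grounded, image_pred, text_vecs, image_vecs, alpha=0.5):
    # Alignment term: pull predictions toward the paired image features.
    align = 1.0 - F.cosine_similarity(image_pred, image_vecs).mean()
    # Preservation term: keep grounded vectors near the original text vectors,
    # so corpus-derived distributional statistics are not washed out.
    preserve = 1.0 - F.cosine_similarity(grounded, text_vecs).mean()
    return alpha * align + (1.0 - alpha) * preserve
```

The weight alpha controls how strongly grounding is allowed to move the embeddings away from their purely textual positions.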

Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training

1 code implementation • CoNLL (EMNLP) 2021 • Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen

The general approach is to embed both textual and visual information into a common space (the grounded space) confined by an explicit relationship between both modalities.

Tasks: Multi-Task Learning, Word Embeddings
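
The shared grounded space described above can be sketched as two projections, one per modality, trained jointly on an alignment task and a textual task. Because grounding is a learned mapping over word vectors, it can be applied zero-shot to words that never co-occurred with an image. The dimensions, projections, and loss terms below are illustrative assumptions, not the paper's architecture.

```python
# Sketch of multi-task training into a common grounded space.
import torch
import torch.nn as nn
import torch.nn.functional as F

text_proj = nn.Linear(300, 256)    # textual embeddings -> grounded space
image_proj = nn.Linear(512, 256)   # image features     -> grounded space

def multitask_step(text_vecs, image_vecs, neighbor_vecs, w_align=1.0, w_text=1.0):
    g_text = text_proj(text_vecs)
    g_image = image_proj(image_vecs)
    g_neigh = text_proj(neighbor_vecs)
    # Task 1: explicit cross-modal relationship in the grounded space.
    align = 1.0 - F.cosine_similarity(g_text, g_image).mean()
    # Task 2: preserve textual neighborhood structure in the same space.
    textual = 1.0 - F.cosine_similarity(g_text, g_neigh).mean()
    return w_align * align + w_text * textual

# Zero-shot grounding: any word vector can be mapped, even without an image.
# grounded_vector = text_proj(embedding_of_unseen_word)
```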
