2 code implementations • 16 Oct 2023 • Hassan Shahmohammadi, Adhiraj Ghosh, Hendrik P. A. Lensch
Figurative and non-literal expressions are deeply integrated into human communication.
no code implementations • 8 Sep 2022 • Wafaa Mohammed, Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen
We obtained visually grounded vector representations for these languages and studied whether visual grounding on one or multiple languages improved the performance of embeddings on word similarity and categorization benchmarks.
no code implementations • 30 Jun 2022 • Hassan Shahmohammadi, Maria Heitmeier, Elnaz Shafaei-Bajestan, Hendrik P. A. Lensch, Harald Baayen
To what extent does this setup rely on visual information from images?
1 code implementation • 17 Jun 2022 • Hassan Shahmohammadi, Maria Heitmeier, Elnaz Shafaei-Bajestan, Hendrik P. A. Lensch, Harald Baayen
Our model effectively balances the interplay between language and vision by aligning textual embeddings with visual information while simultaneously preserving the distributional statistics that characterize word usage in text corpora.
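The balance described above, aligning textual embeddings with visual features while preserving the distributional geometry of the text space, can be sketched as a simple two-term objective. The linear map, the toy data, and the weighting `alpha` below are illustrative assumptions, not the paper's exact architecture or loss:

```python
import numpy as np

# Toy sketch of visually grounding word embeddings: learn a linear map M that
# pulls textual embeddings toward paired visual features (alignment term),
# while a second term keeps the mapped vectors close to the original textual
# embeddings so distributional statistics are preserved. All names and the
# weighting `alpha` are hypothetical.

rng = np.random.default_rng(0)
n, d = 50, 16                        # vocabulary size, embedding dimension
W = rng.normal(size=(n, d))          # textual word embeddings
# synthetic "visual" features correlated with the text embeddings
V = W @ rng.normal(size=(d, d)) * 0.5 + rng.normal(size=(n, d)) * 0.1

M = np.eye(d)                        # start at identity: grounded space = text space
alpha, lr = 0.5, 0.01                # alignment weight, learning rate
for _ in range(200):
    G = W @ M                        # current grounded embeddings
    grad_align = 2 * W.T @ (G - V) / n   # pull grounded vectors toward visual features
    grad_keep = 2 * W.T @ (G - W) / n    # keep them near the original text embeddings
    M -= lr * (alpha * grad_align + (1 - alpha) * grad_keep)

grounded = W @ M
align_err = np.mean((grounded - V) ** 2)
keep_err = np.mean((grounded - W) ** 2)
```

After training, `align_err` drops well below the initial text-to-visual gap while `keep_err` stays small, illustrating how a single weighted objective can trade off grounding against preservation.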
1 code implementation • CoNLL (EMNLP) 2021 • Hassan Shahmohammadi, Hendrik P. A. Lensch, R. Harald Baayen
The general approach is to embed both textual and visual information into a common space, the grounded space, confined by an explicit relationship between both modalities.