Search Results for author: Mehdi Ghanimifard

Found 9 papers, 2 papers with code

Fast visual grounding in interaction: bringing few-shot learning with neural networks to an interactive robot

no code implementations PaM 2020 José Miguel Cano Santín, Simon Dobnik, Mehdi Ghanimifard

The major shortcomings of using neural networks with situated agents are that very few learning examples are available in incremental interaction and that the agents' visual sensory representations are quite different from those found in image caption datasets.

Few-Shot Learning · Transfer Learning +1

What goes into a word: generating image descriptions with top-down spatial knowledge

no code implementations WS 2019 Mehdi Ghanimifard, Simon Dobnik

The aim of this paper is to evaluate what representations facilitate generating image descriptions with spatial relations and lead to better grounded language generation.

Decoder · Language Modelling +2

What a neural language model tells us about spatial relations

1 code implementation WS 2019 Mehdi Ghanimifard, Simon Dobnik

Understanding and generating spatial descriptions requires knowledge about what objects are related, their functional interactions, and where the objects are geometrically located.

Language Modelling

Bigrams and BiLSTMs: Two Neural Networks for Sequential Metaphor Detection

1 code implementation WS 2018 Yuri Bizzoni, Mehdi Ghanimifard

We present and compare two alternative deep neural architectures to perform word-level metaphor detection on text: a bi-LSTM model and a new structure based on recursive feed-forward concatenation of the input.

Vocal Bursts Valence Prediction · Word Embeddings

Exploring the Functional and Geometric Bias of Spatial Relations Using Neural Language Models

no code implementations WS 2018 Simon Dobnik, Mehdi Ghanimifard, John Kelleher

The challenge for computational models of spatial descriptions for situated dialogue systems is the integration of information from different modalities.

Image Captioning

"Deep" Learning: Detecting Metaphoricity in Adjective-Noun Pairs

no code implementations WS 2017 Yuri Bizzoni, Stergios Chatzikyriakidis, Mehdi Ghanimifard

We show that using a single neural network combined with pre-trained vector embeddings can outperform the state of the art in terms of accuracy.

Transfer Learning
