Search Results for author: Teresa Botschen

Found 7 papers, 2 papers with code

Prediction of Frame-to-Frame Relations in the FrameNet Hierarchy with Frame Embeddings

no code implementations • WS 2017 • Teresa Botschen, Hatem Mousselly-Sergieh, Iryna Gurevych

Automatic completion of frame-to-frame (F2F) relations in the FrameNet (FN) hierarchy has received little attention, although these relations incorporate meta-level commonsense knowledge and are used in downstream approaches.

Natural Language Inference • Representation Learning +1
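
To make the task concrete, here is a minimal sketch of one plausible setup: scoring which FrameNet relation holds between two frames given their pretrained frame embeddings. The classifier architecture, embedding size, and relation inventory below are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch (not the paper's exact model): predict the relation
# holding between two FrameNet frames from their embeddings.
# EMB_DIM and the relation inventory are assumptions.
import torch
import torch.nn as nn

EMB_DIM = 300                                                  # assumed frame-embedding size
RELATIONS = ["Inheritance", "Using", "Subframe", "Precedes"]   # subset of FN relation types

class F2FRelationClassifier(nn.Module):
    def __init__(self, emb_dim: int, n_relations: int):
        super().__init__()
        # Score a (parent, child) frame pair from the concatenated embeddings.
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_relations),
        )

    def forward(self, parent: torch.Tensor, child: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([parent, child], dim=-1))

# Toy usage: random vectors stand in for pretrained frame embeddings.
model = F2FRelationClassifier(EMB_DIM, len(RELATIONS))
logits = model(torch.randn(4, EMB_DIM), torch.randn(4, EMB_DIM))
print([RELATIONS[i] for i in logits.argmax(dim=-1).tolist()])
```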

A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning

no code implementations • SemEval 2018 • Hatem Mousselly-Sergieh, Teresa Botschen, Iryna Gurevych, Stefan Roth

Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities.

Graph Representation Learning • Information Retrieval +3
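
As a rough illustration of the translation-based idea with multimodal grounding, the sketch below combines a TransE-style score with a learned projection of external (e.g., visual) feature vectors into the entity space. The fusion by addition, the feature dimensionality, and all names here are assumptions for illustration; the paper's formulation may differ.

```python
# Illustrative sketch: a translation-based (TransE-style) KG model that also
# grounds entities in external multimodal features. Dimensions, the additive
# fusion, and all names are assumptions, not the paper's exact method.
import torch
import torch.nn as nn

class MultimodalTransE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100, feat_dim=4096):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # structural entity embeddings
        self.rel = nn.Embedding(n_relations, dim)   # relation translation vectors
        self.proj = nn.Linear(feat_dim, dim)        # maps multimodal features into the KG space

    def score(self, h, r, t, h_feat, t_feat):
        # TransE intuition: head + relation should land near tail; multimodal
        # features are projected into the same space and fused additively.
        h_vec = self.ent(h) + self.proj(h_feat)
        t_vec = self.ent(t) + self.proj(t_feat)
        return -(h_vec + self.rel(r) - t_vec).norm(p=2, dim=-1)

# Toy usage: random vectors stand in for, e.g., CNN image features.
model = MultimodalTransE(n_entities=1000, n_relations=50)
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
h_feat, t_feat = torch.randn(1, 4096), torch.randn(1, 4096)
print(model.score(h, r, t, h_feat, t_feat))   # higher score = more plausible triple
```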

Multimodal Grounding for Language Processing

1 code implementation • COLING 2018 • Lisa Beinborn, Teresa Botschen, Iryna Gurevych

This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language.

Frame- and Entity-Based Knowledge for Common-Sense Argumentative Reasoning

1 code implementation • WS 2018 • Teresa Botschen, Daniil Sorokin, Iryna Gurevych

Common-sense argumentative reasoning is a challenging task that requires holistic understanding of the argumentation, in which external knowledge about the world is hypothesized to play a key role.

Argument Mining • Common Sense Reasoning +8
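
One simple way such external knowledge can be injected, sketched below, is to pool embeddings of the frames evoked in an argument and concatenate them with a sentence representation before classification. All dimensions and names are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch: enrich a sentence representation with pooled FrameNet
# frame embeddings before classification. Dimensions are assumptions.
import torch
import torch.nn as nn

class KnowledgeEnrichedClassifier(nn.Module):
    def __init__(self, sent_dim=300, frame_dim=100, n_classes=2):
        super().__init__()
        self.clf = nn.Linear(sent_dim + frame_dim, n_classes)

    def forward(self, sent_emb, frame_embs):
        # frame_embs: (batch, n_frames, frame_dim), frames evoked in the argument
        frame_avg = frame_embs.mean(dim=1)            # pool evoked-frame vectors
        return self.clf(torch.cat([sent_emb, frame_avg], dim=-1))

# Toy usage with random stand-ins for sentence and frame embeddings.
model = KnowledgeEnrichedClassifier()
logits = model(torch.randn(2, 300), torch.randn(2, 5, 100))
```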

Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings

no code implementations • 14 Sep 2019 • Shweta Mahajan, Teresa Botschen, Iryna Gurevych, Stefan Roth

One of the key challenges in learning joint embeddings of multiple modalities, e.g., of images and text, is to ensure coherent cross-modal semantics that generalize across datasets.

Cross-Modal Retrieval • Retrieval
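
The sketch below illustrates one way to realize this idea: two modality-specific autoencoders whose latent codes are pushed toward a shared prior with an MMD penalty (standing in for the Wasserstein-style latent regularizer of WAE-MMD) and aligned across paired image/text inputs. Architecture, loss weights, and feature dimensions are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch: align two modality-specific autoencoders in a shared
# latent space with an MMD penalty toward a common prior plus a pairwise
# alignment term. All sizes and weights are assumptions.
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    # Maximum mean discrepancy (biased estimate) with an RBF kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class ModalityAE(nn.Module):
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

img_ae, txt_ae = ModalityAE(2048), ModalityAE(300)      # assumed feature sizes
img, txt = torch.randn(32, 2048), torch.randn(32, 300)  # paired image/caption features
z_img, img_rec = img_ae(img)
z_txt, txt_rec = txt_ae(txt)
prior = torch.randn_like(z_img)                          # samples from the shared latent prior
loss = (
    (img_rec - img).pow(2).mean() + (txt_rec - txt).pow(2).mean()  # reconstruction terms
    + rbf_mmd(z_img, prior) + rbf_mmd(z_txt, prior)                # match the latent prior
    + (z_img - z_txt).pow(2).mean()                                # align paired latents
)
loss.backward()
```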
