Search Results for author: Marie-Francine Moens

Found 121 papers, 34 papers with code

Online Continual Learning from Imbalanced Data

no code implementations ICML 2020 Aristotelis Chrysakis, Marie-Francine Moens

Motivated by this remark, we aim to evaluate memory population methods that are used in online continual learning, when dealing with highly imbalanced and temporally correlated streams of data.

Computational Efficiency Continual Learning

Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning

1 code implementation25 Mar 2024 Philipp Borchert, Jochen De Weerdt, Marie-Francine Moens

In this paper, we introduce a novel approach to enhance information extraction combining multiple sentence representations and contrastive learning.

Classification Contrastive Learning +4
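
The listing gives only the abstract's one-line summary; purely as an illustration of the contrastive-representation idea it mentions (not the authors' code — all names, dimensions, and the temperature value are invented), an InfoNCE-style loss over sentence representations might be sketched as:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor's positive is the
    matching row in `positives`; all other rows act as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))                   # 8 sentence representations
loss = info_nce_loss(emb, emb + 0.01 * rng.normal(size=(8, 16)))
```

With near-identical anchor/positive pairs the loss is close to zero; shuffled pairs would push it toward log(n).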

Computational Models to Study Language Processing in the Human Brain: A Survey

no code implementations20 Mar 2024 Shaonan Wang, Jingyuan Sun, Yunhao Zhang, Nan Lin, Marie-Francine Moens, Chengqing Zong

Despite differing from the human language processing mechanism in implementation and algorithms, current language models demonstrate remarkable human-like, and in some cases superhuman, language capabilities.

Animate Your Motion: Turning Still Images into Dynamic Videos

no code implementations15 Mar 2024 Mingxiao Li, Bo Wan, Marie-Francine Moens, Tinne Tuytelaars

For the first time, we integrate both semantic and motion cues within a diffusion model for video generation, as demonstrated in Fig 1.

Specificity Text-to-Video Generation +1

Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks

no code implementations14 Mar 2024 Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens

Also when fine-tuning a pre-trained multimodal model such as CLIP-BART, we observe smaller but consistent improvements across a range of VL PEFT tasks.

NeuroCine: Decoding Vivid Video Sequences from Human Brain Activities

no code implementations2 Feb 2024 Jingyuan Sun, Mingxiao Li, Zijiao Chen, Marie-Francine Moens

In the pursuit of understanding the intricacies of the human brain's visual processing, reconstructing dynamic visual experiences from brain activities emerges as a challenging yet fascinating endeavor.

Contrastive Learning SSIM +1

Explicitly Representing Syntax Improves Sentence-to-layout Prediction of Unexpected Situations

no code implementations25 Jan 2024 Wolf Nuyts, Ruben Cartuyvels, Marie-Francine Moens

To test compositional understanding, we collect a test set of grammatically correct sentences and layouts describing compositions of entities and relations that are unlikely to have been seen during training.

Image Generation Sentence

Tuning In to Neural Encoding: Linking Human Brain and Artificial Supervised Representations of Language

no code implementations5 Oct 2023 Jingyuan Sun, Xiaohan Zhang, Marie-Francine Moens

To understand the algorithm that supports the human brain's language representation, previous research has attempted to predict neural responses to linguistic stimuli using embeddings generated by artificial neural networks (ANNs), a process known as neural encoding.

Natural Language Understanding
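
Neural encoding, as described in this snippet, is commonly implemented as a regularized linear map from ANN embeddings to voxel responses. The sketch below uses synthetic data with invented dimensions and is only a generic illustration of that standard ridge-regression formulation, not the authors' pipeline:

```python
import numpy as np

def ridge_encoder(X, Y, alpha=1.0):
    """Closed-form ridge regression: W minimizes ||XW - Y||^2 + alpha*||W||^2.
    X: (n_stimuli, n_embedding_dims), Y: (n_stimuli, n_voxels)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 32))                     # ANN embeddings of 100 stimuli
W_true = rng.normal(size=(32, 5))
Y = X @ W_true + 0.1 * rng.normal(size=(100, 5))   # simulated voxel responses
W = ridge_encoder(X, Y, alpha=0.1)
pred = X @ W
r = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]         # encoding accuracy, voxel 0
```

In practice the fit is evaluated on held-out stimuli; here the correlation on the training set merely checks the mapping is recoverable.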

Generating Explanations in Medical Question-Answering by Expectation Maximization Inference over Evidence

no code implementations2 Oct 2023 Wei Sun, Mingxiao Li, Damien Sileo, Jesse Davis, Marie-Francine Moens

Medical Question Answering (medical QA) systems play an essential role in assisting healthcare workers in finding answers to their questions.

Explanation Generation Question Answering

Decoding Realistic Images from Brain Activity with Contrastive Self-supervision and Latent Diffusion

no code implementations30 Sep 2023 Jingyuan Sun, Mingxiao Li, Marie-Francine Moens

Reconstructing visual stimuli from human brain activities provides a promising opportunity to advance our understanding of the brain's visual system and its connection with computer vision models.

Contrastive Learning

Sequence-to-Sequence Spanish Pre-trained Language Models

1 code implementation20 Sep 2023 Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufiño, Marie-Francine Moens

In recent years, significant advancements in pre-trained language models have driven the creation of numerous non-English language variants, with a particular emphasis on encoder-only and decoder-only architectures.

Generative Question Answering Natural Language Understanding +1

When Do Discourse Markers Affect Computational Sentence Understanding?

no code implementations1 Sep 2023 RuiQi Li, Liesbeth Allein, Damien Sileo, Marie-Francine Moens

The capabilities and use cases of automatic natural language processing (NLP) have grown significantly over the last few years.

Sentence

Beyond Document Page Classification: Design, Datasets, and Challenges

1 code implementation24 Aug 2023 Jordy Van Landeghem, Sanket Biswas, Matthew B. Blaschko, Marie-Francine Moens

This paper highlights the need to bring document classification benchmarking closer to real-world applications, both in the nature of data tested ($X$: multi-channel, multi-paged, multi-industry; $Y$: class distributions and label set variety) and in classification tasks considered ($f$: multi-page document, page stream, and document bundle classification, ...).

Benchmarking Classification +1

Visually-Aware Context Modeling for News Image Captioning

1 code implementation16 Aug 2023 Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens

News Image Captioning aims to create captions from news articles and images, emphasizing the connection between textual context and visual elements.

Image Captioning Language Modelling +1

Multimodal Distillation for Egocentric Action Recognition

1 code implementation ICCV 2023 Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars

The goal of this work is to retain the performance of such a multimodal approach, while using only the RGB frames as input at inference time.

Action Recognition Knowledge Distillation +2

Alleviating Exposure Bias in Diffusion Models through Sampling with Shifted Time Steps

1 code implementation24 May 2023 Mingxiao Li, Tingyu Qu, Ruicong Yao, Wei Sun, Marie-Francine Moens

In this work, we conduct a systematic study of exposure bias in DPM and, intriguingly, we find that the exposure bias could be alleviated with a novel sampling method that we propose, without retraining the model.

Denoising
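
The paper's actual sampler differs in detail; purely as a toy illustration of querying the denoiser at a shifted time step during DDPM-style ancestral sampling (the schedule, the stand-in noise predictor, and the shift value below are all made up), consider:

```python
import numpy as np

T = 100
betas = np.linspace(1e-4, 0.02, T)     # toy linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x, t):
    """Stand-in noise predictor (a real model would be a trained network)."""
    return 0.1 * x

def sample(shift=0, steps=T, rng=None):
    """DDPM ancestral sampling; `shift` queries the model at t - shift,
    mimicking time-step-shifted sampling to counter exposure bias."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal()
    for t in reversed(range(steps)):
        ts = max(t - shift, 0)                        # shifted query step
        eps = eps_model(x, ts)
        mean = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        x = mean + (np.sqrt(betas[t]) * rng.normal() if t > 0 else 0.0)
    return x

x0 = sample(shift=2)
```

The only change relative to plain ancestral sampling is the `ts` line; with `shift=0` the loop reduces to the standard DDPM update.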

A Memory Model for Question Answering from Streaming Data Supported by Rehearsal and Anticipation of Coreference Information

no code implementations12 May 2023 Vladimir Araujo, Alvaro Soto, Marie-Francine Moens

Existing question answering methods often assume that the input content (e.g., documents or videos) is always accessible to solve the task.

Memorization Question Answering

An Information Extraction Study: Take In Mind the Tokenization!

1 code implementation27 Mar 2023 Christos Theodoropoulos, Marie-Francine Moens

Current research on the advantages and trade-offs of using characters, instead of tokenized text, as input for deep learning models has evolved substantially.

Inductive Bias Named Entity Recognition (NER) +1

Implicit Temporal Reasoning for Evidence-Based Fact-Checking

1 code implementation24 Feb 2023 Liesbeth Allein, Marlon Saelens, Ruben Cartuyvels, Marie-Francine Moens

Our findings show that the presence of temporal information and the manner in which timelines are constructed greatly influence how fact-checking models determine the relevance and supporting or refuting character of evidence documents.

Claim Verification Fact Checking

Optimizing ship detection efficiency in SAR images

no code implementations12 Dec 2022 Arthur Van Meerbeeck, Jordy Van Landeghem, Ruben Cartuyvels, Marie-Francine Moens

The integration of the optimizations with the object detection model leads to a trade-off between speed and performance.

Object Detection +1

Layout-aware Dreamer for Embodied Referring Expression Grounding

1 code implementation30 Nov 2022 Mingxiao Li, Zehao Wang, Tinne Tuytelaars, Marie-Francine Moens

In this work, we study the problem of Embodied Referring Expression Grounding, where an agent needs to navigate in a previously unseen environment and localize a remote object described by a concise high-level natural language instruction.

Common Sense Reasoning Navigate +1

Bidirectional Representations for Low Resource Spoken Language Understanding

no code implementations24 Nov 2022 Quentin Meeus, Marie-Francine Moens, Hugo Van hamme

Class attention can be used to visually explain the predictions of our model, which goes a long way in understanding how the model makes predictions.

Automatic Speech Recognition (ASR) +4

Multitask Learning for Low Resource Spoken Language Understanding

no code implementations24 Nov 2022 Quentin Meeus, Marie-Francine Moens, Hugo Van hamme

We explore the benefits that multitask learning offers to speech processing as we train models on dual objectives with automatic speech recognition and intent classification or sentiment classification.

Automatic Speech Recognition (ASR) +7

Probing neural language models for understanding of words of estimative probability

no code implementations7 Nov 2022 Damien Sileo, Marie-Francine Moens

We assess whether language models can predict whether the WEP consensual probability level is close to p. Secondly, we construct a dataset of WEP-based probabilistic reasoning, to test whether language models can reason with WEP compositions.

Language Modelling Natural Language Inference

Weakly Supervised Face Naming with Symmetry-Enhanced Contrastive Loss

no code implementations17 Oct 2022 Tingyu Qu, Tinne Tuytelaars, Marie-Francine Moens

We revisit the weakly supervised cross-modal face-name alignment task; that is, given an image and a caption, we label the faces in the image with the names occurring in the caption.

Contrastive Learning

Students taught by multimodal teachers are superior action recognizers

no code implementations9 Oct 2022 Gorjan Radevski, Dusan Grujicic, Matthew Blaschko, Marie-Francine Moens, Tinne Tuytelaars

Our approach is based on multimodal knowledge distillation, featuring a multimodal teacher (in the current experiments trained only using object detections, optical flow and RGB frames) and a unimodal student (using only RGB frames as input).

Action Recognition Knowledge Distillation +3
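
As a generic sketch of the multimodal knowledge distillation setup this snippet describes (not the authors' code; the loss weighting, temperature, and logit shapes are invented), a student can be trained to match softened teacher predictions alongside the ground-truth labels:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, lam=0.5):
    """lam * T^2 * KL(teacher_T || student_T) + (1 - lam) * cross-entropy(labels)."""
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T))
    kl = np.mean(np.sum(p_t * (np.log(p_t) - log_p_s), axis=1)) * T * T
    ce = -np.mean(np.log(softmax(student_logits))[np.arange(len(labels)), labels])
    return lam * kl + (1 - lam) * ce

rng = np.random.default_rng(2)
teacher = rng.normal(size=(4, 10))   # multimodal teacher logits (RGB + flow + objects)
student = rng.normal(size=(4, 10))   # RGB-only student logits
labels = np.array([0, 3, 7, 1])
loss = distillation_loss(student, teacher, labels)
```

At inference time only the RGB-only student is kept, which is the efficiency gain the abstract refers to.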

How Relevant is Selective Memory Population in Lifelong Language Learning?

no code implementations3 Oct 2022 Vladimir Araujo, Helena Balabin, Julio Hurtado, Alvaro Soto, Marie-Francine Moens

Lifelong language learning seeks to have models continuously learn multiple tasks in a sequential order without suffering from catastrophic forgetting.

Question Answering text-classification +1

Entropy-based Stability-Plasticity for Lifelong Learning

1 code implementation18 Apr 2022 Vladimir Araujo, Julio Hurtado, Alvaro Soto, Marie-Francine Moens

The ability to continuously learn remains elusive for deep learning models.

Finding Structural Knowledge in Multimodal-BERT

1 code implementation ACL 2022 Victor Milewski, Miryam de Lhoneux, Marie-Francine Moens

In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models.

Find a Way Forward: a Language-Guided Semantic Map Navigator

no code implementations7 Mar 2022 Zehao Wang, Mingxiao Li, Minye Wu, Marie-Francine Moens, Tinne Tuytelaars

In this paper, we introduce the map-language navigation task where an agent executes natural language instructions and moves to the target position based only on a given 3D semantic map.

Imitation Learning

Dynamic Key-value Memory Enhanced Multi-step Graph Reasoning for Knowledge-based Visual Question Answering

no code implementations6 Mar 2022 Mingxiao Li, Marie-Francine Moens

Knowledge-based visual question answering (VQA) is a vision-language task that requires an agent to correctly answer image-related questions using knowledge that is not presented in the given image.

Graph Attention Question Answering +1

Modeling Coreference Relations in Visual Dialog

no code implementations EACL 2021 Mingxiao Li, Marie-Francine Moens

Visual dialog is a vision-language task where an agent needs to answer a series of questions grounded in an image based on the understanding of the dialog history and the image.

Question Answering Visual Dialog +1

Discrete and continuous representations and processing in deep learning: Looking forward

no code implementations4 Jan 2022 Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens

Motivated by these insights, in this paper we argue that combining discrete and continuous representations and their processing will be essential to build systems that exhibit a general form of intelligence.

Analysis and Prediction of NLP Models Via Task Embeddings

1 code implementation LREC 2022 Damien Sileo, Marie-Francine Moens

Task embeddings are low-dimensional representations that are trained to capture task properties.

Transfer Learning

Predicting Physical World Destinations for Commands Given to Self-Driving Cars

1 code implementation10 Dec 2021 Dusan Grujicic, Thierry Deruyttere, Marie-Francine Moens, Matthew Blaschko

However, surveys have shown that giving more control to an AI in self-driving cars is accompanied by a degree of uneasiness by passengers.

Self-Driving Cars

Revisiting spatio-temporal layouts for compositional action recognition

1 code implementation2 Nov 2021 Gorjan Radevski, Marie-Francine Moens, Tinne Tuytelaars

Recognizing human actions is fundamentally a spatio-temporal reasoning problem, and should be, at least to some extent, invariant to the appearance of the human and the objects involved.

Action Classification Action Detection +3

Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations

no code implementations EMNLP 2021 Vladimir Araujo, Andrés Villa, Marcelo Mendoza, Marie-Francine Moens, Alvaro Soto

Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level.

Relationship Detection Sentence

Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning

1 code implementation CoNLL (EMNLP) 2021 Christos Theodoropoulos, James Henderson, Andrei C. Coman, Marie-Francine Moens

Though language model text embeddings have revolutionized NLP research, their ability to capture high-level semantic information, such as relations between entities in text, is limited.

Contrastive Learning Language Modelling +7

Like Article, Like Audience: Enforcing Multimodal Correlations for Disinformation Detection

no code implementations31 Aug 2021 Liesbeth Allein, Marie-Francine Moens, Domenico Perrotta

The latent representations of news articles and user-generated content ensure that, during training, the model is guided by the profiles of users who prefer content similar to the news article being evaluated, and this effect is reinforced if that content is shared among different users.

Model Optimization

Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win

no code implementations13 Jun 2021 Jaron Maene, Mingxiao Li, Marie-Francine Moens

The lottery ticket hypothesis states that sparse subnetworks exist in randomly initialized dense networks that can be trained to the same accuracy as the dense network they reside in.

Linear Mode Connectivity

Giving Commands to a Self-Driving Car: How to Deal with Uncertain Situations?

1 code implementation8 Jun 2021 Thierry Deruyttere, Victor Milewski, Marie-Francine Moens

This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing it.

Referring Expression Self-Driving Cars

Autoregressive Reasoning over Chains of Facts with Transformers

1 code implementation COLING 2020 Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens

This paper proposes an iterative inference algorithm for multi-hop explanation regeneration that retrieves relevant factual evidence in the form of text snippets, given a natural language question and its answer.

Learning-To-Rank

Decoding Language Spatial Relations to 2D Spatial Arrangements

1 code implementation Findings of the Association for Computational Linguistics 2020 Gorjan Radevski, Guillem Collell, Marie-Francine Moens, Tinne Tuytelaars

We address the problem of multimodal spatial understanding by decoding a set of language-expressed spatial relations to a set of 2D spatial arrangements in a multi-object and multi-relationship setting.

Are scene graphs good enough to improve Image Captioning?

1 code implementation Asian Chapter of the Association for Computational Linguistics 2020 Victor Milewski, Marie-Francine Moens, Iacer Calixto

Overall, we find no significant difference between models that use scene graph features and models that only use object detection features across different captioning metrics, which suggests that existing scene graph generation models are still too noisy to be useful in image captioning.

Graph Attention Graph Generation +5

Checkworthiness in Automatic Claim Detection Models: Definitions and Analysis of Datasets

no code implementations20 Aug 2020 Liesbeth Allein, Marie-Francine Moens

Public, professional and academic interest in automated fact-checking has drastically increased over the past decade, with many aiming to automate one of the first steps in a fact-check procedure: the selection of so-called checkworthy claims.

Fact Checking

Structured (De)composable Representations Trained with Neural Networks

no code implementations7 Jul 2020 Graham Spinks, Marie-Francine Moens

The paper proposes a novel technique for representing templates and instances of concept classes.

Retrieval

Convolutional Generation of Textured 3D Meshes

1 code implementation NeurIPS 2020 Dario Pavllo, Graham Spinks, Thomas Hofmann, Marie-Francine Moens, Aurelien Lucchi

A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.

Binary and Multitask Classification Model for Dutch Anaphora Resolution: Die/Dat Prediction

no code implementations9 Jan 2020 Liesbeth Allein, Artuur Leeuwenberg, Marie-Francine Moens

Drawing on previous research conducted on neural context-dependent dt-mistake correction models (Heyman et al. 2018), this study constructs the first neural network model for Dutch demonstrative and relative pronoun resolution that specifically focuses on the correction and part-of-speech prediction of these two pronouns.

Binary Classification General Classification +3

User Profiling Using Hinge-loss Markov Random Fields

no code implementations5 Jan 2020 Golnoosh Farnadi, Lise Getoor, Marie-Francine Moens, Martine De Cock

In this paper, we propose a mechanism to infer a variety of user characteristics, such as age, gender and personality traits, which can then be compiled into a user profile.

Relational Reasoning

Distance-based Composable Representations with Neural Networks

no code implementations25 Sep 2019 Graham Spinks, Marie-Francine Moens

We introduce a new deep learning technique that builds individual and class representations based on distance estimates to randomly generated contextual dimensions for different modalities.

Multi-Label Image Classification Retrieval

Talk2Car: Taking Control of Your Self-Driving Car

1 code implementation IJCNLP 2019 Thierry Deruyttere, Simon Vandenhende, Dusan Grujicic, Luc van Gool, Marie-Francine Moens

Or more specifically, we consider the problem in an autonomous driving setting, where a passenger requests an action that can be associated with an object found in a street scene.

Autonomous Driving Object +2

Attention-based Fusion for Outfit Recommendation

no code implementations28 Aug 2019 Katrien Laenen, Marie-Francine Moens

This paper describes an attention-based fusion method for outfit recommendation which fuses the information in the product image and description to capture the most important, fine-grained product features into the item representation.

Justifying Diagnosis Decisions by Deep Neural Networks

no code implementations12 Jul 2019 Graham Spinks, Marie-Francine Moens

This textual representation is decoded into a diagnosis and the associated textual justification that will help a clinician evaluate the outcome.

Medical Diagnosis

Learning Unsupervised Multilingual Word Embeddings with Incremental Multilingual Hubs

no code implementations NAACL 2019 Geert Heyman, Bregt Verreet, Ivan Vulić, Marie-Francine Moens

We learn a shared multilingual embedding space for a variable number of languages by incrementally adding new languages one by one to the current multilingual space.

Bilingual Lexicon Induction Cross-Lingual Word Embeddings +4

Evaluating Textual Representations through Image Generation

no code implementations WS 2018 Graham Spinks, Marie-Francine Moens

The method is illustrated on a medical dataset where the correct representation of spatial information and shorthands are of particular importance.

Image Generation

Temporal Information Extraction by Predicting Relative Time-lines

1 code implementation EMNLP 2018 Artuur Leeuwenberg, Marie-Francine Moens

The current leading paradigm for temporal information extraction from text consists of three phases: (1) recognition of events and temporal expressions, (2) recognition of temporal relations among them, and (3) time-line construction from the temporal relations.

Temporal Information Extraction

Word-Level Loss Extensions for Neural Temporal Relation Classification

1 code implementation COLING 2018 Artuur Leeuwenberg, Marie-Francine Moens

In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model.

Classification General Classification +4

Do Neural Network Cross-Modal Mappings Really Bridge Modalities?

no code implementations ACL 2018 Guillem Collell, Marie-Francine Moens

Feed-forward networks are widely used in cross-modal applications to bridge modalities by mapping distributed vectors of one modality to the other, or to a shared space.

Retrieval

Generating Continuous Representations of Medical Texts

no code implementations NAACL 2018 Graham Spinks, Marie-Francine Moens

During training the input to the system is a dataset of captions for medical X-Rays.

Learning Representations Specialized in Spatial Knowledge: Leveraging Language and Vision

1 code implementation TACL 2018 Guillem Collell, Marie-Francine Moens

Here, we move one step forward in this direction and learn such representations by leveraging a task that consists of predicting continuous 2D spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images.

Dependency Parsing Object +4

Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates

1 code implementation18 Nov 2017 Guillem Collell, Luc van Gool, Marie-Francine Moens

In contrast with prior work that restricts spatial templates to explicit spatial prepositions (e.g., "glass on table"), here we extend this concept to implicit spatial language, i.e., those relationships (generally actions) for which the spatial arrangement of the objects is only implicitly implied (e.g., "man riding horse").

Common Sense Reasoning Question Answering +1

Fast Top-k Area Topics Extraction with Knowledge Base

no code implementations13 Oct 2017 Fang Zhang, Xiaochen Wang, Jingfei Han, Jie Tang, Shiyin Wang, Marie-Francine Moens

We leverage a large-scale knowledge base (Wikipedia) to generate topic embeddings using neural networks and use this kind of representations to help capture the representativeness of topics for given areas.

Speech-Based Visual Question Answering

1 code implementation1 May 2017 Ted Zhang, Dengxin Dai, Tinne Tuytelaars, Marie-Francine Moens, Luc van Gool

This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question.

Automatic Speech Recognition (ASR) +3

Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments

no code implementations IJCNLP 2017 Quynh Ngoc Thi Do, Steven Bethard, Marie-Francine Moens

Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse.

Common Sense Reasoning Semantic Role Labeling

Structured Learning for Temporal Relation Extraction from Clinical Records

1 code implementation EACL 2017 Artuur Leeuwenberg, Marie-Francine Moens

We propose a scalable structured learning model that jointly predicts temporal relations between events and temporal expressions (TLINKS), and the relation between these events and the document creation time (DCTR).

Relation Temporal Information Extraction +1

Bilingual Lexicon Induction by Learning to Combine Word-Level and Character-Level Representations

no code implementations EACL 2017 Geert Heyman, Ivan Vulić, Marie-Francine Moens

We study the problem of bilingual lexicon induction (BLI) in a setting where some translation resources are available, but unknown translations are sought for certain, possibly domain-specific terminology.

Bilingual Lexicon Induction Classification +10

Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings

no code implementations25 Mar 2017 Guillem Collell, Teddy Zhang, Marie-Francine Moens

Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision.

Facing the most difficult case of Semantic Role Labeling: A collaboration of word embeddings and co-training

no code implementations COLING 2016 Quynh Ngoc Thi Do, Steven Bethard, Marie-Francine Moens

We present a successful collaboration of word embeddings and co-training to tackle the most difficult test case of semantic role labeling: predicting out-of-domain and unseen semantic frames.

Semantic Role Labeling Word Embeddings

A Dataset for Multimodal Question Answering in the Cultural Heritage Domain

no code implementations WS 2016 Shurong Sheng, Luc van Gool, Marie-Francine Moens

In this paper, we introduce the construction of a gold-standard dataset that will aid research of multimodal question answering in the cultural heritage domain.

Question Answering Speech Recognition +1

Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations

no code implementations COLING 2016 Guillem Collell, Marie-Francine Moens

Human concept representations are often grounded with visual information, yet some aspects of meaning cannot be visually represented or are better described with language.

Semi-automatically Alignment of Predicates between Speech and OntoNotes data

no code implementations LREC 2016 Niraj Shrestha, Marie-Francine Moens

Unlike for written data, we still lack suitable corpora of transcribed speech annotated with semantic roles that can be used for semantic role labeling (SRL).

Semantic Role Labeling Sentence

Deep Embedding for Spatial Role Labeling

no code implementations28 Mar 2016 Oswaldo Ludwig, Xiao Liu, Parisa Kordjamshidi, Marie-Francine Moens

This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description.

Bilingual Distributed Word Representations from Document-Aligned Comparable Data

no code implementations24 Sep 2015 Ivan Vulić, Marie-Francine Moens

We propose a new model for learning bilingual word representations from non-parallel document-aligned data.

Representation Learning Sentence +2

Annotating Story Timelines as Temporal Dependency Structures

no code implementations LREC 2012 Steven Bethard, Oleksandr Kolomiyets, Marie-Francine Moens

We present an approach to annotating timelines in stories where events are linked together by temporal relations into a temporal dependency tree.

Reading Comprehension Relation