Search Results for author: Nick Campbell

Found 18 papers, 0 papers with code

An audiovisual political speech analysis incorporating eye-tracking and perception data

no code implementations LREC 2012 Stefan Scherer, Georg Layher, John Kane, Heiko Neumann, Nick Campbell

Additionally, we compare the gaze behavior of the human subjects to evaluate saliency regions in the multimodal and visual only conditions.

Persuasiveness

The D-ANS corpus: the Dublin-Autonomous Nervous System corpus of biosignal and multimodal recordings of conversational speech

no code implementations LREC 2014 Shannon Hennig, Ryad Chellali, Nick Campbell

We believe this corpus is particularly relevant to researchers interested in unscripted social conversation, as well as to those with a specific interest in observing the dynamics of biosignals during informal social conversation rich in laughter, conversational turn-taking, and non-task-based interaction.

Capturing Chat: Annotation and Tools for Multiparty Casual Conversation

no code implementations LREC 2016 Emer Gilmartin, Nick Campbell

Casual multiparty conversation is an understudied but very common genre of spoken interaction, whose analysis presents a number of challenges in terms of data scarcity and annotation.

The ILMT-s2s Corpus ― A Multimodal Interlingual Map Task Corpus

no code implementations LREC 2016 Akira Hayakawa, Saturnino Luz, Loredana Cerrato, Nick Campbell

The corpus design is inspired by the HCRC Map Task Corpus which was initially designed to support the investigation of linguistic phenomena, and has been the focus of a variety of studies of communicative behaviour.

Automatic Speech Recognition (ASR) +3

Incorporating Global Visual Features into Attention-Based Neural Machine Translation

no code implementations23 Jan 2017 Iacer Calixto, Qun Liu, Nick Campbell

We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder.

Multimodal Machine Translation NMT +2
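The abstract above describes feeding visual features into different parts of an attention-based NMT model. A minimal numpy sketch of one such integration point, initializing the decoder hidden state from a projected global image feature; the dimensions and the tanh projection are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_decoder_state(global_img_feat, W_img, b_img):
    # Project a global visual feature (e.g. a CNN pooling vector) into the
    # decoder's hidden-state space, so decoding starts conditioned on the
    # image. This is one of several integration points such models explore.
    return np.tanh(W_img @ global_img_feat + b_img)

img_dim, hid_dim = 2048, 512                      # hypothetical sizes
v = rng.standard_normal(img_dim)                  # global image feature
W = rng.standard_normal((hid_dim, img_dim)) * 0.01
b = np.zeros(hid_dim)

h0 = init_decoder_state(v, W, b)
print(h0.shape)  # (512,)
```

Analogous projections can initialize the encoder, or the feature can be appended as an extra source "word"; the paper compares several such variants.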

Multilingual Multi-modal Embeddings for Natural Language Processing

no code implementations3 Feb 2017 Iacer Calixto, Qun Liu, Nick Campbell

We propose a novel discriminative model that learns embeddings from multilingual and multi-modal data, meaning that our model can take advantage of images and descriptions in multiple languages to improve embedding quality.

Machine Translation NMT +5
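The abstract above describes a discriminative model that embeds images and sentences in multiple languages into a shared space. A hedged numpy sketch of the standard margin-based ranking objective commonly used for such models; the embedding size, margin, and scoring function are assumptions, not the paper's exact formulation:

```python
import numpy as np

def ranking_loss(img, pos_sent, neg_sents, margin=0.1):
    # Hinge ranking loss: the matching sentence should score higher than
    # any contrastive sentence by at least `margin`. Embeddings are assumed
    # L2-normalized, so a dot product equals cosine similarity.
    pos = img @ pos_sent
    return sum(max(0.0, margin - pos + img @ n) for n in neg_sents)

rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

img = unit(rng.standard_normal(64))
pos = unit(img + 0.05 * rng.standard_normal(64))    # near-match sentence
negs = [unit(rng.standard_normal(64)) for _ in range(3)]

loss = ranking_loss(img, pos, negs)
print(loss >= 0.0)  # True
```

In the multilingual setting, the same image anchors captions in several languages, so each language's sentences contribute their own ranking terms against the shared image embedding.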

Doubly-Attentive Decoder for Multi-modal Neural Machine Translation

no code implementations ACL 2017 Iacer Calixto, Qun Liu, Nick Campbell

We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation.

Multimodal Machine Translation Translation
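The abstract above describes a decoder that attends over both the source sentence and spatial image features from a pre-trained CNN. A minimal numpy sketch of one such doubly-attentive decoder step; the shapes, the dot-product scorer, and the concatenation of contexts are illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    # Dot-product attention: weight each key by its similarity to the
    # query and return the weighted sum as a context vector.
    weights = softmax(keys @ query)
    return weights @ keys

def doubly_attentive_step(dec_state, src_states, img_regions):
    # One decoder step with two independent attention mechanisms: one over
    # source-word states (textual context) and one over spatial CNN region
    # features (visual context). Both contexts feed the next prediction.
    c_src = attend(dec_state, src_states)
    c_img = attend(dec_state, img_regions)
    return np.concatenate([c_src, c_img])

rng = np.random.default_rng(2)
d = 32
ctx = doubly_attentive_step(rng.standard_normal(d),
                            rng.standard_normal((7, d)),    # 7 source words
                            rng.standard_normal((49, d)))   # 7x7 CNN grid
print(ctx.shape)  # (64,)
```

In a real model the image-region features would first be projected into the decoder's dimensionality; here both sets of keys share size `d` purely to keep the sketch short.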
