no code implementations • PVLAM (LREC) 2022 • Marc Tanti, Shaun Abdilla, Adrian Muscat, Claudia Borg, Reuben A. Farrugia, Albert Gatt
To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set.
1 code implementation • DeepLo 2022 • Kurt Micallef, Albert Gatt, Marc Tanti, Lonneke van der Plas, Claudia Borg
We also present a newly created corpus for Maltese, and determine the effect that the pre-training data size and domain have on the downstream performance.
1 code implementation • EMNLP (BlackboxNLP) 2021 • Marc Tanti, Lonneke van der Plas, Claudia Borg, Albert Gatt
Recent work has shown evidence that the knowledge acquired by multilingual BERT (mBERT) has two components: a language-specific and a language-neutral one.
1 code implementation • 14 May 2021 • Marc Tanti, Camille Berruyer, Paul Tafforeau, Adrian Muscat, Reuben Farrugia, Kenneth Scerri, Gianluca Valentino, V. Armando Solé, Johann A. Briffa
Propagation Phase Contrast Synchrotron Microtomography (PPC-SRµCT) is the gold standard for non-invasive and non-destructive access to internal structures of archaeological remains.
1 code implementation • 9 Nov 2019 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri
We also observe that the merge architecture's recurrent neural network can be pre-trained as a text-only language model (transfer learning) rather than initialised randomly as usual.
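The transfer-learning step described above amounts to initialising the caption generator's RNN from weights learned on a text-only language-modelling task instead of from random values. A minimal numpy sketch of that weight hand-over (the parameter names and shapes are illustrative, not the authors' actual implementation, and the pre-training itself is elided):

```python
import numpy as np

rng = np.random.default_rng(1)
d_emb, d_hid = 8, 16  # illustrative embedding / hidden sizes

# 1. Weights of an RNN pre-trained as a text-only language model
#    (the actual training loop is elided; these stand in for its result).
lm_weights = {
    "Wx": rng.standard_normal((d_emb, d_hid)),  # input-to-hidden
    "Wh": rng.standard_normal((d_hid, d_hid)),  # hidden-to-hidden
}

# 2. Initialise the caption generator's RNN from the language model
#    instead of randomly -- the transfer-learning step.
caption_rnn = {name: w.copy() for name, w in lm_weights.items()}

print(caption_rnn["Wx"].shape)  # (8, 16)
```

In the merge architecture this is possible precisely because the RNN never sees image features, so a purely linguistic pre-training objective matches its role at caption-generation time.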
no code implementations • WS 2019 • Somayeh Jafaritazehjani, Albert Gatt, Marc Tanti
Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis.
no code implementations • 21 Sep 2019 • Somaye Jafaritazehjani, Albert Gatt, Marc Tanti
Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis.
1 code implementation • 1 Jan 2019 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri
When designing a neural caption generator, a convolutional neural network can be used to extract image features.
1 code implementation • 12 Oct 2018 • Marc Tanti, Albert Gatt, Adrian Muscat
Image caption generation systems are typically evaluated against reference outputs.
1 code implementation • 12 Oct 2018 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri
This paper addresses the sensitivity of neural image caption generators to their visual input.
1 code implementation • COLING 2018 • Hoa Trong Vu, Claudio Greco, Aliia Erofeeva, Somayeh Jafaritazehjan, Guido Linders, Marc Tanti, Alberto Testoni, Raffaella Bernardi, Albert Gatt
Capturing semantic relations between sentences, such as entailment, is a long-standing challenge for computational semantics.
Ranked #2 on Natural Language Inference on V-SNLI
1 code implementation • LREC 2018 • Albert Gatt, Marc Tanti, Adrian Muscat, Patrizia Paggio, Reuben A. Farrugia, Claudia Borg, Kenneth P. Camilleri, Mike Rosner, Lonneke van der Plas
To gain a better understanding of the variation we find in face description and the possible issues that this may raise, we also conducted an annotation study on a subset of the corpus.
no code implementations • WS 2017 • Hoa Trong Vu, Thuong-Hai Pham, Xiaoyu Bai, Marc Tanti, Lonneke van der Plas, Albert Gatt
A system using a BiLSTM encoder with max pooling.
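The core idea of such an encoder is to run a recurrent network over the sentence in both directions, concatenate the per-timestep states, and max-pool over time to get a fixed-size sentence vector. A toy numpy sketch of that pooling scheme, using a vanilla RNN as a stand-in for the LSTM cells (all sizes and weights are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, T = 6, 8, 5
xs = rng.standard_normal((T, d_in))  # a toy sentence of 5 word vectors

def rnn_states(seq):
    """Vanilla RNN (stand-in for an LSTM) returning all hidden states."""
    Wx = rng.standard_normal((d_in, d_hid)) * 0.1
    Wh = rng.standard_normal((d_hid, d_hid)) * 0.1
    h, out = np.zeros(d_hid), []
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh)
        out.append(h)
    return np.stack(out)

# Bidirectional encoding: one pass forward, one pass backward,
# then concatenate the per-timestep hidden states.
states = np.concatenate([rnn_states(xs), rnn_states(xs[::-1])[::-1]], axis=1)

# Max pooling over time yields a fixed-size sentence representation.
sentence_vec = states.max(axis=0)
print(sentence_vec.shape)  # (16,)
```

Max pooling makes the representation length-independent, which is what lets sentences of any length feed a fixed-size classifier.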
4 code implementations • WS 2017 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri
This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be 'merged' with the image features at a later stage.
12 code implementations • 27 Mar 2017 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri
When a recurrent neural network language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN -- conditioning the language model by 'injecting' image features -- or in a layer following the RNN -- conditioning the language model by 'merging' image features.
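The inject/merge distinction described above can be made concrete with a tiny numpy sketch: 'inject' concatenates the image features to every word input so the RNN itself is image-conditioned, while 'merge' lets the RNN encode words only and combines its final state with the image features just before the word-prediction layer. This is a minimal illustration with made-up sizes and random weights, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_img, d_hid, vocab = 8, 4, 8, 10

words = rng.standard_normal((5, d_word))  # toy caption-prefix embeddings
img = rng.standard_normal(d_img)          # toy CNN image feature vector

def rnn(seq, d_in):
    """Minimal vanilla RNN returning the final hidden state."""
    Wx = rng.standard_normal((d_in, d_hid)) * 0.1
    Wh = rng.standard_normal((d_hid, d_hid)) * 0.1
    h = np.zeros(d_hid)
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh)
    return h

# 'inject': image features accompany every word input,
# so the RNN's hidden state mixes visual and linguistic information.
h_inject = rnn([np.concatenate([w, img]) for w in words], d_word + d_img)
logits_inject = h_inject @ rng.standard_normal((d_hid, vocab))

# 'merge': the RNN encodes words only; image features join the final
# linguistic representation at the output stage.
h_merge = rnn(words, d_word)
merged = np.concatenate([h_merge, img])
logits_merge = merged @ rng.standard_normal((d_hid + d_img, vocab))

print(logits_inject.shape, logits_merge.shape)  # both (10,)
```

Both variants end in a next-word distribution over the vocabulary; they differ only in where the image conditioning enters the pipeline.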