no code implementations • NAACL 2022 • Ao Jia, Yu He, Yazhou Zhang, Sagar Uprety, Dawei Song, Christina Lioma
Desire is a strong wish to do or have something; it involves not only a linguistic expression, but also the underlying cognitive phenomena that drive human feelings.
no code implementations • 1 Mar 2024 • Simone Borg Bruun, Christina Lioma, Maria Maistro
Our models cope with data scarcity by learning from multiple sessions and different types of user actions.
1 code implementation • 24 Feb 2024 • Ziyi Ye, Jingtao Zhan, Qingyao Ai, Yiqun Liu, Maarten de Rijke, Christina Lioma, Tuukka Ruotsalo
If the quality of the initially retrieved documents is low, then the effectiveness of query augmentation would be limited as well.
no code implementations • 20 Feb 2024 • Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma
In this large-scale empirical study, we insert different levels of noise perturbations and measure the effect on the output of pre-trained language models and different uncertainty metrics.
no code implementations • 26 Jan 2024 • Xiansong Meng, Deming Kong, Kwangwoong Kim, Qiuchi Li, Po Dong, Ingemar J. Cox, Christina Lioma, Hao Hu
Here, we propose a digital-analog hybrid optical computing architecture for ONNs, which utilizes digital optical inputs in the form of binary words.
1 code implementation • 16 Nov 2023 • Ziyi Ye, Qingyao Ai, Yiqun Liu, Maarten de Rijke, Min Zhang, Christina Lioma, Tuukka Ruotsalo
Inspired by recent research revealing associations between the brain and large computational language models, we propose a generative language BCI that uses a large language model (LLM) jointly with a semantic brain decoder to generate language directly from functional magnetic resonance imaging (fMRI) input.
1 code implementation • 2 Nov 2023 • Theresia Veronika Rampisela, Maria Maistro, Tuukka Ruotsalo, Christina Lioma
To our knowledge, this is the first critical comparison of individual item fairness measures in recommender systems.
1 code implementation • 29 May 2023 • Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein
Explanations of neural models aim to reveal a model's decision-making process for its predictions.
no code implementations • 24 Feb 2023 • Qiuchi Li, Benyou Wang, Yudong Zhu, Christina Lioma, Qun Liu
The emerging classical-quantum transfer learning paradigm has brought decent performance to quantum computational models in many tasks, such as computer vision, by enabling quantum models to be combined with classical pre-trained neural networks.
1 code implementation • 26 Jan 2023 • Simone Borg Bruun, Kacper Kenji Lesniak, Mirko Biasini, Vittorio Carmignani, Panagiotis Filianos, Christina Lioma, Maria Maistro
We propose a graph-based recommender model which utilizes heterogeneous interactions between users and content of different types and is able to operate well on small-scale datasets.
no code implementations • 6 Dec 2022 • Qiuchi Li, Christina Lioma
Text generation has long been a popular research topic in NLP.
1 code implementation • 1 Dec 2022 • Maria Maistro, Lucas Chaves Lima, Jakob Grue Simonsen, Christina Lioma
Information Retrieval evaluation has traditionally focused on defining principled ways of assessing the relevance of a ranked list of documents with respect to a query.
1 code implementation • 28 Nov 2022 • Simone Borg Bruun, Maria Maistro, Christina Lioma
To address this, we present a recurrent neural network recommendation model that uses past user sessions as signals for learning recommendations.
no code implementations • 5 Apr 2022 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
To this end, we are the first to study what information fact checking (FC) models consider sufficient, by introducing a novel task and advancing it with three main contributions.
no code implementations • 8 Sep 2021 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
When such annotations are not available, explanations are often selected as those portions of the input that maximise a downstream task's performance, which corresponds to optimising an explanation's Faithfulness to a given model.
1 code implementation • 26 Mar 2021 • Christian Hansen, Casper Hansen, Jakob Grue Simonsen, Christina Lioma
While this is highly efficient, each bit dimension is equally weighted, which means that potentially discriminative information of the data is lost.
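The equal weighting referred to above is a property of plain Hamming distance over hash codes; the following sketch contrasts it with a weighted variant (the function names and weight values are illustrative, not the paper's formulation):

```python
def hamming_distance(a: int, b: int) -> int:
    """Plain Hamming distance: every differing bit contributes equally (weight 1)."""
    return bin(a ^ b).count("1")

def weighted_hamming(a: int, b: int, weights: list) -> float:
    """A weighted variant: bit i contributes weights[i] when it differs,
    so discriminative bit dimensions can count for more."""
    diff = a ^ b
    return sum(w for i, w in enumerate(weights) if (diff >> i) & 1)

# 0b1010 vs 0b0011 differ in bits 0 and 3
assert hamming_distance(0b1010, 0b0011) == 2
```

With `weights = [0.5, 1.0, 1.0, 2.0]`, the same pair scores 2.5 under the weighted variant, because the two differing bits no longer contribute equally.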
1 code implementation • 26 Mar 2021 • Christian Hansen, Casper Hansen, Jakob Grue Simonsen, Stephen Alstrup, Christina Lioma
In this work, we propose Multi-Index Semantic Hashing (MISH), an unsupervised hashing model that learns hash codes that are both effective and highly efficient by being optimized for multi-index hashing.
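The multi-index hashing scheme that MISH optimizes for can be sketched generically: split each code into disjoint substrings and index each substring in its own table, so that (by pigeonhole) any code within a small Hamming radius of a query matches it exactly on at least one substring. This is a generic illustration of the indexing scheme, not the MISH model itself; all names are hypothetical:

```python
from collections import defaultdict

def substrings(code: int, n_bits: int, m: int):
    """Split an n_bits hash code into m disjoint substrings (assumes m divides n_bits)."""
    w = n_bits // m
    mask = (1 << w) - 1
    return [(code >> (i * w)) & mask for i in range(m)]

class MultiIndexTable:
    def __init__(self, n_bits: int, m: int):
        self.n_bits, self.m = n_bits, m
        self.tables = [defaultdict(set) for _ in range(m)]

    def insert(self, item_id: int, code: int):
        for i, s in enumerate(substrings(code, self.n_bits, self.m)):
            self.tables[i][s].add(item_id)

    def candidates(self, query_code: int):
        """Any code within Hamming distance < m of the query agrees with it
        exactly on at least one substring, so union the m exact lookups."""
        cands = set()
        for i, s in enumerate(substrings(query_code, self.n_bits, self.m)):
            cands |= self.tables[i].get(s, set())
        return cands
```

The candidate set is then re-ranked by full Hamming distance; efficiency depends on codes whose substrings spread items evenly across the tables, which is the property an effectiveness-only hashing objective does not guarantee.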
no code implementations • ICLR 2021 • Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen
Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g. BERT) to model word order.
1 code implementation • 22 Dec 2020 • Dongsheng Wang, Casper Hansen, Lucas Chaves Lima, Christian Hansen, Maria Maistro, Jakob Grue Simonsen, Christina Lioma
The state of the art in learning meaningful semantic representations of words is the Transformer model and its attention mechanisms.
no code implementations • 25 Nov 2020 • Lucas Chaves Lima, Casper Hansen, Christian Hansen, Dongsheng Wang, Maria Maistro, Birger Larsen, Jakob Grue Simonsen, Christina Lioma
This report describes the participation of two Danish universities, University of Copenhagen and Aalborg University, in the international search engine competition on COVID-19 (the 2020 TREC-COVID Challenge) organised by the U.S. National Institute of Standards and Technology (NIST) and its Text Retrieval Conference (TREC) division.
1 code implementation • EMNLP 2020 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
Recent developments in machine learning have introduced models that approach human performance at the cost of increased architectural complexity.
1 code implementation • 1 Jul 2020 • Casper Hansen, Christian Hansen, Jakob Grue Simonsen, Stephen Alstrup, Christina Lioma
Inspired by this, we present Semantic Hashing with Pairwise Reconstruction (PairRec), which is a discrete variational autoencoder based hashing model.
1 code implementation • 17 Jun 2020 • Christian Hansen, Casper Hansen, Jakob Grue Simonsen, Birger Larsen, Stephen Alstrup, Christina Lioma
We study whether it is possible to infer if a news headline is true or false using only the movement of the human eyes when reading news headlines.
1 code implementation • 31 May 2020 • Casper Hansen, Christian Hansen, Jakob Grue Simonsen, Stephen Alstrup, Christina Lioma
NeuHash-CF is modelled as an autoencoder architecture, consisting of two joint hashing components for generating user and item hash codes.
no code implementations • ACL 2020 • Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein
Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims.
1 code implementation • ICLR 2020 • Benyou Wang, Donghao Zhao, Christina Lioma, Qiuchi Li, Peng Zhang, Jakob Grue Simonsen
The benefit of continuous functions over variable positions is that word representations shift smoothly with increasing positions.
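The smooth-shift property can be illustrated with a complex-valued position function of the general kind the paper proposes, where each embedding dimension rotates continuously with position; the parameter values below are random placeholders, not learned values:

```python
import numpy as np

def complex_position_embedding(amplitude, frequency, phase, position):
    """Each dimension is r * exp(i * (omega * p + theta)): a continuous
    function of position p, so shifting p slightly rotates the
    representation smoothly instead of jumping between discrete vectors."""
    return amplitude * np.exp(1j * (frequency * position + phase))

# per-word parameters (random here purely for illustration), d dimensions
rng = np.random.default_rng(0)
d = 4
r, omega, theta = rng.random(d), rng.random(d), rng.random(d)

e_p1 = complex_position_embedding(r, omega, theta, position=1.0)
e_p2 = complex_position_embedding(r, omega, theta, position=1.001)
# nearby positions give nearly identical representations
assert np.allclose(e_p1, e_p2, atol=1e-2)
```

The amplitude controls position-independent word meaning while the frequency controls how sensitive each dimension is to word order, so a zero frequency recovers an ordinary position-free embedding.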
no code implementations • 25 Sep 2019 • Casper Hansen, Christian Hansen, Jakob Grue Simonsen, Stephen Alstrup, Christina Lioma
To this end, we propose an end-to-end trainable variational hashing-based collaborative filtering approach that uses the novel concept of self-masking: the user hash code acts as a mask on the items (using the Boolean AND operation), such that it learns to encode which bits are important to the user, rather than the user's preference towards the underlying item property that the bits represent.
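The self-masking idea described above, the user hash code acting as a Boolean AND mask over item bits, can be sketched with integer bit operations; this is a minimal toy illustration, not the paper's variational model:

```python
def self_mask_score(user_code: int, item_code: int) -> int:
    """Self-masking: AND the item code with the user code, so only the bit
    dimensions the user 'cares about' survive; score by counting the
    surviving bits."""
    masked_item = user_code & item_code
    return bin(masked_item).count("1")

user = 0b1100    # this user's code attends to the two high bits only
item_a = 0b1110  # shares both high bits with the user
item_b = 0b0011  # matches the user only on bits the mask zeroes out
assert self_mask_score(user, item_a) > self_mask_score(user, item_b)
```

Bits the user code zeroes out contribute nothing regardless of the item, which is exactly the sense in which the code encodes *which* properties matter to the user rather than the user's preference for each property.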
no code implementations • IJCNLP 2019 • Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, Jakob Grue Simonsen
We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification.
1 code implementation • 3 Jun 2019 • Casper Hansen, Christian Hansen, Stephen Alstrup, Jakob Grue Simonsen, Christina Lioma
Word embeddings predict a word from its neighbours by learning small, dense embedding vectors.
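The neighbour-prediction objective mentioned above can be sketched in a CBOW-style scorer: average the context vectors and score each vocabulary word by dot product. This is a generic word2vec-style illustration with random toy parameters, not the paper's model:

```python
import numpy as np

def cbow_score(context_ids, target_id, in_emb, out_emb):
    """Predict a word from its neighbours: average the context word vectors,
    score every vocabulary word by dot product, and softmax to get
    P(target | context)."""
    h = in_emb[context_ids].mean(axis=0)   # averaged context representation
    logits = out_emb @ h                   # score each vocabulary word
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs[target_id]

rng = np.random.default_rng(0)
V, d = 10, 8                               # toy vocabulary and embedding size
in_emb = rng.normal(size=(V, d))           # input (context) embeddings
out_emb = rng.normal(size=(V, d))          # output (target) embeddings
p = cbow_score([1, 3, 4], 2, in_emb, out_emb)
assert 0.0 < p < 1.0
```

Training adjusts both embedding matrices to raise the probability of words that actually co-occur with their contexts, which is what makes the small, dense vectors semantically meaningful.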
no code implementations • 3 Jun 2019 • Casper Hansen, Christian Hansen, Jakob Grue Simonsen, Stephen Alstrup, Christina Lioma
We present a novel unsupervised generative semantic hashing approach, Ranking-based Semantic Hashing (RBSH), which consists of both a variational and a ranking-based component.
1 code implementation • ICLR 2019 • Christian Hansen, Casper Hansen, Stephen Alstrup, Jakob Grue Simonsen, Christina Lioma
We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference.
1 code implementation • 20 Mar 2019 • Christian Hansen, Casper Hansen, Stephen Alstrup, Jakob Grue Simonsen, Christina Lioma
Modelling sequential music skips gives streaming companies the ability to better understand the needs of their user base, resulting in a better user experience by reducing the need to manually skip certain music tracks.
no code implementations • 20 Mar 2019 • Casper Hansen, Christian Hansen, Stephen Alstrup, Jakob Grue Simonsen, Christina Lioma
Automatic fact-checking systems detect misinformation, such as fake news, by (i) selecting check-worthy sentences for fact-checking, (ii) gathering related information to the sentences, and (iii) inferring the factuality of the sentences.
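The three stages (i)–(iii) above can be sketched as a pipeline skeleton; all three stage functions below are stubbed placeholders for illustration, not the paper's actual models:

```python
def fact_check(document):
    """Skeleton of the three-stage fact-checking pipeline."""
    # (i) select check-worthy sentences
    check_worthy = [s for s in document if is_check_worthy(s)]
    results = []
    for sentence in check_worthy:
        # (ii) gather related information (evidence retrieval)
        evidence = retrieve_evidence(sentence)
        # (iii) infer the factuality of the sentence from the evidence
        results.append((sentence, predict_veracity(sentence, evidence)))
    return results

# toy stand-ins so the skeleton runs end to end
def is_check_worthy(s):
    return any(c.isdigit() for c in s)  # e.g. treat numeric claims as check-worthy

def retrieve_evidence(s):
    return ["(retrieved passages would go here)"]

def predict_veracity(s, evidence):
    return "unverified"

out = fact_check(["The sky is blue.", "Unemployment fell 3% last year."])
assert out == [("Unemployment fell 3% last year.", "unverified")]
```

In a real system each stub would be a learned component, and errors in stage (i) propagate: a claim never selected is never checked, which is why check-worthiness detection is studied as a task in its own right.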
no code implementations • 20 Mar 2019 • Dongsheng Wang, Qiuchi Li, Lucas Chaves Lima, Jakob Grue Simonsen, Christina Lioma
In this paper, we operationalize the viewpoint that compositionality is contextual rather than deterministic, i.e., that whether a phrase is compositional or non-compositional depends on its context.
no code implementations • 12 Sep 2017 • Christina Lioma
A prerequisite for processing text semantics, common to the above examples, is having some computational representation of text as an abstract object.
no code implementations • 5 Apr 2017 • Christina Lioma, Birger Larsen, Wei Lu
Typically, every part of a coherent text has some plausible reason for its presence, some function that it performs for the overall semantics of the text.
no code implementations • 10 Mar 2017 • Christina Lioma, Niels Dalum Hansen
Compositionality in language refers to how much the meaning of some phrase can be decomposed into the meaning of its constituents and the way these constituents are combined.
no code implementations • 22 Aug 2016 • Brian Brost, Yevgeny Seldin, Ingemar J. Cox, Christina Lioma
Online ranker evaluation can be modeled by dueling bandits, a mathematical model for online learning under limited feedback from pairwise comparisons.
no code implementations • 29 Jul 2015 • Casper Petersen, Christina Lioma, Jakob Grue Simonsen, Birger Larsen
We present two novel models of document coherence and their application to information retrieval (IR).
4 code implementations • 8 Jul 2015 • Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob G. Simonsen, Jian-Yun Nie
Our novel hierarchical recurrent encoder-decoder architecture allows the model to be sensitive to the order of queries in the context while avoiding data sparsity.