Search Results for author: Steven Schockaert

Found 81 papers, 25 papers with code

Self-Supervised Intermediate Fine-Tuning of Biomedical Language Models for Interpreting Patient Case Descriptions

1 code implementation COLING 2022 Israa Alghanmi, Luis Espinosa-Anke, Steven Schockaert

Interpreting patient case descriptions has emerged as a challenging problem for biomedical NLP, where the aim is typically to predict diagnoses, to recommend treatments, or to answer questions about cases more generally.

Pre-Training Language Models for Identifying Patronizing and Condescending Language: An Analysis

no code implementations LREC 2022 Carla Perez Almendros, Luis Espinosa Anke, Steven Schockaert

Patronizing and Condescending Language (PCL) is a subtle but harmful type of discourse, yet the task of recognizing PCL remains under-studied by the NLP community.

Combining BERT with Static Word Embeddings for Categorizing Social Media

no code implementations EMNLP (WNUT) 2020 Israa Alghanmi, Luis Espinosa Anke, Steven Schockaert

A particularly striking example is the performance of AraBERT, an LM for the Arabic language, which is successful in categorizing social media posts in Arabic dialects, despite only having been trained on Modern Standard Arabic.

Word Embeddings

Differentiable Reasoning about Knowledge Graphs with Region-based Graph Neural Networks

no code implementations 13 Jun 2024 Aleksandar Pavlovic, Emanuel Sallinger, Steven Schockaert

By modeling relations as geometric regions in high-dimensional vector spaces, such models can explicitly capture semantic regularities in terms of the spatial arrangement of these regions.

Graph Neural Network Knowledge Graphs
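The region-based view described in the entry above can be illustrated with a small sketch. The box shapes, entity embeddings, and relation names below are invented for illustration; actual models learn the regions rather than hand-coding them:

```python
import numpy as np

# Toy sketch: each relation is an axis-aligned box in the space of
# concatenated (head, tail) entity embeddings. A triple (h, r, t) is
# predicted true iff [h; t] falls inside r's box, and the rule
# r2(x, y) -> r1(x, y) holds iff box(r1) contains box(r2).

def in_box(point, lower, upper):
    """True iff `point` lies inside the box [lower, upper]."""
    return bool(np.all(point >= lower) and np.all(point <= upper))

def box_subsumes(lo1, up1, lo2, up2):
    """True iff box 1 contains box 2 (so relation 2 implies relation 1)."""
    return bool(np.all(lo1 <= lo2) and np.all(up2 <= up1))

# 2-dim entity embeddings -> 4-dim pair space (values are made up).
h, t = np.array([0.2, 0.4]), np.array([0.6, 0.1])
pair = np.concatenate([h, t])

# "parent_of" is a tight box, "ancestor_of" a looser box containing it.
parent_lo, parent_up = np.zeros(4), np.array([0.7, 0.5, 0.7, 0.5])
anc_lo, anc_up = np.zeros(4) - 0.1, np.ones(4)

holds_parent = in_box(pair, parent_lo, parent_up)
implied = box_subsumes(anc_lo, anc_up, parent_lo, parent_up)
```

The spatial arrangement of the boxes is what makes rules like "parent_of implies ancestor_of" explicitly verifiable.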

Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings

1 code implementation 25 Mar 2024 Hanane Kteich, Na Li, Usashi Chatterjee, Zied Bouraoui, Steven Schockaert

We show that this leads to embeddings which capture a more diverse range of commonsense properties, and consistently improves results in downstream tasks such as ultra-fine entity typing and ontology completion.

Entity Typing

Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis

no code implementations 25 Mar 2024 Na Li, Thomas Bailleux, Zied Bouraoui, Steven Schockaert

One line of work treats this task as a Natural Language Inference (NLI) problem, thus relying on the knowledge captured by language models to identify the missing knowledge.

Natural Language Inference Taxonomy Expansion

Ranking Entities along Conceptual Space Dimensions with LLMs: An Analysis of Fine-Tuning Strategies

no code implementations 23 Feb 2024 Nitesh Kumar, Usashi Chatterjee, Steven Schockaert

We focus in particular on the task of ranking entities according to a given conceptual space dimension.

Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction

no code implementations 18 Dec 2023 Frank Mtumbuka, Steven Schockaert

Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).

Entity Embeddings Language Modelling +6

Solving Hard Analogy Questions with Relation Embedding Chains

1 code implementation 18 Oct 2023 Nitesh Kumar, Steven Schockaert

A common strategy is to rely on knowledge graphs (KGs) such as ConceptNet, and to model the relation between two concepts as a set of paths.

Knowledge Graphs Language Modelling +1
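The path-based modelling of relations mentioned above can be sketched as follows. The graph, its edge labels, and the helper function are all invented toy material, not the paper's actual pipeline:

```python
from collections import deque

# Toy ConceptNet-style graph: adjacency lists of (relation, neighbour)
# pairs. The edges below are fabricated for illustration.
graph = {
    "cook": [("HasSubevent", "heat"), ("AtLocation", "kitchen")],
    "heat": [("CausesDesire", "eat")],
    "kitchen": [("UsedFor", "cook")],
    "eat": [],
}

def paths_between(graph, source, target, max_len=3):
    """Enumerate simple edge-labelled paths from source to target."""
    results, queue = [], deque([(source, [])])
    while queue:
        node, path = queue.popleft()
        if node == target and path:
            results.append(path)
            continue
        if len(path) >= max_len:
            continue
        for rel, nxt in graph.get(node, []):
            # Skip nodes already on the path, keeping paths simple.
            if nxt != source and nxt not in {n for _, n in path}:
                queue.append((nxt, path + [(rel, nxt)]))
    return results

paths = paths_between(graph, "cook", "eat")
```

The set of labelled paths between two concepts then serves as a symbolic representation of their relation.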

Cabbage Sweeter than Cake? Analysing the Potential of Large Language Models for Learning Conceptual Spaces

no code implementations 9 Oct 2023 Usashi Chatterjee, Amit Gajbhiye, Steven Schockaert

The theory of Conceptual Spaces is an influential cognitive-linguistic framework for representing the meaning of concepts.

RelBERT: Embedding Relations with Language Models

1 code implementation 30 Sep 2023 Asahi Ushio, Jose Camacho-Collados, Steven Schockaert

In particular, we show that masked language models such as RoBERTa can be straightforwardly fine-tuned for this purpose, using only a small amount of training data.

Knowledge Graphs

RAGAS: Automated Evaluation of Retrieval Augmented Generation

2 code implementations 26 Sep 2023 Shahul ES, Jithin James, Luis Espinosa-Anke, Steven Schockaert

We introduce RAGAs (Retrieval Augmented Generation Assessment), a framework for reference-free evaluation of Retrieval Augmented Generation (RAG) pipelines.

Retrieval

Inductive Knowledge Graph Completion with GNNs and Rules: An Analysis

1 code implementation 14 Aug 2023 Akash Anil, Víctor Gutiérrez-Basulto, Yazmín Ibáñez-García, Steven Schockaert

The task of inductive knowledge graph completion requires models to learn inference patterns from a training graph, which can then be used to make predictions on a disjoint test graph.

Inductive knowledge graph completion Link Prediction

A RelEntLess Benchmark for Modelling Graded Relations between Named Entities

no code implementations 24 May 2023 Asahi Ushio, Jose Camacho Collados, Steven Schockaert

Relations such as "is influenced by", "is known for" or "is a competitor of" are inherently graded: we can rank entity pairs based on how well they satisfy these relations, but it is hard to draw a line between those pairs that satisfy them and those that do not.

Knowledge Graphs Relation

EnCore: Fine-Grained Entity Typing by Pre-Training Entity Encoders on Coreference Chains

1 code implementation 22 May 2023 Frank Mtumbuka, Steven Schockaert

In this paper, we propose to improve on this process by pre-training an entity encoder such that embeddings of coreferring entities are more similar to each other than to the embeddings of other entities.

Entity Embeddings Entity Typing

Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy

no code implementations 22 May 2023 Na Li, Zied Bouraoui, Steven Schockaert

In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types.

Entity Typing
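The clustering strategy described above can be sketched in a few lines. The label embeddings below are fabricated 2-d toy values, and the tiny k-means stands in for whatever clustering method is actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" label embeddings (invented): two semantic domains.
labels = ["doctor", "surgeon", "nurse", "guitar", "violin", "piano"]
emb = np.vstack([
    [0.9, 0.1], [0.8, 0.2], [0.85, 0.15],   # medical-ish region
    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85],   # music-ish region
])

def kmeans(points, k, n_iter=20):
    """Minimal k-means: return a cluster id per point."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = points[assign == j].mean(axis=0)
    return assign

# Cluster ids play the role of coarse "semantic domain" types that are
# added alongside the fine-grained labels during entity typing.
domains = kmeans(emb, k=2)
domain_of = dict(zip(labels, domains.tolist()))
```

Labels that land in the same cluster share a domain, so a model that predicts the domain correctly already narrows down the fine-grained type.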

Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

1 code implementation 16 May 2023 Na Li, Hanane Kteich, Zied Bouraoui, Steven Schockaert

Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors.

Contrastive Learning Sentence +1

What's the Meaning of Superhuman Performance in Today's NLU?

no code implementations 15 May 2023 Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajic, Daniel Hershcovich, Eduard H. Hovy, Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, Roberto Navigli

In the last five years, there has been a significant focus in Natural Language Processing (NLP) on developing larger Pretrained Language Models (PLMs) and introducing benchmarks such as SuperGLUE and SQuAD to measure their abilities in language understanding, reasoning, and reading comprehension.

Position Reading Comprehension

Embeddings as Epistemic States: Limitations on the Use of Pooling Operators for Accumulating Knowledge

no code implementations 11 Oct 2022 Steven Schockaert

In particular, we find that when the epistemic pooling principle is satisfied, in most cases it is impossible to verify the satisfaction of propositional formulas using linear scoring functions, with two exceptions: (i) max-pooling with embeddings that are upper-bounded and (ii) Hadamard pooling with non-negative embeddings.

Modelling Commonsense Properties using Pre-Trained Bi-Encoders

1 code implementation COLING 2022 Amit Gajbhiye, Luis Espinosa-Anke, Steven Schockaert

Grasping the commonsense properties of everyday concepts is an important prerequisite to language understanding.

Hypernym Discovery

Distilling Relation Embeddings from Pre-trained Language Models

1 code implementation 21 Sep 2021 Asahi Ushio, Jose Camacho-Collados, Steven Schockaert

Among others, this makes it possible to distill high-quality word vectors from pre-trained language models.

Knowledge Graphs Language Modelling +2

Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection

1 code implementation ACL (RepL4NLP) 2021 Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, Steven Schockaert

Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word.

Sentence Word Embeddings
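The topic-specific averaging idea above can be sketched directly. The contextualised vectors and topic assignments below are fabricated; in the paper the partition comes from a topic model:

```python
import numpy as np

# Invented contextualised vectors for mentions of "bank", with a topic
# id per mention (0 = finance, 1 = geography).
mention_vecs = np.array([
    [1.0, 0.0], [0.9, 0.1],   # finance contexts
    [0.0, 1.0], [0.1, 0.9],   # river-bank contexts
])
mention_topics = np.array([0, 0, 1, 1])

def topic_vectors(vecs, topics):
    """Average mention vectors per topic, giving one vector per topic."""
    return {t: vecs[topics == t].mean(axis=0) for t in np.unique(topics)}

bank = topic_vectors(mention_vecs, mention_topics)
```

Averaging within a topic keeps the two senses of "bank" apart, whereas a single global average would blend them.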

Probing Pre-Trained Language Models for Disease Knowledge

1 code implementation Findings (ACL) 2021 Israa Alghanmi, Luis Espinosa-Anke, Steven Schockaert

Pre-trained language models such as ClinicalBERT have achieved impressive results on tasks such as medical Natural Language Inference.

Binary Classification Natural Language Inference

Few-shot Image Classification with Multi-Facet Prototypes

no code implementations 1 Feb 2021 Kun Yan, Zied Bouraoui, Ping Wang, Shoaib Jameel, Steven Schockaert

The aim of few-shot learning (FSL) is to learn how to recognize image categories from a small number of training examples.

Classification Few-Shot Image Classification +2

Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings

no code implementations 4 Dec 2020 Na Li, Zied Bouraoui, Jose Camacho Collados, Luis Espinosa-Anke, Qing Gu, Steven Schockaert

While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context.

Knowledge Base Completion

A Mixture-of-Experts Model for Learning Multi-Facet Entity Embeddings

1 code implementation COLING 2020 Rana Alshaikh, Zied Bouraoui, Shelan Jeawak, Steven Schockaert

This is exploited by an associated gating network, which uses pre-trained word vectors to encourage the properties that are modelled by a given embedding to be semantically coherent, i.e. to encourage each of the individual embeddings to capture a meaningful facet.

Entity Embeddings

Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities

no code implementations COLING 2020 Carla Pérez-Almendros, Luis Espinosa-Anke, Steven Schockaert

In this paper, we introduce a new annotated dataset which is aimed at supporting the development of NLP models to identify and categorize language that is patronizing or condescending towards vulnerable communities (e.g. refugees, homeless people, poor families).

Plausible Reasoning about EL-Ontologies using Concept Interpolation

no code implementations 25 Jun 2020 Yazmín Ibáñez-García, Víctor Gutiérrez-Basulto, Steven Schockaert

In this paper, we instead propose an inductive inference mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning.

Modelling Semantic Categories using Conceptual Neighborhood

no code implementations 3 Dec 2019 Zied Bouraoui, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category.

Inducing Relational Knowledge from BERT

no code implementations 28 Nov 2019 Zied Bouraoui, Jose Camacho-Collados, Steven Schockaert

Starting from a few seed instances of a given relation, we first use a large text corpus to find sentences that are likely to express this relation.

Language Modelling Relation +1

Learning Conceptual Spaces with Disentangled Facets

no code implementations CONLL 2019 Rana Alshaikh, Zied Bouraoui, Steven Schockaert

To address this gap, we analyze how, and to what extent, a given vector space embedding can be decomposed into meaningful facets in an unsupervised fashion.

Word Embeddings

Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings

no code implementations 16 Oct 2019 Yerai Doval, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

While monolingual word embeddings encode information about words in the context of a particular language, cross-lingual embeddings define a multilingual space where word embeddings from two or more languages are integrated together.

Cross-Lingual Natural Language Inference Cross-Lingual Word Embeddings +3

Learning Household Task Knowledge from WikiHow Descriptions

1 code implementation WS 2019 Yilun Zhou, Julie A. Shah, Steven Schockaert

Commonsense procedural knowledge is important for AI agents and robots that operate in a human environment.

On the Robustness of Unsupervised and Semi-supervised Cross-lingual Word Embedding Learning

no code implementations LREC 2020 Yerai Doval, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

Cross-lingual word embeddings are vector representations of words in different languages where words with similar meaning are represented by similar vectors, regardless of the language.

Cross-Lingual Word Embeddings Word Embeddings

Collocation Classification with Unsupervised Relation Vectors

1 code implementation ACL 2019 Luis Espinosa Anke, Steven Schockaert, Leo Wanner

Lexical relation classification is the task of predicting whether a certain relation holds between a given pair of words.

Classification General Classification +3

Word and Document Embedding with vMF-Mixture Priors on Context Word Vectors

no code implementations ACL 2019 Shoaib Jameel, Steven Schockaert

To this end, our model relies on the assumption that context word vectors are drawn from a mixture of von Mises-Fisher (vMF) distributions, where the parameters of this mixture distribution are jointly optimized with the word vectors.

Document Embedding
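For reference, the von Mises-Fisher distribution on the unit sphere \(S^{p-1}\), which the model above assumes for context word vectors, has density

```latex
f_p(\mathbf{x}; \boldsymbol{\mu}, \kappa)
  = C_p(\kappa)\,\exp\!\left(\kappa\, \boldsymbol{\mu}^{\top}\mathbf{x}\right),
\qquad
C_p(\kappa) = \frac{\kappa^{p/2-1}}{(2\pi)^{p/2}\, I_{p/2-1}(\kappa)},
```

where \(\boldsymbol{\mu}\) is the mean direction (\(\lVert\boldsymbol{\mu}\rVert = 1\)), \(\kappa \ge 0\) is the concentration parameter, and \(I_{v}\) denotes the modified Bessel function of the first kind.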

Relational Word Embeddings

1 code implementation ACL 2019 Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited.

Word Embeddings

Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness

1 code implementation 21 Feb 2019 Yilun Zhou, Steven Schockaert, Julie A. Shah

In this paper we instead propose to learn to predict path quality from crowdsourced human assessments.

Knowledge Graphs

Interpretable Emoji Prediction via Label-Wise Attention LSTMs

no code implementations EMNLP 2018 Francesco Barbieri, Luis Espinosa-Anke, Jose Camacho-Collados, Steven Schockaert, Horacio Saggion

Human language has evolved towards newer forms of communication such as social media, where emojis (i.e., ideograms bearing a visual meaning) play a key role.

Emotion Recognition Information Retrieval +3

SeVeN: Augmenting Word Embeddings with Unsupervised Relation Vectors

1 code implementation COLING 2018 Luis Espinosa-Anke, Steven Schockaert

For example, by examining clusters of relation vectors, we observe that relational similarities can be identified at a more abstract level than with traditional word vector differences.

Relation Text Categorization +2

Knowledge Representation with Conceptual Spaces

no code implementations COLING 2018 Steven Schockaert

Conceptual spaces, as proposed by Gärdenfors, are similar to entity embeddings, but provide more structure.

Entity Embeddings Information Retrieval +2

Relation Induction in Word Embeddings Revisited

no code implementations COLING 2018 Zied Bouraoui, Shoaib Jameel, Steven Schockaert

Given a set of instances of some relation, the relation induction task is to predict which other word pairs are likely to be related in the same way.

Knowledge Base Completion regression +3
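A minimal offset-based sketch of the relation induction setting above: model a relation by the average difference vector of its seed pairs, then score candidate pairs by how well their offset matches it. The embeddings below are toy values, not real word vectors, and real methods are more sophisticated than a single average offset:

```python
import numpy as np

# Invented 2-d "word embeddings".
emb = {
    "france": np.array([0.9, 0.1]),  "paris":  np.array([0.4, 0.6]),
    "japan":  np.array([0.8, 0.0]),  "tokyo":  np.array([0.3, 0.5]),
    "spain":  np.array([0.85, 0.2]), "madrid": np.array([0.35, 0.7]),
    "banana": np.array([0.0, 0.9]),
}

# Seed instances of the "capital-of" relation.
seeds = [("france", "paris"), ("japan", "tokyo")]
rel_vec = np.mean([emb[b] - emb[a] for a, b in seeds], axis=0)

def score(a, b):
    """Higher when the pair's offset is close to the relation vector."""
    return -float(np.linalg.norm((emb[b] - emb[a]) - rel_vec))
```

A pair instantiating the relation should then outscore an unrelated pair, e.g. `score("spain", "madrid") > score("spain", "banana")`.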

Unsupervised Learning of Distributional Relation Vectors

no code implementations ACL 2018 Shoaib Jameel, Zied Bouraoui, Steven Schockaert

Word embedding models such as GloVe rely on co-occurrence statistics to learn vector representations of word meaning.

Relation Relation Extraction +1

Syntactically Aware Neural Architectures for Definition Extraction

no code implementations NAACL 2018 Luis Espinosa-Anke, Steven Schockaert

Automatically identifying definitional knowledge in text corpora (Definition Extraction or DE) is an important task with direct applications in, among others, Automatic Glossary Generation, Taxonomy Learning, Question Answering and Semantic Search.

Benchmarking Binary Classification +7

From Knowledge Graph Embedding to Ontology Embedding? An Analysis of the Compatibility between Vector Space Representations and Rules

no code implementations 26 May 2018 Víctor Gutiérrez-Basulto, Steven Schockaert

To address this shortcoming, in this paper we introduce a general framework based on a view of relations as regions, which allows us to study the compatibility between ontological knowledge and different types of vector space embeddings.

Knowledge Graph Embedding Knowledge Graphs +1

Learning Conceptual Space Representations of Interrelated Concepts

no code implementations 3 May 2018 Zied Bouraoui, Steven Schockaert

Several recently proposed methods aim to learn conceptual space representations from large text collections.

Knowledge Base Completion

VC-Dimension Based Generalization Bounds for Relational Learning

no code implementations 17 Apr 2018 Ondrej Kuzelka, Yuyi Wang, Steven Schockaert

In many applications of relational learning, the available data can be seen as a sample from a larger relational structure (e.g. we may be given a small fragment from some social network).

Generalization Bounds Relational Reasoning

PAC-Reasoning in Relational Domains

no code implementations 15 Mar 2018 Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

We consider the problem of predicting plausible missing facts in relational data, given a set of imperfect logical rules.

Modeling Semantic Relatedness using Global Relation Vectors

no code implementations 14 Nov 2017 Shoaib Jameel, Zied Bouraoui, Steven Schockaert

Word embedding models such as GloVe rely on co-occurrence statistics from a large corpus to learn vector representations of word meaning.

Relation

Stacked Structure Learning for Lifted Relational Neural Networks

no code implementations 5 Oct 2017 Gustav Sourek, Martin Svatos, Filip Zelezny, Steven Schockaert, Ondrej Kuzelka

Lifted Relational Neural Networks (LRNNs) describe relational domains using weighted first-order rules which act as templates for constructing feed-forward neural networks.

Relational Marginal Problems: Theory and Estimation

no code implementations 18 Sep 2017 Ondrej Kuzelka, Yuyi Wang, Jesse Davis, Steven Schockaert

In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals.

Probabilistic Relation Induction in Vector Space Embeddings

no code implementations 21 Aug 2017 Zied Bouraoui, Shoaib Jameel, Steven Schockaert

Word embeddings have been found to capture a surprisingly rich amount of syntactic and semantic knowledge.

Relation Word Embeddings

Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding

no code implementations CONLL 2017 Shoaib Jameel, Steven Schockaert

Although region representations of word meaning offer a natural alternative to word vectors, only few methods have been proposed that can effectively learn word regions.

regression Word Embeddings

Jointly Learning Word Embeddings and Latent Topics

no code implementations 21 Jun 2017 Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, Kwun Ping Lai

Word embedding models such as Skip-gram learn a vector-space representation for each word, based on the local word collocation patterns that are observed in a text corpus.

Learning Word Embeddings Topic Models

Induction of Interpretable Possibilistic Logic Theories from Relational Data

no code implementations 19 May 2017 Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.

Relational Reasoning

D-GloVe: A Feasible Least Squares Model for Estimating Word Embedding Densities

no code implementations COLING 2016 Shoaib Jameel, Steven Schockaert

We propose a new word embedding model, inspired by GloVe, which is formulated as a feasible least squares optimization problem.

Word Embeddings

Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)

no code implementations 18 Nov 2016 Ondrej Kuzelka, Jesse Davis, Steven Schockaert

In this paper, we advocate the use of stratified logical theories for representing probabilistic models.

Learning Possibilistic Logic Theories from Default Rules

no code implementations 18 Apr 2016 Ondrej Kuzelka, Jesse Davis, Steven Schockaert

We introduce a setting for learning possibilistic logic theories from defaults of the form "if alpha then typically beta".

Learning Theory

Entity Embeddings with Conceptual Subspaces as a Basis for Plausible Reasoning

no code implementations 18 Feb 2016 Shoaib Jameel, Steven Schockaert

Conceptual spaces are geometric representations of conceptual knowledge, in which entities correspond to points, natural properties correspond to convex regions, and the dimensions of the space correspond to salient features.

Entity Embeddings
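The geometric picture above can be made concrete with a toy sketch in which a natural property is the intersection of halfspaces (one simple kind of convex region). The dimensions, thresholds, and entity points below are invented:

```python
import numpy as np

# Property "ripe" in a 2-d fruit space (sweetness, firmness), as the
# convex region {x : A @ x <= b}, i.e. sweetness >= 0.6, firmness <= 0.5.
A = np.array([[-1.0, 0.0],   # -sweetness <= -0.6
              [0.0, 1.0]])   #  firmness  <=  0.5
b = np.array([-0.6, 0.5])

def has_property(x, A, b):
    """True iff the entity point x lies in the convex region A @ x <= b."""
    return bool(np.all(A @ x <= b))

mango = np.array([0.8, 0.3])   # sweet and soft
lime = np.array([0.2, 0.9])    # sour and firm
```

Here `has_property(mango, A, b)` holds while `has_property(lime, A, b)` does not; convexity is what makes such regions behave like natural properties.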

Solving stable matching problems using answer set programming

no code implementations 16 Dec 2015 Sofie De Clercq, Steven Schockaert, Martine De Cock, Ann Nowé

Since the introduction of the stable marriage problem (SMP) by Gale and Shapley (1962), several variants and extensions have been investigated.

Encoding Markov Logic Networks in Possibilistic Logic

1 code implementation 3 Jun 2015 Ondrej Kuzelka, Jesse Davis, Steven Schockaert

Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds.

Realizing RCC8 networks using convex regions

no code implementations 9 Oct 2014 Steven Schockaert, Sanjiang Li

First, we identify all ways in which the set of RCC8 base relations can be restricted to guarantee that consistent networks can be convexly realized in respectively 1D, 2D, 3D, and 4D.

Characterizing and Extending Answer Set Semantics using Possibility Theory

no code implementations 30 Nov 2013 Kim Bauters, Steven Schockaert, Martine De Cock, Dirk Vermeir

In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.

Modeling Stable Matching Problems with Answer Set Programming

no code implementations 28 Feb 2013 Sofie De Clercq, Steven Schockaert, Martine De Cock, Ann Nowé

Our encoding can easily be extended and adapted to the needs of specific applications.
