Search Results for author: Zied Bouraoui

Found 31 papers, 7 papers with code

Enhancing DR Classification with Swin Transformer and Shifted Window Attention

no code implementations · 20 Apr 2025 · Meher Boulaabi, Takwa Ben Aïcha Gader, Afef Kacem Echi, Zied Bouraoui

Diabetic retinopathy (DR) is a leading cause of blindness worldwide, underscoring the importance of early detection for effective treatment.

Tasks: Data Augmentation, Image Cropping
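The paper's title refers to shifted window attention. As a rough illustration of the windowing mechanism behind Swin-style models (toy sizes, and a pure-Python stand-in for `torch.roll`; none of this is taken from the paper's implementation):

```python
# Toy sketch of (shifted) window partitioning as used in Swin-style
# attention: self-attention is computed only inside local windows, and
# alternating layers cyclically shift the feature map so information
# flows across window boundaries. Sizes are illustrative only.

H, W, WIN, SHIFT = 8, 8, 4, 2  # feature-map size, window size, shift

def partition(grid):
    """Split an H x W grid of token ids into WIN x WIN windows."""
    windows = []
    for top in range(0, H, WIN):
        for left in range(0, W, WIN):
            windows.append([grid[top + i][left + j]
                            for i in range(WIN) for j in range(WIN)])
    return windows

def cyclic_shift(grid, s):
    """Roll the grid by s rows and s columns (a torch.roll stand-in)."""
    return [[grid[(i + s) % H][(j + s) % W] for j in range(W)]
            for i in range(H)]

tokens = [[i * W + j for j in range(W)] for i in range(H)]
plain = partition(tokens)                          # regular windows
shifted = partition(cyclic_shift(tokens, SHIFT))   # shifted windows

# After the shift, the first window mixes tokens that previously lived
# in different windows, so attention can cross old window boundaries.
sources = {t in plain[0] for t in shifted[0]}
```

After the shift, the first window contains both tokens that were in the original first window and tokens that were not, which is exactly what lets attention propagate across windows.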

Grounding Agent Reasoning in Image Schemas: A Neurosymbolic Approach to Embodied Cognition

no code implementations · 31 Mar 2025 · François Olivier, Zied Bouraoui

Despite advances in embodied AI, agent reasoning systems still struggle to capture the fundamental conceptual structures that humans naturally use to understand and interact with their environment.

Tasks: AI Agent

Modelling Multi-modal Cross-interaction for ML-FSIC Based on Local Feature Selection

no code implementations · 18 Dec 2024 · Kun Yan, Zied Bouraoui, Fangyun Wei, Chang Xu, Ping Wang, Shoaib Jameel, Steven Schockaert

A key feature of the multi-label setting is that images often have several labels, which typically refer to objects appearing in different regions of the image.

Tasks: feature selection, Few-Shot Image Classification (+1)

REFINE-LM: Mitigating Language Model Stereotypes via Reinforcement Learning

no code implementations · 18 Aug 2024 · Rameez Qureshi, Naïm Es-Sebbani, Luis Galárraga, Yvette Graham, Miguel Couceiro, Zied Bouraoui

With the introduction of (large) language models, there has been significant concern about the unintended bias such models may inherit from their training data.

Tasks: Language Modeling, Language Modelling (+2)

Modelling Commonsense Commonalities with Multi-Facet Concept Embeddings

1 code implementation · 25 Mar 2024 · Hanane Kteich, Na Li, Usashi Chatterjee, Zied Bouraoui, Steven Schockaert

We show that this leads to embeddings which capture a more diverse range of commonsense properties, and consistently improves results in downstream tasks such as ultra-fine entity typing and ontology completion.

Tasks: Entity Typing

Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis

no code implementations · 25 Mar 2024 · Na Li, Thomas Bailleux, Zied Bouraoui, Steven Schockaert

One line of work treats this task as a Natural Language Inference (NLI) problem, thus relying on the knowledge captured by language models to identify the missing knowledge.

Tasks: Natural Language Inference, Taxonomy Expansion
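The snippet above describes casting ontology completion as NLI. A minimal sketch of the verbalisation step only, with an invented glossary and template (the actual entailment scoring by a language model is omitted, and this is not the paper's exact prompt):

```python
# Hypothetical verbalisation of a candidate subsumption as a
# premise/hypothesis pair for an off-the-shelf NLI model.
# Glossary entries and the template are illustrative assumptions.

glossary = {
    "poodle": "a poodle is a breed of curly-haired dog",
    "dog": "a dog is a domesticated animal",
}

def to_nli(sub, sup):
    # Premise: a short gloss of the subconcept.
    # Hypothesis: the candidate subsumption, verbalised.
    return glossary[sub], f"a {sub} is a kind of {sup}"

pairs = [to_nli("poodle", "dog"), to_nli("dog", "animal")]
```

An NLI model would then score each pair; high entailment probability suggests the subsumption is missing knowledge worth adding to the ontology.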

Vector Field Oriented Diffusion Model for Crystal Material Generation

no code implementations · 20 Dec 2023 · Astrid Klipfel, Yaël Frégier, Adlane Sayede, Zied Bouraoui

Discovering crystal structures with specific chemical properties has become an increasingly important focus in material science.

Unified Model for Crystalline Material Generation

1 code implementation · 7 Jun 2023 · Astrid Klipfel, Yaël Frégier, Adlane Sayede, Zied Bouraoui

One of the greatest challenges facing our society is the discovery of new innovative crystal materials with specific properties.


Optimized Crystallographic Graph Generation for Material Science

1 code implementation · 7 Jun 2023 · Astrid Klipfel, Yaël Frégier, Adlane Sayede, Zied Bouraoui

With the aim of training graph-based generative models for new material discovery, we propose an efficient tool to generate cutoff graphs and k-nearest-neighbour graphs of periodic structures with GPU optimization.

Tasks: Graph Generation
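The core difficulty with periodic structures is that nearest neighbours can sit across the unit-cell boundary. A minimal pure-Python sketch of a k-nearest-neighbour graph under the minimum-image convention (the paper's tool targets batched GPU computation; this only illustrates the periodic-distance idea, with a toy cubic cell):

```python
# k-NN graph over fractional coordinates in a cubic cell, using the
# minimum-image convention so atoms near opposite faces are neighbours.
import math

def periodic_dist(a, b, cell=1.0):
    """Distance between fractional coordinates under periodic boundaries."""
    d2 = 0.0
    for x, y in zip(a, b):
        d = abs(x - y) % cell
        d = min(d, cell - d)   # wrap around the cell boundary
        d2 += d * d
    return math.sqrt(d2)

def knn_graph(points, k):
    edges = []
    for i, p in enumerate(points):
        dists = sorted((periodic_dist(p, q), j)
                       for j, q in enumerate(points) if j != i)
        edges.extend((i, j) for _, j in dists[:k])
    return edges

# Atoms 0 and 1 are far apart naively (0.9) but close periodically (0.1).
atoms = [(0.05, 0.5, 0.5), (0.95, 0.5, 0.5), (0.5, 0.5, 0.5)]
graph = knn_graph(atoms, k=1)
```

Without the minimum-image wrap, atoms 0 and 1 would not be linked; with it, each is the other's nearest neighbour.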

Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy

no code implementations · 22 May 2023 · Na Li, Zied Bouraoui, Steven Schockaert

In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types.

Tasks: Entity Typing
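The clustering idea in the snippet above can be sketched as follows: pre-trained label embeddings are grouped into semantic domains, and each cluster id then serves as an extra coarse-grained type. Toy 2-d vectors and a plain k-means stand in for real embeddings (this is not the paper's code):

```python
# Cluster label embeddings into semantic domains; treat each domain
# as an additional type label. Vectors and labels are illustrative.

def kmeans(vectors, k, iters=10):
    centroids = vectors[:k]                       # naive initialisation
    assign = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, centroids[c])))
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(x) / len(members) for x in zip(*members)]
    return assign

labels = ["athlete", "footballer", "city", "village"]
embs = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
domains = kmeans(embs, k=2)
extra_types = {lab: f"domain_{d}" for lab, d in zip(labels, domains)}
```

Here "athlete" and "footballer" land in one domain, "city" and "village" in another, so an entity typed as "footballer" also gets the person-like domain as a type.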

Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

1 code implementation · 16 May 2023 · Na Li, Hanane Kteich, Zied Bouraoui, Steven Schockaert

Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors.

Tasks: Contrastive Learning, Sentence (+1)

Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection

1 code implementation · ACL (RepL4NLP) 2021 · Yixiao Wang, Zied Bouraoui, Luis Espinosa-Anke, Steven Schockaert

Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word.

Tasks: Sentence, Word Embeddings
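The partitioning step above can be sketched in a few lines: contexts in which a word appears are split by topic, and a separate vector is averaged per topic. Here a trivial keyword cue stands in for the topic model, and toy vectors stand in for contextualised language-model outputs:

```python
# Average mention vectors per (word, topic) to get topic-specific word
# vectors. Topic assignment and vectors are illustrative stand-ins.

contexts = [
    ("bank", "finance", [1.0, 0.0]),   # (word, topic cue, mention vector)
    ("bank", "finance", [0.8, 0.2]),
    ("bank", "river",   [0.0, 1.0]),
]

def topic_vectors(mentions):
    sums, counts = {}, {}
    for word, topic, vec in mentions:
        key = (word, topic)
        sums[key] = [a + b for a, b in
                     zip(sums.get(key, [0.0] * len(vec)), vec)]
        counts[key] = counts.get(key, 0) + 1
    return {k: [x / counts[k] for x in v] for k, v in sums.items()}

vecs = topic_vectors(contexts)
```

The result is one vector per word sense-like topic ("bank"/finance vs. "bank"/river) instead of a single averaged vector that blurs them together.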

Few-shot Image Classification with Multi-Facet Prototypes

no code implementations · 1 Feb 2021 · Kun Yan, Zied Bouraoui, Ping Wang, Shoaib Jameel, Steven Schockaert

The aim of few-shot learning (FSL) is to learn how to recognize image categories from a small number of training examples.

Tasks: Classification, Few-Shot Image Classification (+2)
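The standard prototype-based approach to few-shot classification, which the paper builds on, is easy to sketch: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. (The paper learns several prototypes per class, one per facet; one per class suffices to show the idea, and the vectors here are toy data.)

```python
# Prototypical classification: class prototype = mean of support
# embeddings; query goes to the nearest prototype.

support = {
    "cat": [[1.0, 0.0], [0.8, 0.2]],
    "dog": [[0.0, 1.0], [0.2, 0.8]],
}

def mean(vecs):
    return [sum(x) / len(vecs) for x in zip(*vecs)]

def classify(query, protos):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: dist2(query, protos[c]))

prototypes = {c: mean(v) for c, v in support.items()}
pred = classify([0.9, 0.1], prototypes)
```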

Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings

no code implementations · 4 Dec 2020 · Na Li, Zied Bouraoui, Jose Camacho-Collados, Luis Espinosa-Anke, Qing Gu, Steven Schockaert

While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context.

Tasks: Knowledge Base Completion
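The title's "selectively averaging" can be sketched as: mention vectors from a language model are averaged into one static word vector, after filtering mentions that sit far from the centre. Dropping the single farthest mention is a simple stand-in for the paper's selection criterion, and the toy vectors replace real contextualised embeddings:

```python
# Build a static word vector by averaging mention vectors, after
# discarding the mention farthest from the initial mean (illustrative
# selection rule, not the paper's).

def mean(vecs):
    return [sum(x) / len(vecs) for x in zip(*vecs)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def selective_average(mentions):
    centre = mean(mentions)
    kept = sorted(mentions, key=lambda m: dist(m, centre))[:-1]
    return mean(kept)

mentions = [[1.0, 0.0], [1.2, 0.1], [8.0, 8.0]]  # last one is an outlier
static_vec = selective_average(mentions)
```

The outlier mention is dropped, so the static vector reflects the word's typical usage rather than an atypical context.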

A Mixture-of-Experts Model for Learning Multi-Facet Entity Embeddings

1 code implementation · COLING 2020 · Rana Alshaikh, Zied Bouraoui, Shelan Jeawak, Steven Schockaert

This is exploited by an associated gating network, which uses pre-trained word vectors to encourage the properties that are modelled by a given embedding to be semantically coherent, i.e. to encourage each of the individual embeddings to capture a meaningful facet.

Tasks: Entity Embeddings, Mixture-of-Experts
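A gating network of the kind described above can be sketched as a softmax over expert scores: a pre-trained context vector is scored against one key per facet expert, and the softmax decides how much each expert contributes. All weights and vectors here are illustrative, not learned:

```python
# Toy mixture-of-experts gate: dot-product scores against expert keys,
# normalised with a softmax into mixing weights.
import math

def softmax(xs):
    m = max(xs)                       # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gate(context_vec, expert_keys):
    scores = [sum(c * k for c, k in zip(context_vec, key))
              for key in expert_keys]
    return softmax(scores)

expert_keys = [[1.0, 0.0], [0.0, 1.0]]   # one key per facet expert
weights = gate([2.0, 0.0], expert_keys)  # context matches expert 0
```

In training, the gate learns to route semantically related properties to the same expert, which is what makes each expert's embedding capture one coherent facet.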

Modelling Semantic Categories using Conceptual Neighborhood

no code implementations · 3 Dec 2019 · Zied Bouraoui, Jose Camacho-Collados, Luis Espinosa-Anke, Steven Schockaert

Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category.

Inducing Relational Knowledge from BERT

no code implementations · 28 Nov 2019 · Zied Bouraoui, Jose Camacho-Collados, Steven Schockaert

Starting from a few seed instances of a given relation, we first use a large text corpus to find sentences that are likely to express this relation.

Tasks: Language Modeling, Language Modelling (+2)
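The first step described in the snippet above can be sketched directly: given seed (head, tail) pairs for a relation, scan a corpus for sentences that mention both words, as candidate expressions of the relation. The toy corpus is invented; the paper then feeds such sentences to BERT-based templates, which is omitted here:

```python
# Collect candidate relation-expressing sentences: any sentence that
# contains both the head and tail of a seed pair.

seeds = [("paris", "france"), ("tokyo", "japan")]
corpus = [
    "paris is the capital of france",
    "tokyo lies in japan",
    "paris has many museums",
]

def candidate_sentences(seeds, corpus):
    hits = []
    for sent in corpus:
        words = set(sent.split())
        for head, tail in seeds:
            if head in words and tail in words:
                hits.append((sent, head, tail))
    return hits

candidates = candidate_sentences(seeds, corpus)
```

Only the first two sentences survive; the third mentions a head word without any tail, so it is not a candidate.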

Learning Conceptual Spaces with Disentangled Facets

no code implementations · CoNLL 2019 · Rana Alshaikh, Zied Bouraoui, Steven Schockaert

To address this gap, we analyze how, and to what extent, a given vector space embedding can be decomposed into meaningful facets in an unsupervised fashion.

Tasks: Word Embeddings

Relation Induction in Word Embeddings Revisited

no code implementations · COLING 2018 · Zied Bouraoui, Shoaib Jameel, Steven Schockaert

Given a set of instances of some relation, the relation induction task is to predict which other word pairs are likely to be related in the same way.

Tasks: Knowledge Base Completion, regression (+3)
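The classic baseline for the relation induction task described above is the vector-offset method: the relation is summarised by the average difference vector of the given word pairs, and a candidate pair is accepted if its offset is close to that average. The embeddings below are toy stand-ins, and the paper's model is more refined than this baseline:

```python
# Offset-based relation induction: accept a candidate pair when the
# cosine between its offset and the relation's average offset is high.

def offset(pair, emb):
    a, b = pair
    return [x - y for x, y in zip(emb[b], emb[a])]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv)

emb = {
    "man": [1.0, 0.0], "woman": [1.0, 1.0],
    "king": [2.0, 0.0], "queen": [2.0, 1.0],
    "apple": [0.0, 5.0], "car": [3.0, 2.0],
}
train = [("man", "woman")]
rel = [sum(o) / len(train)
       for o in zip(*(offset(p, emb) for p in train))]

def same_relation(pair, thresh=0.9):
    return cosine(offset(pair, emb), rel) >= thresh

accept_queen = same_relation(("king", "queen"))  # offset [0, 1] matches
accept_car = same_relation(("apple", "car"))     # offset [3, -3] does not
```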

Unsupervised Learning of Distributional Relation Vectors

no code implementations · ACL 2018 · Shoaib Jameel, Zied Bouraoui, Steven Schockaert

Word embedding models such as GloVe rely on co-occurrence statistics to learn vector representations of word meaning.

Tasks: Relation, Relation Extraction (+1)
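The co-occurrence statistics that GloVe-style models build on are just symmetric word-word counts within a fixed window. A minimal sketch over a toy corpus (real models count over billions of tokens and then factorise the resulting matrix):

```python
# Count word-word co-occurrences within a symmetric context window.
from collections import Counter

def cooccurrences(tokens, window=2):
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window),
                       min(len(tokens), i + window + 1)):
            if j != i:
                counts[(w, tokens[j])] += 1
    return counts

tokens = "the cat sat on the mat".split()
counts = cooccurrences(tokens)
```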

Learning Conceptual Space Representations of Interrelated Concepts

no code implementations · 3 May 2018 · Zied Bouraoui, Steven Schockaert

Several recently proposed methods aim to learn conceptual space representations from large text collections.

Tasks: Knowledge Base Completion

Modeling Semantic Relatedness using Global Relation Vectors

no code implementations · 14 Nov 2017 · Shoaib Jameel, Zied Bouraoui, Steven Schockaert

Word embedding models such as GloVe rely on co-occurrence statistics from a large corpus to learn vector representations of word meaning.

Tasks: Relation

Probabilistic Relation Induction in Vector Space Embeddings

no code implementations · 21 Aug 2017 · Zied Bouraoui, Shoaib Jameel, Steven Schockaert

Word embeddings have been found to capture a surprisingly rich amount of syntactic and semantic knowledge.

Tasks: Relation, Word Embeddings
