Word Embeddings

1108 papers with code • 0 benchmarks • 52 datasets

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.

Techniques for learning word embeddings can include Word2Vec, GloVe, and other neural network-based approaches that train on an NLP task such as language modeling or document classification.
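
As a concrete illustration of the idea, the following minimal sketch (assuming the gensim library; the toy corpus and hyperparameters are chosen purely for demonstration) trains a small Word2Vec model and maps words to real-valued vectors:

    # Minimal Word2Vec sketch with gensim (toy corpus, illustrative hyperparameters).
    from gensim.models import Word2Vec

    corpus = [
        ["word", "embeddings", "map", "words", "to", "vectors"],
        ["vectors", "capture", "semantic", "similarity", "between", "words"],
        ["language", "modeling", "trains", "word", "vectors"],
    ]

    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=0)

    vec = model.wv["word"]                 # 50-dimensional vector for "word"
    print(vec.shape)                       # (50,)
    print(model.wv.most_similar("word"))   # nearest neighbours in the embedding space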

(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)

Predicting postoperative risks using large language models

cja5553/LLMs_in_perioperative_care 27 Feb 2024

Adapting models through self-supervised finetuning further improved performance by 3.2% for AUROC and 1.5% for AUPRC. Incorporating labels into the finetuning procedure boosted performance further, with semi-supervised finetuning improving by 1.8% for AUROC and 2% for AUPRC, and foundational modelling improving by 3.6% for AUROC and 2.6% for AUPRC compared to self-supervised finetuning.
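
For reference, the AUROC and AUPRC figures quoted above are the kind of metrics computed as in the generic scikit-learn sketch below; the labels and risk scores here are made up, and the snippet is not the paper's evaluation code:

    # Generic AUROC / AUPRC computation with scikit-learn (toy labels and scores).
    from sklearn.metrics import roc_auc_score, average_precision_score

    y_true  = [0, 0, 1, 1, 0, 1]                 # ground-truth outcomes (toy)
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]    # model risk scores (toy)

    auroc = roc_auc_score(y_true, y_score)
    auprc = average_precision_score(y_true, y_score)   # common AUPRC estimator
    print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}")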

A Systematic Comparison of Contextualized Word Embeddings for Lexical Semantic Change

francescoperiti/cssdetection 19 Feb 2024

Our evaluation is performed across different languages on eight available benchmarks for LSC, and shows that (i) APD outperforms other approaches for GCD; (ii) XL-LEXEME outperforms other contextualized models for WiC, WSI, and GCD, while being comparable to GPT-4; and (iii) there is a clear need to improve the modeling of word meanings, and to focus on how, when, and why these meanings change, rather than solely on the extent of semantic change.
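
For orientation, APD (average pairwise distance) for graded change detection can be sketched as follows; random vectors stand in for the actual contextualized (e.g., XL-LEXEME) embeddings, so this is only a generic illustration:

    # Average Pairwise Distance (APD) between contextualized usage embeddings
    # from two time periods; random vectors stand in for real model output.
    import numpy as np
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    usages_t1 = rng.normal(size=(30, 768))   # embeddings of a word's usages at time 1
    usages_t2 = rng.normal(size=(40, 768))   # embeddings of the same word at time 2

    apd = cdist(usages_t1, usages_t2, metric="cosine").mean()
    print(f"graded change score (APD): {apd:.3f}")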

Semi-Supervised Learning for Bilingual Lexicon Induction

gguinet/semisupervised-alignment 10 Feb 2024

It was recently shown that it is possible to infer such a lexicon, without using any parallel data, by aligning word embeddings trained on monolingual data.
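
A standard way to align two monolingual embedding spaces is an orthogonal Procrustes mapping fitted on a small seed dictionary, sketched below with toy data; this is a generic illustration, not this paper's semi-supervised method:

    # Orthogonal Procrustes alignment of two monolingual embedding matrices.
    # X and Y are toy stand-ins; seed_pairs would come from a small bilingual lexicon.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 300))            # source-language embeddings
    Y = rng.normal(size=(1000, 300))            # target-language embeddings
    seed_pairs = [(i, i) for i in range(100)]   # (source_idx, target_idx) seed dictionary

    src = np.stack([X[i] for i, _ in seed_pairs])
    tgt = np.stack([Y[j] for _, j in seed_pairs])
    W, _ = orthogonal_procrustes(src, tgt)      # rotation with src @ W ≈ tgt

    X_aligned = X @ W   # source space mapped into the target space
    # Nearest neighbours of X_aligned rows among Y rows then induce the lexicon.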

Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings

jiangyctarheel/sq-transformer 9 Feb 2024

Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity.

Deep Semantic-Visual Alignment for Zero-Shot Remote Sensing Image Scene Classification

wenjiaxu/rs_scene_zsl 3 Feb 2024

Besides, pioneering ZSL models use convolutional neural networks pre-trained on ImageNet, which focus on the main objects appearing in each image while neglecting the background context that also matters in RS scene classification.

Graph-based Clustering for Detecting Semantic Change Across Time and Languages

xiaohaima/lexical-dynamic-graph 1 Feb 2024

To address this issue, we propose a graph-based clustering approach to capture nuanced changes in both high- and low-frequency word senses across time and languages, including the acquisition and loss of these senses over time.
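
A generic form of such a graph-based clustering step is sketched below; the usage embeddings are synthetic, the similarity threshold is arbitrary, and networkx modularity clustering merely stands in for the paper's specific algorithm:

    # Build a similarity graph over usage embeddings and cluster it into senses.
    # Data, threshold, and clustering algorithm are illustrative choices.
    import numpy as np
    import networkx as nx
    from networkx.algorithms import community
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(0)
    centers = rng.normal(size=(2, 768))                               # two synthetic senses
    usages = np.vstack([c + 0.5 * rng.normal(size=(25, 768)) for c in centers])
    sim = cosine_similarity(usages)

    G = nx.Graph()
    G.add_nodes_from(range(len(usages)))
    for i in range(len(usages)):
        for j in range(i + 1, len(usages)):
            if sim[i, j] > 0.5:                                       # arbitrary threshold
                G.add_edge(i, j, weight=float(sim[i, j]))

    senses = community.greedy_modularity_communities(G, weight="weight")
    print([sorted(c) for c in senses])                                # each community ≈ one sense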

SWEA: Updating Factual Knowledge in Large Language Models via Subject Word Embedding Altering

xpq-tech/swea 31 Jan 2024

In particular, local editing methods, which directly update model parameters, are more suitable for updating a small amount of knowledge.

Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs

stepantita/space-model 30 Jan 2024

We show that a linear transformation of the text representation from any transformer model using the task-specific concept operator results in a projection onto the latent concept space, referred to as context attribution in this paper.
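
Mechanically, the operation described is a linear map applied to a transformer's pooled text representation; in the minimal sketch below both the representation and the concept operator are random stand-ins, since learning the operator is the paper's actual contribution:

    # Project a transformer text representation onto a latent concept space
    # via a task-specific linear operator (both arrays are random stand-ins).
    import numpy as np

    rng = np.random.default_rng(0)
    hidden = rng.normal(size=(768,))                  # pooled text representation
    concept_operator = rng.normal(size=(768, 16))     # hypothetical task-specific operator

    context_attribution = hidden @ concept_operator   # projection onto the concept space
    print(context_attribution.shape)                  # (16,)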

Pre-training and Diagnosing Knowledge Base Completion Models

vid-koci/kbctransferlearning 27 Jan 2024

The method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases, i.e., knowledge bases where more than one copy of a real-world entity or relation may exist.

Contrastive Learning in Distilled Models

kennethlimjf/contrastive-learning-in-distilled-models 23 Jan 2024

Natural Language Processing models like BERT can provide state-of-the-art word embeddings for downstream NLP tasks.
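
For orientation, contextual embeddings can be extracted from a distilled model with the Hugging Face transformers library roughly as follows; the model name and mean pooling are illustrative choices, and the paper's contrastive objective sits on top of representations like these:

    # Extract contextual token embeddings from DistilBERT (illustrative model choice).
    # Requires the transformers and torch packages.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModel.from_pretrained("distilbert-base-uncased")

    inputs = tokenizer("Word embeddings map words to vectors.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    token_embeddings = outputs.last_hidden_state        # (1, seq_len, 768)
    sentence_embedding = token_embeddings.mean(dim=1)   # simple mean pooling
    print(sentence_embedding.shape)                      # torch.Size([1, 768])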
