Embeddings Evaluation

9 papers with code • 0 benchmarks • 2 datasets

Embeddings evaluation measures how well learned vector representations (of words, sentences, images, or videos) capture semantic structure, for example through intrinsic benchmarks such as word similarity, analogy, and relatedness tasks, or through performance on downstream applications.

Most implemented papers

A Survey of Word Embeddings Evaluation Methods

avi-jit/SWOW-eval 21 Jan 2018

Word embeddings are real-valued word representations, trained on natural language corpora, that capture lexical semantics.
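
As a minimal, self-contained illustration (toy vectors invented for this example, not taken from the survey), cosine similarity is the usual way to compare two such word vectors:

    import numpy as np

    # Toy 4-dimensional word vectors; the values are illustrative only.
    embeddings = {
        "king":  np.array([0.8, 0.1, 0.7, 0.2]),
        "queen": np.array([0.7, 0.2, 0.8, 0.1]),
        "apple": np.array([0.1, 0.9, 0.0, 0.6]),
    }

    def cosine_similarity(u, v):
        # Cosine of the angle between two vectors, in [-1, 1].
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # semantically close -> high
    print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # semantically distant -> low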

Expert Concept-Modeling Ground Truth Construction for Word Embeddings Evaluation in Concept-Focused Domains

yoortwijn/quine-ground-truth COLING 2020

We present a novel, domain expert-controlled, replicable procedure for the construction of concept-modeling ground truths with the aim of evaluating the application of word embeddings.

Collection Space Navigator: An Interactive Visualization Interface for Multidimensional Datasets

collection-space-navigator/csn 11 May 2023

We introduce the Collection Space Navigator (CSN), a browser-based visualization tool to explore, research, and curate large collections of visual digital artifacts that are associated with multidimensional data, such as vector embeddings or tables of metadata.
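
CSN itself is a browser-based interface; the sketch below only illustrates the kind of 2D layout such tools plot, assuming a hypothetical matrix of artifact embeddings and a PCA projection (the choice of projection is an assumption, not a description of CSN):

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical collection: 1,000 artifacts, each with a 512-dimensional embedding.
    rng = np.random.default_rng(0)
    artifact_embeddings = rng.normal(size=(1000, 512))

    # Project to 2D so every artifact gets a position on an explorable map.
    coords = PCA(n_components=2).fit_transform(artifact_embeddings)
    print(coords.shape)  # (1000, 2)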

This is not correct! Negation-aware Evaluation of Language Generation Systems

MiriUll/negation_aware_evaluation 26 Jul 2023

Based on this dataset, we fine-tuned a sentence transformer and an evaluation metric to improve their negation sensitivity.
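
A generic fine-tuning sketch with the sentence-transformers library is shown below; the model name, the toy negation pairs, and the similarity labels are assumptions for illustration, not the paper's actual training setup:

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Hypothetical training pairs: a sentence and its negation get a low similarity
    # label, a paraphrase gets a high one.
    train_examples = [
        InputExample(texts=["The answer is correct.", "The answer is not correct."], label=0.1),
        InputExample(texts=["The answer is correct.", "The answer is right."], label=0.9),
    ]

    train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)
    train_loss = losses.CosineSimilarityLoss(model)

    # A single epoch over the toy pairs, just to show the fine-tuning call.
    model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=0)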

The Limitations of Cross-language Word Embeddings Evaluation

bakarov/cross-lang-embeddings SEMEVAL 2018

The aim of this work is to explore the possible limitations of existing methods of cross-language word embeddings evaluation, addressing the lack of correlation between intrinsic and extrinsic cross-language evaluation methods.
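
Measuring that (lack of) correlation typically reduces to a rank correlation between per-model scores; the sketch below uses invented numbers for five hypothetical models:

    from scipy.stats import spearmanr

    # Hypothetical scores for five cross-language embedding models.
    intrinsic = [0.61, 0.55, 0.70, 0.48, 0.66]  # e.g., word-similarity correlation
    extrinsic = [0.79, 0.81, 0.74, 0.80, 0.77]  # e.g., downstream task accuracy

    rho, p_value = spearmanr(intrinsic, extrinsic)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")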

SART - Similarity, Analogies, and Relatedness for Tatar Language: New Benchmark Datasets for Word Embeddings Evaluation

tat-nlp/SART 31 Mar 2019

We evaluate state-of-the-art word embedding models for two languages using our proposed datasets for Tatar and the original datasets for English, and report our findings on how their performance compares.
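
Similarity, analogy, and relatedness queries of the kind these datasets contain can be run with gensim; the snippet below uses a small pretrained English model as a stand-in (the model choice and example words are assumptions, not the paper's setup):

    import gensim.downloader as api

    # Small pretrained English vectors as a stand-in for the evaluated models.
    kv = api.load("glove-wiki-gigaword-50")

    # Analogy query: king - man + woman ≈ queen
    print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

    # Similarity/relatedness query over a word pair.
    print(kv.similarity("car", "automobile"))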

Embeddings Evaluation Using a Novel Measure of Semantic Similarity

Crisp-Unimib/TaxoSS Cognitive Computation 2022

We train several embedding models on a text corpus and select the best one, that is, the model that maximizes the correlation between the HSS and the cosine similarity over word pairs that appear in both the taxonomy and the corpus.
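
A minimal sketch of that selection step, assuming hypothetical HSS scores, hypothetical two-dimensional embeddings for two candidate models, and Spearman's rank correlation (the correlation measure is an assumption here):

    import numpy as np
    from scipy.stats import spearmanr

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical HSS scores for word pairs found in both the taxonomy and the corpus.
    hss = {("car", "vehicle"): 0.90, ("car", "fruit"): 0.10, ("apple", "fruit"): 0.85}

    # Hypothetical embeddings produced by two candidate models.
    models = {
        "model_a": {"car": [0.9, 0.1], "vehicle": [0.8, 0.2], "fruit": [0.1, 0.9], "apple": [0.2, 0.8]},
        "model_b": {"car": [0.5, 0.5], "vehicle": [0.1, 0.9], "fruit": [0.5, 0.5], "apple": [0.9, 0.1]},
    }

    def correlation_with_hss(vectors):
        pairs = list(hss)
        cos = [cosine(np.array(vectors[a]), np.array(vectors[b])) for a, b in pairs]
        return spearmanr([hss[p] for p in pairs], cos).correlation

    # Keep the model whose cosine similarities best agree with the HSS scores.
    best = max(models, key=lambda name: correlation_with_hss(models[name]))
    print(best)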

TWLV-I: Analysis and Insights from Holistic Evaluation on Video Foundation Models

twelvelabs-io/video-embeddings-evaluation-framework 21 Aug 2024

In this work, we discuss evaluating video foundation models in a fair and robust manner.

Probabilistic Embeddings for Frozen Vision-Language Models: Uncertainty Quantification with Gaussian Process Latent Variable Models

vaishwarya96/GroVE 8 May 2025

Vision-Language Models (VLMs) learn joint representations by mapping images and text into a shared latent space.
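
As a concrete, independent example of such a shared space, the snippet below embeds an image and two captions with CLIP via Hugging Face transformers; CLIP is used purely as a familiar frozen VLM, not as the paper's GroVE method:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.new("RGB", (224, 224))  # placeholder image for the sketch
    texts = ["a photo of a cat", "a photo of a dog"]

    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Image and text live in the same latent space, so similarities become logits.
    print(outputs.logits_per_image.softmax(dim=-1))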