Learning Semantic Representations
12 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Learning Semantic Representations.
Latest papers
Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE
Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner.
Seeing the advantage: visually grounding word embeddings to better capture human semantic knowledge
In this paper, we create visually grounded word embeddings by combining English text and images, and compare them to popular text-based methods to see whether visual information allows our model to better capture cognitive aspects of word meaning.
Modeling User Behavior with Graph Convolution for Personalized Product Search
Our approach can be seamlessly integrated with existing latent-space-based methods and can potentially be applied in any product retrieval method that uses purchase history to model user preferences.
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
Machine learning-based program analysis methods use variable name representations for a wide range of tasks, such as suggesting new variable names and detecting bugs.
Learning cortical representations through perturbed and adversarial dreaming
We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs).
Semantic sentence similarity: size does not always matter
This study addresses whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge.
On Learning Semantic Representations for Million-Scale Free-Hand Sketches
Specifically, we use our dual-branch architecture as a universal representation framework to design two sketch-specific deep models: (i) We propose a deep hashing model for sketch retrieval, where a novel hashing loss is specifically designed to accommodate both the abstract and messy traits of sketches.
Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning
However, most neural collective EL methods depend entirely upon neural networks to automatically model the semantic dependencies between different EL decisions, and thus lack guidance from external knowledge.
Learning semantic sentence representations from visually grounded language without lexical knowledge
The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics.
Learning Semantic Representations for Novel Words: Leveraging Both Form and Context
The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space.