Learning Semantic Representations
13 papers with code • 0 benchmarks • 1 dataset
Most implemented papers
Multilingual Models for Compositional Distributed Semantics
We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings.
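As a rough illustration of the joint-space idea, the sketch below composes sentence vectors by summing word embeddings in each language and trains them with a margin-based loss over parallel sentence pairs. The vocabulary sizes, additive composition, and hyperparameters are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of joint-space bilingual sentence embeddings (illustrative only).
# Assumes parallel sentence pairs; composition is a simple sum of word vectors.
import torch
import torch.nn as nn

class BilingualComposer(nn.Module):
    def __init__(self, vocab_en, vocab_de, dim=128):
        super().__init__()
        self.emb_en = nn.Embedding(vocab_en, dim)
        self.emb_de = nn.Embedding(vocab_de, dim)

    def forward(self, en_ids, de_ids):
        # Compose each sentence by summing its word embeddings.
        return self.emb_en(en_ids).sum(dim=1), self.emb_de(de_ids).sum(dim=1)

def hinge_loss(src, tgt, margin=1.0):
    # Pull aligned sentence pairs together; push each source sentence away
    # from a shuffled (non-parallel) target by at least `margin`.
    pos = (src - tgt).pow(2).sum(dim=1)
    neg = (src - tgt.roll(1, dims=0)).pow(2).sum(dim=1)
    return torch.clamp(margin + pos - neg, min=0).mean()

model = BilingualComposer(vocab_en=5000, vocab_de=6000)
en = torch.randint(0, 5000, (32, 10))   # toy batch of English sentences
de = torch.randint(0, 6000, (32, 12))   # toy batch of aligned German sentences
loss = hinge_loss(*model(en, de))
loss.backward()
```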
Learning Semantic Representations for Unsupervised Domain Adaptation
Prior domain adaptation methods address this problem by aligning the global distribution statistics between the source and target domains, but they ignore the semantic information contained in individual samples; e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain.
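One common way to bring such class-level semantics into adaptation is to align per-class feature centroids across domains. The sketch below shows a hypothetical version of such a loss, assuming source labels and target pseudo-labels are available; it is not the paper's exact objective.

```python
# Illustrative sketch of class-level centroid alignment for domain adaptation.
# Assumes source labels and target pseudo-labels; names are hypothetical.
import torch

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    loss = src_feats.new_zeros(())
    for c in range(num_classes):
        src_c = src_feats[src_labels == c]
        tgt_c = tgt_feats[tgt_pseudo == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue  # skip classes missing from this batch
        # Align per-class centroids so that, e.g., "backpack" features in the
        # target domain land near "backpack" features in the source domain.
        loss = loss + (src_c.mean(0) - tgt_c.mean(0)).pow(2).sum()
    return loss

src = torch.randn(64, 256); src_y = torch.randint(0, 10, (64,))
tgt = torch.randn(64, 256); tgt_y = torch.randint(0, 10, (64,))   # pseudo-labels
print(centroid_alignment_loss(src, src_y, tgt, tgt_y, num_classes=10))
```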
Learning Semantic Representations for Novel Words: Leveraging Both Form and Context
The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space.
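A minimal sketch of this setting is shown below: a novel word's embedding is estimated by mixing a form-based vector (hashed character n-grams) with the average of the pre-trained embeddings of its context words. The hashing scheme, mixing weight, and toy vocabulary are assumptions for illustration, not the paper's model.

```python
# Toy sketch of embedding a novel word from its surface form (character n-grams)
# and its sentence context, assuming a pre-trained word embedding matrix.
import numpy as np

rng = np.random.default_rng(0)
dim, n_buckets = 50, 1000
word_vecs = {"the": rng.normal(size=dim), "patient": rng.normal(size=dim),
             "received": rng.normal(size=dim)}          # stand-in for pre-trained embeddings
ngram_vecs = rng.normal(size=(n_buckets, dim))          # embeddings for hashed char n-grams

def form_embedding(word, n=3):
    grams = [word[i:i + n] for i in range(len(word) - n + 1)] or [word]
    return np.mean([ngram_vecs[hash(g) % n_buckets] for g in grams], axis=0)

def context_embedding(context_words):
    known = [word_vecs[w] for w in context_words if w in word_vecs]
    return np.mean(known, axis=0) if known else np.zeros(dim)

def novel_word_embedding(word, context, alpha=0.5):
    # Interpolate the form-based and context-based estimates into the
    # existing embedding space.
    return alpha * form_embedding(word) + (1 - alpha) * context_embedding(context)

vec = novel_word_embedding("dalteparin", ["the", "patient", "received"])
print(vec.shape)  # (50,)
```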
Learning semantic sentence representations from visually grounded language without lexical knowledge
The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics.
Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning
However, most neural collective EL methods depend entirely on neural networks to automatically model the semantic dependencies between different EL decisions, and thus lack guidance from external knowledge.
On Learning Semantic Representations for Million-Scale Free-Hand Sketches
Specifically, we use our dual-branch architecture as a universal representation framework to design two sketch-specific deep models: (i) We propose a deep hashing model for sketch retrieval, where a novel hashing loss is specifically designed to accommodate both the abstract and messy traits of sketches.
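To make the hashing idea concrete, the sketch below relaxes binary codes with a tanh head and combines a pairwise similarity term with a quantization penalty that pushes codes toward ±1; the specific loss and hyperparameters are illustrative, not the paper's sketch-specific hashing loss.

```python
# Rough sketch of a hashing head for retrieval: real-valued codes are pushed
# toward {-1, +1} with a quantization penalty plus a pairwise similarity loss.
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, in_dim=512, bits=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, bits)

    def forward(self, feats):
        return torch.tanh(self.proj(feats))   # relaxed binary codes in (-1, 1)

def hashing_loss(codes, labels, lam=0.1):
    sim = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()   # 1 if same class
    inner = codes @ codes.t() / codes.size(1)                    # normalized code similarity
    pair_loss = ((inner - (2 * sim - 1)) ** 2).mean()            # match +1 / -1 targets
    quant_loss = (codes.abs() - 1).pow(2).mean()                 # push codes toward ±1
    return pair_loss + lam * quant_loss

head = HashHead()
feats = torch.randn(16, 512)             # e.g., CNN features of sketches (toy data)
labels = torch.randint(0, 5, (16,))
loss = hashing_loss(head(feats), labels)
loss.backward()
```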
Semantic sentence similarity: size does not always matter
This study addresses the question of whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge.
Learning cortical representations through perturbed and adversarial dreaming
We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs).
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning
Machine learning-based program analysis methods use variable name representations for a wide range of tasks, such as suggesting new variable names and bug detection.
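Contrastive pre-training of this kind typically treats two names for the same variable as a positive pair and other names in the batch as negatives. Below is a minimal InfoNCE-style sketch under that assumption; the encoder outputs and pair construction are placeholders rather than VarCLR's actual pipeline.

```python
# Minimal InfoNCE-style contrastive sketch over variable-name pairs.
# The encoder and the source of positive pairs are assumed, not VarCLR's code.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    # anchor[i] and positive[i] encode two names of the same variable;
    # every other row in the batch serves as a negative.
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

enc_a = torch.randn(32, 128, requires_grad=True)   # encoder output for, e.g., "cnt"
enc_b = torch.randn(32, 128, requires_grad=True)   # encoder output for, e.g., "count"
loss = info_nce(enc_a, enc_b)
loss.backward()
```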
Modeling User Behavior with Graph Convolution for Personalized Product Search
Our approach can be seamlessly integrated with existing latent space based methods and be potentially applied in any product retrieval method that uses purchase history to model user preferences.
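For intuition, the toy sketch below performs one graph-convolution step over a user-item purchase graph, refining user embeddings with the items they bought; the adjacency construction, dimensions, and single-layer update are illustrative assumptions rather than the paper's architecture.

```python
# Toy sketch of one graph-convolution step over a user-item purchase graph
# (illustrative assumptions only: random toy data, a single linear layer).
import torch
import torch.nn as nn

users = torch.randn(100, 64)                     # user embeddings
items = torch.randn(500, 64)                     # item embeddings
adj = (torch.rand(100, 500) < 0.02).float()      # adj[u, i] = 1 if user u purchased item i

layer = nn.Linear(64, 64)
deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
msg = adj @ items / deg                          # mean of purchased-item embeddings per user
users_refined = torch.relu(layer(msg)) + users   # residual update of user representations
print(users_refined.shape)                       # torch.Size([100, 64])
```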