Learning Semantic Representations

6 papers with code • 0 benchmarks • 1 dataset


Greatest papers with code

Learning Semantic Representations for Unsupervised Domain Adaptation

Mid-Push/Moving-Semantic-Transfer-Network ICML 2018

Prior domain adaptation methods address this problem by aligning global distribution statistics between the source and target domains, but they ignore the semantic information contained in individual samples; e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain.

Learning Semantic Representations • Unsupervised Domain Adaptation
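The semantic transfer idea sketched in the abstract — aligning per-class feature centroids across domains rather than only global statistics — can be illustrated with a minimal NumPy sketch. The function names, the fixed momentum value, and the use of a squared-distance loss are illustrative assumptions, not taken from the paper's released code:

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class; zero vector for classes absent
    from the current batch."""
    cents = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def semantic_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo,
                            prev_src, prev_tgt, num_classes, momentum=0.7):
    """Update moving-average class centroids for each domain, then return
    the mean squared distance between matching source/target centroids.
    Target labels are pseudo-labels, since the target domain is unlabeled."""
    cur_src = class_centroids(src_feats, src_labels, num_classes)
    cur_tgt = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    new_src = momentum * prev_src + (1 - momentum) * cur_src
    new_tgt = momentum * prev_tgt + (1 - momentum) * cur_tgt
    loss = np.mean(np.sum((new_src - new_tgt) ** 2, axis=1))
    return loss, new_src, new_tgt
```

In a real training loop this loss would be added to the classification and domain-adversarial objectives, with the returned centroids carried over to the next batch.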

Multilingual Models for Compositional Distributed Semantics

karlmoritz/bicvm ACL 2014

We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings.

Cross-Lingual Document Classification • Document Classification +3
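The joint-space idea behind this paper — composing sentence embeddings in two languages and training parallel sentences to lie close together — can be sketched with a noise-contrastive hinge loss. Additive composition and the fixed margin here are simplifying assumptions for illustration; the paper explores richer composition functions:

```python
import numpy as np

def sentence_embedding(word_vecs, sentence):
    """Additive composition: the sentence embedding is the sum of its
    word vectors (the simplest composition function)."""
    return np.sum([word_vecs[w] for w in sentence], axis=0)

def bilingual_hinge_loss(word_vecs_a, word_vecs_b,
                         sent_a, sent_b, noise_b, margin=1.0):
    """A parallel sentence pair should be closer in the joint space than
    a randomly drawn noise sentence, by at least `margin`."""
    ea = sentence_embedding(word_vecs_a, sent_a)   # source language
    eb = sentence_embedding(word_vecs_b, sent_b)   # aligned translation
    en = sentence_embedding(word_vecs_b, noise_b)  # negative sample
    d_pos = np.sum((ea - eb) ** 2)
    d_neg = np.sum((ea - en) ** 2)
    return max(0.0, margin + d_pos - d_neg)
```

Minimizing this loss over a parallel corpus pushes translations together and unrelated sentences apart, yielding embeddings usable for cross-lingual document classification.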

On Learning Semantic Representations for Million-Scale Free-Hand Sketches

PengBoXiangShang/EdgeMap345C_Dataset 7 Jul 2020

Specifically, we use our dual-branch architecture as a universal representation framework to design two sketch-specific deep models: (i) We propose a deep hashing model for sketch retrieval, where a novel hashing loss is specifically designed to accommodate both the abstract and messy traits of sketches.

Learning Semantic Representations • Zero-Shot Learning

Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning

DeepLearnXMU/RRWEL 20 Jun 2019

However, most neural collective EL methods depend entirely on neural networks to automatically model the semantic dependencies between different EL decisions, and thus lack guidance from external knowledge.

Entity Disambiguation • Entity Linking +1

Learning Semantic Representations for Novel Words: Leveraging Both Form and Context

timoschick/form-context-model 9 Nov 2018

The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space.

Learning Semantic Representations • Word Embeddings
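The problem setting described above — embedding a novel word into a pre-induced embedding space using both its surface form and its observed contexts — can be sketched as a weighted combination of a character-n-gram embedding and an average context embedding. The helper names and the fixed mixing weight `alpha` are illustrative; the paper learns the form/context weighting from data:

```python
import numpy as np

def char_ngrams(word, n=3):
    """Character n-grams of the word with boundary markers, e.g.
    'ab' -> ['<ab', 'ab>']."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def form_embedding(word, ngram_vecs, dim):
    """Surface-form embedding: mean of the known n-gram vectors."""
    grams = [g for g in char_ngrams(word) if g in ngram_vecs]
    if not grams:
        return np.zeros(dim)
    return np.mean([ngram_vecs[g] for g in grams], axis=0)

def context_embedding(contexts, word_vecs, dim):
    """Context embedding: mean vector of all known context words."""
    vecs = [word_vecs[w] for ctx in contexts for w in ctx if w in word_vecs]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

def novel_word_embedding(word, contexts, ngram_vecs, word_vecs, dim, alpha=0.5):
    """Combine form and context evidence; alpha is a fixed illustrative
    weight here, whereas the paper learns a gating function."""
    return (alpha * form_embedding(word, ngram_vecs, dim)
            + (1 - alpha) * context_embedding(contexts, word_vecs, dim))
```

The resulting vector lives in the same space as the pre-trained embeddings, so nearest-neighbor queries against known words can be used to evaluate it.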

Learning semantic sentence representations from visually grounded language without lexical knowledge

DannyMerkx/caption2image 27 Mar 2019

The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics.

Learning Semantic Representations • Semantic Similarity +3