Greatest papers with code

Learning Semantic Representations for Unsupervised Domain Adaptation

ICML 2018 · Mid-Push/Moving-Semantic-Transfer-Network

Prior domain adaptation methods address this problem by aligning global distribution statistics between the source and target domains, but they ignore the semantic information contained in individual samples: e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain.

LEARNING SEMANTIC REPRESENTATIONS · UNSUPERVISED DOMAIN ADAPTATION
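The semantic-alignment idea in the abstract — matching features class by class rather than only matching global statistics — can be illustrated with a per-class centroid loss. The sketch below is illustrative, not the paper's implementation: it assumes feature matrices and (pseudo-)labels are already available, and the function name is made up.

```python
import numpy as np

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo_labels, num_classes):
    """Penalize the squared distance between per-class feature centroids
    of the source domain and the (pseudo-labeled) target domain."""
    loss = 0.0
    for c in range(num_classes):
        src_c = src_feats[src_labels == c]
        tgt_c = tgt_feats[tgt_pseudo_labels == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue  # class absent from this batch; skip it
        loss += np.sum((src_c.mean(axis=0) - tgt_c.mean(axis=0)) ** 2)
    return loss
```

With a loss of this shape, backpack features in the target domain are pulled toward the backpack centroid of the source domain rather than toward whatever happens to be nearby under a purely global alignment.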

On Learning Semantic Representations for Million-Scale Free-Hand Sketches

7 Jul 2020 · PengBoXiangShang/EdgeMap345C_Dataset

Specifically, we use our dual-branch architecture as a universal representation framework to design two sketch-specific deep models: (i) We propose a deep hashing model for sketch retrieval, where a novel hashing loss is specifically designed to accommodate both the abstract and messy traits of sketches.

LEARNING SEMANTIC REPRESENTATIONS · ZERO-SHOT LEARNING

Neural Collective Entity Linking Based on Recurrent Random Walk Network Learning

20 Jun 2019 · DeepLearnXMU/RRWEL

However, most neural collective EL methods depend entirely on neural networks to automatically model the semantic dependencies between different EL decisions, and thus lack guidance from external knowledge.

ENTITY DISAMBIGUATION · ENTITY LINKING · LEARNING SEMANTIC REPRESENTATIONS

Learning Semantic Representations for Novel Words: Leveraging Both Form and Context

9 Nov 2018 · timoschick/form-context-model

The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space.

LEARNING SEMANTIC REPRESENTATIONS · WORD EMBEDDINGS
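A simple baseline for the context side of this setting is to place a novel word at the average of the embeddings of the known words it co-occurs with; the form-context model goes further by also combining this context evidence with subword (form) features. A minimal sketch of the context-only baseline, using a toy embedding table (the function name and data are illustrative, not from the repository):

```python
import numpy as np

def embed_novel_word(contexts, embeddings):
    """Estimate an embedding for an unseen word by averaging the
    embeddings of the known words observed in its contexts."""
    vecs = [embeddings[w] for ctx in contexts for w in ctx if w in embeddings]
    if not vecs:
        raise ValueError("no known context words to average")
    return np.mean(vecs, axis=0)

# Toy pretrained embedding space; unknown context words are skipped.
emb = {"red": np.array([1.0, 0.0]), "blue": np.array([0.0, 1.0])}
vec = embed_novel_word([["red", "blue"], ["red"]], emb)
```

The averaged vector lives in the same induced embedding space as the pretrained vectors, which is exactly the property the problem setting above requires.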

Learning semantic sentence representations from visually grounded language without lexical knowledge

27 Mar 2019 · DannyMerkx/caption2image

The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence-level semantics.

LEARNING SEMANTIC REPRESENTATIONS · SEMANTIC SIMILARITY · SEMANTIC TEXTUAL SIMILARITY · SENTENCE EMBEDDINGS · WORD EMBEDDINGS