# Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders

Antonio Valerio Miceli Barone

Current approaches to learning vector representations of text that are compatible across different languages usually require some amount of parallel text, aligned at the word, sentence, or at least document level. We hypothesize, however, that different natural languages share enough semantic structure that it should be possible, in principle, to learn compatible vector representations just by analyzing the monolingual distribution of words...
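To make the idea concrete, the following is a minimal toy sketch (not the paper's implementation) of adversarially matching two monolingual embedding distributions without any parallel data: a linear encoder maps source-space vectors into the target space, a decoder maps them back for a reconstruction (autoencoder) objective, and a deliberately simple linear discriminator tries to tell mapped vectors from real target vectors. All names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 4

# Synthetic "monolingual" data: the target embedding space is an unknown
# rotation of the source space, sampled independently (no aligned pairs).
scales = np.array([2.0, 1.0, 0.5, 0.25])
R_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = rng.normal(size=(n, d)) * scales            # source-language embeddings
Y = (rng.normal(size=(n, d)) * scales) @ R_true  # target-language embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters: encoder W (source -> target space), decoder V (target -> source),
# and a linear discriminator (v, b) scoring "is this a real target vector?".
W = 0.3 * rng.normal(size=(d, d))
V = 0.3 * rng.normal(size=(d, d))
v = 0.01 * rng.normal(size=d)
b = 0.0
lr = 0.05

def recon_loss(W, V):
    E = X @ W @ V - X
    return (E ** 2).mean()

loss0 = recon_loss(W, V)
for step in range(2000):
    Xm = X @ W  # mapped source embeddings
    # Discriminator step: logistic loss, target = 1, mapped = 0.
    p_real = sigmoid(Y @ v + b)
    p_fake = sigmoid(Xm @ v + b)
    grad_v = (Y.T @ (p_real - 1.0) + Xm.T @ p_fake) / n
    grad_b = ((p_real - 1.0).sum() + p_fake.sum()) / n
    v -= lr * grad_v
    b -= lr * grad_b
    # Generator step: fool the discriminator AND reconstruct the source.
    p_fake = sigmoid(X @ W @ v + b)
    dLds = (p_fake - 1.0) / n                    # grad of -log sigmoid(logit)
    grad_W_adv = np.outer(X.T @ dLds, v)
    E = X @ W @ V - X
    grad_W_rec = 2.0 * X.T @ E @ V.T / n
    grad_V = 2.0 * W.T @ X.T @ E / n
    W -= lr * (grad_W_adv + grad_W_rec)
    V -= lr * grad_V

loss1 = recon_loss(W, V)
print(f"reconstruction loss: {loss0:.4f} -> {loss1:.4f}")
```

The adversarial term pulls the mapped source distribution toward the target distribution, while the reconstruction term keeps the mapping information-preserving; the actual model in the paper is richer than this linear sketch, which only illustrates the training signal available without parallel text.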
