NMT-based Cross-lingual Document Embeddings

29 Jul 2018  ·  Wei Li, Brian Mak ·

This paper investigates a cross-lingual document embedding method that improves upon the current Neural Machine Translation framework-based Document Vector (NTDV, or simply NV). NV is developed with a self-attention mechanism under the neural machine translation (NMT) framework. In NV, each pair of parallel documents in different languages is projected to the same shared layer in the model. However, the resulting pair of NV embeddings is not guaranteed to be similar. This paper further adds a distance constraint to the training objective function of NV so that the two embeddings of a parallel document pair are required to be as close as possible. The new method is called constrained NV (cNV). In a cross-lingual document classification task, cNV performs as well as NV and outperforms other published methods that require forward-pass decoding. Unlike the previous NV, cNV does not need a translator during testing, which makes the method lighter and more flexible.
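The abstract states that cNV augments the NV training objective with a distance constraint pulling the two embeddings of a parallel document pair together, but it does not specify the distance function or its weight. A minimal sketch of such a combined objective, assuming cosine distance and a hypothetical weighting factor `lam` (both choices are assumptions, not details from the paper):

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def cnv_loss(nmt_loss, emb_src, emb_tgt, lam=1.0):
    """Hypothetical cNV-style objective: the usual NMT translation loss
    plus a penalty on the distance between the source-document and
    target-document embeddings of a parallel pair."""
    return nmt_loss + lam * cosine_distance(emb_src, emb_tgt)

# Identical embeddings incur no penalty; orthogonal ones incur the full lam.
loss_same = cnv_loss(2.5, [1.0, 0.0], [1.0, 0.0])   # 2.5
loss_diff = cnv_loss(2.5, [1.0, 0.0], [0.0, 1.0])   # 3.5
```

The design intuition follows the abstract: because training forces the two embeddings toward each other, either side's encoder alone suffices at test time, so no translator is needed.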




