Assessing Wikipedia-Based Cross-Language Retrieval Models

10 Jan 2014 · Benjamin Roth

This work compares concept models for cross-language retrieval. First, we adapt probabilistic Latent Semantic Analysis (pLSA) to multilingual documents. Experiments with different weighting schemes show that a method favoring document pairs of similar length on both language sides gives the best results. Since monolingual and multilingual Latent Dirichlet Allocation (LDA) behave alike when applied to such documents, we use a training corpus built from Wikipedia in which all documents are length-normalized, and obtain improvements over previously reported LDA scores. A second focus of our work is model combination. To this end we include Explicit Semantic Analysis (ESA) in the experiments and observe that ESA is not competitive with LDA in a query-based retrieval task on CLEF 2000 data. Combining machine translation with concept models increases performance by 21.1% MAP compared to machine translation alone. Machine translation, however, relies on parallel corpora, which may not be available for many language pairs. We therefore further explore how much cross-lingual information can be carried over by a specific information source in Wikipedia, namely linked text. The best results are obtained with a language modeling approach, entirely without information from parallel corpora. The need for smoothing raises interesting questions about soundness and efficiency. Link models capture only a certain kind of information and suggest weighting schemes that emphasize particular words; for a combined model, an interesting question is therefore how to integrate different weighting schemes. Using a very simple combination scheme, we obtain results that compare favorably with previously reported results on the CLEF 2000 dataset.
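
The abstract does not spell out the "very simple combination scheme"; the sketch below shows one plausible reading, linear interpolation of normalized per-document scores from a machine-translation retrieval run and a concept-model (e.g., LDA) run. All function names, the min-max normalization, and the interpolation weight alpha are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's exact method): combine an
# MT-based retrieval run with a concept-model run by linearly
# interpolating min-max normalized per-document scores.

def min_max_normalize(scores):
    """Rescale a {doc_id: score} mapping to the [0, 1] range."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def combine_runs(mt_scores, concept_scores, alpha=0.7):
    """Interpolate two per-document score mappings.

    alpha weights the MT-based run, (1 - alpha) the concept-model run;
    documents missing from one run contribute 0 from that run.
    """
    mt = min_max_normalize(mt_scores)
    cm = min_max_normalize(concept_scores)
    return {d: alpha * mt.get(d, 0.0) + (1 - alpha) * cm.get(d, 0.0)
            for d in set(mt) | set(cm)}

# Toy usage: rank documents by the combined score.
mt_run = {"doc1": 12.3, "doc2": 8.7, "doc3": 5.1}
lda_run = {"doc1": 0.42, "doc3": 0.77, "doc4": 0.31}
ranking = sorted(combine_runs(mt_run, lda_run).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

In such a scheme the only free parameter is the interpolation weight, which would typically be tuned on held-out queries; the paper's actual formulation may differ.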

