English abstract: Incorporating topic-level estimation into language models has been shown to be beneficial for information retrieval (IR) models such as cluster-based retrieval and LDA-based document representation. Neural embedding models, such as paragraph vector (PV) models, on the other hand, have shown their effectiveness and efficiency in learning semantic representations of documents and words in multiple Natural Language Processing (NLP) tasks. However, their effectiveness in information retrieval is mostly unknown. In this paper, we study how to effectively use the PV model to improve ad-hoc retrieval. We propose three major improvements over the original PV model to adapt it to the IR scenario: (1) we use a document-frequency-based rather than corpus-frequency-based negative sampling strategy so that the importance of frequent words is not suppressed excessively; (2) we introduce regularization over the document representation to prevent the model from overfitting short documents as learning iterations proceed; and (3) we employ a joint learning objective that considers both document-word and word-context associations to produce better word probability estimation. By incorporating this enhanced PV model into the language modeling framework, we show that it can significantly outperform state-of-the-art topic-enhanced language models.
Download link: https://ciir-publications.cs.umass.edu/pub/web/getpdf.php?id=1227
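
The first improvement in the abstract contrasts a corpus-frequency noise distribution (standard in PV/word2vec negative sampling) with a document-frequency one. Below is a minimal Python sketch of that contrast, not the paper's actual code: the toy corpus, function names, and the 0.75 smoothing exponent (borrowed from word2vec convention) are all assumptions for illustration.

```python
import random
from collections import Counter

def corpus_frequency_distribution(corpus, power=0.75):
    """Standard noise distribution: weight each word by its total occurrence count."""
    tf = Counter(w for doc in corpus for w in doc)
    total = sum(c ** power for c in tf.values())
    return {w: (c ** power) / total for w, c in tf.items()}

def document_frequency_distribution(corpus, power=0.75):
    """IR-oriented variant: weight each word by the number of documents containing it,
    so words that are frequent within a few documents are not over-penalized."""
    df = Counter(w for doc in corpus for w in set(doc))
    total = sum(c ** power for c in df.values())
    return {w: (c ** power) / total for w, c in df.items()}

def draw_negatives(noise_dist, k=5, rng=random):
    """Sample k negative words from the given noise distribution."""
    words = list(noise_dist)
    weights = [noise_dist[w] for w in words]
    return rng.choices(words, weights=weights, k=k)

if __name__ == "__main__":
    corpus = [["retrieval", "model", "model"], ["retrieval", "language"], ["model"]]
    print(corpus_frequency_distribution(corpus))    # "model" weighted by 3 occurrences
    print(document_frequency_distribution(corpus))  # "model" weighted by 2 documents
    print(draw_negatives(document_frequency_distribution(corpus)))
```

Under document-frequency weighting, a word repeated many times inside a single document contributes only once per document to the noise distribution, which is the effect the abstract describes as avoiding excessive suppression of frequent words.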