Our paper "The Healing Power of Poison: Helpful Non-relevant Documents in Feedback", with Samira Abnar and Jaap Kamps, has been accepted as a short paper at The 25th ACM International Conference on Information and Knowledge Management (CIKM'16). \o/
[perfectpullquote align="full" cite="" link="" color="" class="" size="16"]Often, the only difference between a medicine and a poison is the dose. Some substances are extremely toxic, and therefore, are primarily known as a poison. Yet, even poisons can have medicinal value.
Paracelsus, Father of Toxicology[/perfectpullquote]
Query expansion based on feedback information is one of the classic approaches for improving the performance of information retrieval systems, especially when the user's information need is too complex to express precisely in a few keywords.
True Relevance Feedback (TRF) systems try to enrich the user query using a set of judged documents, whose relevance is either assessed explicitly by the user or inferred implicitly from user behavior. However, this information is not always available. Alternatively, Pseudo Relevance Feedback (PRF) methods, also called blind relevance feedback, assume that the top-ranked documents in the initially retrieved results are all relevant and use them to build the feedback model.
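The PRF idea can be sketched with a simple Rocchio-style expansion, a generic illustration rather than the feedback models used in the paper (the function name and its parameters are hypothetical):

```python
from collections import Counter

def rocchio_expand(query_terms, feedback_docs, alpha=1.0, beta=0.75, top_k=5):
    """Expand a query with the strongest terms from pseudo-relevant docs.

    feedback_docs: list of token lists for the top-ranked documents,
    all assumed relevant (the PRF assumption).
    """
    # Original query term weights.
    weights = Counter({t: alpha for t in query_terms})
    # Centroid of the feedback documents, weighted by beta.
    centroid = Counter()
    for doc in feedback_docs:
        for term, freq in Counter(doc).items():
            centroid[term] += freq / len(doc)
    for term, w in centroid.items():
        weights[term] += beta * w / len(feedback_docs)
    # Keep the original terms plus the top_k strongest expansion terms.
    expansion = [t for t, _ in weights.most_common() if t not in query_terms][:top_k]
    return list(query_terms) + expansion
```

If a judged non-relevant document sits among `feedback_docs`, its terms leak into the expanded query, which is exactly the situation the paper studies.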
Normally, feedback documents annotated as relevant are considered beneficial for feedback, while feedback documents annotated as non-relevant are expected to be poisonous, i.e. to decrease the performance of feedback systems when used as positive feedback. Based on this assumption, some TRF methods use non-relevant documents as negative feedback, and some PRF methods try to avoid them. For example, some PRF methods attempt to detect non-relevant documents in order to be robust against their noise, or use only part of their content, such as selected passages, in the feedback procedure. Although PRF methods do use non-relevant documents, they do not directly intend to exploit them as helpful documents. In other words, most of the time, removing non-relevant documents from the feedback set of PRF methods leads to better performance.
However, it has been shown that the assumption that all relevant documents improve the performance of feedback systems as positive feedback is not always true: sometimes even relevant documents act like “poison pills” and decrease the performance. As a counterpart, we speculate that non-relevant documents might sometimes be helpful as positive feedback. Thus, in this research, we investigate the potential healing power of poisonous documents.
The question is “Do we really need to think of dealing with non-relevant documents when we are only taking top-scored documents into consideration for the feedback?”
Based on an analysis of standard test collections, the answer is: yes, we do, because non-relevant documents are very prevalent in the top rank positions.
Figure 1 depicts the percentage of topics with different ratios of non-relevant documents in the top-10 results in two standard TREC test collections: Robust04 and WT10G. It shows, for instance, that about 30% of topics have seven or eight explicitly judged non-relevant documents in their top-10 results in both datasets.
Figure 2 demonstrates the percentage of topics with non-relevant documents at different ranks. For instance, in WT10G the top-ranked document is non-relevant in more than 50% of topics, and in both datasets the document at rank two is non-relevant in more than 50% of topics. So, in general, there is a high probability of hitting a non-relevant document in the top rank positions, and ignoring their potential helpfulness means turning a blind eye to a lot of useful information.
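An analysis like the one behind Figures 1 and 2 can be sketched as follows, assuming standard TREC qrels and run file formats (the function name and file paths are hypothetical):

```python
from collections import defaultdict

def nonrel_counts_in_top10(run_path, qrels_path):
    """Per-topic count of explicitly judged non-relevant docs in the top 10.

    Assumes standard TREC formats: qrels lines are
    'topic 0 docid rel' and run lines are
    'topic Q0 docid rank score tag', with ranks starting at 1.
    """
    judgments = {}
    with open(qrels_path) as f:
        for line in f:
            topic, _, docid, rel = line.split()
            judgments[(topic, docid)] = int(rel)

    counts = defaultdict(int)
    with open(run_path) as f:
        for line in f:
            topic, _, docid, rank, _, _ = line.split()
            # Count only documents explicitly judged non-relevant (rel == 0);
            # unjudged documents are ignored.
            if int(rank) <= 10 and judgments.get((topic, docid), -1) == 0:
                counts[topic] += 1
    return counts
```

Aggregating these per-topic counts over all topics yields the distributions plotted in Figure 1.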
We believe that every high-scored retrieved document, whether judged relevant or non-relevant, may contain information that provides a clue to the complex information need of the user. Hence, if an ideal system is able to perfectly control the amount and the way each document contributes to the feedback model for each topic, not only will it not be hurt by a non-relevant document, but it will even be able to take advantage of its information to further improve the performance. Generally, the main aim of this paper is to investigate the helpfulness of highly ranked non-relevant documents, when used in feedback methods, for improving further results. We break this down into the following research questions:
- RQ1. How can a non-relevant document help to improve retrieval performance?
- RQ2. How large is the contribution of helpful non-relevant feedback documents?
- RQ3. Does the helpfulness of the non-relevant documents depend on the quality of the initial retrieved results?
For the sake of this study, we selected the clearest and most explicit examples of helpful non-relevant documents (HNR), i.e. retrieved documents that are judged as non-relevant but whose inclusion in the feedback set leads to an improvement in retrieval performance, utilizing existing state-of-the-art feedback methods. Based on our observations, we divide HNR documents into two groups: Bridge Helpful Non-relevant (BHNR) documents, which are able to improve the performance individually, and Complementary Helpful Non-relevant (CHNR) documents, which further improve the performance when employed together with a set of relevant documents. As pointed out in the introduction, we think that every high-scored retrieved document, whether judged relevant or non-relevant, can help feedback systems improve the performance. However, in this research, we are not able to reveal the healing power of some of them, perhaps because of the design of our experiments or due to shortcomings of the existing feedback methods. Hence, we do not call them “bad” non-relevant documents; instead, we call them Stubborn Non-relevant (SNR) documents.
Based on these types of documents, we addressed our research questions. You can find the details in our paper:
- M. Dehghani, S. Abnar, and J. Kamps. “The Healing Power of Poison: Helpful Non-relevant Documents in Feedback”. To appear in the proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM’16), 2016.