Learning to Attend, Copy, and Generate for Session-Based Query Suggestion

Our paper "Learning to Attend, Copy, and Generate for Session-Based Query Suggestion", with Sascha Rothe, Enrique Alfonseca, and Pascal Fleury, has been accepted as a long paper at the International Conference on Information and Knowledge Management (CIKM'17). The paper presents the outcome of my internship at Google Research. \o/

Users interact with search engines during search sessions, trying to direct their search by submitting a sequence of queries. Based on these interactions, search engines provide a prominent feature in which they assist users in formulating queries that better represent their intent during Web search, by providing suggestions for the next query.

Query suggestion might address the need for disambiguation of user queries, making the direction of the search clearer for both the user and the search engine. It might help users arrive at a precise and succinct query when they are not familiar with the specific terminology, or when they lack understanding of the internal vocabulary and structure needed to formulate an effective query. It has been shown that, in general, query suggestion accelerates search satisfaction by letting users either dive deeper into the current search direction or move to a different aspect of the search task.

There has been a lot of research on the task of query suggestion and on similar tasks like query auto-completion. A large body of methods leverages the idea of the "wisdom of crowds" by analyzing search logs, using either query co-occurrences or document click information. However, co-occurrence-based models suffer from data sparsity and lack of coverage for rare or unseen queries. On the other hand, considering the previously issued queries in the session, i.e., the context queries, and their order as a sequence of attempts at finding relevant information is crucial for providing an effective suggestion. Dealing with these highly diverse sessions makes using co-occurrence-based models almost impossible, as the sketch below illustrates.
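To see the coverage problem directly, here is a minimal sketch of such a co-occurrence baseline; the function names and toy sessions are hypothetical, not from our paper. The model can only suggest follow-ups it has already observed, so an unseen query yields nothing.

```python
from collections import defaultdict

def build_cooccurrence_table(sessions):
    """Count how often next_query directly follows prev_query in a session."""
    table = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for prev_query, next_query in zip(session, session[1:]):
            table[prev_query][next_query] += 1
    return table

def suggest(table, query, k=3):
    """Return the k most frequent observed follow-ups, if any."""
    followers = table.get(query, {})
    return sorted(followers, key=followers.get, reverse=True)[:k]

sessions = [
    ["bob dylan", "forever young dylan", "dylan photo"],
    ["bob dylan", "bob dylan albums"],
]
table = build_cooccurrence_table(sessions)
print(suggest(table, "bob dylan"))        # observed: follow-ups are returned
print(suggest(table, "dylan biography"))  # unseen query: no suggestion at all
```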

Sessions are driven by query reformulation: users modify existing queries in order to pursue new search results. Taking the structure of the context queries into account is important, as query suggestion is tightly tied to the understanding of query reformulation behavior. A good query suggestion system should be able to reproduce natural reformulation patterns from users. There are several patterns in query reformulation, such as term addition, removal, and retention. It has been shown that retained terms make up a large proportion of query reformulations in search sessions: on average, 62% of the terms in a query are retained from the preceding queries, and more than 39% of users repeat at least one term from their previous query. Moreover, retained terms are clearly core terms indicating the user's information need, and hence are usually discriminative terms and entities. Based on statistics from the AOL query log, more than 67% of the retained terms in sessions come from the bottom 10% of terms ordered by frequency. (The snippet below sketches how such a retention rate can be measured.)
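To make the retention statistic concrete, here is a toy sketch of how one could measure it over a session; the percentages quoted above come from the paper's analysis of real query logs, not from this snippet.

```python
def retention_rate(session):
    """Average fraction of a query's terms retained from the preceding query."""
    rates = []
    for prev_query, query in zip(session, session[1:]):
        prev_terms = set(prev_query.split())
        terms = query.split()
        if terms:
            rates.append(sum(t in prev_terms for t in terms) / len(terms))
    return sum(rates) / len(rates) if rates else 0.0

# "dylan" is retained in both reformulations of this toy session.
print(retention_rate(["bob dylan", "forever young dylan", "dylan photo"]))
```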

The recent success of sequence-to-sequence (seq2seq) models, in which recurrent neural networks (RNNs) both read and freely generate text, makes it possible to generate the next query by reading the previously issued queries in the session. Although generic seq2seq models are promising at generating text, they have some shortcomings in the task of query suggestion. The first problem with directly employing a generic seq2seq model for query suggestion is that it treats the input data as a flat sequence of words, ignoring query-level information. To address this, Sordoni et al.¹ proposed a context-aware seq2seq model in which they use a hierarchical architecture (sketched below) to encode the previously issued queries in the session and generate the most likely sequence of words as the next query. The second shortcoming of a generic word-based seq2seq model is that it is unable to deal with out-of-vocabulary (OOV) words. Besides, these models are less likely to generate terms with very low frequency. This makes them unable to effectively model term retention, which is the most common reformulation pattern in next-query generation.
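For intuition, here is a toy numpy sketch of the hierarchical encoding idea: a word-level RNN summarizes each query, and a query-level RNN summarizes the resulting sequence of query vectors. It uses a plain tanh RNN with random weights for brevity, not the GRU-based architecture of Sordoni et al.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden/embedding size

def rnn_encode(vectors, W, U):
    """Plain tanh RNN; the final hidden state summarizes the sequence."""
    h = np.zeros(D)
    for x in vectors:
        h = np.tanh(W @ x + U @ h)
    return h

# Separate weights for the word-level and the query-level encoders.
W_w, U_w = rng.standard_normal((D, D)), rng.standard_normal((D, D))
W_q, U_q = rng.standard_normal((D, D)), rng.standard_normal((D, D))

# Hypothetical session: each query is a list of (random) word embeddings.
session = [[rng.standard_normal(D) for _ in query.split()]
           for query in ["bob dylan", "forever young dylan", "dylan photo"]]

# The word-level RNN encodes each query into a vector; the query-level RNN
# then encodes the sequence of query vectors into a session state.
query_states = [rnn_encode(words, W_w, U_w) for words in session]
session_state = rnn_encode(query_states, W_q, U_q)
print(session_state.shape)  # (8,): a fixed-size summary of the whole session
```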

In our paper, we propose an architecture that addresses these two issues in the context of session-based query suggestion. We augment the standard seq2seq model with a query-aware attention mechanism that enables the model to Attend to the promising scope of the session for generating the next query. Furthermore, we incorporate a copy mechanism by adding a copier component that lets the decoder Copy terms from the session context, which improves performance by modeling term retention and handling OOVs. The model still has the ability to Generate new words through a generator component. Our model, which we call ACG in the rest of the paper, is trained in a multi-objective learning process. The snippet below gives a toy numeric sketch of how copying and generating can be mixed.
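The following toy computation sketches how a copier and a generator can be combined at a single decoding step, in the spirit of pointer-generator style decoders: a learned copy probability mixes the generator's vocabulary distribution with a copy distribution induced by the word-level attention. The numbers and the exact parameterization are illustrative, not the ones used in ACG.

```python
import numpy as np

vocab = ["<OOV>", "bob", "photo", "bio"]            # toy generator vocabulary
session_words = ["bob", "dylan", "dylan", "photo"]  # flattened session context

attention = np.array([0.6, 0.2, 0.1, 0.1])  # word-level attention (sums to 1)
p_gen = np.array([0.1, 0.5, 0.3, 0.1])      # generator distribution over vocab
p_copy = 0.7                                # learned P(copy) at this time step

# Mix the two distributions over the union of the vocabulary and the
# session words; attention mass on repeated words accumulates.
final = {w: (1 - p_copy) * p for w, p in zip(vocab, p_gen)}
for w, a in zip(session_words, attention):
    final[w] = final.get(w, 0.0) + p_copy * a

# "dylan" is OOV for the generator, yet gets probability mass via copying.
print(sorted(final.items(), key=lambda kv: -kv[1]))
```

Note that the copy path is the only way such a decoder can emit an OOV word like dylan, which is exactly the behavior in the walkthrough below.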

Example of generating a suggestion query given the previous queries in the session. The suggestion query is generated over three time steps. The heatmap indicates the attention, red for query-level attention and blue for word-level attention. The pie chart shows whether the network decides to copy or to generate.

The above figure illustrates an example² of the output of our model as the suggestion for the next query, given the previously submitted queries in a session. This example session is composed of three queries, bob dylan → forever young dylan → dylan photo, submitted sequentially. Our model outputs the sequence of words bob, dylan, and bio. At each time step, the heatmap of the query-level attention (red) and word-level attention (blue) is illustrated. Furthermore, the output of the copier, the output of the generator, and the probability with which the network decides to copy a term from the previous queries or to generate a new term are given for each time step. At time step #1, the first query in the session has the highest query-level attention, and within this query the word bob has the highest word-level attention. The outputs of both the copier and the generator are the same, but the network decides to copy the term bob (probably from the first query). At time step #2, dylan is an OOV, so the output of the generator is the ⟨OOV⟩ token, and based on the learned attention the network decides to copy dylan from the queries in the session. At time step #3, the last query in the session has the highest query-level attention, and within it the term photo has the highest word-level attention; here the network decides to generate the new term bio.

Besides proposing a seq2seq model that learns to effectively attend, copy, and generate for the task of session-based query suggestion, we introduce new metrics for evaluating the output of generative models on this task in terms of their ability to generate good suggestions, not their ability to discriminate good suggestions. We train and evaluate ACG on the AOL query log data and compare it to state-of-the-art models, both in terms of the ability to discriminate and the ability to generate. The results suggest that ACG, as a discriminative model, is able to effectively score good candidates, and, as a generative model, generates better queries than the baseline models.

For more details on the results, the analysis, and the evaluation paradigm we introduced, please take a look at our paper:

  1. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. 553–562.
  2. This example is not from real data.