Universal Transformers

Thanks to Stephan Gouws for his help writing and improving this blog post. Transformers have recently become a competitive alternative to RNNs for a range of sequence modeling tasks. They address a significant shortcoming of RNNs, namely their inherently sequential computation, which prevents parallelization across elements of the input sequence, while still addressing the […]

Learning to Transform, Combine, and Reason in Open-Domain Question Answering

Our paper “Learning to Transform, Combine, and Reason in Open-Domain Question Answering”, with Hosein Azarbonyad, Jaap Kamps, and Maarten de Rijke, has been accepted as a long paper at the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). \o/ We have all come to expect getting direct answers to complex questions from search […]

SIGIR2018 Workshop on Learning From Noisy/Limited Data for IR

We are organizing the “Learning From Noisy/Limited Data for Information Retrieval” workshop, co-located with SIGIR 2018. This is the first edition of the workshop, and its goal is to bring together researchers from industry, where data is plentiful but noisy, with researchers from academia, where data is sparse but clean, to […]

Fidelity-Weighted Learning

Our paper “Fidelity-Weighted Learning”, with Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Schölkopf, has been accepted at the Sixth International Conference on Learning Representations (ICLR2018). \o/ tl;dr Fidelity-weighted learning (FWL) is a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. It modulates the parameter updates to […]

Avoiding Your Teacher’s Mistakes: Training Neural Networks with Controlled Weak Supervision

This post is about a project I did in collaboration with Aliaksei Severyn, Sascha Rothe, and Jaap Kamps during my internship at Google Research. Deep neural networks have shown impressive results in many tasks in computer vision, natural language processing, and information retrieval. However, their success is conditioned on the availability of […]

Learning to Attend, Copy, and Generate for Session-Based Query Suggestion

Our paper “Learning to Attend, Copy, and Generate for Session-Based Query Suggestion”, with Sascha Rothe, Enrique Alfonseca, and Pascal Fleury, has been accepted as a long paper at the International Conference on Information and Knowledge Management (CIKM’17). This paper is an outcome of my internship at Google Research. \o/ Users interact with search engines […]

Share your Model instead of your Data!

Our paper “Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking”, with Hosein Azarbonyad, Jaap Kamps, and Maarten de Rijke, has been accepted at Neu-IR: SIGIR Workshop on Neural Information Retrieval (NeuIR’17). \o/ In this paper, we aim to lay the groundwork for the idea […]

On Search Powered Navigation

Our paper “On Search Powered Navigation”, with Glorianna Jagfeld, Hosein Azarbonyad, Alex Olieman, Jaap Kamps, and Maarten Marx, has been accepted as a short paper at the 3rd ACM International Conference on the Theory of Information Retrieval (ICTIR2017). \o/ Knowledge graphs and other hierarchical domain ontologies hold great promise for complex information seeking tasks, yet their […]

Beating the Teacher: Neural Ranking Models with Weak Supervision

Our paper “Neural Ranking Models with Weak Supervision”, with Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft, has been accepted as a long paper at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR2017). \o/ This paper is the outcome of a pet project during my internship […]