Search Results for author: Bevan Koopman

Found 24 papers, 12 papers with code

Understanding and Mitigating the Threat of Vec2Text to Dense Retrieval Systems

1 code implementation • 20 Feb 2024 • Shengyao Zhuang, Bevan Koopman, Xiaoran Chu, Guido Zuccon

In this paper, we investigate various aspects of embedding models that could influence the recoverability of text using Vec2Text.

Quantization Retrieval
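One mitigation direction consistent with the paper's title and the Quantization tag is transforming embeddings before they are stored, so that nearest-neighbour retrieval still works while exact text inversion becomes harder. The sketch below shows two such illustrative transformations (uniform quantization and Gaussian noise); the parameter values are assumptions, not settings from the paper.

```python
import numpy as np

def quantize_embedding(emb: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Uniformly quantize an embedding to n_bits per dimension (illustrative mitigation)."""
    lo, hi = emb.min(), emb.max()
    levels = 2 ** n_bits - 1
    q = np.round((emb - lo) / (hi - lo + 1e-12) * levels)
    return q / levels * (hi - lo) + lo

def add_noise(emb: np.ndarray, sigma: float = 0.01, seed: int = 0) -> np.ndarray:
    """Add Gaussian noise; a small sigma keeps nearest-neighbour search usable."""
    rng = np.random.default_rng(seed)
    return emb + rng.normal(0.0, sigma, size=emb.shape)

# Toy embedding to show how far each transformation moves the original vector.
emb = np.random.default_rng(1).normal(size=768).astype(np.float32)
print(np.linalg.norm(emb - quantize_embedding(emb)), np.linalg.norm(emb - add_noise(emb)))
```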

ReSLLM: Large Language Models are Strong Resource Selectors for Federated Search

no code implementations • 31 Jan 2024 • Shuai Wang, Shengyao Zhuang, Bevan Koopman, Guido Zuccon

Our ReSLLM method exploits LLMs to drive the selection of resources in federated search in a zero-shot setting.
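At a high level, zero-shot LLM-based resource selection amounts to prompting the model with each resource's description and the query, and asking whether that resource should be searched. The sketch below is a generic illustration of that framing, not the exact ReSLLM prompt; `call_llm` is a hypothetical stand-in for whatever LLM endpoint is available.

```python
from typing import Callable

def select_resources(query: str,
                     resources: dict[str, str],
                     call_llm: Callable[[str], str]) -> list[str]:
    """Pick federated-search resources via a zero-shot LLM judgement (illustrative prompt)."""
    selected = []
    for name, description in resources.items():
        prompt = (
            f"Search resource: {name}\n"
            f"Description: {description}\n"
            f"Query: {query}\n"
            "Should this resource be searched for the query? Answer yes or no."
        )
        if call_llm(prompt).strip().lower().startswith("yes"):
            selected.append(name)
    return selected
```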

TPRF: A Transformer-based Pseudo-Relevance Feedback Model for Efficient and Effective Retrieval

no code implementations • 24 Jan 2024 • Chuting Yu, Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon

This paper considers Pseudo-Relevance Feedback (PRF) methods for dense retrievers in a resource-constrained environment such as that of cheap cloud instances or embedded systems (e.g., smartphones and smartwatches), where memory and CPU are limited and GPUs are not present.

Retrieval

A Reproducibility Study of Goldilocks: Just-Right Tuning of BERT for TAR

1 code implementation • 16 Jan 2024 • Xinyu Mao, Bevan Koopman, Guido Zuccon

In this context, we show that there is no need for further pre-training if a domain-specific BERT backbone is used within the active learning pipeline.

Active Learning TAR +2
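TAR pipelines of this kind wrap a classifier in a continuous active learning loop: train on the documents reviewed so far, score the remaining pool, and send the highest-scoring documents to the reviewer next. The sketch below uses a logistic-regression-over-document-vectors stand-in for the fine-tuned BERT classifier studied in the paper, purely to illustrate that loop.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(pool_vectors: np.ndarray,
                          labels: dict[int, int],
                          batch_size: int = 10) -> list[int]:
    """One TAR round: fit on reviewed documents, return the next batch to review.

    labels maps document index -> 0/1 and must contain both classes.
    """
    reviewed = np.array(sorted(labels))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(pool_vectors[reviewed], [labels[i] for i in reviewed])
    scores = clf.predict_proba(pool_vectors)[:, 1]
    unreviewed = [i for i in range(len(pool_vectors)) if i not in labels]
    # Relevance feedback: review the most-likely-relevant unreviewed documents next.
    return sorted(unreviewed, key=lambda i: scores[i], reverse=True)[:batch_size]
```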

Zero-shot Generative Large Language Models for Systematic Review Screening Automation

no code implementations • 12 Jan 2024 • Shuai Wang, Harrisen Scells, Shengyao Zhuang, Martin Potthast, Bevan Koopman, Guido Zuccon

Systematic reviews are crucial for evidence-based medicine as they comprehensively analyse published research findings on specific questions.

Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking

1 code implementation • 20 Oct 2023 • Shengyao Zhuang, Bing Liu, Bevan Koopman, Guido Zuccon

In the field of information retrieval, Query Likelihood Models (QLMs) rank documents based on the probability of generating the query given the content of a document.

Document Ranking Information Retrieval +3
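Concretely, a zero-shot QLM scores a document by the log-probability a language model assigns to the query tokens when conditioned on the document, usually via a short instruction. The sketch below shows that scoring with Hugging Face transformers; the checkpoint and the prompt wording are placeholders, not the exact setup from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the paper evaluates much larger open-source LLMs.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def query_log_likelihood(query: str, document: str) -> float:
    """Score a document by the log-probability of generating the query given the document."""
    # Illustrative instruction; not the exact prompt template used in the paper.
    prompt = f"Passage: {document}\nPlease write a question based on this passage.\nQuestion:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    query_ids = tokenizer(" " + query, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, query_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # row i predicts token i + 1
    start = prompt_ids.shape[1] - 1                         # first query-token prediction
    token_log_probs = log_probs[start:].gather(1, query_ids[0].unsqueeze(1))
    return token_log_probs.sum().item()

documents = [
    "The koala is an arboreal herbivorous marsupial native to Australia.",
    "Python is a high-level, general-purpose programming language.",
]
query = "where do koalas live"
ranking = sorted(documents, key=lambda d: query_log_likelihood(query, d), reverse=True)
print(ranking[0])
```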

A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models

1 code implementation • 14 Oct 2023 • Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon

Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking.

Document Ranking
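The intuition is that instead of comparing documents one pair at a time (pairwise) or scoring the whole list at once (listwise), a setwise prompt shows a small set of candidates and asks which single one is most relevant, so each inference call eliminates several candidates; repeatedly applying this comparison inside a sorting procedure yields a ranking. A minimal sketch of the selection step is below; `call_llm` is a hypothetical stand-in and the prompt is illustrative.

```python
from typing import Callable

def setwise_pick_best(query: str,
                      candidates: list[str],
                      call_llm: Callable[[str], str]) -> int:
    """Ask the LLM which of a small set of passages is most relevant to the query."""
    labelled = "\n".join(f"[{i + 1}] {passage}" for i, passage in enumerate(candidates))
    prompt = (
        f"Query: {query}\n"
        f"Passages:\n{labelled}\n"
        "Which passage is the most relevant to the query? "
        "Answer with the passage number only."
    )
    answer = call_llm(prompt)
    digits = "".join(ch for ch in answer if ch.isdigit())
    return int(digits) - 1 if digits else 0  # index of the selected passage
```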

ChatGPT Hallucinates when Attributing Answers

no code implementations • 17 Sep 2023 • Guido Zuccon, Bevan Koopman, Razia Shaik

We find that ChatGPT provides correct or partially correct answers in about half of the cases (50.6% of the time), but its suggested references only exist 14% of the time.

Attribute

Generating Natural Language Queries for More Effective Systematic Review Screening Prioritisation

1 code implementation • 11 Sep 2023 • Shuai Wang, Harrisen Scells, Martin Potthast, Bevan Koopman, Guido Zuccon

Our best approach is not only viable based on the information available at the time of screening, but also achieves effectiveness similar to using the review's final title.

Natural Language Queries

Longitudinal Data and a Semantic Similarity Reward for Chest X-Ray Report Generation

1 code implementation • 19 Jul 2023 • Aaron Nicolson, Jason Dowling, Bevan Koopman

To improve diagnostic accuracy, we propose a CXR report generator that integrates aspects of the radiologist workflow and is trained with our proposed reward for reinforcement learning.

Face Model Multi-Task Learning +3
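The core of a semantic similarity reward is to score a generated report against the reference radiologist report in embedding space rather than by exact n-gram overlap, and to feed that score back as the reinforcement learning reward. The sketch below uses a general-purpose sentence encoder as a stand-in; the paper's reward is built on a domain-specific encoder.

```python
from sentence_transformers import SentenceTransformer, util

# Stand-in general-purpose encoder; a radiology-specific encoder would be used in practice.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_reward(generated_report: str, reference_report: str) -> float:
    """Cosine similarity between generated and reference reports, used as an RL reward."""
    emb = encoder.encode([generated_report, reference_report], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(semantic_reward(
    "No acute cardiopulmonary abnormality.",
    "The lungs are clear. No acute findings.",
))
```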

Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness

no code implementations • 23 Feb 2023 • Guido Zuccon, Bevan Koopman

Aside from measuring the effectiveness of ChatGPT in this context, we show that the knowledge passed in the prompt can overturn the knowledge encoded in the model; in our experiments, this was to the detriment of answer correctness.

Question Answering
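The experimental manipulation can be illustrated by asking the same health question twice: once on its own and once with a supporting or contrary passage injected into the prompt, then comparing the answers. The sketch below uses the OpenAI chat API as one way to run such a comparison; the model name, prompt wording, and example evidence are illustrative, not the paper's exact protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, evidence: str | None = None, model: str = "gpt-4o-mini") -> str:
    """Ask a health question with or without extra knowledge injected into the prompt."""
    prompt = question if evidence is None else f"{evidence}\n\n{question}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt + " Answer yes or no."}],
    )
    return response.choices[0].message.content

question = "Does cranberry juice help treat urinary tract infections?"
baseline = ask(question)
with_contrary = ask(question, evidence="Some reviews report no benefit of cranberry products.")
```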

Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search?

no code implementations • 3 Feb 2023 • Shuai Wang, Harrisen Scells, Bevan Koopman, Guido Zuccon

The ability of ChatGPT to follow complex instructions and generate queries with high precision makes it a valuable tool for researchers conducting systematic reviews, particularly for rapid reviews, where time is a constraint and a trade-off of higher precision for lower recall is often acceptable.

AgAsk: An Agent to Help Answer Farmer's Questions From Scientific Documents

1 code implementation • 21 Dec 2022 • Bevan Koopman, Ahmed Mourad, Hang Li, Anton van der Vegt, Shengyao Zhuang, Simon Gibson, Yash Dang, David Lawrence, Guido Zuccon

On the basis of these needs, we release an information retrieval test collection comprising real questions, a large collection of scientific documents split into passages, and ground-truth relevance assessments indicating which passages are relevant to each question.

Information Retrieval Retrieval

Automated MeSH Term Suggestion for Effective Query Formulation in Systematic Reviews Literature Search

1 code implementation • 19 Sep 2022 • Shuai Wang, Harrisen Scells, Bevan Koopman, Guido Zuccon

However, identifying the correct MeSH terms to include in a query is difficult: information experts are often unfamiliar with the MeSH database and unsure about the appropriateness of MeSH terms for a query.

How does Feedback Signal Quality Impact Effectiveness of Pseudo Relevance Feedback for Passage Retrieval?

no code implementations • 12 May 2022 • Hang Li, Ahmed Mourad, Bevan Koopman, Guido Zuccon

Pseudo-Relevance Feedback (PRF) assumes that the top results retrieved by a first-stage ranker are relevant to the original query and uses them to improve the query representation for a second round of retrieval.

Passage Retrieval Retrieval
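For dense retrievers, the standard vector-based way to act on that assumption is to mix the original query embedding with the embeddings of the top-k feedback passages before a second retrieval round. A minimal sketch is below; the encoder choice and the fixed alpha-weighted average are illustrative rather than the specific PRF variants compared in the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in dense retriever

def prf_query_embedding(query: str,
                        feedback_passages: list[str],
                        alpha: float = 0.7) -> np.ndarray:
    """Rocchio-style dense PRF: blend the query vector with the feedback-passage centroid."""
    q = encoder.encode(query)
    p = encoder.encode(feedback_passages)  # shape (k, dim)
    return alpha * q + (1 - alpha) * p.mean(axis=0)
```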

Improving Chest X-Ray Report Generation by Leveraging Warm Starting

1 code implementation • 24 Jan 2022 • Aaron Nicolson, Jason Dowling, Bevan Koopman

Our experimental investigation demonstrates that the Convolutional vision Transformer (CvT) ImageNet-21K and the Distilled Generative Pre-trained Transformer 2 (DistilGPT2) checkpoints are best for warm starting the encoder and decoder, respectively.

Text Generation
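Warm starting here means initialising the report generator's image encoder and text decoder from pre-trained checkpoints rather than from scratch. The sketch below loads the two checkpoint families named in the abstract via Hugging Face transformers; the exact hub identifiers are assumptions, and wiring the encoder to the decoder with cross-attention is additional work not shown.

```python
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# Hub identifiers are assumptions based on common naming, not taken from the paper.
cvt_name, gpt_name = "microsoft/cvt-21-384-22k", "distilgpt2"

encoder = AutoModel.from_pretrained(cvt_name)             # CvT image encoder (ImageNet-21K)
tokenizer = AutoTokenizer.from_pretrained(gpt_name)
decoder = AutoModelForCausalLM.from_pretrained(gpt_name)  # DistilGPT2 text decoder
```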

Semantic Search for Large Scale Clinical Ontologies

no code implementations • 1 Jan 2022 • Duy-Hoa Ngo, Madonna Kemp, Donna Truran, Bevan Koopman, Alejandro Metke-Jimenez

Finding concepts in large clinical ontologies can be challenging when queries use different vocabularies.

Ontology Matching
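One common way to bridge that vocabulary mismatch is to embed every concept label and run nearest-neighbour search over the embeddings rather than matching strings. A minimal sketch with a general-purpose sentence encoder is below; the concept labels are illustrative, and a clinically tuned encoder and a full ontology would be used in practice.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in; not a clinical model

concepts = ["Myocardial infarction", "Hypertensive disorder", "Type 2 diabetes mellitus"]
concept_embeddings = encoder.encode(concepts, convert_to_tensor=True)

# A lay query that shares no tokens with the matching concept label.
query_embedding = encoder.encode("heart attack", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, concept_embeddings, top_k=2)[0]
for hit in hits:
    print(concepts[hit["corpus_id"]], round(hit["score"], 3))
```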

Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls

1 code implementation • 25 Aug 2021 • Hang Li, Ahmed Mourad, Shengyao Zhuang, Bevan Koopman, Guido Zuccon

Text-based PRF results show that the use of PRF had a mixed effect on deep rerankers across different datasets.

Retrieval
