1 code implementation • 24 Oct 2024 • Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, Tom Diethe, Philip Teare, Beatrice Alex, Pasquale Minervini, Amrutha Saseendran
Large Language Models (LLMs) often hallucinate, producing unfaithful or factually incorrect outputs by misrepresenting the provided context or incorrectly recalling internal knowledge.
no code implementations • 4 Oct 2024 • Ahmed Abdulaal, Hugo Fry, Nina Montaña-Brown, Ayodeji Ijishakin, Jack Gao, Stephanie Hyland, Daniel C. Alexander, Daniel C. Castro
Using an off-the-shelf language model, we distil ground-truth reports into radiological descriptions for each SAE feature, which we then compile into a full report for each image, eliminating the need for fine-tuning large models for this task.
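The excerpt describes a pipeline in which a language model distils ground-truth reports into one radiological description per sparse-autoencoder (SAE) feature, and a per-image report is then assembled from the features that fire. Below is a minimal sketch of that assembly step; the feature catalogue, encoder weights, and activation threshold are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical catalogue: one distilled description per SAE feature index,
# produced offline by an off-the-shelf language model from ground-truth reports.
FEATURE_DESCRIPTIONS = {
    3: "mild cardiomegaly",
    17: "no focal consolidation",
    42: "small left pleural effusion",
}

def sae_encode(image_embedding, W_enc, b_enc):
    """Sparse autoencoder encoder: ReLU(W_enc @ x + b_enc)."""
    return np.maximum(W_enc @ image_embedding + b_enc, 0.0)

def compile_report(image_embedding, W_enc, b_enc, threshold=0.1):
    """Collect descriptions of the features that activate and join them into a report."""
    acts = sae_encode(image_embedding, W_enc, b_enc)
    active = np.flatnonzero(acts > threshold)
    findings = [FEATURE_DESCRIPTIONS[i] for i in active if i in FEATURE_DESCRIPTIONS]
    if not findings:
        return "Findings: no catalogued features active."
    return "Findings: " + "; ".join(findings) + "."
```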
no code implementations • 28 Aug 2024 • Ayodeji Ijishakin, Ana Lawry Aguila, Elizabeth Levitis, Ahmed Abdulaal, Andre Altmann, James Cole
Existing harmonization techniques, which use statistical models to remove such effects, have been shown to incompletely remove site effects while also failing to preserve biological variability.
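For context, the statistical harmonization the excerpt refers to typically applies a per-site location-scale correction (ComBat-style). The sketch below shows that kind of adjustment in simplified form; it illustrates the baseline being criticized, not the paper's proposed method, and the data shapes are assumptions.

```python
import numpy as np

def harmonize_location_scale(X, site):
    """Standardize each feature within each acquisition site, then restore the
    pooled mean and scale. X: (n_subjects, n_features); site: (n_subjects,).
    Note: variance that is shared between site and biology is removed too,
    which is the incomplete-preservation problem the excerpt points to."""
    X = np.asarray(X, dtype=float)
    X_adj = X.copy()
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0) + 1e-8
    for s in np.unique(site):
        idx = site == s
        site_mean = X[idx].mean(axis=0)
        site_std = X[idx].std(axis=0) + 1e-8
        X_adj[idx] = (X[idx] - site_mean) / site_std * grand_std + grand_mean
    return X_adj
```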
no code implementations • 19 Jul 2024 • Ayodeji Ijishakin, Adamos Hadjivasiliou, Ahmed Abdulaal, Nina Montana-Brown, Florence Townend, Edoardo Spinelli, Massimo Filippi, Federica Agosta, James Cole, Andrea Malaspina
To our knowledge, this is the first use of normative modelling within a diffusion autoencoder, as well as the first application of normative modelling to ALS.
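A rough sketch of the normative-modelling idea in a learned latent space: fit the distribution of healthy-control codes, then express each patient code as per-dimension deviations. The Gaussian assumption and variable names below are illustrative placeholders; the paper applies normative modelling to a diffusion autoencoder's latents.

```python
import numpy as np

def fit_normative(latents_controls):
    """Estimate the normative (healthy-control) mean and spread per latent dimension."""
    mu = latents_controls.mean(axis=0)
    sigma = latents_controls.std(axis=0) + 1e-8
    return mu, sigma

def deviation_scores(latents_patients, mu, sigma):
    """Per-dimension z-scores; large |z| flags latent dimensions where a
    patient departs from the normative population."""
    return (latents_patients - mu) / sigma
```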
no code implementations • 10 Aug 2023 • Tiantian He, Elinor Thompson, Anna Schroder, Neil P. Oxtoby, Ahmed Abdulaal, Frederik Barkhof, Daniel C. Alexander
We account for the heterogeneity of disease by fitting the model at the individual level, allowing the epicenters and rate of progression to vary among subjects.
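One way to make subject-level fitting concrete is a network diffusion model in which each subject has their own epicenter and spread rate, found by minimizing error against that subject's observed regional burden. The sketch below follows that generic recipe; the graph Laplacian `L`, the time point `t`, the rate grid, and the normalization are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.linalg import expm

def predict_spread(L, epicenter, rate, t):
    """Network diffusion from a single seed region: x(t) = exp(-rate * L * t) @ x0."""
    x0 = np.zeros(L.shape[0])
    x0[epicenter] = 1.0
    return expm(-rate * L * t) @ x0

def fit_subject(L, observed, t=1.0, rates=np.linspace(0.05, 2.0, 40)):
    """Grid search over candidate epicenters and spread rates for one subject.
    `observed` is the subject's regional pathology burden, scaled to [0, 1]."""
    best = (None, None, np.inf)
    for epicenter in range(L.shape[0]):
        for rate in rates:
            pred = predict_spread(L, epicenter, rate, t)
            pred = pred / (pred.max() + 1e-12)  # match the scale of `observed`
            err = np.sum((pred - observed) ** 2)
            if err < best[2]:
                best = (epicenter, rate, err)
    return best  # (epicenter, rate, error) for this subject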
1 code implementation • 5 Jun 2023 • Ayodeji Ijishakin, Ahmed Abdulaal, Adamos Hadjivasiliou, Sophie Martin, James Cole
Therefore, this work contributes to the ongoing development of accurate and interpretable deep learning for medical imaging.