no code implementations • 15 Nov 2023 • Muhammad Waleed Gondal, Jochen Gast, Inigo Alonso Ruiz, Richard Droste, Tommaso Macri, Suren Kumar, Luitpold Staudigl
Large vision-language representation learning models like CLIP have demonstrated impressive performance for zero-shot transfer to downstream tasks while largely benefiting from inter-modal (image-text) alignment via contrastive objectives.
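The inter-modal contrastive objective mentioned here can be sketched as a symmetric InfoNCE loss over a batch of paired image and text embeddings (a minimal numpy illustration of the general CLIP-style objective, not the paper's exact implementation; the temperature value is an assumption):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image/text embeddings.

    image_emb, text_emb: (N, D) arrays; row i of each forms a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature       # (N, N) similarity matrix
    labels = np.arange(len(img))             # matched pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image->text and text->image classification losses
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss pulls each image embedding toward its paired caption and pushes it away from the other captions in the batch, which is the inter-modal alignment that zero-shot transfer relies on.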
no code implementations • 28 Aug 2021 • Yuan Gao, Lok Hin Lee, Richard Droste, Rachel Craik, Sridevi Beriwal, Aris Papageorghiou, Alison Noble
This paper presents a novel approach to automatic fetal brain biometry motivated by needs in low- and medium-income countries.
no code implementations • 8 Jul 2020 • Richard Droste, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
Evaluations for 3 standard plane types show that the model provides a useful guidance signal with an accuracy of 88.8% for goal prediction and 90.9% for action prediction.
2 code implementations • ECCV 2020 • Richard Droste, Jianbo Jiao, J. Alison Noble
We evaluate our method on the video saliency datasets DHF1K, Hollywood-2 and UCF-Sports, and the image saliency datasets SALICON and MIT300.
no code implementations • 28 Feb 2020 • Jianbo Jiao, Richard Droste, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
Therefore, there is significant interest in learning representations from unlabelled raw data.
no code implementations • 22 Jan 2020 • Richard Droste, Pierre Chatelain, Lior Drukker, Harshita Sharma, Aris T. Papageorghiou, J. Alison Noble
In this paper, in contrast, we present a method to automatically discover and localize anatomical landmarks in medical images.
no code implementations • 7 Mar 2019 • Richard Droste, Yifan Cai, Harshita Sharma, Pierre Chatelain, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
Secondly, we train a simple softmax regression on the feature activations of each CNN layer in order to evaluate the representations independently of transfer learning hyper-parameters.
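The per-layer evaluation described here is a linear probe: a softmax regression fitted on frozen feature activations, so the resulting accuracy reflects how linearly separable the classes are in that layer, independent of any fine-tuning hyper-parameters. A minimal sketch under assumed inputs (flattened activations and integer class labels; the learning rate and epoch count are illustrative):

```python
import numpy as np

def linear_probe_accuracy(features, labels, lr=0.1, epochs=200):
    """Fit softmax (multinomial logistic) regression on frozen features
    via batch gradient descent and return training accuracy."""
    n, d = features.shape
    k = labels.max() + 1
    W = np.zeros((d, k))
    b = np.zeros(k)
    onehot = np.eye(k)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # softmax cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    preds = (features @ W + b).argmax(axis=1)
    return (preds == labels).mean()
```

In practice the same probe would be fitted on the activations of each CNN layer in turn, and the layer-wise accuracies compared to judge the quality of the learned representations.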