1 code implementation • 7 Mar 2025 • Bill Cassidy, Christian Mcbride, Connah Kendrick, Neil D. Reeves, Joseph M. Pappachan, Shaghayegh Raad, Moi Hoon Yap
This paper presents the first study to focus on integrating patient data into a chronic wound segmentation workflow.
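As a hedged illustration of how patient metadata could enter a segmentation workflow, the sketch below conditions image features on a tabular patient vector. The network layout, feature sizes, and the additive fusion are assumptions for illustration, not the architecture used in the paper.

```python
# Illustrative sketch only: fusing tabular patient metadata with image
# features in a tiny segmentation network. Dimensions, layer choices and
# additive fusion are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class FusionSegNet(nn.Module):
    def __init__(self, n_patient_feats=8, base=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base, base, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Project patient data (e.g. age, diabetes duration, ...) to a
        # per-channel bias that conditions the image features.
        self.meta = nn.Sequential(nn.Linear(n_patient_feats, base), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(base, 1, 3, padding=1),
        )

    def forward(self, image, patient):
        feats = self.encoder(image)                       # (B, base, H/2, W/2)
        bias = self.meta(patient)[:, :, None, None]       # (B, base, 1, 1)
        return torch.sigmoid(self.decoder(feats + bias))  # wound probability map

net = FusionSegNet()
mask = net(torch.randn(2, 3, 64, 64), torch.randn(2, 8))
print(mask.shape)  # torch.Size([2, 1, 64, 64])
```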
1 code implementation • 4 Oct 2024 • Bill Cassidy, Christian Mcbride, Connah Kendrick, Neil D. Reeves, Joseph M. Pappachan, Cornelius J. Fernandez, Elias Chacko, Raphael Brüngel, Christoph M. Friedrich, Metib Alotaibi, Abdullah Abdulaziz AlWabel, Mohammad Alderwish, Kuan-Ying Lai, Moi Hoon Yap
This paper presents the first study to focus on darker-skin tones for chronic wound segmentation using models trained only on wound images exhibiting lighter skin.
1 code implementation • 23 Jun 2023 • Samuel William Pewton, Bill Cassidy, Connah Kendrick, Moi Hoon Yap
This paper provides a new guideline for skin lesion analysis with an emphasis on reproducibility.
no code implementations • 1 May 2023 • Md Mahamudul Hasan, Moi Hoon Yap, Md Kamrul Hasan
We propose to reduce the four classes to two, since wounds in the "both" class can be interpreted as the simultaneous occurrence of infection and ischaemia, and wounds in the "none" class as the absence of both.
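The described reduction maps the four wound labels onto two independent binary targets; a minimal sketch (label names and encoding are assumed for illustration):

```python
# Sketch of the class reduction described above: the four DFU labels
# (none, infection, ischaemia, both) become two binary targets.
# Label names and encoding are illustrative assumptions.
LABELS = {"none": (0, 0), "infection": (1, 0),
          "ischaemia": (0, 1), "both": (1, 1)}

def to_binary_targets(four_class_label):
    """Return (infection, ischaemia) flags for a four-class label."""
    return LABELS[four_class_label]

assert to_binary_targets("both") == (1, 1)   # both conditions present
assert to_binary_targets("none") == (0, 0)   # both conditions absent
```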
no code implementations • 25 Apr 2023 • Imran Chowdhury Dipto, Bill Cassidy, Connah Kendrick, Neil D. Reeves, Joseph M. Pappachan, Vishnu Chandrabalan, Moi Hoon Yap
This research investigates the effect of visually similar images within a publicly available diabetic foot ulcer dataset when training deep learning classification networks.
no code implementations • 24 Apr 2023 • Connah Kendrick, Bill Cassidy, Neil D. Reeves, Joseph M. Pappachan, Claire O'Shea, Vishnu Chandrabalan, Moi Hoon Yap
The Diabetic Foot Ulcer Challenge 2022 focused on the task of diabetic foot ulcer segmentation, based on the work completed in previous DFU challenges.
no code implementations • CVPR 2023 • Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Sharib Ali, Vincent Andrearczyk, Marc Aubreville, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Veronika Cheplygina, Marie Daum, Marleen de Bruijne, Adrien Depeursinge, Reuben Dorent, Jan Egger, David G. Ellis, Sandy Engelhardt, Melanie Ganz, Noha Ghatwary, Gabriel Girard, Patrick Godau, Anubha Gupta, Lasse Hansen, Kanako Harada, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Pierre Jannin, Ali Emre Kavur, Oldřich Kodym, Michal Kozubek, Jianning Li, Hongwei Li, Jun Ma, Carlos Martín-Isla, Bjoern Menze, Alison Noble, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Tim Rädsch, Jonathan Rafael-Patiño, Vivek Singh Bawa, Stefanie Speidel, Carole H. Sudre, Kimberlin Van Wijnen, Martin Wagner, Donglai Wei, Amine Yamlahi, Moi Hoon Yap, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Dogu Baran Aydogan, Binod Bhattarai, Louise Bloch, Raphael Brüngel, Jihoon Cho, Chanyeol Choi, Qi Dou, Ivan Ezhov, Christoph M. Friedrich, Clifton Fuller, Rebati Raman Gaire, Adrian Galdran, Álvaro García Faura, Maria Grammatikopoulou, SeulGi Hong, Mostafa Jahanifar, Ikbeom Jang, Abdolrahim Kadkhodamohammadi, Inha Kang, Florian Kofler, Satoshi Kondo, Hugo Kuijf, Mingxing Li, Minh Huan Luu, Tomaž Martinčič, Pedro Morais, Mohamed A. Naser, Bruno Oliveira, David Owen, Subeen Pang, Jinah Park, Sung-Hong Park, Szymon Płotka, Elodie Puybareau, Nasir Rajpoot, Kanghyun Ryu, Numan Saeed, Adam Shephard, Pengcheng Shi, Dejan Štepec, Ronast Subedi, Guillaume Tochon, Helena R. Torres, Helene Urien, João L. Vilaça, Kareem Abdul Wahid, Haojie Wang, Jiacheng Wang, Liansheng Wang, Xiyue Wang, Benedikt Wiestler, Marek Wodzinski, Fangfang Xia, Juanying Xie, Zhiwei Xiong, Sen yang, Yanwu Yang, Zixuan Zhao, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning.
no code implementations • 16 Dec 2022 • Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina, Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage, Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel, Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin Van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang, Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis, Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat, Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng, Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan, Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara Bosticardo, Jack Breen, Mikael Brudfors, Raphael Brüngel, Mariano Cabezas, Alberto Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong Cho, Min-Kook Choi, Chuantao Xie, Dana Cobzas, Julien Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Sergio Escalera, Di Fan, Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida, Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford, Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, Seungbum Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer, Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi, SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin, Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan, Suleiman A. Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao Li, Ling Li, Xingyu Li, Fuyuan Liao, Kuanlun Liao, Arlindo Limede Oliveira, Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan Lu, Imanol Luengo, Christina B. Lund, Huan Minh Luu, Yi Lv, Uzay Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed, Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira, David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park, Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak, Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik, Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen, Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid, Jiacheng Wang, Yifei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu, Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani, Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu, Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao, Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
Of these, 84% were based on standard architectures.
2 code implementations • 26 Apr 2022 • Gongping Chen, Yu Dai, Jianxun Zhang, Moi Hoon Yap
Unlike existing attention mechanisms, the hybrid adaptive attention module guides the network to adaptively select more robust representations in the channel and spatial dimensions to cope with the segmentation of more complex breast lesions.
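For readers unfamiliar with attention over both dimensions, the following is a generic channel-plus-spatial attention sketch in the spirit of modules such as CBAM; it is not the paper's hybrid adaptive attention module.

```python
# Generic channel + spatial attention sketch (CBAM-style), shown only to
# illustrate attention over the two dimensions the abstract mentions;
# not the paper's hybrid adaptive attention module.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel(x)                      # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)              # reweight locations

att = ChannelSpatialAttention(16)
print(att(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```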
1 code implementation • 22 Apr 2022 • Connah Kendrick, Bill Cassidy, Joseph M. Pappachan, Claire O'Shea, Cornelius J. Fernandez, Elias Chacko, Koshy Jacob, Neil D. Reeves, Moi Hoon Yap
This paper demonstrates that image processing using refined contour as ground truth can provide better agreement with machine predicted results.
no code implementations • 2 Jan 2022 • Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Moi Hoon Yap
The RSTL layer easily adapts dual encoders by increasing the unique semantic information through direct communication.
no code implementations • 1 Jan 2022 • Moi Hoon Yap, Connah Kendrick, Neil D. Reeves, Manu Goyal, Joseph M. Pappachan, Bill Cassidy
This paper provides conceptual foundation and procedures used in the development of diabetic foot ulcer datasets over the past decade, with a timeline to demonstrate progress.
no code implementations • 19 Nov 2021 • Bill Cassidy, Connah Kendrick, Neil D. Reeves, Joseph M. Pappachan, Claire O'Shea, David G. Armstrong, Moi Hoon Yap
Diabetic foot ulcer classification systems use the presence of wound infection (bacteria present within the wound) and ischaemia (restricted blood supply) as vital clinical indicators for treatment and prediction of wound healing.
1 code implementation • 18 Oct 2021 • Ricard Durall, Jireh Jam, Dominik Strassel, Moi Hoon Yap, Janis Keuper
We then incorporate the geometry information of a segmentation mask to provide a fine-grained manipulation of facial attributes.
no code implementations • 17 May 2021 • Bill Cassidy, Neil D. Reeves, Joseph M. Pappachan, Naseer Ahmad, Samantha Haycocks, David Gillespie, Moi Hoon Yap
This research proposes a mobile and cloud-based framework for the automatic detection of diabetic foot ulcers and investigates its performance.
no code implementations • 13 May 2021 • Chuin Hong Yap, Moi Hoon Yap, Adrian K. Davison, Connah Kendrick, Jingting Li, Su-Jing Wang, Ryan Cunningham
Facial expression spotting is the preliminary step for micro- and macro-expression analysis.
no code implementations • 7 May 2021 • Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Moi Hoon Yap
It introduces the use of foreground segmentation masks to preserve image fidelity.
no code implementations • 7 Apr 2021 • Moi Hoon Yap, Bill Cassidy, Joseph M. Pappachan, Claire O'Shea, David Gillespie, Neil Reeves
We describe the data preparation of DFUC2021 for ground truth annotation, data curation and data analysis.
1 code implementation • 16 Nov 2020 • David Gillespie, Connah Kendrick, Ian Boon, Cheng Boon, Tim Rattay, Moi Hoon Yap
Deep learning has been identified as a potential new technology for the delivery of precision radiotherapy in prostate cancer, where accurate prostate segmentation helps in cancer detection and therapy.
no code implementations • 7 Oct 2020 • Moi Hoon Yap, Ryo Hachiuma, Azadeh Alavi, Raphael Brüngel, Bill Cassidy, Manu Goyal, Hongtao Zhu, Johannes Rückert, Moshe Olshansky, Xiao Huang, Hideo Saito, Saeed Hassanpour, Christoph M. Friedrich, David Ascher, Anping Song, Hiroki Kajita, David Gillespie, Neil D. Reeves, Joseph Pappachan, Claire O'Shea, Eibe Frank
DFUC2020 provided participants with a comprehensive dataset consisting of 2,000 images for training and 2,000 images for testing.
2 code implementations • 11 Aug 2020 • Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Gee-Sern Hsu, Moi Hoon Yap
We address the problem by proposing a Wasserstein GAN combined with a new reverse mask operator, namely Reverse Masking Network (R-MNet), a perceptual adversarial network for image inpainting.
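One plausible reading of a reverse mask operator: with the mask marking known pixels, its reverse selects the hole, so predictions are kept only where content is missing. A minimal NumPy sketch of that interpretation, not R-MNet's exact operator:

```python
# Minimal reading of a "reverse mask" for inpainting: with mask==1 on
# known pixels, the reversed mask selects only the hole, so the network's
# prediction is kept inside the hole and the original image elsewhere.
# An interpretation for illustration, not R-MNet's exact operator.
import numpy as np

def reverse_mask(mask):
    return 1.0 - mask            # 1 inside the hole, 0 on known pixels

def compose(image, prediction, mask):
    return mask * image + reverse_mask(mask) * prediction

image = np.random.rand(64, 64, 3)
pred = np.random.rand(64, 64, 3)
mask = np.ones((64, 64, 1)); mask[16:48, 16:48] = 0.0   # square hole
print(compose(image, pred, mask).shape)  # (64, 64, 3)
```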
1 code implementation • 24 Apr 2020 • Bill Cassidy, Neil D. Reeves, Joseph M. Pappachan, David Gillespie, Claire O'Shea, Satyan Rajbhandari, Arun G. Maiya, Eibe Frank, Andrew Boulton, David Armstrong, Bijan Najafi, Justina Wu, Moi Hoon Yap
Every 20 seconds, a limb is amputated somewhere in the world due to diabetes.
no code implementations • 6 Mar 2020 • Connah Kendrick, David Gillespie, Moi Hoon Yap
We develop a novel architecture that can be applied to existing latent vector based GAN structures that allows them to generate on-the-fly images of any size.
no code implementations • 11 Jan 2020 • Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Gee-Sern Hsu, Moi Hoon Yap
Additionally, we propose a Wasserstein-Perceptual loss function to preserve colour and maintain realism on a reconstructed image.
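A hedged sketch of how a Wasserstein adversarial term might be combined with a perceptual (VGG-feature) term; the loss weights, VGG16 layer cut-off, and omission of ImageNet input normalisation are assumptions, not the paper's exact Wasserstein-Perceptual loss.

```python
# Hedged sketch: Wasserstein generator term + VGG-feature perceptual term.
# Weights, the VGG16 cut-off, and skipping input normalisation are
# illustrative assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def generator_loss(critic, real, fake, w_adv=1.0, w_perc=10.0):
    adv = -critic(fake).mean()                # Wasserstein generator term
    perc = F.l1_loss(vgg(fake), vgg(real))    # perceptual (feature) term
    return w_adv * adv + w_perc * perc

# Toy critic and images, just to show the call signature.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
real, fake = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(generator_loss(critic, real, fake).item())
```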
2 code implementations • 18 Dec 2019 • Ying He, Su-Jing Wang, Jingting Li, Moi Hoon Yap
Both macro- and micro-expression intervals in CAS(ME)² and SAMM Long Videos are spotted by employing the Main Directional Maximal Difference Analysis (MDMD) method.
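A greatly simplified illustration of the flow-based idea behind MDMD spotting: estimate dense optical flow across a frame gap, find the dominant flow direction, and score each frame by the flow magnitude in that direction; thresholding the score would yield candidate intervals. Parameters and scoring here are assumptions, not the published algorithm.

```python
# Greatly simplified sketch of the flow-difference idea behind MDMD:
# bin optical-flow angles, take the dominant direction, and use its mean
# magnitude as a per-frame score. Not the published algorithm.
import cv2
import numpy as np

def mdmd_like_score(frame_a, frame_b, n_bins=8):
    flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    main = np.bincount(bins.ravel(), minlength=n_bins).argmax()
    return mag[bins == main].mean()   # magnitude in the main direction

a = np.random.randint(0, 255, (128, 128), np.uint8)
b = np.random.randint(0, 255, (128, 128), np.uint8)
print(mdmd_like_score(a, b))
```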
no code implementations • 4 Nov 2019 • Chuin Hong Yap, Connah Kendrick, Moi Hoon Yap
We conduct facial expression spotting using this dataset and compare it with the baseline of MEGC III.
no code implementations • 14 Aug 2019 • Manu Goyal, Neil Reeves, Satyan Rajbhandari, Naseer Ahmad, Chuan Wang, Moi Hoon Yap
We found that our proposed Ensemble CNN deep learning algorithms outperformed handcrafted machine learning approaches on both classification tasks, achieving 90% accuracy for ischaemia classification and 73% for infection classification.
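One common way such an ensemble can be realised, shown as an assumption rather than the paper's exact recipe, is soft voting: average the softmax outputs of several independently trained CNNs.

```python
# Soft-voting ensemble sketch: average softmax outputs of several CNNs.
# The tiny backbones stand in for independently trained networks; an
# assumed realisation, not the paper's exact Ensemble CNN.
import torch
import torch.nn as nn

def tiny_cnn(n_classes=2):
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, n_classes),
    )

ensemble = [tiny_cnn() for _ in range(3)]

@torch.no_grad()
def predict(x):
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble])
    return probs.mean(dim=0).argmax(dim=1)   # e.g. 0 = absent, 1 = present

print(predict(torch.rand(4, 3, 224, 224)))
```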
no code implementations • 2 Feb 2019 • Manu Goyal, Amanda Oakley, Priyanka Bansal, Darren Dancey, Moi Hoon Yap
In this work, we propose the use of fully automated deep learning ensemble methods for accurate lesion boundary segmentation in dermoscopic images.
no code implementations • 26 Dec 2018 • Jingting Li, Catherine Soladié, Renaud Séguier, Su-Jing Wang, Moi Hoon Yap
This paper presents baseline results for the first Micro-Expression Spotting Challenge 2019 by evaluating local temporal pattern (LTP) on SAMM and CAS(ME)².
no code implementations • 27 Jul 2018 • Manu Goyal, Moi Hoon Yap, Saeed Hassanpour
In addition, we developed an automated natural data-augmentation method that uses ROI detection to produce augmented copies of dermoscopic images, applied as a pre-processing step in skin lesion segmentation to further improve the performance of the current state-of-the-art deep learning algorithm.
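A minimal sketch of ROI-driven "natural" augmentation: given a detected lesion bounding box, take progressively wider crops that keep the lesion in view with varying surrounding context. The box format and margin values are assumptions.

```python
# Sketch of ROI-based augmentation: crop around a detected lesion box
# with increasing margins. Box format and margins are assumptions.
import numpy as np

def roi_crops(image, box, margins=(0.1, 0.25, 0.5)):
    """box = (x0, y0, x1, y1) in pixels; returns progressively wider crops."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    bw, bh = x1 - x0, y1 - y0
    crops = []
    for m in margins:
        cx0 = max(0, int(x0 - m * bw)); cy0 = max(0, int(y0 - m * bh))
        cx1 = min(w, int(x1 + m * bw)); cy1 = min(h, int(y1 + m * bh))
        crops.append(image[cy0:cy1, cx0:cx1])
    return crops

img = np.zeros((480, 640, 3), np.uint8)
for c in roi_crops(img, (200, 150, 360, 300)):
    print(c.shape)   # crops grow with the margin
```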
no code implementations • 24 Jul 2018 • Manu Goyal, Jiahua Ng, Moi Hoon Yap
Usually, deep classification networks are used for lesion diagnosis to determine different types of skin lesions.
no code implementations • 7 May 2018 • Walied Merghani, Adrian K. Davison, Moi Hoon Yap
Facial micro-expressions are very brief, spontaneous facial expressions that appear on the face of humans when they either deliberately or unconsciously conceal an emotion.
no code implementations • 1 Jan 2018 • Ezak Ahmad, Manu Goyal, Jamie S. McPhee, Hans Degens, Moi Hoon Yap
This paper presents an end-to-end solution for MRI thigh quadriceps segmentation.
no code implementations • 28 Nov 2017 • Manu Goyal, Neil D. Reeves, Adrian K. Davison, Satyan Rajbhandari, Jennifer Spragg, Moi Hoon Yap
In this paper, we propose the use of traditional computer vision features for detecting foot ulcers in diabetic patients, representing a cost-effective, remote and convenient healthcare solution.
no code implementations • 28 Nov 2017 • Manu Goyal, Moi Hoon Yap, Saeed Hassanpour
Melanoma is clinically difficult to distinguish from common benign skin lesions, particularly melanocytic naevus and seborrhoeic keratosis.
no code implementations • 24 Aug 2017 • Adrian K. Davison, Walied Merghani, Moi Hoon Yap
The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotion-based classification on CASME II.
no code implementations • 6 Aug 2017 • Manu Goyal, Neil D. Reeves, Satyan Rajbhandari, Jennifer Spragg, Moi Hoon Yap
Using 5-fold cross-validation, the proposed two-tier transfer learning FCN models achieve a Dice Similarity Coefficient of 0.794 (±0.104) for the ulcer region, 0.851 (±0.148) for the surrounding skin region, and 0.899 (±0.072) for the combination of both regions.
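For reference, the Dice Similarity Coefficient reported above is computed for binary masks as DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation:

```python
# Dice Similarity Coefficient for binary masks: DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1   # shifted 16-pixel square
print(dice(a, b))   # ~0.56: 9 overlapping pixels out of 16 + 16
```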
no code implementations • 6 Aug 2017 • Omaima FathElrahman Osman, Remah Mutasim Ibrahim Elbashir, Imad Eldain Abbass, Connah Kendrick, Manu Goyal, Moi Hoon Yap
The face was divided into ten predefined regions, and the wrinkles in each region were extracted.
no code implementations • 27 Jul 2017 • Sean Maudsley-Barton, Jamie McPhee, Anthony Bukowski, Daniel Leightley, Moi Hoon Yap
The analysis of human motion as a clinical tool can bring many benefits, such as the early detection of disease and the monitoring of recovery, in turn helping people to lead independent lives.
no code implementations • 15 Dec 2016 • Adrian K. Davison, Cliff Lansley, Choon Ching Ng, Kevin Tan, Moi Hoon Yap
This paper proposes an individualised baseline micro-movement detection method using a 3D Histogram of Oriented Gradients (3D HOG) temporal difference.
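A simplified sketch of the baseline idea: compare each frame's gradient-orientation histogram to an individual's neutral baseline with the Chi-Squared distance and flag frames whose distance peaks. A real 3D HOG aggregates spatio-temporal blocks; this flattened version only illustrates the baseline-difference thresholding, and all parameters are assumptions.

```python
# Simplified illustration of baseline-relative micro-movement detection:
# per-frame gradient-orientation histograms compared to an individual
# baseline via the Chi-Squared distance. A real 3D HOG uses
# spatio-temporal blocks; this is only a sketch of the thresholding idea.
import numpy as np

def hog_like_hist(frame, n_bins=8):
    gy, gx = np.gradient(frame.astype(float))
    ang = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-9)

def chi_square(p, q, eps=1e-9):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

frames = [np.random.rand(64, 64) for _ in range(20)]
baseline = hog_like_hist(frames[0])                 # individual's neutral baseline
scores = np.array([chi_square(hog_like_hist(f), baseline) for f in frames])
threshold = scores.mean() + scores.std()            # assumed threshold rule
print(np.where(scores > threshold)[0])              # candidate movement frames
```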