4 code implementations • 12 May 2018 • Hoel Kervadec, Jose Dolz, Meng Tang, Eric Granger, Yuri Boykov, Ismail Ben Ayed
To the best of our knowledge, the method of [Pathak et al., 2015] is the only prior work that addresses deep CNNs with linear constraints in weakly supervised segmentation.
5 code implementations • 17 Dec 2018 • Hoel Kervadec, Jihene Bouchtiba, Christian Desrosiers, Eric Granger, Jose Dolz, Ismail Ben Ayed
We propose a boundary loss, which takes the form of a distance metric on the space of contours, not regions.
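The core idea can be sketched in a few lines: weight the predicted foreground probabilities by a (precomputed) signed distance map of the ground-truth contour, so that mistakes far from the true boundary cost more. This is an illustrative pure-Python sketch, not the paper's implementation; all names are hypothetical and the distance map is assumed given.

```python
# Sketch of a boundary-style loss: multiply per-pixel foreground
# probabilities by a signed distance to the ground-truth contour
# (negative inside the object, positive outside), then average.
# A good segmentation puts high probability where the distance is
# negative, driving the loss down.

def boundary_loss(probs, signed_dist):
    """Mean of probability * signed distance over all pixels (flat lists)."""
    assert len(probs) == len(signed_dist)
    return sum(p * d for p, d in zip(probs, signed_dist)) / len(probs)

# Toy 1D "image": the object occupies the two middle pixels.
signed_dist = [2.0, 1.0, -1.0, -1.0, 1.0, 2.0]   # negative inside the object
good = [0.0, 0.1, 0.9, 0.9, 0.1, 0.0]            # prediction matches object
bad  = [0.9, 0.9, 0.1, 0.1, 0.9, 0.9]            # inverted prediction
```

With these toy inputs, the matching prediction yields a lower (negative) loss than the inverted one, which is the behavior the loss is designed to reward.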
Brain Lesion Segmentation From MRI • Ischemic Stroke Lesion Segmentation • +4
1 code implementation • 8 Apr 2019 • Hoel Kervadec, Jose Dolz, Jing Yuan, Christian Desrosiers, Eric Granger, Ismail Ben Ayed
While sub-optimality is not guaranteed for non-convex problems, this result shows that log-barrier extensions are a principled way to approximate Lagrangian optimization for constrained CNNs via implicit dual variables.
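A minimal sketch of the extended log-barrier for a single constraint of the form z ≤ 0 (paraphrasing the paper's construction; the breakpoint and linear extension are chosen so value and slope match, keeping the penalty finite and differentiable even when the constraint is violated):

```python
import math

def log_barrier_extension(z, t):
    """Extended log-barrier penalty for a constraint z <= 0.

    Uses the standard barrier -(1/t) * log(-z) when z is safely
    negative (z <= -1/t**2), and switches to a linear extrapolation
    beyond that point so violated constraints get a finite gradient
    instead of an infinite barrier. Raising t tightens the
    approximation of the hard constraint."""
    if z <= -1.0 / t ** 2:
        return -math.log(-z) / t
    # Linear extension matching value and slope at z = -1/t**2.
    return t * z - math.log(1.0 / t ** 2) / t + 1.0 / t
```

The implicit dual variable mentioned in the abstract corresponds to the slope of this penalty: for feasible z it is -1/(t*z), and it saturates at t on the linear branch.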
1 code implementation • 10 Apr 2019 • Hoel Kervadec, Jose Dolz, Eric Granger, Ismail Ben Ayed
This study investigates a curriculum-style strategy for semi-supervised CNN segmentation, which devises a regression network to learn image-level information such as the size of a target region.
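One way such image-level size information can supervise a segmentation network is to penalize disagreement between the soft size of the predicted mask and the size estimated by the auxiliary regression network. This is a hypothetical, simplified sketch (function names and normalization are assumptions, not the paper's code):

```python
def size_consistency_loss(probs, predicted_size):
    """Penalize the gap between the soft size of a segmentation
    (sum of per-pixel foreground probabilities) and a target-region
    size estimate from an auxiliary regression network.

    probs: flat list of foreground probabilities in [0, 1].
    predicted_size: scalar size estimate (in pixels)."""
    soft_size = sum(probs)
    # Normalize by the number of pixels so the penalty scale does
    # not depend on image size (a simplifying choice made here).
    return (soft_size - predicted_size) ** 2 / max(len(probs), 1)
```

The loss is zero when the segmentation's soft size agrees with the regressed size, and grows quadratically with the mismatch.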
1 code implementation • 8 Aug 2019 • Mathilde Bateson, Jose Dolz, Hoel Kervadec, Hervé Lombaert, Ismail Ben Ayed
We propose to adapt segmentation networks with a constrained formulation, which embeds domain-invariant prior knowledge about the segmentation regions.
no code implementations • 15 Aug 2019 • Jizong Peng, Hoel Kervadec, Jose Dolz, Ismail Ben Ayed, Marco Pedersoli, Christian Desrosiers
An efficient strategy for weakly-supervised segmentation is to impose constraints or regularization priors on target regions.
no code implementations • MIDL 2019 • Haoyun Liang, Yu Gong, Hoel Kervadec, Jing Yuan, Hairong Zheng, Shanshan Wang
A Laplacian pyramid-based complex neural network, CLP-Net, is proposed to reconstruct high-quality magnetic resonance images from undersampled k-space data.
1 code implementation • MIDL 2019 • Hoel Kervadec, Jose Dolz, Shan-Shan Wang, Eric Granger, Ismail Ben Ayed
In particular, we bring a classical tightness prior into a deep learning setting by imposing a set of constraints on the network outputs.
5 code implementations • 7 May 2020 • Mathilde Bateson, Hoel Kervadec, Jose Dolz, Herve Lombaert, Ismail Ben Ayed
Our formulation is based on minimizing a label-free entropy loss defined over target-domain data, which we further guide with a domain invariant prior on the segmentation regions.
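The two ingredients of this label-free objective can be sketched as follows: a Shannon entropy term over the network's per-pixel class distributions, plus a penalty keeping the predicted class proportions close to a domain-invariant prior. The sketch below is illustrative pure Python; it uses a simple quadratic penalty where the paper uses a divergence-based guidance term.

```python
import math

def entropy_loss(prob_maps):
    """Average Shannon entropy of per-pixel class distributions.
    prob_maps: list of per-pixel probability vectors (each sums to 1).
    Minimizing this pushes predictions toward confident (low-entropy)
    decisions on unlabeled target-domain data."""
    total = 0.0
    for p in prob_maps:
        total += -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / len(prob_maps)

def size_prior_penalty(prob_maps, prior_props):
    """Penalize deviation of the predicted class proportions from a
    prior on the segmentation regions (simplified quadratic version)."""
    n, k = len(prob_maps), len(prior_props)
    props = [sum(p[c] for p in prob_maps) / n for c in range(k)]
    return sum((props[c] - prior_props[c]) ** 2 for c in range(k))
```

Confident predictions lower the entropy term, while the prior penalty prevents the degenerate solution of collapsing every pixel onto one class.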
2 code implementations • CVPR 2021 • Malik Boudiaf, Hoel Kervadec, Ziko Imtiaz Masud, Pablo Piantanida, Ismail Ben Ayed, Jose Dolz
We show that the way inference is performed in few-shot segmentation tasks has a substantial effect on performance -- an aspect often overlooked in the literature in favor of the meta-learning paradigm.
Ranked #3 on Few-Shot Semantic Segmentation on COCO-20i (10-shot)
1 code implementation • 3 May 2021 • Hoel Kervadec, Houda Bahig, Laurent Letourneau-Guillon, Jose Dolz, Ismail Ben Ayed
We also found that shape descriptors can be a valid way to encode anatomical priors about the task, making it possible to leverage expert knowledge without additional annotations.
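Shape descriptors of this kind can be computed directly from a soft (probabilistic) mask, so they remain usable as training signals on network outputs. A hypothetical sketch of two simple descriptors, size and centroid (names and exact definitions are assumptions for illustration):

```python
def soft_descriptors(probs, coords):
    """Shape descriptors of a soft mask that stay well-defined on
    continuous network outputs:
      - size: sum of foreground probabilities;
      - centroid: probability-weighted mean position.
    probs: per-pixel foreground probabilities; coords: (x, y) per pixel."""
    size = sum(probs)
    cx = sum(p * x for p, (x, y) in zip(probs, coords)) / size
    cy = sum(p * y for p, (x, y) in zip(probs, coords)) / size
    return size, (cx, cy)
```

Supervising such descriptors (rather than every pixel) is one way an anatomical prior, e.g. an expected organ size or location, can replace dense annotations.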
1 code implementation • 6 Aug 2021 • Mathilde Bateson, Hoel Kervadec, Jose Dolz, Hervé Lombaert, Ismail Ben Ayed
Our method yields results comparable to several state-of-the-art adaptation techniques, despite having access to much less information, as the source images are entirely absent during our adaptation phase.
no code implementations • 9 Apr 2023 • Hoel Kervadec, Marleen de Bruijne
In the past few years, in the context of fully-supervised semantic segmentation, several losses -- such as cross-entropy and Dice -- have emerged as de facto standards for supervising neural networks.
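For reference, both standard losses can be sketched in a few lines of pure Python for the binary case (textbook formulations, not code from any of the papers above; the smoothing constant is a common convention):

```python
import math

def cross_entropy(probs, labels):
    """Per-pixel binary cross-entropy, averaged over pixels.
    probs: predicted foreground probabilities; labels: 0/1 ground truth."""
    eps = 1e-7  # guards log(0)
    return -sum(
        y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
        for p, y in zip(probs, labels)
    ) / len(probs)

def soft_dice_loss(probs, labels, smooth=1.0):
    """1 minus the soft Dice coefficient between predicted
    probabilities and binary labels; smooth avoids division by zero."""
    inter = sum(p * y for p, y in zip(probs, labels))
    denom = sum(probs) + sum(labels)
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)
```

Cross-entropy averages per-pixel penalties independently, while Dice is a region-overlap measure; the difference matters under weak supervision, which is precisely the point the next entry develops.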
no code implementations • 6 Nov 2023 • Eva Breznik, Hoel Kervadec, Filip Malmberg, Joel Kullberg, Håkan Ahlström, Marleen de Bruijne, Robin Strand
Hence it is intuitively inappropriate for weak supervision, where the ground-truth label may be much smaller than the actual object and a certain amount of false positives (w.r.t.