Search Results for author: Michel de Mathelin

Found 13 papers, 4 papers with code

Articulated Clinician Detection Using 3D Pictorial Structures on RGB-D Data

1 code implementation • 10 Feb 2016 • Abdolrahim Kadkhodamohammadi, Afshin Gangi, Michel de Mathelin, Nicolas Padoy

Proposed methods for the operating room (OR) rely either on foreground estimation using a multi-camera system, which is a challenge in real ORs due to color similarities and frequent illumination changes, or on wearable sensors or markers, which are invasive and therefore difficult to introduce in the room.

Pose Estimation

Single- and Multi-Task Architectures for Surgical Workflow Challenge at M2CAI 2016

no code implementations • 27 Oct 2016 • Andru P. Twinanda, Didier Mutter, Jacques Marescaux, Michel de Mathelin, Nicolas Padoy

On top of these architectures we propose to use two different approaches to enforce the temporal constraints of the surgical workflow: (1) HMM-based and (2) LSTM-based pipelines.

Surgical phase recognition
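The snippet above names the two temporal models but not their configuration. Below is a minimal sketch (not the authors' implementation) of the LSTM-based variant: an LSTM running over per-frame CNN features and emitting a phase label per frame. The feature size (2048), hidden size (512), and number of phases (8) are placeholder assumptions.

    # Minimal sketch (not the authors' code): an LSTM over per-frame CNN
    # features to enforce temporal consistency in surgical phase recognition.
    # Feature size (2048) and number of phases (8) are placeholder assumptions.
    import torch
    import torch.nn as nn

    class PhaseLSTM(nn.Module):
        def __init__(self, feat_dim=2048, hidden_dim=512, num_phases=8):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_phases)

        def forward(self, frame_features):
            # frame_features: (batch, time, feat_dim) extracted by a frame-level CNN
            hidden_states, _ = self.lstm(frame_features)
            return self.classifier(hidden_states)  # per-frame phase logits

    # Example: a 100-frame clip with 2048-d features per frame
    model = PhaseLSTM()
    logits = model(torch.randn(1, 100, 2048))   # -> (1, 100, 8)
    phases = logits.argmax(dim=-1)              # predicted phase per frame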

Single- and Multi-Task Architectures for Tool Presence Detection Challenge at M2CAI 2016

no code implementations • 27 Oct 2016 • Andru P. Twinanda, Didier Mutter, Jacques Marescaux, Michel de Mathelin, Nicolas Padoy

The tool presence detection challenge at M2CAI 2016 consists of identifying the presence/absence of seven surgical tools in the images of cholecystectomy videos.
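Detecting the presence/absence of seven tools per frame is a multi-label classification problem. A minimal sketch under that framing follows; the ResNet-18 backbone is an assumption for illustration and is not the architecture evaluated in the paper.

    # Minimal sketch (not the authors' model): multi-label tool presence
    # detection as 7 independent sigmoid outputs on top of an ImageNet-style
    # backbone. The choice of ResNet-18 is an assumption for illustration only.
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=None)
    backbone.fc = nn.Linear(backbone.fc.in_features, 7)  # one logit per tool

    criterion = nn.BCEWithLogitsLoss()              # presence/absence per tool
    frames = torch.randn(4, 3, 224, 224)            # a batch of video frames
    labels = torch.randint(0, 2, (4, 7)).float()    # 7 binary tool labels

    logits = backbone(frames)
    loss = criterion(logits, labels)
    probs = torch.sigmoid(logits)                   # per-tool presence scores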

A Multi-view RGB-D Approach for Human Pose Estimation in Operating Rooms

1 code implementation • 25 Jan 2017 • Abdolrahim Kadkhodamohammadi, Afshin Gangi, Michel de Mathelin, Nicolas Padoy

In this paper, we propose an approach for multi-view 3D human pose estimation from RGB-D images and demonstrate the benefits of using the additional depth channel for pose refinement beyond its use for the generation of improved features.

3D Human Pose Estimation
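One generic ingredient of RGB-D pose estimation (a sketch of a standard operation, not the paper's multi-view refinement) is back-projecting a detected 2D joint into 3D camera coordinates using the depth map and the pinhole intrinsics; the intrinsic values and pixel coordinates below are hypothetical.

    # Minimal sketch: back-projecting a detected 2D joint to 3D camera
    # coordinates using its depth value and pinhole intrinsics. This is a
    # generic RGB-D operation, not the paper's full multi-view pipeline.
    import numpy as np

    def backproject(u, v, depth_m, fx, fy, cx, cy):
        """2D pixel (u, v) with depth in meters -> 3D point in the camera frame."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.array([x, y, depth_m])

    # Hypothetical intrinsics; a joint detected at pixel (320, 240), 2.1 m away
    point_3d = backproject(320, 240, 2.1, fx=570.3, fy=570.3, cx=320.0, cy=240.0)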

MVOR: A Multi-view RGB-D Operating Room Dataset for 2D and 3D Human Pose Estimation

1 code implementation • 24 Aug 2018 • Vinkle Srivastav, Thibaut Issenhuth, Abdolrahim Kadkhodamohammadi, Michel de Mathelin, Afshin Gangi, Nicolas Padoy

In this paper, we present the dataset, its annotations, as well as baseline results from several recent person detection and 2D/3D pose estimation methods.

3D Human Pose Estimation • 3D Pose Estimation • +2

Using spatial-temporal ensembles of convolutional neural networks for lumen segmentation in ureteroscopy

no code implementations • 5 Apr 2021 • Jorge F. Lazo, Aldo Marzullo, Sara Moccia, Michele Catellani, Benoit Rosa, Michel de Mathelin, Elena De Momi

Of these, two architectures are taken as core models, namely a U-Net based on residual blocks ($m_1$) and Mask-RCNN ($m_2$), both of which are fed with single still-frames $I(t)$.

Segmentation
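The abstract describes an ensemble of the two core models. A minimal sketch of one common fusion step follows: averaging the per-pixel lumen probabilities of $m_1$ and $m_2$ and thresholding. The equal weighting and the 0.5 threshold are assumptions, not the paper's fusion rule.

    # Minimal sketch: fusing the lumen probability maps of two segmentation
    # models (m1: residual U-Net, m2: Mask-RCNN) by averaging and thresholding.
    # Equal weights and the 0.5 threshold are assumptions, not the paper's rule.
    import numpy as np

    def ensemble_masks(prob_m1, prob_m2, threshold=0.5):
        """prob_m1, prob_m2: per-pixel lumen probabilities in [0, 1], same shape."""
        fused = 0.5 * (prob_m1 + prob_m2)
        return (fused >= threshold).astype(np.uint8)  # binary lumen mask

    # Example with random maps standing in for the two models' outputs
    h, w = 256, 256
    mask = ensemble_masks(np.random.rand(h, w), np.random.rand(h, w))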

A transfer-learning approach for lesion detection in endoscopic images from the urinary tract

no code implementations • 8 Apr 2021 • Jorge F. Lazo, Sara Moccia, Aldo Marzullo, Michele Catellani, Ottavio De Cobelli, Benoit Rosa, Michel de Mathelin, Elena De Momi

In this work we study the implementation of 3 different Convolutional Neural Networks (CNNs), using a 2-step training strategy, to classify images from the urinary tract with and without lesions.

Lesion Detection • Transfer Learning
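A minimal sketch of a generic 2-step transfer-learning strategy of the kind described above (backbone choice, learning rates, and class count are placeholder assumptions, not the paper's exact recipe): first train only a new classification head on top of a frozen ImageNet-pretrained backbone, then unfreeze and fine-tune end-to-end at a lower learning rate.

    # Minimal sketch (not the authors' exact recipe): a 2-step transfer-learning
    # strategy for lesion vs. no-lesion classification. Backbone choice and
    # learning rates are placeholder assumptions.
    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 2)   # lesion / no lesion

    # Step 1: freeze the pretrained backbone, train only the new head
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    head_optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)

    # ... train the head for a few epochs on the endoscopic images ...

    # Step 2: unfreeze everything and fine-tune end-to-end at a lower rate
    for p in model.parameters():
        p.requires_grad = True
    finetune_optimizer = optim.Adam(model.parameters(), lr=1e-5)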

Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing

no code implementations • 1 Jul 2022 • Jorge F. Lazo, Chun-Feng Lai, Sara Moccia, Benoit Rosa, Michele Catellani, Michel de Mathelin, Giancarlo Ferrigno, Paul Breedveld, Jenny Dankelman, Elena De Momi

Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator's hand and the information obtained from the endoscopic video.

Autonomous Navigation • Decision Making • +1

Semi-supervised Bladder Tissue Classification in Multi-Domain Endoscopic Images

no code implementations • 21 Dec 2022 • Jorge F. Lazo, Benoit Rosa, Michele Catellani, Matteo Fontana, Francesco A. Mistretta, Gennaro Musi, Ottavio De Cobelli, Michel de Mathelin, Elena De Momi

We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains.

Classification • Generative Adversarial Network • +2

Spatiotemporal modeling of grip forces captures proficiency in manual robot control

no code implementations • 3 Mar 2023 • Rongrong Liu, John M. Wandeto, Florent Nageotte, Philippe Zanne, Michel de Mathelin, Birgitta Dresp-Langley

This paper builds on our previous work by exploiting Artificial Intelligence to predict individual grip force variability in manual robot control.
