Searching through large volumes of medical data to retrieve relevant information is a challenging yet crucial task for clinical care.
no code implementations • 27 Dec 2021 • Deepak Alapatt, Pietro Mascagni, Armine Vardazaryan, Alain Garcia, Nariaki Okamoto, Didier Mutter, Jacques Marescaux, Guido Costamagna, Bernard Dallemagne, Nicolas Padoy
A major obstacle to building models for effective semantic segmentation, and particularly video semantic segmentation, is a lack of large and well annotated datasets.
To achieve this task, we introduce our new model, the Rendezvous (RDV), which recognizes triplets directly from surgical videos by leveraging attention at two different levels.
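The Rendezvous model applies attention at two different levels; while the paper's exact architecture is not reproduced here, its core building block is standard attention. A minimal NumPy sketch of scaled dot-product attention (all array shapes and values below are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Basic attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# toy example: 2 queries attending over 3 key/value pairs
q = np.array([[1.0, 0.0], [0.0, 1.0]])
k = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = np.array([[1.0], [2.0], [3.0]])
out, w = scaled_dot_product_attention(q, k, v)
```

Each output row is a convex combination of the value rows, with weights given by query-key similarity; stacking such layers over spatial features and over class embeddings is one way attention can be applied "at two levels."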
Ranked #1 on Action Triplet Recognition on CholecT50
no code implementations • 6 Apr 2021 • Pietro Mascagni, Maria Rita Rodriguez-Luna, Takeshi Urade, Emanuele Felli, Patrick Pessaux, Didier Mutter, Jacques Marescaux, Guido Costamagna, Bernard Dallemagne, Nicolas Padoy
The primary endpoint was to compare the rate of CVS achievement between LCs performed in the year before and the year after the 5-second rule.
Conclusion: We present a multi-task, multi-stage temporal convolutional network for surgical activity recognition, which shows improved results compared to single-task models on the Bypass40 gastric bypass dataset with multi-level annotations.
Recognition of surgical activity is an essential component to develop context-aware decision support for the operating room.
Ranked #1 on Action Triplet Recognition on CholecT40
Results: We build a baseline tracker on top of the CNN model and demonstrate that our approach based on the ConvLSTM outperforms the baseline in tool presence detection, spatial localization, and motion tracking by over 5.0%, 13.9%, and 12.6%, respectively.
Ranked #1 on Surgical tool detection on Cholec80
Vision algorithms capable of interpreting scenes from a real-time video stream are necessary for computer-assisted surgery systems to achieve context-aware behavior.
This work presents a novel approach for the early recognition of the type of a laparoscopic surgery from its video.
We propose a deep architecture, trained solely on image level annotations, that can be used for both tool presence detection and localization in surgical videos.
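Localizing tools from image-level labels alone is a weakly supervised setting; one common mechanism for this (a hedged illustration of the general class-activation-map idea, not necessarily the paper's exact method) is to project classifier weights back onto the final convolutional feature maps:

```python
import numpy as np

def class_activation_map(features, class_weights):
    """Weighted sum of final conv feature maps -> coarse localization heatmap.

    features:      (C, H, W) activations from the last conv layer
    class_weights: (C,) classifier weights for the tool class of interest
    """
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize heatmap to [0, 1]
    return cam

# toy features: channel 0 fires at (1, 2); the classifier weights favor channel 0
feats = np.zeros((2, 4, 4))
feats[0, 1, 2] = 5.0
w = np.array([1.0, 0.1])
cam = class_activation_map(feats, w)
peak = np.unravel_index(cam.argmax(), cam.shape)
```

The peak of the heatmap gives a tool location estimate even though no bounding boxes were used in training.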
Ranked #3 on Surgical tool detection on Cholec80
In this work, we propose a new self-supervised pre-training approach based on the prediction of remaining surgery duration (RSD) from laparoscopic videos.
In this paper, we propose a deep learning pipeline, referred to as RSDNet, which automatically estimates the remaining surgery duration (RSD) intraoperatively by using only visual information from laparoscopic videos.
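RSD estimation is naturally framed as frame-wise regression of the time left until the end of the procedure. A minimal sketch of how such per-frame targets can be constructed (an assumption about the general setup, not RSDNet's exact pipeline; `rsd_labels` is a hypothetical helper):

```python
import numpy as np

def rsd_labels(n_frames, fps=1.0):
    """Remaining surgery duration target for every frame, in minutes.

    For frame i of a video with n_frames frames sampled at `fps` frames
    per second, the target is the time remaining until the last frame.
    """
    idx = np.arange(n_frames)
    return (n_frames - 1 - idx) / fps / 60.0

# a 10-minute video sampled at 1 fps has 601 frames (inclusive of both ends)
labels = rsd_labels(601, fps=1.0)
```

The targets decrease linearly from the full duration to zero, and a regression loss (e.g. smooth L1) over these values trains the network to estimate progress from visual features alone.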
The tool presence detection challenge at M2CAI 2016 consists of identifying the presence/absence of seven surgical tools in the images of cholecystectomy videos.
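Since several of the seven tools can appear in the same frame, tool presence detection is a multi-label problem: each tool gets an independent sigmoid output rather than competing in a softmax. A minimal sketch of this output head (the logit values are illustrative):

```python
import numpy as np

def tool_presence(logits, threshold=0.5):
    """Turn per-tool logits into independent presence probabilities.

    Unlike softmax classification, each of the 7 tools gets its own
    sigmoid, so any subset of tools can be predicted present at once.
    """
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return probs, probs >= threshold

# toy logits for the 7 tool channels of a single frame
logits = [3.2, -4.0, 0.9, -1.5, -2.2, 2.7, -0.3]
probs, present = tool_presence(logits)
```

Training such a head typically uses binary cross-entropy summed over the seven channels, so each tool's presence is learned independently.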
On top of these architectures we propose to use two different approaches to enforce the temporal constraints of the surgical workflow: (1) HMM-based and (2) LSTM-based pipelines.
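The HMM-based pipeline can enforce the workflow's temporal constraints by forbidding invalid phase transitions and decoding the most likely phase sequence. A hedged sketch using Viterbi decoding with a left-to-right transition matrix (the 3-phase model and all probabilities below are toy assumptions, not the paper's trained parameters):

```python
import numpy as np

def viterbi(emission_log_probs, transition, initial):
    """Most likely phase sequence under an HMM.

    emission_log_probs: (T, S) per-frame log-probabilities from the CNN
    transition:         (S, S) phase transition probabilities
    initial:            (S,)   initial phase distribution
    """
    T, S = emission_log_probs.shape
    log_t = np.log(transition + 1e-12)
    score = np.log(initial + 1e-12) + emission_log_probs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t      # (S_prev, S_next) path scores
        back[t] = cand.argmax(axis=0)      # best predecessor for each next state
        score = cand.max(axis=0) + emission_log_probs[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# left-to-right model with 3 phases: a phase can only persist or advance
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0])
# noisy per-frame predictions: a spurious jump back to phase 0 at t=3
frame_probs = np.array([[0.8, 0.1, 0.1],
                        [0.2, 0.7, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.6, 0.3, 0.1],
                        [0.1, 0.2, 0.7]])
path = viterbi(np.log(frame_probs), A, pi)
```

Because the transition matrix assigns zero probability to going backwards, the decoded sequence is monotone in the phase order even when individual frame predictions are noisy; an LSTM-based pipeline learns similar temporal regularities directly from data instead of encoding them in a fixed matrix.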
Esophageal adenocarcinoma arises from Barrett's esophagus, which is the most serious complication of gastroesophageal reflux disease.
Our first contribution is to exploit ORBSLAM, one of the best-performing monocular SLAM algorithms, to estimate both the endoscope location and the 3D structure of the surgical scene.
In the literature, two types of features are typically used to perform this task: visual features and tool usage signals.
Ranked #4 on Surgical tool detection on Cholec80