Search Results for author: Baoru Huang

Found 14 papers, 6 papers with code

3D Guidewire Shape Reconstruction from Monoplane Fluoroscopic Images

no code implementations • 19 Nov 2023 • Tudor Jianu, Baoru Huang, Pierre Berthet-Rayne, Sebastiano Fichera, Anh Nguyen

Endovascular navigation, essential for diagnosing and treating endovascular diseases, predominantly hinges on fluoroscopic images due to limited sensory feedback.

Shape-Sensitive Loss for Catheter and Guidewire Segmentation

no code implementations • 19 Nov 2023 • Chayun Kongtongvattana, Baoru Huang, Jingxuan Kang, Hoan Nguyen, Olajide Olufemi, Anh Nguyen

By computing the cosine similarity between these feature vectors, we gain a nuanced understanding of image similarity that goes beyond the limitations of traditional overlap-based measures.
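As a rough illustration of this idea, the sketch below computes a cosine-similarity term between two feature vectors. The feature dimensionality, the random placeholder features, and the `1 - similarity` loss form are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def shape_similarity(feat_pred: torch.Tensor, feat_gt: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between two flattened feature vectors."""
    return F.cosine_similarity(feat_pred.flatten(), feat_gt.flatten(), dim=0)

# Hypothetical usage: in practice the feature vectors would come from an
# encoder applied to the predicted and ground-truth masks (encoder not shown).
feat_pred = torch.randn(256)
feat_gt = torch.randn(256)

# One possible shape-sensitive loss term: 1 - cosine similarity.
loss = 1.0 - shape_similarity(feat_pred, feat_gt)
print(f"shape-sensitive loss term: {loss.item():.4f}")
```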

Grasp-Anything: Large-scale Grasp Dataset from Foundation Models

1 code implementation • 18 Sep 2023 • An Dinh Vuong, Minh Nhat Vu, Hieu Le, Baoru Huang, Binh Huynh, Thieu Vo, Andreas Kugi, Anh Nguyen

Foundation models such as ChatGPT have made significant strides in robotic tasks due to their universal representation of real-world domains.

Robotic Grasping, World Knowledge

Detecting the Sensing Area of A Laparoscopic Probe in Minimally Invasive Cancer Surgery

1 code implementation • 7 Jul 2023 • Baoru Huang, Yicheng Hu, Anh Nguyen, Stamatia Giannarou, Daniel S. Elson

In surgical oncology, it is challenging for surgeons to identify lymph nodes and completely resect cancer even with pre-operative imaging systems like PET and CT, because of the lack of reliable intraoperative visualization tools.

Translating Simulation Images to X-ray Images via Multi-Scale Semantic Matching

no code implementations • 16 Apr 2023 • Jingxuan Kang, Tudor Jianu, Baoru Huang, Binod Bhattarai, Ngan Le, Frans Coenen, Anh Nguyen

In this paper, we propose a new method to translate simulation images from an endovascular simulator to X-ray images.

Image-to-Image Translation

A Compacted Structure for Cross-domain learning on Monocular Depth and Flow Estimation

no code implementations • 25 Aug 2022 • Yu Chen, Xu Cao, Xiaoyi Lin, Baoru Huang, Xiao-Yun Zhou, Jian-Qing Zheng, Guang-Zhong Yang

A dual-head mechanism is used to predict optical flow for rigid and non-rigid motion in a divide-and-conquer manner, which significantly improves optical flow estimation performance.
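A minimal sketch of what such a dual-head design could look like is shown below; the shared encoder, layer sizes, and the soft rigid/non-rigid blending mask are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DualHeadFlow(nn.Module):
    """Illustrative dual-head flow predictor: one head for rigid (camera-induced)
    motion, one for non-rigid (object) motion, merged by a soft mask."""

    def __init__(self, in_ch: int = 6, feat_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.rigid_head = nn.Conv2d(feat_ch, 2, 3, padding=1)     # 2-channel flow
        self.nonrigid_head = nn.Conv2d(feat_ch, 2, 3, padding=1)  # 2-channel flow
        self.mask_head = nn.Conv2d(feat_ch, 1, 3, padding=1)      # blending weight

    def forward(self, image_pair: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image_pair)
        rigid = self.rigid_head(feat)
        nonrigid = self.nonrigid_head(feat)
        mask = torch.sigmoid(self.mask_head(feat))
        return mask * rigid + (1 - mask) * nonrigid

# Hypothetical usage: two RGB frames concatenated along the channel axis.
frames = torch.randn(1, 6, 64, 64)
flow = DualHeadFlow()(frames)   # (1, 2, 64, 64) flow field
```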

Autonomous Driving, Optical Flow Estimation

When CNN Meet with ViT: Towards Semi-Supervised Learning for Multi-Class Medical Image Semantic Segmentation

2 code implementations • 12 Aug 2022 • Ziyang Wang, Tianze Li, Jian-Qing Zheng, Baoru Huang

A topological exploration of all alternative supervision modes with CNN and ViT is validated in detail, demonstrating the most promising performance and the specific setting of our method on semi-supervised medical image segmentation tasks.
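The sketch below shows one such supervision mode in isolation: cross pseudo-label supervision between a CNN branch and a ViT branch on unlabeled images. The loss form, tensor shapes, and class count are illustrative assumptions, not the paper's full training scheme.

```python
import torch
import torch.nn.functional as F

def cross_supervision_loss(logits_cnn: torch.Tensor,
                           logits_vit: torch.Tensor) -> torch.Tensor:
    """One supervision mode on unlabeled images: each branch is trained on the
    other branch's hard pseudo labels."""
    pseudo_cnn = logits_cnn.argmax(dim=1).detach()       # pseudo label from CNN
    pseudo_vit = logits_vit.argmax(dim=1).detach()       # pseudo label from ViT
    loss_cnn = F.cross_entropy(logits_cnn, pseudo_vit)   # CNN learns from ViT
    loss_vit = F.cross_entropy(logits_vit, pseudo_cnn)   # ViT learns from CNN
    return loss_cnn + loss_vit

# Hypothetical usage with per-pixel logits for a 4-class segmentation task.
logits_cnn = torch.randn(2, 4, 64, 64, requires_grad=True)
logits_vit = torch.randn(2, 4, 64, 64, requires_grad=True)
loss = cross_supervision_loss(logits_cnn, logits_vit)
loss.backward()
```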

Image Segmentation, Pseudo Label, +2

Recursive Deformable Image Registration Network with Mutual Attention

no code implementations • 4 Jun 2022 • Jian-Qing Zheng, Ziyang Wang, Baoru Huang, Ngee Han Lim, Tonia Vincent, Bartlomiej W. Papiez

Deformable image registration, estimating the spatial transformation between different images, is an important task in medical imaging.

Computed Tomography (CT), Image Registration

Residual Aligner Network

no code implementations • 7 Mar 2022 • Jian-Qing Zheng, Ziyang Wang, Baoru Huang, Ngee Han Lim, Bartlomiej W. Papiez

Image registration, the estimation of the spatial transformation between different images, is important for medical imaging.

Image Registration

H-Net: Unsupervised Attention-based Stereo Depth Estimation Leveraging Epipolar Geometry

no code implementations • 22 Apr 2021 • Baoru Huang, Jian-Qing Zheng, Stamatia Giannarou, Daniel S. Elson

To enforce the epipolar constraint, a mutual epipolar attention mechanism is designed that gives more emphasis to correspondences between features lying on the same epipolar line, while learning mutual information between the input stereo pair.
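For intuition, the toy sketch below restricts attention to the same image row, which is the epipolar line of a rectified stereo pair. The abstract describes a softer emphasis on same-epipolar-line correspondences rather than this hard row-wise restriction, so treat the code as an assumption-laden simplification with made-up feature sizes.

```python
import torch

def row_epipolar_attention(feat_left: torch.Tensor,
                           feat_right: torch.Tensor) -> torch.Tensor:
    """Toy mutual attention restricted to the same image row (the epipolar line
    for a rectified stereo pair): each left-image feature attends only to
    right-image features on its own row."""
    b, c, h, w = feat_left.shape
    q = feat_left.permute(0, 2, 3, 1).reshape(b * h, w, c)   # queries, per row
    k = feat_right.permute(0, 2, 3, 1).reshape(b * h, w, c)  # keys, per row
    attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)  # (b*h, w, w)
    out = attn @ k                                            # aggregate right feats
    return out.reshape(b, h, w, c).permute(0, 3, 1, 2)

# Hypothetical usage on a rectified stereo feature pair.
left = torch.randn(1, 32, 48, 64)
right = torch.randn(1, 32, 48, 64)
fused = row_epipolar_attention(left, right)   # (1, 32, 48, 64)
```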

Stereo Depth Estimation, Stereo Matching
