no code implementations • 6 Oct 2024 • Peiqi Wang, Barbara D. Lam, Yingcheng Liu, Ameneh Asgari-Targhi, Rameswar Panda, William M. Wells, Tina Kapur, Polina Golland
We present a novel approach to calibrating linguistic expressions of certainty, e.g., "Maybe" and "Likely".
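The calibration target can be illustrated with a toy check that compares each certainty phrase's nominal probability against the empirical frequency of the event it described. This is a minimal sketch of the general idea only; the phrase-to-probability map below is hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical phrase-to-probability map, for illustration only;
# the actual mapping and calibration procedure are paper-specific.
PHRASE_PROB = {"Unlikely": 0.2, "Maybe": 0.5, "Likely": 0.8}

def phrase_calibration_gaps(phrases, outcomes):
    """For each phrase, compare its nominal probability with the
    empirical rate of the described event (1 = occurred, 0 = not).
    Returns {phrase: empirical_rate - nominal_probability}."""
    gaps = {}
    for phrase, p in PHRASE_PROB.items():
        hits = [o for ph, o in zip(phrases, outcomes) if ph == phrase]
        if hits:
            gaps[phrase] = float(np.mean(hits)) - p
    return gaps
```

A well-calibrated speaker would have gaps near zero for every phrase.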
no code implementations • 18 Sep 2024 • Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M. Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine
We present a novel approach to 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
1 code implementation • 15 Sep 2023 • Reuben Dorent, Nazim Haouchine, Fryderyk Kögl, Samuel Joutard, Parikshit Juvekar, Erickson Torio, Alexandra Golby, Sebastien Ourselin, Sarah Frisken, Tom Vercauteren, Tina Kapur, William M. Wells
We introduce MHVAE, a deep hierarchical variational auto-encoder (VAE) that synthesizes missing images from various modalities.
no code implementations • 25 Apr 2023 • Peiqi Wang, Yingcheng Liu, Ching-Yun Ko, William M. Wells, Seth Berkowitz, Steven Horng, Polina Golland
Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval.
1 code implementation • 15 Feb 2023 • Ruben T. Lucassen, Mohammad H. Jafari, Nicole M. Duggan, Nick Jowkar, Alireza Mehrtash, Chanel Fischetti, Denie Bernier, Kira Prentice, Erik P. Duhaime, Mike Jin, Purang Abolmaesumi, Friso G. Heslinga, Mitko Veta, Maria A. Duran-Mendicuti, Sarah Frisken, Paul B. Shyn, Alexandra J. Golby, Edward Boyer, William M. Wells, Andrew J. Goldsmith, Tina Kapur
B-line artifacts in lung ultrasound (LUS) videos are key findings associated with pulmonary congestion.
no code implementations • 11 Dec 2022 • Peiqi Wang, William M. Wells, Seth Berkowitz, Steven Horng, Polina Golland
Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval.
no code implementations • 13 Oct 2022 • Tengfei Xue, Fan Zhang, Leo R. Zekelman, Chaoyi Zhang, Yuqian Chen, Suheyla Cetin-Karayumak, Steve Pieper, William M. Wells, Yogesh Rathi, Nikos Makris, Weidong Cai, Lauren J. O'Donnell
We propose a novel deep regression method, namely TractoSCR, that allows full supervision for contrastive learning in regression tasks using diffusion MRI tractography.
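The core idea of supervising contrastive learning with continuous targets can be sketched as follows: pairs whose regression labels are close are treated as positives, and embeddings are trained with a softmax contrastive objective. The threshold rule and loss form below are illustrative assumptions, not the TractoSCR formulation itself.

```python
import numpy as np

def regression_contrastive_loss(z, y, tau=0.1, delta=1.0):
    """Toy supervised contrastive loss for regression (illustrative).

    z: (n, d) embeddings; y: (n,) continuous targets.
    Samples whose targets differ by less than `delta` are positives;
    embeddings are compared via cosine similarity at temperature `tau`.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sims = z @ z.T / tau
    n = len(y)
    loss, count = 0.0, 0
    for i in range(n):
        mask = np.arange(n) != i                 # exclude self-pairs
        pos = mask & (np.abs(y - y[i]) < delta)  # label-defined positives
        if not pos.any():
            continue
        log_denom = np.log(np.exp(sims[i][mask]).sum())
        loss += (log_denom - sims[i][pos]).mean()
        count += 1
    return loss / max(count, 1)
```

Embeddings that cluster by label value yield a lower loss than embeddings that ignore the labels.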
no code implementations • 15 May 2022 • Sean I. Young, Yaël Balbastre, Adrian V. Dalca, William M. Wells, Juan Eugenio Iglesias, Bruce Fischl
In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks.
1 code implementation • Nature Communications 2021 • Walid M. Abdelmoula, Begona Gimenez-Cassina Lopez, Elizabeth C. Randall, Tina Kapur, Jann N. Sarkaria, Forest M. White, Jeffrey N. Agar, William M. Wells, Nathalie Y. R. Agar
Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications, and clinical diagnosis.
no code implementations • 18 Mar 2021 • Daniel Moyer, Esra Abaci Turk, P Ellen Grant, William M. Wells, Polina Golland
The transformation is then derived in closed form from the outputs of the filters.
1 code implementation • 8 Mar 2021 • Ruizhi Liao, Daniel Moyer, Miriam Cha, Keegan Quigley, Seth Berkowitz, Steven Horng, Polina Golland, William M. Wells
We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text.
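One common way to turn local features into a pairwise image-report score, which a contrastive mutual-information bound can then consume, is to match each text token to its best patch and average. This aggregation is an illustrative sketch, not necessarily the paper's exact critic.

```python
import numpy as np

def local_mi_scores(img_feats, txt_feats):
    """Aggregate local image-text similarities into one score per pair.

    img_feats: (n, p, d) -- p patch embeddings per image
    txt_feats: (n, t, d) -- t token embeddings per report
    Each token takes its best-matching patch (max over patches),
    and token scores are averaged, giving an (n, n) score matrix.
    """
    # pairwise token-patch similarities: (n_img, n_txt, p, t)
    sims = np.einsum('ipd,jtd->ijpt', img_feats, txt_feats)
    return sims.max(axis=2).mean(axis=2)
```

The diagonal of the resulting matrix holds scores for matched image-report pairs; off-diagonal entries serve as negatives in a contrastive objective.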
1 code implementation • 5 Oct 2020 • Ruizhi Liao, Daniel Moyer, Polina Golland, William M. Wells
Estimating mutual information between continuous random variables is often intractable and extremely challenging for high-dimensional data.
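A standard tractable workaround in this setting is a contrastive lower bound on mutual information such as InfoNCE, which needs only a matrix of critic scores over a batch of paired samples. A minimal NumPy sketch of that bound (not the paper's specific estimator):

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE (contrastive) lower bound on mutual information.

    `scores` is an (n, n) matrix of critic values f(x_i, y_j), with
    paired samples on the diagonal. The bound is
    E_i[log softmax_row(scores)_ii] + log n, and is at most log n.
    """
    n = scores.shape[0]
    # numerically stabilized row-wise log-softmax
    row_max = scores.max(axis=1, keepdims=True)
    log_probs = scores - row_max - np.log(
        np.exp(scores - row_max).sum(axis=1, keepdims=True))
    return log_probs.diagonal().mean() + np.log(n)
```

Note the well-known limitation that the bound saturates at log n, so large batches are needed to certify high mutual information.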