no code implementations • 6 Nov 2022 • Zafir Stojanovski, Karsten Roth, Zeynep Akata
Large pre-trained, zero-shot capable models have shown considerable success for both standard transfer and adaptation tasks, with particular robustness towards distribution shifts.
1 code implementation • 13 Oct 2022 • Karsten Roth, Mark Ibrahim, Zeynep Akata, Pascal Vincent, Diane Bouchacourt
We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, in parts with over $+60\%$ relative improvement over existing disentanglement methods.
1 code implementation • 8 Jul 2022 • Michael Kirchhof, Karsten Roth, Zeynep Akata, Enkelejda Kasneci
We model images as directional von Mises-Fisher (vMF) distributions on the hypersphere that can reflect image-intrinsic uncertainties.
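As an illustration of this idea, the following is a minimal, hypothetical sketch (not the paper's implementation) of how an image embedding could be treated as a vMF distribution on the unit hypersphere: a unit-norm mean direction plus a concentration parameter kappa, where a larger kappa expresses higher certainty about the embedding.

```python
# Hypothetical sketch: an image embedding as a von Mises-Fisher (vMF)
# distribution on the unit hypersphere, parameterized by a mean direction mu
# and a concentration kappa (higher kappa = lower image-intrinsic uncertainty).
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def vmf_log_density(x, mu, kappa):
    """Log-density of a vMF distribution at unit vector x of dimension p."""
    p = mu.shape[0]
    # log I_{p/2-1}(kappa), computed via ive for numerical stability
    log_iv = np.log(ive(p / 2 - 1, kappa)) + kappa
    # log normalization constant C_p(kappa)
    log_c = (p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi) - log_iv
    return log_c + kappa * mu.dot(x)

# toy usage: the same query direction scores differently under low vs. high kappa,
# i.e. an uncertain vs. a confident embedding
mu = np.array([1.0, 0.0, 0.0])
x = np.array([0.9, 0.1, 0.0]); x = x / np.linalg.norm(x)
print(vmf_log_density(x, mu, kappa=5.0), vmf_log_density(x, mu, kappa=50.0))
```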
no code implementations • ICLR 2022 • Natalie Dullerud, Karsten Roth, Kimia Hamidieh, Nicolas Papernot, Marzyeh Ghassemi
Deep metric learning (DML) enables learning with less supervision through its emphasis on the similarity structure of representations.
1 code implementation • 23 Mar 2022 • Haoran Zhang, Natalie Dullerud, Karsten Roth, Lauren Oakden-Rayner, Stephen Robert Pfohl, Marzyeh Ghassemi
We also find that methods which achieve group fairness do so by worsening performance for all groups.
1 code implementation • CVPR 2022 • Karsten Roth, Oriol Vinyals, Zeynep Akata
This causes learned embedding spaces to encode incomplete semantic context and misrepresent the semantic relation between classes, impacting the generalizability of the learned metric space.
Ranked #5 on Metric Learning on CARS196 (using extra training data)
1 code implementation • CVPR 2022 • Karsten Roth, Oriol Vinyals, Zeynep Akata
Deep Metric Learning (DML) aims to learn representation spaces on which semantic relations can simply be expressed through predefined distance metrics.
Ranked #8 on Metric Learning on CUB-200-2011 (using extra training data)
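To make the DML setup described in the entry above concrete, here is a minimal sketch (with placeholder embeddings, not this paper's method) of how semantic relations reduce to a predefined distance metric over embedding vectors:

```python
# Minimal sketch: once images are mapped to an embedding space, semantic
# similarity is read off a predefined distance metric (Euclidean or cosine).
# The vectors below are random placeholders standing in for the output of a
# trained DML network, which would map same-class images close together.
import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
bird_1, bird_2, car = rng.normal(size=(3, 128))  # toy 128-d embeddings
print(euclidean_distance(bird_1, bird_2), euclidean_distance(bird_1, car))
print(cosine_distance(bird_1, bird_2), cosine_distance(bird_1, car))
```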
2 code implementations • NeurIPS 2021 • Timo Milbich, Karsten Roth, Samarth Sinha, Ludwig Schmidt, Marzyeh Ghassemi, Björn Ommer
Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML.
15 code implementations • CVPR 2022 • Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler
Being able to spot defective parts is a critical component in large-scale industrial manufacturing.
Ranked #2 on Anomaly Detection on MVTec AD
1 code implementation • 17 Sep 2020 • Karsten Roth, Timo Milbich, Björn Ommer, Joseph Paul Cohen, Marzyeh Ghassemi
Deep Metric Learning (DML) provides a crucial tool for visual similarity and zero-shot applications by learning generalizing embedding spaces, although recent work in DML has shown strong performance saturation across training objectives.
Ranked #7 on Metric Learning on CARS196 (using extra training data)
no code implementations • 30 Jun 2020 • Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg
Deep Neural Networks have shown great promise on a variety of downstream applications, but their ability to adapt and generalize to new data and tasks remains a challenge.
6 code implementations • 22 Jun 2020 • Joseph Paul Cohen, Paul Morrison, Lan Dao, Karsten Roth, Tim Q Duong, Marzyeh Ghassemi
This dataset currently contains hundreds of frontal view X-rays and is the largest public resource for COVID-19 image and prognostic data, making it a necessary foundation for developing and evaluating tools to aid in the treatment of COVID-19.
6 code implementations • 24 May 2020 • Joseph Paul Cohen, Lan Dao, Paul Morrison, Karsten Roth, Yoshua Bengio, Beiyi Shen, Almas Abbasi, Mahsa Hoshmand-Kochi, Marzyeh Ghassemi, Haifang Li, Tim Q Duong
In this study, we present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images.
2 code implementations • ECCV 2020 • Timo Milbich, Karsten Roth, Homanga Bharadhwaj, Samarth Sinha, Yoshua Bengio, Björn Ommer, Joseph Paul Cohen
Visual Similarity plays an important role in many computer vision applications.
Ranked #10 on Metric Learning on CUB-200-2011 (using extra training data)
no code implementations • 12 Apr 2020 • Timo Milbich, Karsten Roth, Biagio Brattoli, Björn Ommer
The common paradigm is discriminative metric learning, which seeks an embedding that separates different training classes.
1 code implementation • CVPR 2020 • Karsten Roth, Timo Milbich, Björn Ommer
Learning visual similarity requires learning relations, typically between triplets of images.
Ranked #14 on Metric Learning on CUB-200-2011 (using extra training data)
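A minimal sketch of the triplet relations mentioned in the entry above (illustrative only, not this paper's sampling or training strategy): an anchor is pulled toward a positive of the same class and pushed away from a negative until a margin is satisfied.

```python
# Hypothetical triplet-loss sketch: relations between an anchor, a positive
# (same class) and a negative (different class), enforced up to a margin.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)      # zero once the margin is satisfied

# toy usage with placeholder 128-d embeddings
rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 128))
print(triplet_loss(a, p, n))
```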
8 code implementations • ICML 2020 • Karsten Roth, Timo Milbich, Samarth Sinha, Prateek Gupta, Björn Ommer, Joseph Paul Cohen
Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities with many proposed approaches every year.
2 code implementations • ICCV 2019 • Karsten Roth, Biagio Brattoli, Björn Ommer
In contrast, we propose to explicitly learn the latent characteristics that are shared by and go across object classes.
Ranked #16 on Metric Learning on CUB-200-2011 (using extra training data)
no code implementations • 14 Aug 2019 • Karsten Roth, Jürgen Hesser, Tomasz Konopczyński
We propose a novel procedure to improve liver and lesion segmentation from CT scans for U-Net based models.
1 code implementation • 9 May 2019 • Karsten Roth, Tomasz Konopczyński, Jürgen Hesser
At present, lesion segmentation is still performed manually (or semi-automatically) by medical experts.
Ranked #3 on Liver Segmentation on LiTS2017 (Dice metric)
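Since the LiTS2017 ranking above is reported under the Dice metric, here is a minimal sketch of the standard Dice coefficient for binary segmentation masks (the generic definition, not the challenge's exact evaluation code):

```python
# Minimal Dice-coefficient sketch for binary segmentation masks:
# Dice = 2*|A ∩ B| / (|A| + |B|), equal to 1.0 for a perfect overlap.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# toy usage: 2D masks standing in for a predicted and a ground-truth lesion
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, gt))  # 2*2 / (3 + 3) ≈ 0.667
```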
6 code implementations • 13 Jan 2019 • Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivantik, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018.