1 code implementation • 6 Nov 2024 • Pedro R. A. S. Bassi, Wenxuan Li, Yucheng Tang, Fabian Isensee, Zifu Wang, Jieneng Chen, Yu-Cheng Chou, Yannick Kirchhoff, Maximilian Rokuss, Ziyan Huang, Jin Ye, Junjun He, Tassilo Wald, Constantin Ulrich, Michael Baumgartner, Saikat Roy, Klaus H. Maier-Hein, Paul Jaeger, Yiwen Ye, Yutong Xie, Jianpeng Zhang, Ziyang Chen, Yong Xia, Zhaohu Xing, Lei Zhu, Yousef Sadegheih, Afshin Bozorgpour, Pratibha Kumari, Reza Azad, Dorit Merhof, Pengcheng Shi, Ting Ma, Yuxin Du, Fan Bai, Tiejun Huang, Bo Zhao, Haonan Wang, Xiaomeng Li, Hanxue Gu, Haoyu Dong, Jichen Yang, Maciej A. Mazurowski, Saumya Gupta, Linshan Wu, Jiaxin Zhuang, Hao Chen, Holger Roth, Daguang Xu, Matthew B. Blaschko, Sergio Decherchi, Andrea Cavalli, Alan L. Yuille, Zongwei Zhou
We are committed to expanding this benchmark to encourage more innovation of AI algorithms for the medical domain.
no code implementations • 20 Oct 2024 • Han Zhou, Jordy Van Landeghem, Teodora Popordanoska, Matthew B. Blaschko
The selective classifier (SC) has garnered increasing interest in areas such as medical diagnostics, autonomous driving, and the justice system.
no code implementations • 11 Sep 2024 • Zehao Wang, Han Zhou, Matthew B. Blaschko, Tinne Tuytelaars, Minye Wu
Based on this matrix, we use the Intra-List Diversity (ILD) metric to assess camera redundancy, formulating the camera selection task as an optimization problem.
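The ILD computation mentioned above reduces to an average pairwise distance over the selected list. A minimal sketch, assuming a precomputed symmetric distance matrix is given (the paper's actual view-similarity measure is not specified in this snippet):

```python
import numpy as np

def intra_list_diversity(dist: np.ndarray) -> float:
    """Average pairwise distance over a list of n items,
    given a precomputed symmetric n x n distance matrix."""
    n = dist.shape[0]
    if n < 2:
        return 0.0
    # Average over unordered pairs i < j only.
    iu = np.triu_indices(n, k=1)
    return float(dist[iu].mean())

# Toy example: three candidate cameras with pairwise dissimilarities.
D = np.array([[0.0, 0.4, 0.8],
              [0.4, 0.0, 0.6],
              [0.8, 0.6, 0.0]])
score = intra_list_diversity(D)  # mean of {0.4, 0.8, 0.6} = 0.6
```

Camera selection then becomes choosing the subset of rows/columns that maximizes this score under a budget, which is the optimization problem the entry refers to.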
no code implementations • 11 Sep 2024 • Wangduo Xie, Richard Schoonhoven, Tristan van Leeuwen, Matthew B. Blaschko
In this work, we incorporate a powerful prior: the total number of material categories of objects.
1 code implementation • 1 Jul 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
For example, we demonstrate that pruning up to 75% of experts in Mixtral $8\times7$B-Instruct results in a substantial reduction in parameters with minimal performance loss.
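The snippet above does not describe the paper's pruning criterion; as a purely hypothetical illustration, one simple heuristic ranks experts by their average routing probability across tokens and keeps only the top fraction:

```python
import numpy as np

def prune_experts(router_probs, keep_ratio=0.25):
    """Rank experts by average routing probability across tokens and
    keep the top `keep_ratio` fraction.
    (Illustrative heuristic only; the paper's criterion may differ.)"""
    avg = np.asarray(router_probs, float).mean(axis=0)  # shape: (n_experts,)
    k = max(1, int(round(keep_ratio * avg.size)))
    # Indices of the k most-used experts, returned in ascending order.
    return np.sort(np.argsort(avg)[::-1][:k])

# Toy router outputs: 3 tokens routed over 4 experts.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
kept = prune_experts(probs, keep_ratio=0.5)  # experts 0 and 3 survive
```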
1 code implementation • 23 Jun 2024 • Junyi Zhu, Shuochen Liu, Yu Yu, Bo Tang, Yibo Yan, Zhiyu Li, Feiyu Xiong, Tong Xu, Matthew B. Blaschko
Large language models (LLMs) excel in generating coherent text, but they often struggle with context awareness, leading to inaccuracies in tasks requiring faithful adherence to provided information.
1 code implementation • 20 Jun 2024 • Xuefei Ning, Zifu Wang, Shiyao Li, Zinan Lin, Peiran Yao, Tianyu Fu, Matthew B. Blaschko, Guohao Dai, Huazhong Yang, Yu Wang
We reveal some findings: (1) Teaching materials that make it easier for students to learn have clearer and more accurate logic when using in-context learning as the student's "learning" method; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching one student or the teacher itself.
no code implementations • 18 Jun 2024 • Chang Tian, Matthew B. Blaschko, Wenpeng Yin, Mingzhe Xing, Yinliang Yue, Marie-Francine Moens
To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function.
no code implementations • 3 May 2024 • Jiayang Shi, Junyi Zhu, Daniel M. Pelt, K. Joost Batenburg, Matthew B. Blaschko
Recognizing that CT often involves scanning similar subjects, we propose a novel approach to improve reconstruction quality through joint reconstruction of multiple objects using INRs.
1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
For example, LCSC achieves better performance with a single function evaluation (NFE) than the base model with 2 NFE on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining generation quality on CIFAR-10.
1 code implementation • 22 Feb 2024 • Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars
The last couple of years have witnessed tremendous progress in self-supervised learning (SSL), a success that can be attributed to the introduction of useful inductive biases into the learning process, which encourage meaningful visual representations while avoiding collapse.
no code implementations • 26 Jan 2024 • Mingshi Li, Dusan Grujicic, Steven De Saeger, Stien Heremans, Ben Somers, Matthew B. Blaschko
The synergy of machine learning and satellite imagery analysis has demonstrated significant productivity in this field, as evidenced by several studies.
no code implementations • 14 Dec 2023 • Teodora Popordanoska, Sebastian G. Gruber, Aleksei Tiulpin, Florian Buettner, Matthew B. Blaschko
Proper scoring rules evaluate the quality of probabilistic predictions, playing an essential role in the pursuit of accurate and well-calibrated models.
no code implementations • 14 Dec 2023 • Teodora Popordanoska, Gorjan Radevski, Tinne Tuytelaars, Matthew B. Blaschko
In the face of dataset shift, model calibration plays a pivotal role in ensuring the reliability of machine learning systems.
1 code implementation • 11 Dec 2023 • Teodora Popordanoska, Aleksei Tiulpin, Matthew B. Blaschko
Despite their impressive predictive performance in various computer vision tasks, deep neural networks (DNNs) tend to make overly confident predictions, which hinders their widespread use in safety-critical applications.
1 code implementation • 24 Aug 2023 • Jordy Van Landeghem, Sanket Biswas, Matthew B. Blaschko, Marie-Francine Moens
This paper highlights the need to bring document classification benchmarking closer to real-world applications, both in the nature of data tested ($X$: multi-channel, multi-paged, multi-industry; $Y$: class distributions and label set variety) and in classification tasks considered ($f$: multi-page document, page stream, and document bundle classification, ...).
no code implementations • 24 Jul 2023 • Wangduo Xie, Matthew B. Blaschko
However, it is difficult for previous unsupervised methods to retain structural information from CT images while handling the non-local characteristics of metal artifacts.
1 code implementation • 31 May 2023 • Junyi Zhu, Ruicong Yao, Matthew B. Blaschko
Seemingly, FL can provide a degree of protection against gradient inversion attacks on weight updates, since the gradient of a single step is concealed by the accumulation of gradients over multiple local iterations.
1 code implementation • CVPR 2023 • Junyi Zhu, Xingchen Ma, Matthew B. Blaschko
A global model is introduced as a latent variable to augment the joint distribution of clients' parameters and capture the common trends of different clients. Optimization is derived from the principle of maximizing the marginal likelihood and is conducted via variational expectation maximization.
1 code implementation • 28 Mar 2023 • Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, Matthew B. Blaschko
As a result, we obtain superior Dice scores and model calibration, which supports the wider adoption of DMLs in practice.
2 code implementations • NeurIPS 2023 • Zifu Wang, Xuefei Ning, Matthew B. Blaschko
To address this, we introduce Jaccard Metric Losses (JMLs), which are identical to the soft Jaccard loss in standard settings with hard labels but are fully compatible with soft labels.
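For context, the standard soft Jaccard loss referred to above can be sketched in a few lines; the example also illustrates the incompatibility with soft labels that motivates JMLs: under the naive product/sum relaxation, the loss does not vanish even when the prediction exactly equals a soft target. This is a minimal numpy sketch, not the paper's JML formulation:

```python
import numpy as np

def soft_jaccard_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Soft Jaccard loss: 1 - |intersection| / |union|, with elementwise
    products and sums relaxing set intersection and union so that both
    `pred` and `target` may take values in [0, 1]."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return float(1.0 - inter / (union + eps))

p = np.array([0.9, 0.8, 0.1])
y_hard = np.array([1.0, 1.0, 0.0])
y_soft = np.array([0.9, 0.8, 0.1])  # e.g. labels softened by a teacher model

loss_hard = soft_jaccard_loss(p, y_hard)  # ~0.19
loss_soft = soft_jaccard_loss(p, y_soft)  # ~0.32, despite pred == target
```

With hard labels the loss behaves as expected (it is zero when the prediction matches the label exactly), but with soft labels its minimum is not at `pred == target`, which is the defect the JML construction addresses.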
1 code implementation • 25 Oct 2022 • Huy Hoang Nguyen, Matthew B. Blaschko, Simo Saarakkala, Aleksei Tiulpin
Deep neural networks are often applied to medical images to automate medical diagnosis.
1 code implementation • 13 Oct 2022 • Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko
As a remedy, we propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true $L_p$ calibration error.
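For contrast with the trainable kernel-based estimator above, the classical binned estimate of the $L_p$ calibration error can be sketched as follows. The equal-width binning scheme is an assumption for illustration; unlike the Dirichlet-kernel estimator, this estimator is biased and not differentiable:

```python
import numpy as np

def binned_ece(conf, correct, n_bins=10, p=1):
    """Binned estimate of the L_p calibration error: the bin-weighted
    |mean confidence - empirical accuracy|^p, raised to the power 1/p."""
    conf = np.asarray(conf, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0.0:
            mask = (conf >= lo) & (conf <= hi)
        else:
            mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            err += mask.mean() * gap ** p  # weight by fraction of samples in bin
    return err ** (1.0 / p)

# Overconfident model: 95% confidence, 25% accuracy -> ECE of 0.7.
ece = binned_ece([0.95, 0.95, 0.95, 0.95], [1, 0, 0, 0])
```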
no code implementations • 25 Aug 2022 • Teodora Popordanoska, Aleksei Tiulpin, Wacha Bounliphone, Matthew B. Blaschko
Moreover, we derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix.
1 code implementation • 13 Jul 2022 • Zifu Wang, Matthew B. Blaschko
UNet [27] is widely used in semantic segmentation due to its simplicity and effectiveness.
no code implementations • 17 Jun 2022 • Sinnu Susan Thomas, Jacopo Palandri, Mohsen Lakehal-ayat, Punarjay Chakravarty, Friedrich Wolf-Monheim, Matthew B. Blaschko
We show that the proposed approach is general, scalable, and efficient, and that the novel convergence criteria can be implemented straightforwardly based on existing concepts and subroutines in popular Bayesian optimization software packages.
no code implementations • 4 Jun 2022 • Han Zhou, Aida Ashrafi, Matthew B. Blaschko
In this paper, we explore methods of direct combinatorial optimization in the problem of risk minimization with binary weights, which can be made equivalent to a non-monotone submodular maximization under certain conditions.
1 code implementation • 23 Dec 2021 • Teodora Popordanoska, Jeroen Bertels, Dirk Vandermeulen, Frederik Maes, Matthew B. Blaschko
This has led to a renewed focus on calibrated predictions in the medical imaging and broader machine learning communities.
1 code implementation • 1 Dec 2021 • Junyi Zhu, Matthew B. Blaschko
Differentially private stochastic gradient descent (DP-SGD) has been widely adopted in deep learning to provide rigorously defined privacy, which requires gradient clipping to bound the maximum norm of individual gradients and additive isotropic Gaussian noise.
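The two mechanisms named above, per-example gradient clipping and additive isotropic Gaussian noise, can be sketched in numpy. This is a minimal illustration under assumed conventions (privacy accounting and the exact noise scaling are omitted), not a production DP implementation:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    `clip_norm` in L2 norm, average, then add isotropic Gaussian noise
    calibrated to the clipping bound."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.asarray(per_example_grads, float)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = g * scale  # each row now has L2 norm <= clip_norm
    n = g.shape[0]
    mean = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean.shape)
    return mean + noise

# With noise_multiplier=0 this is just the clipped-gradient mean: [0.45, 0.6].
step = dp_sgd_step([[3.0, 4.0], [0.3, 0.4]], clip_norm=1.0, noise_multiplier=0.0)
```

The clipping bounds each example's influence on the update, which is what makes the Gaussian noise sufficient for a differential-privacy guarantee.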
no code implementations • 29 Sep 2021 • Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko
The computational complexity of our estimator is $O(n^2)$, matching that of the kernel maximum mean discrepancy used in a previously considered trainable calibration estimator.
2 code implementations • 29 May 2021 • Aleksei Tiulpin, Matthew B. Blaschko
This paper proposes a novel and principled method to tackle this limitation, minimizing an $f$-divergence between the true posterior and a kernel density estimator (KDE) in a function space.
1 code implementation • 10 May 2021 • Xingchen Ma, Matthew B. Blaschko
In this paper, we introduce two constraints that are worth consideration in designing a calibration map for post-hoc calibration.
2 code implementations • 8 Apr 2021 • Huy Hoang Nguyen, Simo Saarakkala, Matthew B. Blaschko, Aleksei Tiulpin
We show the effectiveness of our method in predicting the development of structural knee osteoarthritis changes over time.
no code implementations • 22 Mar 2021 • Ruben Hemelings, Bart Elen, João Barbosa-Breda, Matthew B. Blaschko, Patrick De Boever, Ingeborg Stalmans
We trained and evaluated deep learning models using fundus images that underwent a certain cropping policy.
no code implementations • 26 Oct 2020 • Tom Eelbode, Jeroen Bertels, Maxim Berman, Dirk Vandermeulen, Frederik Maes, Raf Bisschops, Matthew B. Blaschko
We verify these results empirically in an extensive validation on six medical segmentation tasks and confirm that metric-sensitive losses are superior to cross-entropy-based loss functions when evaluation is performed with the Dice score or Jaccard index.
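The two overlap metrics named above are computed directly from binary masks, and are monotonically related by $D = 2J/(1+J)$; a minimal sketch:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def dice(a, b):
    """Dice score (F1 over foreground pixels) of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = [1, 1, 0, 0]
gt = [1, 0, 1, 0]
j, d = jaccard(pred, gt), dice(pred, gt)  # j = 1/3, d = 1/2
# Monotone relation: d == 2 * j / (1 + j)
```

Because the relation is monotone, the two metrics rank segmentations identically, but their numerical values differ, which matters when a loss targets one metric and evaluation uses the other.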
1 code implementation • 6 Oct 2020 • Xingchen Ma, Matthew B. Blaschko
Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate.
no code implementations • 21 Jun 2020 • Xingchen Ma, Matthew B. Blaschko
Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate.
no code implementations • 4 Jun 2020 • Ruben Hemelings, Bart Elen, Matthew B. Blaschko, Julie Jacob, Ingeborg Stalmans, Patrick De Boever
This investigation reports on the results of convolutional neural networks developed for the recently introduced Pathologic Myopia (PALM) dataset, which consists of 1200 fundus images.
1 code implementation • CVPR 2020 • Maxim Berman, Leonid Pishchulin, Ning Xu, Matthew B. Blaschko, Gerard Medioni
We introduce a novel efficient one-shot NAS approach to optimally search for channel numbers, given latency constraints on a specific hardware.
no code implementations • 25 Nov 2019 • Maxim Berman, Matthew B. Blaschko
In order to constrain such a model to remain tractable, previous approaches have enforced the weight vector to be positive for pairwise potentials in which the labels differ, and set pairwise potentials to zero in the case that the label remains the same.
no code implementations • 23 Jul 2019 • Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia
The latter approach falls under the umbrella of lifelong learning, where the model is updated so that it performs well on both old and new tasks, without any further access to the old tasks' training samples.
no code implementations • 6 Sep 2018 • Maxim Berman, Matthew B. Blaschko, Amal Rannen Triki, Jiaqian Yu
This note is a response to [7] in which it is claimed that [13, Proposition 11] is false.
1 code implementation • 7 Jun 2018 • Mathijs Schuurmans, Maxim Berman, Matthew B. Blaschko
In this work, we evaluate the use of superpixel pooling layers in deep network architectures for semantic segmentation.
2 code implementations • CVPR 2018 • Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko
The Jaccard index, also referred to as the intersection-over-union score, is commonly employed in the evaluation of image segmentation results given its perceptual qualities, its scale invariance (which lends appropriate relevance to small objects), and its appropriate counting of false negatives, in comparison to per-pixel losses.
no code implementations • 25 May 2018 • José Ignacio Orlando, João Barbosa Breda, Karel van Keer, Matthew B. Blaschko, Pablo J. Blanco, Carlos A. Bulant
In this paper we propose a first approach for characterizing those changes using computational hemodynamics.
no code implementations • 18 Oct 2017 • Amal Rannen Triki, Maxim Berman, Matthew B. Blaschko
Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems.
1 code implementation • 9 Jun 2017 • José Ignacio Orlando, Elena Prokofyeva, Mariana del Fresno, Matthew B. Blaschko
In this paper we propose a novel method for red lesion detection based on combining both deep learned and domain knowledge.
4 code implementations • CVPR 2018 • Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko
The Jaccard index, also referred to as the intersection-over-union score, is commonly employed in the evaluation of image segmentation results given its perceptual qualities, its scale invariance (which lends appropriate relevance to small objects), and its appropriate counting of false negatives, in comparison to per-pixel losses.
Ranked #34 on Semantic Segmentation on PASCAL VOC 2012 test
1 code implementation • 31 Mar 2017 • Amal Rannen Triki, Matthew B. Blaschko, Yoon Mo Jung, Seungri Song, Hyun Ju Han, Seung Il Kim, Chulmin Joo
The use of a function norm introduces a direct control over the complexity of the function with the aim of diminishing the risk of overfitting.
no code implementations • 13 Feb 2017 • Jiaqian Yu, Matthew B. Blaschko
These loss functions do not necessarily have the same structure as the one used by the segmentation inference algorithm, and in general, we may have to resort to generic submodular minimization algorithms for loss augmented inference.
1 code implementation • 19 Jun 2016 • Matthew B. Blaschko
We demonstrate in this paper that we may use these concepts to define polynomial time convex extensions of arbitrary supermodular functions, providing an analysis framework for the tightness of these surrogates.
1 code implementation • 30 May 2016 • Amal Rannen Triki, Matthew B. Blaschko
In this paper, we study the feasibility of directly using the $L_2$ function norm for regularization.
no code implementations • NeurIPS 2016 • Eugene Belilovsky, Gaël Varoquaux, Matthew B. Blaschko
We characterize the uncertainty of differences with confidence intervals obtained using a parametric distribution on parameters of a sparse estimator.
1 code implementation • 14 Nov 2015 • Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, Arthur Gretton
Probabilistic generative models provide a powerful framework for representing data that avoids the expense of manual annotation typically needed by discriminative approaches.
no code implementations • CVPR 2014 • Andrea Vedaldi, Siddharth Mahendran, Stavros Tsogkas, Subhransu Maji, Ross Girshick, Juho Kannala, Esa Rahtu, Iasonas Kokkinos, Matthew B. Blaschko, David Weiss, Ben Taskar, Karen Simonyan, Naomi Saphra, Sammy Mohamed
We show that the collected data can be used to study the relation between part detection and attribute prediction by diagnosing the performance of classifiers that pool information from different parts of an object.