Search Results for author: Matthew B. Blaschko

Found 55 papers, 30 papers with code

A Novel Characterization of the Population Area Under the Risk Coverage Curve (AURC) and Rates of Finite Sample Estimators

no code implementations20 Oct 2024 Han Zhou, Jordy Van Landeghem, Teodora Popordanoska, Matthew B. Blaschko

The selective classifier (SC) has garnered increasing interest in areas such as medical diagnostics, autonomous driving, and the justice system.

Autonomous Driving
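The risk-coverage analysis behind the AURC can be made concrete with a small sketch. This is the standard empirical AURC (sort by descending confidence, average the selective risk over all coverage levels), not the paper's novel population characterization or its finite-sample estimators; the function name and the mean-over-coverages form are illustrative assumptions.

```python
import numpy as np

def aurc(confidences, errors):
    # Sort samples by descending confidence: at coverage k/n the selective
    # classifier accepts the k most confident samples.
    order = np.argsort(-np.asarray(confidences, dtype=float))
    errs = np.asarray(errors, dtype=float)[order]
    # Selective risk at each coverage level, then average over coverages.
    selective_risk = np.cumsum(errs) / np.arange(1, len(errs) + 1)
    return float(selective_risk.mean())
```

A well-ranked classifier, whose errors are concentrated among its least confident samples, drives this quantity toward zero.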

Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering

no code implementations11 Sep 2024 Zehao Wang, Han Zhou, Matthew B. Blaschko, Tinne Tuytelaars, Minye Wu

Based on this matrix, we use the Intra-List Diversity (ILD) metric to assess camera redundancy, formulating the camera selection task as an optimization problem.

Diversity Neural Rendering +1
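The Intra-List Diversity (ILD) metric mentioned above is, in its common form, the mean pairwise distance among the selected items' feature vectors. A minimal sketch, assuming Euclidean distance and a generic per-camera feature input; the paper builds its redundancy matrix and optimization problem on top of a score of this kind.

```python
import numpy as np

def intra_list_diversity(features):
    # Mean pairwise Euclidean distance among the selected cameras' feature
    # vectors: higher ILD means a less redundant camera selection.
    X = np.asarray(features, dtype=float)
    n = len(X)
    if n < 2:
        return 0.0
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return float(dists[np.triu_indices(n, k=1)].mean())
```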

Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs

1 code implementation1 Jul 2024 Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, we demonstrate that pruning up to 75% of experts in Mixtral $8\times7$B-Instruct results in a substantial reduction in parameters with minimal performance loss.

FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models

1 code implementation23 Jun 2024 Junyi Zhu, Shuochen Liu, Yu Yu, Bo Tang, Yibo Yan, Zhiyu Li, Feiyu Xiong, Tong Xu, Matthew B. Blaschko

Large language models (LLMs) excel in generating coherent text, but they often struggle with context awareness, leading to inaccuracies in tasks requiring faithful adherence to provided information.

Memorization Reading Comprehension +1

Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study

1 code implementation20 Jun 2024 Xuefei Ning, Zifu Wang, Shiyao Li, Zinan Lin, Peiran Yao, Tianyu Fu, Matthew B. Blaschko, Guohao Dai, Huazhong Yang, Yu Wang

We reveal some findings: (1) Teaching materials that make it easier for students to learn have clearer and more accurate logic when using in-context learning as the student's "learning" method; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching one student or the teacher itself.

In-Context Learning Knowledge Distillation

A Generic Method for Fine-grained Category Discovery in Natural Language Texts

no code implementations18 Jun 2024 Chang Tian, Matthew B. Blaschko, Wenpeng Yin, Mingzhe Xing, Yinliang Yue, Marie-Francine Moens

To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function.

Contrastive Learning

Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction

no code implementations3 May 2024 Jiayang Shi, Junyi Zhu, Daniel M. Pelt, K. Joost Batenburg, Matthew B. Blaschko

Recognizing that CT often involves scanning similar subjects, we propose a novel approach to improve reconstruction quality through joint reconstruction of multiple objects using INRs.

Computed Tomography (CT) CT Reconstruction

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

1 code implementation2 Apr 2024 Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, LCSC achieves better performance with a single function evaluation (NFE) than the base model with 2 NFE on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining generation quality on CIFAR-10.

The Common Stability Mechanism behind most Self-Supervised Learning Approaches

1 code implementation22 Feb 2024 Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars

The last couple of years have witnessed tremendous progress in self-supervised learning (SSL), whose success can be attributed to the introduction of useful inductive biases in the learning process that yield meaningful visual representations while avoiding collapse.

Self-Supervised Learning

Biological Valuation Map of Flanders: A Sentinel-2 Imagery Analysis

no code implementations26 Jan 2024 Mingshi Li, Dusan Grujicic, Steven De Saeger, Stien Heremans, Ben Somers, Matthew B. Blaschko

The synergy of machine learning and satellite imagery analysis has demonstrated significant productivity in this field, as evidenced by several studies.

Benchmarking Semantic Segmentation

Consistent and Asymptotically Unbiased Estimation of Proper Calibration Errors

no code implementations14 Dec 2023 Teodora Popordanoska, Sebastian G. Gruber, Aleksei Tiulpin, Florian Buettner, Matthew B. Blaschko

Proper scoring rules evaluate the quality of probabilistic predictions, playing an essential role in the pursuit of accurate and well-calibrated models.

scoring rule

Estimating calibration error under label shift without labels

no code implementations14 Dec 2023 Teodora Popordanoska, Gorjan Radevski, Tinne Tuytelaars, Matthew B. Blaschko

In the face of dataset shift, model calibration plays a pivotal role in ensuring the reliability of machine learning systems.

Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection

1 code implementation11 Dec 2023 Teodora Popordanoska, Aleksei Tiulpin, Matthew B. Blaschko

Despite their impressive predictive performance in various computer vision tasks, deep neural networks (DNNs) tend to make overly confident predictions, which hinders their widespread use in safety-critical applications.

Density Estimation Object +2

Beyond Document Page Classification: Design, Datasets, and Challenges

1 code implementation24 Aug 2023 Jordy Van Landeghem, Sanket Biswas, Matthew B. Blaschko, Marie-Francine Moens

This paper highlights the need to bring document classification benchmarking closer to real-world applications, both in the nature of data tested ($X$: multi-channel, multi-paged, multi-industry; $Y$: class distributions and label set variety) and in classification tasks considered ($f$: multi-page document, page stream, and document bundle classification, ...).

Benchmarking Classification +1

Dense Transformer based Enhanced Coding Network for Unsupervised Metal Artifact Reduction

no code implementations24 Jul 2023 Wangduo Xie, Matthew B. Blaschko

However, it is difficult for previous unsupervised methods to retain structural information from CT images while handling the non-local characteristics of metal artifacts.

Disentanglement Metal Artifact Reduction

Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning

1 code implementation31 May 2023 Junyi Zhu, Ruicong Yao, Matthew B. Blaschko

Seemingly, FL can provide a degree of protection against gradient inversion attacks on weight updates, since the gradient of a single step is concealed by the accumulation of gradients over multiple local iterations.

Federated Learning

Confidence-aware Personalized Federated Learning via Variational Expectation Maximization

1 code implementation CVPR 2023 Junyi Zhu, Xingchen Ma, Matthew B. Blaschko

A global model is introduced as a latent variable to augment the joint distribution of clients' parameters and capture the common trends of different clients; optimization is derived based on the principle of maximizing the marginal likelihood and conducted using variational expectation maximization.

Personalized Federated Learning Variational Inference

Dice Semimetric Losses: Optimizing the Dice Score with Soft Labels

1 code implementation28 Mar 2023 Zifu Wang, Teodora Popordanoska, Jeroen Bertels, Robin Lemmens, Matthew B. Blaschko

As a result, we obtain superior Dice scores and model calibration, which supports the wider adoption of DMLs in practice.

Knowledge Distillation

Jaccard Metric Losses: Optimizing the Jaccard Index with Soft Labels

2 code implementations NeurIPS 2023 Zifu Wang, Xuefei Ning, Matthew B. Blaschko

To address this, we introduce Jaccard Metric Losses (JMLs), which are identical to the soft Jaccard loss in standard settings with hard labels but are fully compatible with soft labels.

Knowledge Distillation Semantic Segmentation
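The compatibility with soft labels can be illustrated with an L1 (symmetric-difference) form of the soft Jaccard loss, which for hard 0/1 labels reduces to the familiar 1 - intersection/union. This is a sketch in the spirit of the paper's construction, not a transcription of its exact losses; `eps` is an assumed numerical-stability constant.

```python
import numpy as np

def jaccard_metric_loss(pred, target, eps=1e-8):
    # L1-based soft Jaccard loss: well-defined for soft labels, and equal
    # to 1 - |intersection| / |union| when the target is hard (0/1).
    x = np.asarray(pred, dtype=float).ravel()
    y = np.asarray(target, dtype=float).ravel()
    diff = np.abs(x - y).sum()       # |x - y|_1, the symmetric difference
    total = x.sum() + y.sum()        # |x|_1 + |y|_1
    return 1.0 - (total - diff) / (total + diff + eps)
```

With hard labels `pred=[1,1,0]`, `target=[1,0,0]` this gives 0.5, matching 1 - 1/2 from intersection 1 and union 2; when `pred` equals `target` it is (numerically) zero.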

A Consistent and Differentiable Lp Canonical Calibration Error Estimator

1 code implementation13 Oct 2022 Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko

As a remedy, we propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true $L_p$ calibration error.

MRF-UNets: Searching UNet with Markov Random Fields

1 code implementation13 Jul 2022 Zifu Wang, Matthew B. Blaschko

UNet [27] is widely used in semantic segmentation due to its simplicity and effectiveness.

Neural Architecture Search Semantic Segmentation

Designing MacPherson Suspension Architectures using Bayesian Optimization

no code implementations17 Jun 2022 Sinnu Susan Thomas, Jacopo Palandri, Mohsen Lakehal-ayat, Punarjay Chakravarty, Friedrich Wolf-Monheim, Matthew B. Blaschko

We show that the proposed approach is general, scalable, and efficient, and that the novel convergence criteria can be implemented straightforwardly based on existing concepts and subroutines in popular Bayesian optimization software packages.

Bayesian Optimization

Combinatorial optimization for low bit-width neural networks

no code implementations4 Jun 2022 Han Zhou, Aida Ashrafi, Matthew B. Blaschko

In this paper, we explore methods of direct combinatorial optimization in the problem of risk minimization with binary weights, which can be made equivalent to a non-monotone submodular maximization under certain conditions.

Binary Classification Combinatorial Optimization

Improving Differentially Private SGD via Randomly Sparsified Gradients

1 code implementation1 Dec 2021 Junyi Zhu, Matthew B. Blaschko

Differentially private stochastic gradient descent (DP-SGD) has been widely adopted in deep learning to provide rigorously defined privacy, which requires gradient clipping to bound the maximum norm of individual gradients and additive isotropic Gaussian noise.

Federated Learning
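As context for the snippet above, the DP-SGD recipe it refers to (per-sample gradient clipping plus isotropic Gaussian noise) looks roughly like this; the paper's actual contribution, random sparsification of gradients, is not shown, and the function shape and argument names are illustrative.

```python
import numpy as np

def dp_sgd_aggregate(per_sample_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-sample gradient to L2 norm <= clip_norm, average them,
    # then add isotropic Gaussian noise calibrated to the clipping bound.
    clipped = []
    for g in per_sample_grads:
        g = np.asarray(g, dtype=float)
        scale = min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        clipped.append(g * scale)
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_sample_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```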

Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation

no code implementations29 Sep 2021 Teodora Popordanoska, Raphael Sayer, Matthew B. Blaschko

The computational complexity of our estimator is $O(n^2)$, matching that of the kernel maximum mean discrepancy, used in a previously considered trainable calibration estimator.

Autonomous Driving Density Estimation +1
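The $O(n^2)$ cost comes from pairwise kernel evaluations between samples. A minimal illustration with a Gaussian kernel in the binary setting; the paper's estimator uses different kernels (e.g. density estimates suited to probabilities), and the bandwidth here is an assumption.

```python
import numpy as np

def kernel_calibration_error(confidences, labels, bandwidth=0.1):
    # Estimate E|E[y | p] - p| for a binary classifier: smooth the labels
    # over confidences with Nadaraya-Watson kernel regression (O(n^2)
    # pairwise kernel evaluations), then average the absolute gap.
    p = np.asarray(confidences, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.exp(-0.5 * ((p[:, None] - p[None, :]) / bandwidth) ** 2)
    cond = (w * y[None, :]).sum(axis=1) / w.sum(axis=1)  # E[y | p_i]
    return float(np.abs(cond - p).mean())
```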

Greedy Bayesian Posterior Approximation with Deep Ensembles

2 code implementations29 May 2021 Aleksei Tiulpin, Matthew B. Blaschko

This paper proposes a novel and principled method to tackle this limitation, minimizing an $f$-divergence between the true posterior and a kernel density estimator (KDE) in a function space.

Diversity Out-of-Distribution Detection

Meta-Cal: Well-controlled Post-hoc Calibration by Ranking

1 code implementation10 May 2021 Xingchen Ma, Matthew B. Blaschko

In this paper, we introduce two constraints worth considering when designing a calibration map for post-hoc calibration.

Multi-class Classification

Optimization for Medical Image Segmentation: Theory and Practice when evaluating with Dice Score or Jaccard Index

no code implementations26 Oct 2020 Tom Eelbode, Jeroen Bertels, Maxim Berman, Dirk Vandermeulen, Frederik Maes, Raf Bisschops, Matthew B. Blaschko

We verify these results empirically in an extensive validation on six medical segmentation tasks and can confirm that metric-sensitive losses are superior to cross-entropy based loss functions in case of evaluation with Dice Score or Jaccard Index.

Image Segmentation Medical Image Segmentation +2

Additive Tree-Structured Covariance Function for Conditional Parameter Spaces in Bayesian Optimization

no code implementations21 Jun 2020 Xingchen Ma, Matthew B. Blaschko

Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate.

Bayesian Optimization Model Compression +1

Pathological myopia classification with simultaneous lesion segmentation using deep learning

no code implementations4 Jun 2020 Ruben Hemelings, Bart Elen, Matthew B. Blaschko, Julie Jacob, Ingeborg Stalmans, Patrick De Boever

This investigation reports on the results of convolutional neural networks developed for the recently introduced PathologicAL Myopia (PALM) dataset, which consists of 1200 fundus images.

Classification Deep Learning +4

AOWS: Adaptive and optimal network width search with latency constraints

1 code implementation CVPR 2020 Maxim Berman, Leonid Pishchulin, Ning Xu, Matthew B. Blaschko, Gerard Medioni

We introduce a novel efficient one-shot NAS approach to optimally search for channel numbers, given latency constraints on a specific hardware.

Neural Architecture Search

Discriminative training of conditional random fields with probably submodular constraints

no code implementations25 Nov 2019 Maxim Berman, Matthew B. Blaschko

In order to constrain such a model to remain tractable, previous approaches have enforced the weight vector to be positive for pairwise potentials in which the labels differ, and set pairwise potentials to zero in the case that the label remains the same.

3D Reconstruction Denoising

Adaptive Compression-based Lifelong Learning

no code implementations23 Jul 2019 Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia

The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore.

Bayesian Optimization Semantic Segmentation

Yes, IoU loss is submodular - as a function of the mispredictions

no code implementations6 Sep 2018 Maxim Berman, Matthew B. Blaschko, Amal Rannen Triki, Jiaqian Yu

This note is a response to [7] in which it is claimed that [13, Proposition 11] is false.

Efficient semantic image segmentation with superpixel pooling

1 code implementation7 Jun 2018 Mathijs Schuurmans, Maxim Berman, Matthew B. Blaschko

In this work, we evaluate the use of superpixel pooling layers in deep network architectures for semantic segmentation.

Image Segmentation Semantic Segmentation

The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks

2 code implementations CVPR 2018 Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko

The Jaccard index, also referred to as the intersection-over-union score, is commonly employed in the evaluation of image segmentation results given its perceptual qualities, scale invariance - which lends appropriate relevance to small objects, and appropriate counting of false negatives, in comparison to per-pixel losses.

Image Segmentation Segmentation +1
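The surrogate rests on the Lovász extension of the Jaccard loss, which evaluates the loss on prediction errors sorted in decreasing order. A NumPy sketch of the binary (Lovász hinge) case, following the construction the abstract describes; the multi-class Lovász-Softmax variant and any deep-learning-framework integration are left out.

```python
import numpy as np

def lovasz_grad(gt_sorted):
    # Gradient of the Lovász extension of the Jaccard loss w.r.t. errors
    # sorted in decreasing order (gt_sorted: labels in that same order).
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # Binary Lovász hinge: a convex surrogate for 1 - IoU on one image.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs        # hinge-style margin errors
    order = np.argsort(-errors)
    return float(np.dot(np.maximum(errors[order], 0.0),
                        lovasz_grad(labels[order])))
```

Confidently correct predictions give zero loss, since every margin error is negative before the ReLU.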

Function Norms and Regularization in Deep Networks

no code implementations18 Oct 2017 Amal Rannen Triki, Maxim Berman, Matthew B. Blaschko

Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems.

Image Segmentation Learning Theory +2

An Ensemble Deep Learning Based Approach for Red Lesion Detection in Fundus Images

1 code implementation9 Jun 2017 José Ignacio Orlando, Elena Prokofyeva, Mariana del Fresno, Matthew B. Blaschko

In this paper we propose a novel method for red lesion detection based on combining both deep learned and domain knowledge.

Lesion Detection

Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks

1 code implementation31 Mar 2017 Amal Rannen Triki, Matthew B. Blaschko, Yoon Mo Jung, Seungri Song, Hyun Ju Han, Seung Il Kim, Chulmin Joo

The use of a function norm introduces a direct control over the complexity of the function with the aim of diminishing the risk of overfitting.

Specificity

An Efficient Decomposition Framework for Discriminative Segmentation with Supermodular Losses

no code implementations13 Feb 2017 Jiaqian Yu, Matthew B. Blaschko

These loss functions do not necessarily have the same structure as the one used by the segmentation inference algorithm, and in general, we may have to resort to generic submodular minimization algorithms for loss augmented inference.

Computational Efficiency Image Segmentation +2

Slack and Margin Rescaling as Convex Extensions of Supermodular Functions

1 code implementation19 Jun 2016 Matthew B. Blaschko

We demonstrate in this paper that we may use these concepts to define polynomial time convex extensions of arbitrary supermodular functions, providing an analysis framework for the tightness of these surrogates.

Image Segmentation Object Localization +2

Stochastic Function Norm Regularization of Deep Networks

1 code implementation30 May 2016 Amal Rannen Triki, Matthew B. Blaschko

In this paper, we study the feasibility of directly using the $L_2$ function norm for regularization.

Small Data Image Classification

Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity

no code implementations NeurIPS 2016 Eugene Belilovsky, Gaël Varoquaux, Matthew B. Blaschko

We characterize the uncertainty of differences with confidence intervals obtained using a parametric distribution on parameters of a sparse estimator.

Functional Connectivity

A Test of Relative Similarity For Model Selection in Generative Models

1 code implementation14 Nov 2015 Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, Arthur Gretton

Probabilistic generative models provide a powerful framework for representing data that avoids the expense of manual annotation typically needed by discriminative approaches.

Model Selection

Understanding Objects in Detail with Fine-Grained Attributes

no code implementations CVPR 2014 Andrea Vedaldi, Siddharth Mahendran, Stavros Tsogkas, Subhransu Maji, Ross Girshick, Juho Kannala, Esa Rahtu, Iasonas Kokkinos, Matthew B. Blaschko, David Weiss, Ben Taskar, Karen Simonyan, Naomi Saphra, Sammy Mohamed

We show that the collected data can be used to study the relation between part detection and attribute prediction by diagnosing the performance of classifiers that pool information from different parts of an object.

Attribute Object +2
