2 code implementations • 12 Feb 2024 • Luben M. C. Cabezas, Mateus P. Otto, Rafael Izbicki, Rafael B. Stern
Our approach is based on pursuing the coarsest partition of the feature space that approximates conditional coverage.
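The coverage-partition idea can be sketched on synthetic data: compute split-conformal intervals separately within each cell of a (here, hypothetical two-cell) partition of the feature space. The partition rule, the trivial zero predictor, and all constants below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: the noise scale depends on the sign of x.
n = 4000
x = rng.uniform(-1, 1, n)
y = np.where(x < 0, rng.normal(0, 0.1, n), rng.normal(0, 1.0, n))

# Calibration / test split; the "model" is the trivial zero predictor,
# so conformity scores are just |y|.
x_cal, y_cal = x[: n // 2], y[: n // 2]
x_te, y_te = x[n // 2 :], y[n // 2 :]

def conformal_quantile(scores, alpha=0.1):
    # Finite-sample-valid split-conformal quantile.
    k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, len(scores)) - 1]

# One global interval vs. per-cell intervals on the partition {x<0, x>=0}.
q_global = conformal_quantile(np.abs(y_cal))
q_left = conformal_quantile(np.abs(y_cal[x_cal < 0]))
q_right = conformal_quantile(np.abs(y_cal[x_cal >= 0]))

cover_left = np.mean(np.abs(y_te[x_te < 0]) <= q_left)
cover_right = np.mean(np.abs(y_te[x_te >= 0]) <= q_right)
gcover_left = np.mean(np.abs(y_te[x_te < 0]) <= q_global)
print(round(cover_left, 2), round(cover_right, 2), round(gcover_left, 2))
```

Per-cell calibration recovers roughly 90% coverage in both the low-noise and high-noise cells, while the single global quantile badly over-covers the low-noise cell.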
no code implementations • 8 Feb 2024 • Luca Masserano, Alex Shen, Michele Doro, Tommaso Dorigo, Rafael Izbicki, Ann B. Lee
An open scientific challenge is how to classify events with reliable measures of uncertainty, when we have a mechanistic model of the data-generating process but the distribution over both labels and latent nuisance parameters is different between train and target data.
1 code implementation • 9 Jan 2024 • Victor Dheur, Tanguy Bosser, Rafael Izbicki, Souhaib Ben Taieb
A primary objective is to generate a distribution-free joint prediction region for the arrival time and mark, with a finite-sample marginal coverage guarantee.
no code implementations • 12 May 2023 • Milene Regina dos Santos, Rafael Izbicki
In summary, this method offers a simple, fast, and effective solution for training regression models with noisy labels derived from diverse expert opinions.
no code implementations • 20 Apr 2023 • Gabriel O. Assunção, Rafael Izbicki, Marcos O. Prates
Imbalanced datasets present a significant challenge for machine learning models, often leading to biased predictions.
no code implementations • 23 Jan 2023 • Gustavo Grivol, Rafael Izbicki, Alex A. Okuno, Rafael B. Stern
This paper introduces FlexCodeTS, a new conditional density estimator for time series.
1 code implementation • 11 Nov 2022 • Mateus P. Otto, Rafael Izbicki
Kernel methods provide a flexible and theoretically grounded approach to nonlinear and nonparametric learning.
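For context, a minimal kernel method, Gaussian-kernel ridge regression in plain NumPy, is sketched below; it is a generic textbook example with arbitrary bandwidth and ridge penalty, not the specific method studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression: y = sin(x) + noise.
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 100)

def gaussian_kernel(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lam * I) alpha = y.
lam = 1e-2
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predictions are kernel-weighted combinations of the training targets.
X_test = np.array([[0.0], [1.5]])
y_hat = gaussian_kernel(X_test, X) @ alpha
print(np.round(y_hat, 2))
```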
1 code implementation • 12 Sep 2022 • Gilson Y. Shimizu, Rafael Izbicki, Andre C. P. L. F. de Carvalho
A fundamental question on the use of ML models concerns the explanation of their predictions for increasing transparency in decision-making.
1 code implementation • 31 May 2022 • Luca Masserano, Tommaso Dorigo, Rafael Izbicki, Mikael Kuusela, Ann B. Lee
We also illustrate how our approach can correct overly confident posterior regions computed with normalizing flows.
1 code implementation • 29 May 2022 • Biprateep Dey, David Zhao, Jeffrey A. Newman, Brett H. Andrews, Rafael Izbicki, Ann B. Lee
The same regression function morphs the misspecified PD to a re-calibrated PD for all $\mathbf{x}$.
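The recalibration idea can be sketched in a simplified, covariate-free form: compute PIT values of a misspecified predictive CDF on held-out data, then compose the model CDF with the empirical CDF of those PITs. (The paper learns this map as a function of $\mathbf{x}$; the logistic toy model below is an assumption for illustration.)

```python
import numpy as np

rng = np.random.default_rng(8)

# Truth: Y ~ Logistic(0, 1); the misspecified model predicts Logistic(0, 2),
# i.e. its predictive distribution is too wide for every input.
def model_cdf(y):
    return 1.0 / (1.0 + np.exp(-y / 2.0))

y_cal = rng.logistic(0, 1, 5000)
pit = model_cdf(y_cal)    # PIT values; uniform iff the model is calibrated

# Recalibration map: the empirical CDF of the PIT values, composed with
# the model CDF, morphs the misspecified distribution toward the truth.
pit_sorted = np.sort(pit)

def recalibrated_cdf(y):
    return np.searchsorted(pit_sorted, model_cdf(y)) / len(pit_sorted)

# A calibrated CDF yields uniform PITs on fresh data: compare sorted PITs
# to the uniform quantiles before and after recalibration.
y_test = rng.logistic(0, 1, 5000)
uniform = np.linspace(0, 1, 5000)
err_before = np.abs(np.sort(model_cdf(y_test)) - uniform).max()
err_after = np.abs(np.sort(recalibrated_cdf(y_test)) - uniform).max()
print(round(err_before, 2), round(err_after, 2))
```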
2 code implementations • 17 May 2022 • Felipe Maia Polo, Rafael Izbicki, Evanildo Gomes Lacerda Jr, Juan Pablo Ibieta-Jimenez, Renato Vicente
It is versatile, suitable for regression and classification tasks, and accommodates diverse data forms: tabular, text, or image.
no code implementations • 18 Feb 2022 • Gilson Shimizu, Rafael Izbicki, Denis Valle
This model allows the identification of mixed-membership clusters in discrete data and provides inference on the relationship between covariates and the abundance of these clusters.
1 code implementation • 4 Feb 2022 • Trey McNeely, Galen Vincent, Kimberly M. Wood, Rafael Izbicki, Ann B. Lee
We prove that type I error control is guaranteed as long as the distribution of the label series is well-estimated, which is made easier by the extensive historical data for binary TC event labels.
1 code implementation • 30 Nov 2021 • Luben M. C. Cabezas, Rafael Izbicki, Rafael B. Stern
We propose methods for the analysis of hierarchical clustering that fully use the multi-resolution structure provided by a dendrogram.
no code implementations • 24 Sep 2021 • Trey McNeely, Galen Vincent, Rafael Izbicki, Kimberly M. Wood, Ann B. Lee
Tropical cyclone (TC) intensity forecasts are issued by human forecasters who evaluate spatio-temporal observations (e.g., satellite imagery) and model output (e.g., numerical weather prediction, statistical models) to produce forecasts every 6 hours.
2 code implementations • 8 Jul 2021 • Niccolò Dalmasso, Luca Masserano, David Zhao, Rafael Izbicki, Ann B. Lee
In this work, we propose a unified and modular inference framework that bridges classical statistics and modern machine learning, providing (i) a practical approach to the Neyman construction of confidence sets with frequentist finite-sample coverage for any value of the unknown parameters; and (ii) interpretable diagnostics that estimate the empirical coverage across the entire parameter space.
1 code implementation • 12 Sep 2020 • Tiago Botari, Frederik Hvilshøj, Rafael Izbicki, Andre C. P. L. F. de Carvalho
Additionally, we introduce modifications to standard training algorithms of local interpretable models fostering more robust explanations, even allowing the production of counterfactual examples.
no code implementations • 24 Jul 2020 • Rafael Izbicki, Gilson Shimizu, Rafael B. Stern
We also present simulations that show how to tune CD-split.
2 code implementations • ICML 2020 • Niccolò Dalmasso, Rafael Izbicki, Ann B. Lee
In this paper, we present $\texttt{ACORE}$ (Approximate Computation via Odds Ratio Estimation), a frequentist approach to LFI that first formulates the classical likelihood ratio test (LRT) as a parametrized classification problem, and then uses the equivalence of tests and confidence sets to build confidence regions for parameters of interest.
1 code implementation • 12 Oct 2019 • Rafael Izbicki, Gilson T. Shimizu, Rafael B. Stern
In order to obtain this property, these methods require strong conditions on the dependence between the target variable and the features.
1 code implementation • 11 Oct 2019 • Victor Coscrato, Marco Henrique de Almeida Inácio, Tiago Botari, Rafael Izbicki
We develop NLS (neural local smoother), a method that is complex enough to give good predictions, yet yields solutions that are easy to interpret without a separate interpreter.
no code implementations • 16 Sep 2019 • Marco Henrique de Almeida Inácio, Rafael Izbicki, Bálint Gyires-Tóth
Given two distinct datasets, an important question is whether they have arisen from the same data-generating function or, alternatively, how their data-generating functions diverge from one another.
5 code implementations • 30 Aug 2019 • Niccolò Dalmasso, Taylor Pospisil, Ann B. Lee, Rafael Izbicki, Peter E. Freeman, Alex I. Malz
We provide sample code in $\texttt{Python}$ and $\texttt{R}$ as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.
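The CDE loss that such tools rely on can be estimated without knowing the true density, since it decomposes into an integrable squared term and an evaluation term. A sketch under an assumed Gaussian toy model, where the candidate estimators vary only their scale $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data generated from X | Z ~ N(Z, 1).
n = 2000
z = rng.normal(0, 1, n)
x = z + rng.normal(0, 1, n)

grid = np.linspace(-8, 8, 400)
dx = grid[1] - grid[0]

def normal_pdf(u, mu, sigma):
    return np.exp(-0.5 * ((u - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def cde_loss(sigma):
    # Empirical CDE loss (up to a constant not depending on the estimator):
    #   (1/n) sum_i  int f(x | z_i)^2 dx  -  2 (1/n) sum_i f(x_i | z_i)
    sq = (normal_pdf(grid[None, :], z[:, None], sigma) ** 2).sum(axis=1).mean() * dx
    fit = normal_pdf(x, z, sigma).mean()
    return sq - 2.0 * fit

# Candidate family N(z, sigma); the loss is smallest at the true scale.
losses = {s: cde_loss(s) for s in (0.5, 1.0, 2.0)}
best = min(losses, key=losses.get)
print(best, {s: round(v, 3) for s, v in losses.items()})
```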
1 code implementation • 31 Jul 2019 • Marco Henrique de Almeida Inácio, Rafael Izbicki, Rafael Bassi Stern
Conditional independence testing is a key problem required by many machine learning and statistics tools.
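One generic recipe for this problem, shown here as an illustrative baseline rather than the paper's method, is a local permutation test: permuting X within bins of Z approximately preserves the law of (X, Z) while destroying any X-Y dependence beyond what Z explains.

```python
import numpy as np

rng = np.random.default_rng(3)

# X and Y both depend on Z but are conditionally independent given Z.
n = 1500
z = rng.normal(0, 1, n)
x = z + rng.normal(0, 1, n)
y = z + rng.normal(0, 1, n)
y_dep = x + rng.normal(0, 1, n)   # depends on X even after conditioning on Z

def local_permutation_pvalue(x, y, z, n_bins=10, n_perm=200):
    # Null distribution: shuffle X within quantile bins of Z and recompute
    # the (unconditional) correlation statistic.
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(z, edges)
    stat = abs(np.corrcoef(x, y)[0, 1])
    exceed = 0
    for _ in range(n_perm):
        xp = x.copy()
        for b in range(n_bins):
            idx = np.where(bins == b)[0]
            xp[idx] = rng.permutation(xp[idx])
        exceed += abs(np.corrcoef(xp, y)[0, 1]) >= stat
    return (1 + exceed) / (1 + n_perm)

p_indep = local_permutation_pvalue(x, y, z)
p_dep = local_permutation_pvalue(x, y_dep, z)
print(round(p_indep, 3), round(p_dep, 3))
```

The conditionally dependent pair is rejected at any conventional level, while the conditionally independent pair is not singled out.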
no code implementations • 31 Jul 2019 • Tiago Botari, Rafael Izbicki, Andre C. P. L. F. de Carvalho
To do so, they induce interpretable models on a neighborhood of the instance to be explained.
1 code implementation • 24 Jun 2019 • Victor Coscrato, Marco Henrique de Almeida Inácio, Rafael Izbicki
We show that while our approach keeps the interpretative features of Breiman's method at a local level, it leads to better predictive power, especially in datasets with large sample sizes.
1 code implementation • 27 May 2019 • Niccolò Dalmasso, Ann B. Lee, Rafael Izbicki, Taylor Pospisil, Ilmun Kim, Chieh-An Lin
At the heart of our approach is a two-sample test that quantifies the quality of the fit at fixed parameter values, and a global test that assesses goodness-of-fit across simulation parameters.
1 code implementation • 11 Jul 2018 • Afonso Fernandes Vaz, Rafael Izbicki, Rafael Bassi Stern
The quantification problem consists of determining the prevalence of a given label in a target population.
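A classical baseline for quantification is adjusted classify-and-count, $\hat{p} = (\hat{p}_{CC} - \mathrm{FPR}) / (\mathrm{TPR} - \mathrm{FPR})$. The sketch below, with assumed TPR/FPR values rather than the paper's estimator, shows how it corrects the naive count under prior shift.

```python
import numpy as np

rng = np.random.default_rng(4)

# A fixed classifier whose positive-rate differs by class; these TPR/FPR
# values are assumed purely for illustration.
TPR, FPR = 0.8, 0.3

def classifier_outputs(n_pos, n_neg):
    pos = rng.uniform(0, 1, n_pos) < TPR
    neg = rng.uniform(0, 1, n_neg) < FPR
    return np.concatenate([pos, neg])

# Estimate TPR and FPR on labeled source data.
tpr_hat = classifier_outputs(5000, 0).mean()
fpr_hat = classifier_outputs(0, 5000).mean()

# Unlabeled target population with true prevalence 0.25 (prior shift).
true_prev = 0.25
preds = classifier_outputs(int(20000 * true_prev), int(20000 * (1 - true_prev)))
cc = preds.mean()                           # naive classify-and-count
acc = (cc - fpr_hat) / (tpr_hat - fpr_hat)  # adjusted classify-and-count
print(round(cc, 3), round(acc, 3))
```

The naive count is biased toward the classifier's false-positive rate; the adjustment recovers the true prevalence.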
1 code implementation • 14 May 2018 • Rafael Izbicki, Ann B. Lee, Taylor Pospisil
Approximate Bayesian Computation (ABC) is typically used when the likelihood is either unavailable or intractable but where data can be simulated under different parameter settings using a forward model.
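The basic rejection-ABC loop the entry refers to can be sketched in a few lines, using a Gaussian toy model with the sample mean as summary statistic; simulating the summary's sampling distribution directly, rather than raw data, is a shortcut assumed here for speed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed summary statistic: the mean of 50 draws from N(theta_true, 1).
theta_true = 2.0
obs = rng.normal(theta_true, 1, 50).mean()

# Rejection ABC: draw theta from the prior, simulate data under the forward
# model, and keep theta whenever the simulated summary lands near obs.
# (The sample mean of 50 draws is distributed N(theta, 1/50), so the
# summary is simulated directly; that shortcut is exact here.)
n_sim, eps = 200_000, 0.05
theta = rng.uniform(-5, 5, n_sim)           # flat prior
sims = rng.normal(theta, 1 / np.sqrt(50))
accepted = theta[np.abs(sims - obs) < eps]
print(len(accepted), round(accepted.mean(), 2))
```

The accepted draws approximate the posterior; with a flat prior their mean concentrates near the observed summary.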
1 code implementation • 26 Apr 2017 • Rafael Izbicki, Ann B. Lee
There is a growing demand for nonparametric conditional density estimators (CDEs) in fields such as astronomy and economics.
no code implementations • 1 Feb 2016 • Ann B. Lee, Rafael Izbicki
We expand the unknown regression on the data in terms of the eigenfunctions of a kernel-based operator, and we take advantage of orthogonality of the basis with respect to the underlying data distribution, P, to speed up computations and tuning of parameters.
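The expansion can be imitated numerically: eigenvectors of the kernel Gram matrix approximate the operator's eigenfunctions under $P$, and the basis's orthogonality reduces fitting to a projection. Bandwidth and basis size below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Regression data on [0, 1].
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

# Gram matrix of a Gaussian kernel; its leading eigenvectors approximate
# the eigenfunctions of the kernel operator under the data distribution P.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
evals, evecs = np.linalg.eigh(K)
order = np.argsort(evals)[::-1]
psi = evecs[:, order[:10]] * np.sqrt(n)   # top-10 basis, ~orthonormal in L2(P)

# Orthogonality turns least squares into a simple projection.
beta = psi.T @ y / n
y_hat = psi @ beta
mse = np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2)
print(round(mse, 4))
```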
no code implementations • 13 May 2014 • Rafael Izbicki, Rafael Bassi Stern
Next, we discuss how this loss can be used to tune a penalization which introduces sparsity in the parameters of a traditional class of models.
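As a generic illustration of sparsity-inducing penalization (standard lasso via coordinate descent, not the paper's specific loss), soft-thresholding sets small coefficients exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse linear model: only the first 2 of 10 coefficients are nonzero.
n, p = 500, 10
X = rng.normal(0, 1, (n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + rng.normal(0, 0.5, n)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    # Coordinate descent: each coordinate update solves a 1-D lasso exactly
    # via soft-thresholding, which sets small coefficients exactly to zero.
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, lam * n) / col_ss[j]
    return beta

beta_hat = lasso_cd(X, y, lam=0.1)
print(np.round(beta_hat, 2))
```

The penalty zeroes out the eight inactive coefficients while only mildly shrinking the two active ones.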