Search Results for author: Arthur da Cunha

Found 4 papers, 0 papers with code

Revisiting Agnostic Boosting

no code implementations · 12 Mar 2025 · Arthur da Cunha, Mikael Møller Høgsgaard, Andrea Paudice, Yuxin Sun

Boosting is a key method in statistical learning, allowing weak learners to be converted into strong ones.
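The weak-to-strong conversion mentioned in the abstract can be illustrated with a minimal AdaBoost sketch (a standard boosting algorithm, not code from the paper): the weak learners are one-dimensional threshold "stumps", each barely better than chance, and boosting reweights the training examples so that each round focuses on previous mistakes.

```python
import math

def make_stump(threshold, sign):
    """Weak learner: predict +1 on one side of a threshold, -1 on the other."""
    return lambda x: 1 if sign * (x - threshold) > 0 else -1

def adaboost(xs, ys, rounds):
    """Boost threshold stumps on 1-D data (labels in {-1, +1}) into a strong learner."""
    n = len(xs)
    # Candidate thresholds: midpoints between sorted points, plus the two ends.
    sx = sorted(set(xs))
    thresholds = [sx[0] - 0.5] + [(a + b) / 2 for a, b in zip(sx, sx[1:])] + [sx[-1] + 0.5]
    candidates = [make_stump(t, s) for t in thresholds for s in (1, -1)]

    w = [1.0 / n] * n          # example weights, renormalized every round
    ensemble = []              # (alpha, weak learner) pairs

    def weighted_error(h):
        return sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)

    for _ in range(rounds):
        h = min(candidates, key=weighted_error)
        err = weighted_error(h)
        if err <= 0 or err >= 0.5:   # no usable weak learner left
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Up-weight mistakes so the next weak learner focuses on them.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]

    # Strong learner: weighted majority vote of the chosen weak learners.
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

# No single stump classifies this label pattern, but three boosting rounds do:
predict = adaboost([1, 2, 3, 4, 5, 6], [1, 1, -1, -1, 1, 1], rounds=3)
# [predict(x) for x in [1, 2, 3, 4, 5, 6]] → [1, 1, -1, -1, 1, 1]
```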

Optimal Parallelization of Boosting

no code implementations · 29 Aug 2024 · Arthur da Cunha, Mikael Møller Høgsgaard, Kasper Green Larsen

Recent works on the parallel complexity of Boosting have established strong lower bounds on the tradeoff between the number of training rounds $p$ and the total parallel work per round $t$.

Boosting, Voting Classifiers and Randomized Sample Compression Schemes

no code implementations · 5 Feb 2024 · Arthur da Cunha, Kasper Green Larsen, Martin Ritzert

At the center of this paradigm lies the concept of building the strong learner as a voting classifier, which outputs a weighted majority vote of the weak learners.
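The voting classifier described in the abstract can be sketched in a few lines: each weak learner casts a ±1 vote, the votes are weighted, and the strong learner outputs the sign of the total (the weak learners and weights below are made-up toy values, not from the paper).

```python
def weighted_majority_vote(weak_learners, alphas, x):
    """Strong learner: sign of the alpha-weighted sum of weak-learner votes."""
    total = sum(a * h(x) for a, h in zip(alphas, weak_learners))
    return 1 if total > 0 else -1

# Toy example: three weak learners with illustrative weights.
weak_learners = [lambda x: 1, lambda x: -1, lambda x: 1 if x > 0 else -1]
alphas = [0.2, 0.5, 0.4]
weighted_majority_vote(weak_learners, alphas, 1)   # → 1  (0.2 - 0.5 + 0.4 > 0)
weighted_majority_vote(weak_learners, alphas, -1)  # → -1 (0.2 - 0.5 - 0.4 < 0)
```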

Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks

no code implementations · ICLR 2022 · Arthur da Cunha, Emanuele Natale, Laurent Viennot

The lottery ticket hypothesis states that a randomly initialized neural network contains a small subnetwork that, when trained in isolation, can compete with the performance of the original network.
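The "small subnetwork" in the hypothesis is typically formalized as a binary mask over the random initial weights; a rough sketch of that notion (illustrative only, with an arbitrary magnitude-based mask, not the paper's construction):

```python
import random

random.seed(0)
# Random initial weight matrix, as in a freshly initialized layer.
weights = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]

# A "ticket" is a binary mask selecting a subnetwork of those weights;
# here, arbitrarily, we keep only the larger-magnitude entries.
mask = [[1 if abs(wij) > 0.5 else 0 for wij in row] for row in weights]

# The subnetwork: masked-out weights are zeroed, kept weights are unchanged.
subnetwork = [[wij * mij for wij, mij in zip(row, mrow)]
              for row, mrow in zip(weights, mask)]
```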
