Search Results for author: Henry W. J. Reeve

Found 7 papers, 0 papers with code

Asymptotic Optimality for Decentralised Bandits

no code implementations · 20 Sep 2021 · Conor Newton, Ayalvadi Ganesh, Henry W. J. Reeve

In fact, our regret guarantee matches the asymptotically optimal rate achievable in the full communication setting.
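The full-communication benchmark that this regret guarantee is compared against can be illustrated with a plain UCB1 learner on Bernoulli arms. This is a generic sketch of the benchmark setting, not the paper's decentralised algorithm; the function name and parameters are illustrative.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given means; return cumulative (pseudo-)regret."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k       # pulls per arm
    sums = [0.0] * k       # total reward per arm
    regret = 0.0
    best = max(means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # play each arm once to initialise
        else:
            # pick the arm maximising empirical mean plus confidence width
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret
```

Regret grows logarithmically in the horizon, which is the asymptotically optimal rate the decentralised guarantee is said to match.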

Optimal subgroup selection

no code implementations · 2 Sep 2021 · Henry W. J. Reeve, Timothy I. Cannings, Richard J. Samworth

We formulate the problem as one of constrained optimisation, where we seek a low-complexity, data-dependent selection set on which, with a guaranteed probability, the regression function is uniformly at least as large as the threshold; subject to this constraint, we would like the region to contain as much mass under the marginal feature distribution as possible.
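The constrained-optimisation formulation can be sketched on a finite partition of feature space: keep the cells where the estimated regression function clears the threshold, and report the mass captured. This is a toy stand-in, not the paper's method; the safety margin below is a crude proxy for the paper's probabilistic guarantee, and all names are illustrative.

```python
def select_subgroup(cells, tau, margin):
    """cells: list of (estimated_regression_value, mass) pairs on a partition
    of feature space. Keep every cell whose estimate clears the threshold tau
    by a safety margin, and report the total marginal mass captured."""
    chosen = [i for i, (f_hat, _) in enumerate(cells) if f_hat >= tau + margin]
    mass = sum(cells[i][1] for i in chosen)
    return chosen, mass
```

The tension in the abstract is visible here: a larger margin makes the guarantee easier to meet but shrinks the captured mass.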

Tasks: Regression

Adaptive transfer learning

no code implementations · 8 Jun 2021 · Henry W. J. Reeve, Timothy I. Cannings, Richard J. Samworth

In transfer learning, we wish to make inference about a target population when we have access to data both from the distribution itself, and from a different but related source distribution.
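The basic trade-off of combining target and source data can be sketched with a convex combination of sample means. This is a minimal illustration of the setting only, not the paper's adaptive procedure; the function and its weight parameter are assumptions for illustration.

```python
def pooled_mean(target, source, weight):
    """Convex combination of the target-sample mean and the source-sample mean.

    weight in [0, 1] controls how much the possibly biased but plentiful
    source data contributes; weight = 0 ignores the source entirely.
    """
    t_mean = sum(target) / len(target)
    s_mean = sum(source) / len(source)
    return (1 - weight) * t_mean + weight * s_mean
```

Leaning on the source reduces variance but introduces bias when the source distribution differs from the target; choosing that trade-off from the data is what "adaptive" refers to.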

Tasks: Binary Classification, Transfer Learning

Statistical optimality conditions for compressive ensembles

no code implementations · 2 Jun 2021 · Henry W. J. Reeve, Ata Kaban

For each of these tasks, our strategy is to develop a tight upper bound on the compressibility function, and by doing so we discover distributional conditions of geometric nature under which the compressive algorithm attains minimax-optimal rates up to at most poly-logarithmic factors.
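A compressive ensemble in the generic sense can be sketched as follows: draw several random Gaussian projections, run a simple learner (here 1-nearest-neighbour) in each compressed space, and average the predictions. This is a generic sketch of the compressive-ensemble idea, not the paper's exact scheme or analysis; the function name and parameters are illustrative.

```python
import numpy as np

def compressive_ensemble_predict(X_train, y_train, x, dim_out, n_members, seed=0):
    """Average a 1-NN prediction over an ensemble of random Gaussian projections."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_members):
        # one ensemble member: a fresh random projection to dim_out dimensions
        R = rng.normal(size=(dim_out, X_train.shape[1])) / np.sqrt(dim_out)
        Z = X_train @ R.T          # compressed training set
        z = R @ x                  # compressed query point
        nearest = np.argmin(np.linalg.norm(Z - z, axis=1))
        preds.append(y_train[nearest])
    return float(np.mean(preds))
```

The distributional conditions in the abstract govern when working in the compressed space loses at most poly-logarithmic factors relative to the uncompressed learner.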

Tasks: Regression

Fast Rates for a kNN Classifier Robust to Unknown Asymmetric Label Noise

no code implementations · 11 Jun 2019 · Henry W. J. Reeve, Ata Kaban

We consider classification in the presence of class-dependent asymmetric label noise with unknown noise probabilities.
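The base learner in this setting is the kNN classifier; the sketch below is the plain majority-vote version in one dimension, without the paper's robustness corrections for unknown noise probabilities. The function name and data layout are assumptions for illustration.

```python
from collections import Counter

def knn_predict(train, x, k):
    """Plain k-nearest-neighbour majority vote in one dimension.

    train: list of (feature, label) pairs; the labels may have been
    flipped by class-dependent asymmetric noise.
    """
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

Under asymmetric label noise the majority vote alone is biased towards the less-corrupted class, which is what motivates estimating and correcting for the unknown noise probabilities.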

Classification with unknown class-conditional label noise on non-compact feature spaces

no code implementations · 14 Feb 2019 · Henry W. J. Reeve, Ata Kaban

In order to obtain finite sample rates, previous approaches to classification with unknown class-conditional label noise have required that the regression function is close to its extrema on sets of large measure.

Tasks: Classification, General Classification (+1)

Modular Autoencoders for Ensemble Feature Extraction

no code implementations · 23 Nov 2015 · Henry W. J. Reeve, Gavin Brown

We introduce the concept of a Modular Autoencoder (MAE), capable of learning a set of diverse but complementary representations from unlabelled data, which can later be used for supervised tasks.
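The modular structure can be sketched as an ensemble of independently trained linear encoder/decoder pairs. This is a toy sketch only: diversity here comes solely from random initialisation, whereas the MAE of the paper learns diverse-but-complementary representations through its objective. All class names and hyperparameters are illustrative.

```python
import numpy as np

class LinearModule:
    """One module: a linear encoder/decoder pair trained by gradient descent."""
    def __init__(self, dim_in, dim_hidden, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(dim_hidden, dim_in))  # encoder
        self.V = rng.normal(scale=0.1, size=(dim_in, dim_hidden))  # decoder

    def step(self, X, lr=0.01):
        """One gradient step on mean squared reconstruction error; returns the loss."""
        Z = X @ self.W.T            # encode
        R = Z @ self.V.T            # decode
        E = R - X                   # reconstruction error
        gV = E.T @ Z / len(X)       # gradient w.r.t. decoder
        gW = (E @ self.V).T @ X / len(X)  # gradient w.r.t. encoder
        self.W -= lr * gW
        self.V -= lr * gV
        return float(np.mean(E ** 2))

class ModularAutoencoderSketch:
    """Ensemble of modules, each producing its own low-dimensional representation."""
    def __init__(self, dim_in, dim_hidden, n_modules):
        self.modules = [LinearModule(dim_in, dim_hidden, seed=s)
                        for s in range(n_modules)]

    def train(self, X, epochs=200):
        history = []
        for _ in range(epochs):
            history.append(sum(m.step(X) for m in self.modules)
                           / len(self.modules))
        return history
```

Each module's hidden layer `Z` is one representation; downstream supervised learners can consume the modules' codes jointly, which is the ensemble-feature-extraction use case in the title.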
