Search Results for author: Antonia Marcu

Found 6 papers, 2 papers with code

On Pitfalls of Measuring Occlusion Robustness through Data Distortion

no code implementations • 24 Nov 2022 • Antonia Marcu

In recent years, the crucial role of data has largely been overshadowed by the field's focus on architectures and training procedures.

Generalisation and the Risk–Entropy Curve

no code implementations • 15 Feb 2022 • Dominic Belcher, Antonia Marcu, Adam Prügel-Bennett

In this paper we show that the expected generalisation performance of a learning machine is determined by the distribution of risks (or equivalently its logarithm, a quantity we term the risk entropy) and by the fluctuations in a quantity we call the training ratio.
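For orientation only: the risk entropy named in this abstract is the logarithm of the distribution of risks, written $\rho(r)$ in the Rethinking Generalisation abstract below; the symbol $S(r)$ in this sketch is assumed shorthand, not notation taken from the paper.

```latex
% Notation sketch; S(r) is assumed shorthand, \rho(r) is the distribution of risks.
S(r) = \log \rho(r) \qquad \text{(the risk entropy)}
```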

On Data-centric Myths

no code implementations • 22 Nov 2021 • Antonia Marcu, Adam Prügel-Bennett

The community lacks theory-informed guidelines for building good data sets.

On the Effects of Artificial Data Modification

1 code implementation • NeurIPS 2021 • Antonia Marcu, Adam Prügel-Bennett

Data distortion is commonly applied in vision models during both training (e.g. methods like MixUp and CutMix) and evaluation (e.g. shape-texture bias and robustness).

FMix: Enhancing Mixed Sample Data Augmentation

5 code implementations • 27 Feb 2020 • Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare

Finally, we show that a consequence of the difference between interpolating MSDA such as MixUp and masking MSDA such as FMix is that the two can be combined to improve performance even further.

Data Augmentation • Image Classification
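The FMix abstract above contrasts interpolating MSDA (MixUp) with masking MSDA (FMix) and notes that the two can be combined. Below is a minimal NumPy sketch of that combination under two assumptions of mine: the mask is a plain rectangle (CutMix-style) rather than FMix's Fourier-space-sampled mask, and the two variants are simply chosen with equal probability per call. Function names are hypothetical and this is not the released FMix code.

```python
# Hypothetical sketch: an interpolating MSDA (MixUp-style) and a masking MSDA
# combined by picking one of the two at random per call. The rectangular mask
# stands in for FMix's Fourier-sampled masks; not the authors' implementation.
import numpy as np

def mixup(x1, x2, lam):
    """Interpolating MSDA: element-wise convex combination of two images."""
    return lam * x1 + (1 - lam) * x2

def rect_mask_mix(x1, x2, lam, rng):
    """Masking MSDA: paste a rectangle of x2 onto x1 covering ~(1 - lam) of the area."""
    h, w = x1.shape[-2:]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    top = rng.integers(0, h - cut_h + 1)
    left = rng.integers(0, w - cut_w + 1)
    out = x1.copy()
    out[..., top:top + cut_h, left:left + cut_w] = x2[..., top:top + cut_h, left:left + cut_w]
    return out

def combined_msda(x1, x2, alpha=1.0, rng=None):
    """Draw lam ~ Beta(alpha, alpha), then apply either the interpolating or the
    masking variant with equal probability. Targets would be mixed with the same
    lam (interpolation) or with the unmasked area fraction (masking)."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    if rng.random() < 0.5:
        return mixup(x1, x2, lam), lam
    return rect_mask_mix(x1, x2, lam, rng), lam
```

In the masking case the effective mixing ratio is the area left unmasked, which is why the rectangle is sized by sqrt(1 - lam) per side so the pasted region covers roughly a (1 - lam) fraction of the image.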

Rethinking Generalisation

no code implementations • 11 Nov 2019 • Antonia Marcu, Adam Prügel-Bennett

In this paper we present a new approach to computing generalisation performance that assumes the distribution of risks, $\rho(r)$, for a learning scenario is known.
