no code implementations • 21 Apr 2022 • Sebastian Scher, Andreas Trügler
We show that the widely used concept of adversarial robustness, and closely related metrics based on counterfactuals, are not necessarily valid measures of a ML model's robustness to perturbations that occur "naturally", outside specific adversarial attack scenarios.
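The distinction the abstract draws — worst-case (adversarial) perturbations versus "natural" random ones — can be illustrated with a minimal sketch. This is not the paper's method; it is a toy example, assuming a fixed linear classifier on synthetic Gaussian data, comparing accuracy under a worst-case shift of size eps against a random shift of the same magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with labels in {-1, +1}
n = 500
X = np.vstack([rng.normal(+1.5, 1.0, (n, 2)),
               rng.normal(-1.5, 1.0, (n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

# Fixed linear classifier along the separating direction (hypothetical choice)
w = np.array([1.0, 1.0]) / np.sqrt(2.0)

def accuracy(X, y, w):
    return float(np.mean(np.sign(X @ w) == y))

eps = 1.0

# Adversarial perturbation: push every point against its own label,
# i.e. the worst-case direction for this classifier
X_adv = X - eps * y[:, None] * w[None, :]

# "Natural" perturbation: a random direction of the same magnitude
noise = rng.normal(size=X.shape)
noise *= eps / np.linalg.norm(noise, axis=1, keepdims=True)
X_nat = X + noise

print("clean:      ", accuracy(X, y, w))
print("adversarial:", accuracy(X_adv, y, w))
print("natural:    ", accuracy(X_nat, y, w))
```

Accuracy under the worst-case shift drops well below accuracy under an equally large random shift, which is why the two notions of robustness need not agree — the point the paper makes in a far more general setting.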
1 code implementation • 13 Feb 2020 • Sebastian Scher, Gabriele Messori
However, the skill of the neural network forecasts is systematically lower than that of state-of-the-art numerical weather prediction models.
3 code implementations • 2 Feb 2020 • Stephan Rasp, Peter D. Dueben, Sebastian Scher, Jonathan A. Weyn, Soukayna Mouatadid, Nils Thuerey
Data-driven approaches, most prominently deep learning, have become powerful tools for prediction in many domains.