Search Results for author: Werner Zellinger

Found 17 papers, 9 papers with code

SymbolicAI: A framework for logic-based approaches combining generative models and solvers

2 code implementations • 1 Feb 2024 • Marius-Constantin Dinu, Claudiu Leoveanu-Condrei, Markus Holzleitner, Werner Zellinger, Sepp Hochreiter

We conclude by introducing a quality measure and its empirical score for evaluating these computational graphs, and propose a benchmark that compares various state-of-the-art LLMs across a set of complex workflows.

Few-Shot Learning, Probabilistic Programming

Adaptive learning of density ratios in RKHS

no code implementations • 30 Jul 2023 • Werner Zellinger, Stefan Kindermann, Sergei V. Pereverzyev

Estimating the ratio of two probability densities from finitely many observations of the densities is a central problem in machine learning and statistics with applications in two-sample testing, divergence estimation, generative modeling, covariate shift adaptation, conditional density estimation, and novelty detection.

Density Estimation, Density Ratio Estimation, +2
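The density ratio estimation problem described above can be sketched as a regularized least-squares fit in an RKHS. The following is a minimal illustrative example in the spirit of uLSIF (omitting uLSIF's non-negativity correction), not the authors' adaptive method; all function names and parameter values here are assumptions for illustration:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_density_ratio(X_num, X_den, sigma=1.0, lam=1e-2):
    """Fit r(x) ~ p_num(x) / p_den(x) by regularized least squares in an
    RKHS, using the numerator samples as kernel centers."""
    C = X_num
    K_den = gaussian_kernel(X_den, C, sigma)
    H = K_den.T @ K_den / len(X_den)                    # E_den[k(x) k(x)^T]
    h = gaussian_kernel(X_num, C, sigma).mean(axis=0)   # E_num[k(x)]
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return lambda X: gaussian_kernel(X, C, sigma) @ theta

rng = np.random.default_rng(0)
r = fit_density_ratio(rng.normal(0.0, 1.0, (200, 1)),   # numerator samples
                      rng.normal(0.5, 1.0, (200, 1)))   # denominator samples
weights = r(rng.normal(0.5, 1.0, (50, 1)))  # importance weights, e.g. for covariate shift
```

The regularization parameter `lam` and the kernel width `sigma` are exactly the quantities whose data-driven choice motivates adaptive approaches.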

General regularization in covariate shift adaptation

no code implementations • 21 Jul 2023 • Duc Hoan Nguyen, Sergei V. Pereverzyev, Werner Zellinger

Sample reweighting is one of the most widely used methods for correcting the error of least squares learning algorithms in reproducing kernel Hilbert spaces (RKHS) caused by future data distributions that differ from the training data distribution.
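The reweighting idea can be illustrated with importance-weighted kernel ridge regression: each training loss term is scaled by an estimated density ratio. This is a generic sketch of the setting, not the paper's regularization scheme; the weights and all names are hypothetical:

```python
import numpy as np

def weighted_kernel_ridge(X, y, w, lam=1e-2, sigma=1.0):
    """Kernel ridge regression with per-sample importance weights w,
    correcting least squares learning under covariate shift."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    W = np.diag(w)
    # Weighted regularized normal equations: (W K + lam I) alpha = W y
    alpha = np.linalg.solve(W @ K + lam * np.eye(len(X)), w * y)
    def predict(X_new):
        d2n = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2n / (2 * sigma ** 2)) @ alpha
    return predict

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0])
w = np.exp(-X[:, 0])          # stand-in for estimated ratios p_test / p_train
f = weighted_kernel_ridge(X, y, w)
preds = f(np.linspace(-1, 1, 5)[:, None])
```

In practice the weights `w` come from a density ratio estimator, and choosing `lam` well is the general regularization question the paper addresses.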

Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

1 code implementation • 2 May 2023 • Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger

Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees.

Unsupervised Domain Adaptation

Domain Generalization by Functional Regression

1 code implementation • 9 Feb 2023 • Markus Holzleitner, Sergei V. Pereverzyev, Werner Zellinger

The problem of domain generalization is to learn, given data from different source distributions, a model that can be expected to generalize well on new target distributions which are only seen through unlabeled samples.

Domain Generalization, regression

Few-Shot Learning by Dimensionality Reduction in Gradient Space

1 code implementation • 7 Jun 2022 • Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner

We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace.

Dimensionality Reduction, Few-Shot Learning

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning, Data Poisoning

The balancing principle for parameter choice in distance-regularized domain adaptation

1 code implementation • NeurIPS 2021 • Werner Zellinger, Natalia Shepeleva, Marius-Constantin Dinu, Hamid Eghbal-zadeh, Hoan Nguyen, Bernhard Nessler, Sergei Pereverzyev, Bernhard A. Moser

Our approach starts with the observation that the widely used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems.

Unsupervised Domain Adaptation
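The class of objectives the abstract refers to — source error plus a weighted distance between source and target feature distributions — can be sketched as follows, here with a CMD-like moment penalty; the feature map, predictor, and all numbers are illustrative assumptions, and the paper's contribution is the principled choice of `lam`, not this objective itself:

```python
import numpy as np

def cmd(Xs, Xt, K=3):
    """CMD-like penalty: distance between the first K central moments
    of the source and target feature distributions."""
    d = np.linalg.norm(Xs.mean(0) - Xt.mean(0))
    for k in range(2, K + 1):
        d += np.linalg.norm(((Xs - Xs.mean(0)) ** k).mean(0)
                            - ((Xt - Xt.mean(0)) ** k).mean(0))
    return d

def da_objective(W, v, Xs, ys, Xt, lam):
    """Source error penalized by a feature-distribution distance; lam is
    the regularization parameter a balancing principle has to choose."""
    Fs, Ft = Xs @ W, Xt @ W                    # shared linear feature map
    source_err = ((Fs @ v - ys) ** 2).mean()   # source-domain squared loss
    return source_err + lam * cmd(Fs, Ft)

rng = np.random.default_rng(4)
Xs, Xt = rng.normal(0.0, 1.0, (100, 3)), rng.normal(0.7, 1.0, (100, 3))
ys = Xs @ np.ones(3)
W, v = rng.normal(size=(3, 2)), rng.normal(size=2)
vals = [da_objective(W, v, Xs, ys, Xt, lam) for lam in (0.0, 0.1, 1.0)]
```

Since the distance penalty is non-negative, the objective grows monotonically in `lam`, which is why the trade-off between source fit and distribution alignment needs a parameter choice rule.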

On Data Augmentation and Adversarial Risk: An Empirical Analysis

no code implementations • 6 Jul 2020 • Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer

Data augmentation techniques have become standard practice in deep learning, as they have been shown to greatly improve the generalisation abilities of models.

Adversarial Attack, Data Augmentation

ReLU Code Space: A Basis for Rating Network Quality Besides Accuracy

1 code implementation • 20 May 2020 • Natalia Shepeleva, Werner Zellinger, Michal Lewandowski, Bernhard Moser

We propose a new metric space of ReLU activation codes equipped with a truncated Hamming distance. It establishes an isometry between its elements and polyhedral bodies in the input space, which have recently been shown to be strongly related to safety, robustness, and confidence.
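A ReLU activation code as described here can be computed directly: each bit records whether a unit fires, so equal codes mean two inputs lie in the same polyhedral linear region. The sketch below is one plausible reading (it caps the Hamming distance at a threshold; the paper's exact truncation may differ), with a toy random network:

```python
import numpy as np

def relu_code(x, layers):
    """Binary activation code of x through a stack of ReLU layers: each
    bit records whether a unit is active, identifying the linear region
    (polyhedral body) of the input space that contains x."""
    code, h = [], x
    for W, b in layers:
        z = W @ h + b
        code.append(z > 0)
        h = np.maximum(z, 0)
    return np.concatenate(code)

def truncated_hamming(c1, c2, t):
    """Hamming distance between two codes, capped at t."""
    return min(int((c1 != c2).sum()), t)

rng = np.random.default_rng(3)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),   # toy 4 -> 8 -> 6 net
          (rng.normal(size=(6, 8)), rng.normal(size=6))]
x1, x2 = rng.normal(size=4), rng.normal(size=4)
d = truncated_hamming(relu_code(x1, layers), relu_code(x2, layers), t=5)
```

Comparing codes this way rates a network by the geometry of its input-space partition rather than by accuracy alone.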

Moment-Based Domain Adaptation: Learning Bounds and Algorithms

no code implementations • 22 Apr 2020 • Werner Zellinger

This thesis contributes to the mathematical foundation of domain adaptation as an emerging field in machine learning.

BIG-bench Machine Learning, Domain Adaptation

On generalization in moment-based domain adaptation

no code implementations • 19 Feb 2020 • Werner Zellinger, Bernhard A. Moser, Susanne Saminger-Platz

Domain adaptation algorithms are designed to minimize the misclassification risk of a discriminative model for a target domain with little training data by adapting a model from a source domain with a large amount of training data.

Domain Adaptation, Generalization Bounds

Mixture Density Generative Adversarial Networks

1 code implementation • CVPR 2019 • Hamid Eghbal-zadeh, Werner Zellinger, Gerhard Widmer

Generative Adversarial Networks have a surprising ability to generate sharp and realistic images, though they are known to suffer from the so-called mode collapse problem.
