no code implementations • 6 Jun 2025 • Zhixiong Zhuang, Hui-Po Wang, Maria-Irina Nicolae, Mario Fritz
Model stealing poses a significant security risk in machine learning by enabling attackers to replicate a black-box model without access to its training data, thus jeopardizing intellectual property and exposing sensitive information.
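The generic extraction loop behind such attacks can be sketched as follows. This is a minimal PyTorch illustration of black-box model stealing, not the paper's specific method; `victim_query` is a hypothetical stand-in for the black-box oracle, and uniform random queries are only one possible query strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def victim_query(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical black-box oracle: returns the victim model's
    output probabilities; no weights or gradients are exposed."""
    ...

def steal(surrogate: nn.Module, query_budget: int = 50_000,
          input_shape=(1, 28, 28), batch_size: int = 64) -> nn.Module:
    """Train a surrogate to imitate the victim using only queries,
    i.e. without any access to the victim's training data."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(query_budget // batch_size):
        # Lacking the real training data, the attacker queries on
        # synthetic (here: uniformly random) inputs.
        x = torch.rand(batch_size, *input_shape)
        with torch.no_grad():
            y_victim = victim_query(x)          # soft labels from the oracle
        log_p = F.log_softmax(surrogate(x), dim=1)
        # Distillation-style loss: match the victim's output distribution.
        loss = F.kl_div(log_p, y_victim, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return surrogate
```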
no code implementations • 4 Feb 2025 • Yaling Shen, Zhixiong Zhuang, Kun Yuan, Maria-Irina Nicolae, Nassir Navab, Nicolas Padoy, Mario Fritz
Experiments on the IU X-Ray and MIMIC-CXR radiology datasets demonstrate that Adversarial Domain Alignment enables attackers to steal the medical MLLM without any access to medical data.
no code implementations • 11 May 2024 • Zhixiong Zhuang, Maria-Irina Nicolae, Mario Fritz
Deep reinforcement learning policies, which are integral to modern control systems, represent valuable intellectual property.
3 code implementations • 28 Sep 2023 • Maria-Irina Nicolae, Max Eisele, Andreas Zeller
In this paper, we conduct the most extensive evaluation of neural program smoothing (NPS) fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make the following contributions: (1) We find that the original performance claims for NPS fuzzers do not hold, a gap we relate to fundamental, implementation, and experimental limitations of prior works.
7 code implementations • 3 Jul 2018 • Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards
Defending Machine Learning models involves certifying and verifying model robustness, as well as hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag inputs that may have been modified by an adversary.
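As an illustration of the pre-processing and runtime-detection defenses listed above, here is a minimal feature-squeezing-style detector in plain PyTorch. This is a generic sketch of the technique, not the toolbox's own API; the `threshold` value is an arbitrary placeholder that would be calibrated on held-out data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squeeze_bit_depth(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Pre-processing defense: reduce color depth to wash out
    fine-grained adversarial perturbations (inputs in [0, 1])."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def detect_adversarial(model: nn.Module, x: torch.Tensor,
                       threshold: float = 0.5) -> torch.Tensor:
    """Runtime detection: flag inputs whose predictions change
    substantially after squeezing."""
    with torch.no_grad():
        p_raw = F.softmax(model(x), dim=1)
        p_squeezed = F.softmax(model(squeeze_bit_depth(x)), dim=1)
    # L1 distance between the two output distributions, per sample.
    score = (p_raw - p_squeezed).abs().sum(dim=1)
    return score > threshold   # True = likely adversarial
```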
no code implementations • 22 Nov 2017 • Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae
Deep Learning models are vulnerable to adversarial examples, i.e., images obtained via deliberate, imperceptible perturbations, such that the model misclassifies them with high confidence.
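The Fast Gradient Sign Method (FGSM) is one standard way such perturbations are crafted; the sketch below is illustrative and not necessarily the attack considered in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Craft an adversarial image: a small, deliberate perturbation
    that pushes the model toward misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded by eps per pixel so the change stays imperceptible.
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    return x_adv
```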
no code implementations • 28 Aug 2017 • Vincent P. A. Lonij, Ambrish Rawat, Maria-Irina Nicolae
First, a knowledge-graph representation is learned to embed a large set of entities into a semantic space.
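A standard way to learn such an embedding is a translational model like TransE, which places entities in a semantic space where head + relation ≈ tail for true triples. The sketch below is illustrative and not necessarily the paper's exact model.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Embed entities and relations so that ent(h) + rel(r) ≈ ent(t)
    holds for true (head, relation, tail) triples."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        # Lower score = more plausible triple.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

    def loss(self, pos, neg, margin: float = 1.0):
        # Margin ranking: true triples should score lower than
        # corrupted (negative) ones by at least the margin.
        return torch.relu(margin + self.score(*pos) - self.score(*neg)).mean()
```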
no code implementations • 21 Jul 2017 • Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
Following the recent adoption of deep neural networks (DNNs) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat.
no code implementations • 15 Oct 2016 • Maria-Irina Nicolae, Éric Gaussier, Amaury Habrard, Marc Sebban
In this paper, we propose a novel method for learning similarities based on Dynamic Time Warping (DTW), in order to improve time series classification.
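For reference, the underlying DTW alignment is computed with a standard dynamic program; a minimal NumPy sketch follows. The paper's contribution is learning similarities on top of such alignments, not this baseline computation.

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two 1-D time series,
    via the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment extends a match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```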
no code implementations • 19 Dec 2014 • Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, Massih-Reza Amini
The notion of metric plays a key role in machine learning problems such as classification, clustering or ranking.