no code implementations • 9 Mar 2024 • Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici
Large language models (LLMs) are susceptible to a variety of risks, from non-faithful output to biased and toxic generations.
no code implementations • 12 Jan 2024 • Subina Khanal, Seshu Tirupathi, Giulio Zizzo, Ambrish Rawat, Torben Bach Pedersen
To address these limitations, in this paper, we pre-train the time series Transformer model on a source domain with sufficient data and fine-tune it on the target domain with limited data.
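A minimal sketch of the pre-train/fine-tune recipe, with a toy linear model and synthetic data standing in for the paper's time series Transformer (all names and values below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def train(X, y, w=None, lr=0.1, epochs=200):
    # Gradient-descent training of a linear model; passing `w` warm-starts
    # from existing weights, which is what fine-tuning amounts to here.
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X_src = rng.normal(size=(500, 2))                      # source domain: plenty of data
y_src = X_src @ true_w + 0.1 * rng.normal(size=500)
X_tgt = rng.normal(size=(10, 2))                       # target domain: limited data
y_tgt = X_tgt @ true_w + 0.1 * rng.normal(size=10)

w_pre = train(X_src, y_src)                            # "pre-train" on the source domain
w_fin = train(X_tgt, y_tgt, w=w_pre.copy(), epochs=50) # fine-tune on the target domain
```

Training from scratch on the ten target points alone would be far noisier; the warm start from the data-rich source domain is what makes the small target set sufficient.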
no code implementations • 12 Dec 2023 • Swanand Ravindra Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo
We evaluate the performance-fairness trade-off for SISA, and empirically demonstrate that SISA can indeed reduce fairness in LLMs.
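A toy sketch of the SISA idea (Sharded, Isolated, Sliced, Aggregated training): data is split into disjoint shards, one model is trained per shard, and an unlearning request retrains only the shard containing the deleted point. The nearest-class-mean "learner" and the data below are illustrative stand-ins, not the paper's setup:

```python
import numpy as np

def fit_shard(X, y):
    # Per-shard "model": class-conditional means, a stand-in for a real learner.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(models, x):
    # SISA aggregates the isolated shard models, here by majority vote.
    votes = [min(m, key=lambda c: np.linalg.norm(x - m[c])) for m in models]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
idx = rng.permutation(60)
shards = np.array_split(idx, 3)            # S = 3 disjoint shards
models = [fit_shard(X[s], y[s]) for s in shards]

forget = shards[1][0]                      # a point whose deletion is requested
kept = shards[1][shards[1] != forget]
models[1] = fit_shard(X[kept], y[kept])    # retrain ONLY the affected shard
```

The cost of unlearning is one shard's retraining rather than a full retrain; the trade-off studied in the paper is what this sharding does to fairness.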
no code implementations • 30 Oct 2023 • Swanand Ravindra Kadhe, Heiko Ludwig, Nathalie Baracaldo, Alan King, Yi Zhou, Keith Houck, Ambrish Rawat, Mark Purcell, Naoise Holohan, Mikio Takeuchi, Ryo Kawahara, Nir Drucker, Hayim Shaul, Eyal Kushnir, Omri Soceanu
The effective detection of evidence of financial anomalies requires collaboration among multiple entities who own a diverse set of data, such as a payment network system (PNS) and its partner banks.
1 code implementation • 15 Jun 2023 • Myles Foley, Ambrish Rawat, Taesung Lee, Yufang Hou, Gabriele Picco, Giulio Zizzo
The wide applicability and adaptability of generative large language models (LLMs) has enabled their rapid adoption.
no code implementations • 16 Dec 2022 • Ambrish Rawat, Giulio Zizzo, Swanand Kadhe, Jonathan P. Epperlein, Stefano Braghin
In this work, we devise robust and efficient learning protocols for orchestrating a Federated Learning (FL) process for the Federated Tumor Segmentation Challenge (FeTS 2022).
1 code implementation • 12 Jul 2022 • Anisa Halimi, Swanand Kadhe, Ambrish Rawat, Nathalie Baracaldo
With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data.
no code implementations • 7 Jul 2022 • Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner
Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model.
no code implementations • 25 Feb 2022 • Nathalie Baracaldo, Ali Anwar, Mark Purcell, Ambrish Rawat, Mathieu Sinn, Bashar Altakrouri, Dian Balta, Mahdi Sellami, Peter Kuhn, Ulrich Schopp, Matthias Buchinger
Federated Learning (FL) is a novel paradigm for the shared training of models based on decentralized and private data.
no code implementations • 20 Dec 2021 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin
We model an attacker who poisons the model to insert a weakness into the adversarial training: the model displays apparent adversarial robustness, while the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples.
no code implementations • 6 Sep 2021 • Ambrish Rawat, Mathieu Sinn, Beat Buesser
Adversarial training is a computationally expensive task and hence searching for neural network architectures with robustness as the criterion can be challenging.
1 code implementation • 3 Aug 2021 • Ambrish Rawat, Killian Levacher, Mathieu Sinn
Deep Generative Models (DGMs) are a popular class of deep learning models which find widespread use because of their ability to synthesize data from complex, high-dimensional manifolds.
no code implementations • ICML Workshop AutoML 2021 • Akihiro Kishimoto, Djallel Bouneffouf, Radu Marinescu, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Paulito Pedregosa Palmes, Adi Botea
Optimizing a machine learning (ML) pipeline has long been an important topic in AI and ML.
no code implementations • 3 Dec 2020 • Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Beat Buesser
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
1 code implementation • 22 Jul 2020 • Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar, Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu Sinn, Mark Purcell, Ambrish Rawat, Tran Minh, Naoise Holohan, Supriyo Chakraborty, Shalisha Witherspoon, Dean Steuer, Laura Wynter, Hifaz Hassan, Sean Laguna, Mikhail Yurochkin, Mayank Agarwal, Ebube Chuba, Annie Abay
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place, for reasons of privacy, confidentiality or data volume.
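A minimal FedAvg-style sketch of the idea, with linear models and synthetic per-party data standing in for a real deployment (illustrative only; this is not the framework's actual API):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    # A few steps of local gradient descent on one party's private data;
    # the raw data never leaves the party, only the updated weights do.
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(2)
true_w = np.array([1.0, 3.0])
parties = []
for _ in range(4):                              # four parties with private datasets
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w_global = np.zeros(2)
for _ in range(10):                             # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in parties]
    w_global = np.mean(local_ws, axis=0)        # FedAvg: average the local models
```

Only model weights cross party boundaries; the aggregator never sees any `(X, y)` pair, which is the privacy property FL is built around.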
no code implementations • 22 Oct 2019 • Charu Aggarwal, Djallel Bouneffouf, Horst Samulowitz, Beat Buesser, Thanh Hoang, Udayan Khurana, Sijia Liu, Tejaswini Pedapati, Parikshit Ram, Ambrish Rawat, Martin Wistuba, Alexander Gray
Data science is labor-intensive and human experts are scarce but heavily involved in every aspect of it.
no code implementations • 4 May 2019 • Martin Wistuba, Ambrish Rawat, Tejaswini Pedapati
The growing interest in both the automation of machine learning and deep learning has inevitably led to the development of a wide variety of automated methods for neural architecture search.
5 code implementations • 3 Jul 2018 • Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards
Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary.
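One attack family such a toolbox typically covers is the Fast Gradient Sign Method (FGSM); below is a hedged, self-contained sketch against a toy logistic model, not the toolbox's actual API:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    # Fast Gradient Sign Method on a logistic-regression "model":
    # step the INPUT in the sign of the loss gradient w.r.t. the input.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0])                   # toy trained weights
b = 0.0
x = np.array([0.4, 0.1])                    # clean input, classified positive
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.3)   # small perturbation flips the label
```

Adversarial training, one of the hardening approaches mentioned above, would fold such `x_adv` points back into the training data.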
no code implementations • 7 Jun 2018 • Martin Wistuba, Ambrish Rawat
We introduce a new Bayesian multi-class support vector machine by formulating a pseudo-likelihood for a multi-class hinge loss in the form of a location-scale mixture of Gaussians.
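For intuition, in the binary case the analogous construction (the data-augmentation identity of Polson and Scott for Bayesian SVMs) can be sketched as follows; the paper's multi-class pseudo-likelihood generalises this form:

```latex
% Binary-case sketch: the hinge pseudo-likelihood admits a
% location-scale mixture-of-Gaussians representation in lambda
L(y \mid x, \beta)
  = e^{-2\max(1 - y\,x^{\top}\beta,\; 0)}
  = \int_{0}^{\infty} \frac{1}{\sqrt{2\pi\lambda}}
      \exp\!\left( -\frac{(1 + \lambda - y\,x^{\top}\beta)^{2}}{2\lambda} \right)
      \, d\lambda
```

Conditioning on the latent scale $\lambda$ makes the model conditionally Gaussian, which is what enables tractable Bayesian inference for the hinge loss.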
no code implementations • 22 Nov 2017 • Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae
Deep Learning models are vulnerable to adversarial examples, i.e., images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence.
no code implementations • 28 Aug 2017 • Vincent P. A. Lonij, Ambrish Rawat, Maria-Irina Nicolae
First, a knowledge-graph representation is learned to embed a large set of entities into a semantic space.
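As an illustration of embedding entities into a semantic space, here is a TransE-style scoring sketch with hand-picked toy vectors (the relation name and embeddings are hypothetical; the paper's actual embedding method may differ):

```python
import numpy as np

# TransE-style scoring: a triple (head, relation, tail) is plausible
# when head + relation lands near tail in the embedding space.
# These 2-d embeddings are toy values, not learned ones.
emb = {
    "cat":    np.array([1.0, 0.0]),
    "animal": np.array([1.0, 1.0]),
    "car":    np.array([0.0, 5.0]),
}
rel_is_a = np.array([0.0, 1.0])  # hypothetical "is_a" relation vector

def score(h, r, t):
    # Lower distance = more plausible triple.
    return np.linalg.norm(emb[h] + r - emb[t])
```

Under this scoring, ("cat", is_a, "animal") scores better (lower) than ("cat", is_a, "car"), which is the kind of semantic structure the learned space is meant to capture.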
no code implementations • 21 Jul 2017 • Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
Following the recent adoption of deep neural networks (DNNs) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat.
no code implementations • 25 May 2017 • Mathieu Sinn, Ambrish Rawat
Generative Adversarial Networks (GANs) have become a widely popular framework for generative modelling of high-dimensional datasets.