no code implementations • 23 May 2024 • Chi Hong, Jiyue Huang, Robert Birke, Dick Epema, Stefanie Roos, Lydia Y. Chen
While diffusion models effectively generate remarkable synthetic images, a key limitation is their inefficient inference, which requires numerous sampling steps.
1 code implementation • 3 May 2024 • Miruna Beţianu, Abele Mălan, Marco Aldinucci, Robert Birke, Lydia Chen
In this paper, we design DALLMi, Domain Adaptation Large Language Model interpolator, a first-of-its-kind semi-supervised domain adaptation method for text data models based on LLMs, specifically BERT.
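To make the idea concrete, below is a minimal, hypothetical sketch of semi-supervised adaptation by interpolating labeled source and pseudo-labeled target sentence embeddings on top of a BERT encoder. The mixup-style step, the two-class head, and all names are illustrative assumptions, not the actual DALLMi implementation.

```python
# Illustrative sketch only: mixup-style interpolation between labeled source and
# pseudo-labeled target sentence embeddings over a BERT encoder.
# NOT the actual DALLMi method; the head, labels, and schedule are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(bert.config.hidden_size, 2)    # hypothetical 2-label task

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = bert(**batch).last_hidden_state              # (batch, tokens, hidden)
    return out.mean(dim=1)                             # mean-pooled sentence embedding

src_x = embed(["labelled source sentence"])
src_y = torch.tensor([[1.0, 0.0]])                     # known source label
tgt_x = embed(["unlabelled target sentence"])
tgt_y = torch.softmax(head(tgt_x), dim=-1).detach()    # pseudo-label for the target

lam = 0.7                                              # interpolation coefficient
mix_x = lam * src_x + (1 - lam) * tgt_x                # interpolate embeddings
mix_y = lam * src_y + (1 - lam) * tgt_y                # interpolate (pseudo-)labels
loss = torch.nn.functional.cross_entropy(head(mix_x), mix_y)
loss.backward()
```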
3 code implementations • 19 Oct 2023 • Zilong Zhao, Robert Birke, Lydia Chen
Results show that Tabula reduces training time per epoch by 46.2% on average compared to the current LLM-based state-of-the-art algorithm, while consistently achieving even higher synthetic data utility.
no code implementations • 12 Sep 2023 • Jeroen M. Galjaard, Robert Birke, Juan Perez, Lydia Y. Chen
We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 42% on the Omniglot and CifarFS datasets when meta-training is affected by label noise.
no code implementations • 27 Jun 2023 • Cristina Silvano, Daniele Ielmini, Fabrizio Ferrandi, Leandro Fiorin, Serena Curzel, Luca Benini, Francesco Conti, Angelo Garofalo, Cristian Zambelli, Enrico Calore, Sebastiano Fabio Schifano, Maurizio Palesi, Giuseppe Ascia, Davide Patti, Nicola Petra, Davide De Caro, Luciano Lavagno, Teodoro Urso, Valeria Cardellini, Gian Carlo Cardarilli, Robert Birke, Stefania Perri
Recent trends in deep learning (DL) have established hardware accelerators as the most viable solution for several classes of high-performance computing (HPC) applications, such as image classification, computer vision, and speech recognition.
1 code implementation • 8 Mar 2023 • Gianluca Mittone, Walter Riviera, Iacopo Colonnelli, Robert Birke, Marco Aldinucci
MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL.
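For flavour, here is a minimal sketch of one model-agnostic federated boosting round using scikit-learn weak learners; it omits all Intel OpenFL plumbing and is not the actual AdaBoost.F algorithm, only an illustrative stand-in under assumed data shapes.

```python
# Minimal sketch of one "model-agnostic" federated boosting round, assuming each
# client can train any scikit-learn-style weak learner locally. Omits all Intel
# OpenFL plumbing and is not the actual AdaBoost.F algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def client_update(X, y):
    clf = DecisionTreeClassifier(max_depth=1)           # any weak learner works here
    clf.fit(X, y)
    return clf

def server_round(clients, X_val, y_val, w):
    # Each client trains locally; the server keeps the learner with the lowest
    # weighted validation error and reweights samples AdaBoost-style.
    learners = [client_update(Xc, yc) for Xc, yc in clients]
    errors = [np.average(l.predict(X_val) != y_val, weights=w) for l in learners]
    best = learners[int(np.argmin(errors))]
    err = max(min(errors), 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)                # weight of the chosen learner
    w = w * np.exp(alpha * (best.predict(X_val) != y_val))
    return best, alpha, w / w.sum()

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
X_val, y_val = rng.normal(size=(30, 4)), rng.integers(0, 2, 30)
model, alpha, w = server_round(clients, X_val, y_val, np.ones(30) / 30)
```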
1 code implementation • 15 Feb 2023 • Gianluca Mittone, Nicolò Tonci, Robert Birke, Iacopo Colonnelli, Doriana Medić, Andrea Bartolini, Roberto Esposito, Emanuele Parisi, Francesco Beneventi, Mirko Polato, Massimo Torquati, Luca Benini, Marco Aldinucci
Federated Learning (FL) and Edge Inference are examples of distributed machine learning (DML).
no code implementations • 17 Nov 2022 • Yujin Zhu, Zilong Zhao, Robert Birke, Lydia Y. Chen
We show that changing the input column order worsens the statistical difference between real and synthetic data by up to 38.67% due to the encoding of tabular data and the network architectures.
no code implementations • 12 Oct 2022 • Zilong Zhao, Robert Birke, Lydia Y. Chen
Mainstream state-of-the-art tabular data synthesizers draw methodologies from Generative Adversarial Networks (GANs), which are composed of a generator and a discriminator.
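As a reminder of the underlying mechanics, the sketch below shows a single generator/discriminator training step in PyTorch on a toy, already-encoded tabular batch; real tabular GANs add conditional vectors and per-column encodings that are omitted here.

```python
# Minimal GAN training step on a toy, already-encoded tabular batch (PyTorch).
# Illustrative only: real tabular GANs condition on columns and use richer encodings.
import torch
import torch.nn as nn

noise_dim, row_dim = 16, 8
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, row_dim))
D = nn.Sequential(nn.Linear(row_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, row_dim)                 # stands in for encoded real rows

# Discriminator step: real rows vs. generated rows
fake = G(torch.randn(64, noise_dim)).detach()
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator
fake = G(torch.randn(64, noise_dim))
g_loss = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```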
2 code implementations • 1 Apr 2022 • Zilong Zhao, Aditya Kunar, Robert Birke, Lydia Y. Chen
We extensively evaluate CTAB-GAN+ on data similarity and analysis utility against state-of-the-art tabular GANs.
1 code implementation • 18 Aug 2021 • Zilong Zhao, Robert Birke, Aditya Kunar, Lydia Y. Chen
While training GANs to synthesize images on FL systems has only recently been demonstrated, it remains unknown whether GANs for tabular data can be learned from decentralized data sources.
no code implementations • 4 Aug 2021 • Cosmin Octavian Pene, Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen
Multi-label learning is an emerging extension of multi-class classification in which an image contains multiple labels.
no code implementations • 6 Jul 2021 • Aditya Kunar, Robert Birke, Zilong Zhao, Lydia Chen
Additionally, we empirically evaluate the theoretical privacy guarantees offered by DP against membership and attribute inference attacks.
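A simplified stand-in for such an evaluation is a nearest-neighbour membership-inference check on the released synthetic data: records whose closest synthetic neighbour is unusually near are guessed to be training members. The attack below is an illustrative assumption, not the paper's exact setup.

```python
# Illustrative membership-inference check on synthetic data: flag a record as a
# suspected training member if its nearest synthetic neighbour is unusually close.
# A simplified stand-in for the attacks evaluated in the paper, not their setup.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def membership_scores(candidates, synthetic):
    nn = NearestNeighbors(n_neighbors=1).fit(synthetic)
    dist, _ = nn.kneighbors(candidates)
    return -dist[:, 0]                        # smaller distance => higher membership score

rng = np.random.default_rng(0)
members = rng.normal(0, 1, (100, 5))          # records used to train the synthesizer
non_members = rng.normal(0, 1, (100, 5))      # held-out records
synthetic = members + rng.normal(0, 0.1, members.shape)   # toy "leaky" synthetic data

scores = membership_scores(np.vstack([members, non_members]), synthetic)
guess = scores > np.median(scores)            # top half guessed as members
labels = np.r_[np.ones(100, bool), np.zeros(100, bool)]
print("attack accuracy:", (guess == labels).mean())
```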
1 code implementation • 19 Mar 2021 • Zilong Zhao, Robert Birke, Rui Han, Bogdan Robu, Sara Bouchenak, Sonia Ben Mokhtar, Lydia Y. Chen
Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set.
1 code implementation • 16 Feb 2021 • Zilong Zhao, Aditya Kunar, Hiek Van der Scheer, Robert Birke, Lydia Y. Chen
In this paper, we develop CTAB-GAN, a novel conditional table GAN architecture that can effectively model diverse data types, including a mix of continuous and categorical variables.
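Before any GAN can consume such a table, its mixed column types must be turned into a numeric matrix. The sketch below shows only this basic idea (scaling plus one-hot encoding); CTAB-GAN itself uses richer per-column encodings such as mode-specific normalization and conditional vectors.

```python
# Sketch of the pre-processing a mixed-type table needs before GAN training:
# scale continuous columns, one-hot encode categorical ones. CTAB-GAN's actual
# per-column encoders are richer; this only illustrates the idea.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "age": [23, 45, 31, 52],                                  # continuous
    "income": [1800.0, 5200.0, 3100.0, 4100.0],               # continuous
    "job": ["nurse", "engineer", "teacher", "engineer"],      # categorical
})
cont = MinMaxScaler().fit_transform(df[["age", "income"]])    # scale to [0, 1]
cat = pd.get_dummies(df["job"]).to_numpy(dtype=float)         # one-hot encoding
rows = np.hstack([cont, cat])                                 # matrix fed to the GAN
print(rows.shape)
```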
no code implementations • 1 Jan 2021 • Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
In this paper, we propose to construct a golden symmetric loss (GSL) based on the estimated confusion matrix, so as to avoid overfitting to noisy labels and to learn effectively from hard classes.
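The standard way a confusion (noise-transition) matrix enters a loss is forward correction: push the model's clean-label probabilities through the matrix before matching them against the noisy labels. The PyTorch sketch below shows that generic idea; GSL's exact symmetric formulation differs.

```python
# Sketch of loss correction with an estimated label-confusion matrix T (PyTorch).
# This is the generic "forward correction" idea, not GSL's exact formulation.
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_labels, T):
    probs = F.softmax(logits, dim=1)          # clean-label class probabilities
    noisy_probs = probs @ T                   # expected noisy-label distribution
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

T = torch.tensor([[0.9, 0.1],                 # estimated confusion matrix:
                  [0.2, 0.8]])                # row i = P(noisy label | true class i)
logits = torch.randn(4, 2, requires_grad=True)
noisy_labels = torch.tensor([0, 1, 1, 0])
loss = forward_corrected_ce(logits, noisy_labels, T)
loss.backward()
```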
no code implementations • 13 Nov 2020 • Taraneh Younesian, Chi Hong, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
Furthermore, relabeling only 10% of the data through the expert results in over 90% classification accuracy with SVM.
no code implementations • 13 Jul 2020 • Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen
Based on these insights, we design TrustNet, which first learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data.
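When a small trusted set carries both the clean label and the noisy label observed in the wild, the corruption pattern can be estimated by counting transitions, as in the sketch below; this illustrates the idea only and is not TrustNet's training procedure.

```python
# Sketch: estimating the label-corruption (confusion) matrix from a small trusted
# set where both the clean label and the noisy dataset label are known.
import numpy as np

def estimate_confusion(clean, noisy, num_classes):
    C = np.zeros((num_classes, num_classes))
    for c, n in zip(clean, noisy):
        C[c, n] += 1                                      # count clean -> noisy transitions
    C /= np.maximum(C.sum(axis=1, keepdims=True), 1)      # row-normalise to probabilities
    return C

clean = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])
noisy = np.array([0, 1, 1, 1, 2, 0, 0, 1, 2, 2])          # some labels got flipped
print(estimate_confusion(clean, noisy, 3))
```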
no code implementations • 10 Jul 2020 • Amirmasoud Ghiassi, Robert Birke, Rui Han, Lydia Y. Chen
Today's available datasets in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning, as there is a significant portion of tagged images, but often with noisy, i.e., erroneous, labels.
2 code implementations • IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020 • Shuyu Lin, Ronald Clark, Robert Birke, Sandro Schönborn, Niki Trigoni, Stephen Roberts
In this work, we propose a VAE-LSTM hybrid model as an unsupervised approach for anomaly detection in time series.
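The scoring idea can be illustrated with reconstruction error over sliding windows: windows the model reconstructs poorly are flagged as anomalous. The sketch below uses a plain LSTM autoencoder as a stand-in; the paper's model combines a VAE for per-window embeddings with an LSTM over their sequence.

```python
# Sketch of reconstruction-based anomaly scoring for time-series windows (PyTorch).
# A plain LSTM autoencoder stands in for the paper's VAE-LSTM hybrid.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.enc = nn.LSTM(dim, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):                                 # x: (batch, time, dim)
        _, (h, _) = self.enc(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)    # repeat the code over time
        dec, _ = self.dec(z)
        return self.out(dec)

model = LSTMAutoencoder()
windows = torch.randn(8, 48, 1)                           # toy batch of 48-step windows
recon = model(windows)
score = ((recon - windows) ** 2).mean(dim=(1, 2))         # per-window anomaly score
anomalous = score > score.mean() + 2 * score.std()        # simple threshold rule
```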
no code implementations • 28 Jan 2020 • Taraneh Younesian, Zilong Zhao, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
A central feature of QActor is to dynamically adjust the query limit according to the learning loss for each data batch.
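A toy illustration of loss-driven query budgeting follows: when the current batch looks hard (high loss), more labels are requested from the oracle. The concrete schedule below is an assumption for illustration, not QActor's actual rule.

```python
# Toy loss-driven query budgeting for active learning: harder batches (higher loss)
# earn a larger labeling budget. The schedule is illustrative, not QActor's rule.
import numpy as np

def query_budget(batch_loss, base=10, max_budget=100, ref_loss=0.5):
    scale = batch_loss / ref_loss                 # >1 when the batch is harder than usual
    return int(np.clip(base * scale, 0, max_budget))

for batch_loss in [0.2, 0.5, 1.3, 2.4]:
    k = query_budget(batch_loss)
    print(f"loss={batch_loss:.1f} -> query {k} labels from the oracle")
```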
no code implementations • 11 Nov 2019 • Zilong Zhao, Robert Birke, Rui Han, Bogdan Robu, Sara Bouchenak, Sonia Ben Mokhtar, Lydia Y. Chen
Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT, cloud and face recognition, under the common assumption that the data source is clean, i.e., features and labels are correctly set.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts
In this paper, we present a new generative model for learning latent embeddings.
no code implementations • 16 Feb 2019 • Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts
Variational Auto-encoders (VAEs) have been very successful as methods for forming compressed latent representations of complex, often high-dimensional, data.
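For reference, the minimal VAE objective is a reconstruction term plus a KL regulariser on the latent code; the PyTorch sketch below is a generic textbook version, not the specific model studied in these papers.

```python
# Minimal VAE objective (PyTorch): reconstruction loss plus KL regulariser on z.
# Generic textbook sketch, not the specific model proposed in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 16)                   # outputs mean and log-variance of z
dec = nn.Linear(16, 784)

x = torch.rand(32, 784)                        # toy batch in [0, 1]
mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation trick
recon = torch.sigmoid(dec(z))

rec_loss = F.binary_cross_entropy(recon, x, reduction="sum") / x.size(0)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
elbo_loss = rec_loss + kl                      # minimise the negative ELBO
```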
no code implementations • 19 Jul 2018 • Chi Hong, Amirmasoud Ghiassi, Yichi Zhou, Robert Birke, Lydia Y. Chen
Our evaluation results on various online scenarios show that BiLA can effectively infer the true labels, with an error-rate reduction of at least 10 and 1.5 percentage points for synthetic and real-world datasets, respectively.