Search Results for author: Paolo Romano

Found 6 papers, 3 papers with code

Adversarial training for tabular data with attack propagation

no code implementations · 28 Jul 2023 · Tiago Leon Melo, João Bravo, Marco O. P. Sampaio, Paolo Romano, Hugo Ferreira, João Tiago Ascensão, Pedro Bizarro

Adversarial attacks are a major concern in security-centered applications, where malicious actors continuously try to mislead Machine Learning (ML) models into wrongly classifying fraudulent activity as legitimate, whereas system maintainers try to stop them.

Feature Engineering · Fraud Detection

Hyper-parameter Tuning for Adversarially Robust Models

1 code implementation · 5 Apr 2023 · Pedro Mendes, Paolo Romano, David Garlan

This work focuses on the problem of hyper-parameter tuning (HPT) for robust (i.e., adversarially trained) models, shedding light on the new challenges and opportunities arising during the HPT process for robust models.

Adversarial Robustness

HyperJump: Accelerating HyperBand via Risk Modelling

1 code implementation · 5 Aug 2021 · Pedro Mendes, Maria Casimiro, Paolo Romano, David Garlan

In the literature on hyper-parameter tuning, a number of recent solutions rely on low-fidelity observations (e.g., training with sub-sampled datasets) in order to efficiently identify promising configurations to be then tested via high-fidelity observations (e.g., using the full dataset).
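The multi-fidelity idea described above — cheap low-fidelity scores to prune the search space, costlier high-fidelity scores to confirm — can be sketched as a successive-halving loop. This is an illustration only, not HyperJump's actual risk-modelling algorithm; `train_and_score` and its noise model are hypothetical stand-ins.

```python
import random

def train_and_score(config, fraction):
    # Hypothetical stand-in for training at a given dataset fraction:
    # larger fractions give less noisy (higher-fidelity) scores.
    random.seed(hash((config, round(fraction, 3))))
    true_quality = config / 100.0
    noise = random.uniform(-1.0, 1.0) * (1.0 - fraction)
    return true_quality + noise

def successive_halving(configs, min_fraction=0.125, max_fraction=1.0):
    """Evaluate many configurations cheaply at low fidelity, then
    promote only the best-scoring half to the next (costlier) rung."""
    fraction = min_fraction
    while len(configs) > 1 and fraction <= max_fraction:
        scores = {c: train_and_score(c, fraction) for c in configs}
        survivors = sorted(configs, key=scores.get, reverse=True)
        configs = survivors[: max(1, len(configs) // 2)]
        fraction *= 2  # double the fidelity for the next rung
    return configs[0]

best = successive_halving(list(range(1, 17)))
print(best)
```

Most of the training budget is spent on the few configurations that survive the cheap early rungs, which is the cost saving the low-fidelity literature exploits.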

TrimTuner: Efficient Optimization of Machine Learning Jobs in the Cloud via Sub-Sampling

no code implementations · 9 Nov 2020 · Pedro Mendes, Maria Casimiro, Paolo Romano, David Garlan

This work introduces TrimTuner, the first system for optimizing machine learning jobs in the cloud that exploits sub-sampling techniques to reduce the cost of the optimization process while taking into account user-specified constraints.

BIG-bench Machine Learning

Bandwidth-Aware Page Placement in NUMA

2 code implementations · 6 Mar 2020 · David Gureya, João Neto, Reza Karimi, João Barreto, Pramod Bhatotia, Vivien Quema, Rodrigo Rodrigues, Paolo Romano, Vladimir Vlassov

Page placement is a critical problem for memory-intensive applications running on a shared-memory multiprocessor with a non-uniform memory access (NUMA) architecture.

Distributed, Parallel, and Cluster Computing
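As an illustration of what bandwidth-aware placement means (this is not the paper's actual algorithm), the sketch below interleaves pages across NUMA nodes in proportion to assumed per-node bandwidths, using a stride-scheduling credit scheme; all names and numbers are hypothetical.

```python
def weighted_interleave(num_pages, bandwidths):
    """Assign pages to NUMA nodes in proportion to each node's
    measured bandwidth (illustrative sketch, not the paper's method)."""
    total = sum(bandwidths)
    placement = {node: 0 for node in range(len(bandwidths))}
    credit = [0.0] * len(bandwidths)
    for _ in range(num_pages):
        # Each node accrues credit proportional to its bandwidth share;
        # the node with the most credit receives the next page.
        for node, bw in enumerate(bandwidths):
            credit[node] += bw / total
        node = max(range(len(bandwidths)), key=lambda n: credit[n])
        credit[node] -= 1.0
        placement[node] += 1
    return placement

print(weighted_interleave(100, [40, 10]))  # → {0: 80, 1: 20}
```

With a 40:10 bandwidth ratio the fast node receives 80 of 100 pages, mirroring the intuition that plain round-robin interleaving wastes the faster node's bandwidth headroom.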

On Bootstrapping Machine Learning Performance Predictors via Analytical Models

no code implementations · 19 Oct 2014 · Diego Didona, Paolo Romano

Performance modeling typically relies on two antithetic methodologies: white-box models, which exploit knowledge of a system's internals and capture its dynamics using analytical approaches, and black-box techniques, which infer relations between the input and output variables of a system based on the evidence gathered during an initial training phase.

BIG-bench Machine Learning
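The white-box/black-box hybrid described above can be sketched as a toy bootstrap: an analytical model (here an M/M/1 queue, chosen purely for illustration) seeds a black-box predictor with synthetic samples, and real measurements gradually replace them. Class and function names are hypothetical, not the paper's API.

```python
def analytical_model(arrival_rate, service_rate=10.0):
    # White-box M/M/1 estimate of mean response time: 1 / (mu - lambda).
    return 1.0 / (service_rate - arrival_rate)

class BootstrappedPredictor:
    """Black-box 1-nearest-neighbour predictor seeded with synthetic
    samples from the analytical model; observed measurements overwrite
    the synthetic ones, so accuracy improves where data exists."""
    def __init__(self, rates):
        self.samples = {r: analytical_model(r) for r in rates}

    def observe(self, rate, measured):
        self.samples[rate] = measured  # real data replaces model output

    def predict(self, rate):
        nearest = min(self.samples, key=lambda r: abs(r - rate))
        return self.samples[nearest]

p = BootstrappedPredictor([1.0, 3.0, 5.0, 7.0, 9.0])
p.observe(5.0, 0.25)           # a real measurement corrects the model
print(p.predict(5.5))          # → 0.25, from the corrected sample
```

The synthetic seed lets the black-box learner answer queries before any training data arrives, which is the bootstrapping benefit the abstract describes.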
