Search Results for author: Alessandro De Palma

Found 9 papers, 4 papers with code

Expressive Losses for Verified Robustness via Convex Combinations

1 code implementation • 23 May 2023 • Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth, Alessio Lomuscio

In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance.
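
As a rough illustration of the titular idea (the function and argument names below are hypothetical, and the paper's exact formulation may differ), a convex combination trades off an attack-based loss, a lower bound on the worst-case loss, against an over-approximated verified loss, an upper bound:

```python
import torch.nn.functional as F

def expressive_loss(adv_logits, verified_logits, labels, alpha):
    """Convex combination of an adversarial loss (computed on attack
    points) and a verified loss (computed on over-approximated
    worst-case logits, e.g. from IBP). alpha in [0, 1] steers the
    trade-off between standard performance and verifiability."""
    adv_loss = F.cross_entropy(adv_logits, labels)
    ver_loss = F.cross_entropy(verified_logits, labels)
    return (1.0 - alpha) * adv_loss + alpha * ver_loss
```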

Adversarial Robustness

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

1 code implementation • 29 Jun 2022 • Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth

Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective.
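
For context, the IBP (interval bound propagation) bounds such methods build on are computed layer by layer; below is a minimal sketch of the standard IBP step for one affine layer (this is textbook interval arithmetic, not the paper's specific regularizer):

```python
import torch

def ibp_affine(lb, ub, weight, bias):
    """Propagate an input box [lb, ub] through x @ weight.T + bias.
    The box centre moves through the affine map exactly; the box
    radius is scaled by the elementwise absolute value of the weights."""
    centre = (ub + lb) / 2
    radius = (ub - lb) / 2
    centre_out = centre @ weight.t() + bias
    radius_out = radius @ weight.t().abs()
    return centre_out - radius_out, centre_out + radius_out
```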

Adversarial Robustness

In Defense of the Unitary Scalarization for Deep Multi-Task Learning

1 code implementation • 11 Jan 2022 • Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar

We show that unitary scalarization, coupled with standard regularization and stabilization techniques from single-task learning, matches or improves upon the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings.
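
In code, unitary scalarization is as simple as it sounds; the training-step sketch below (names are mine) just sums the per-task losses and leaves regularization to standard single-task machinery such as weight decay in the optimizer:

```python
def unitary_scalarization_step(model, batches, loss_fns, optimizer):
    """One step of unitary scalarization: sum the per-task losses and
    take a single gradient step, with no task weighting or gradient
    surgery. Regularization (e.g. weight decay) lives in the optimizer."""
    optimizer.zero_grad()
    total = sum(loss_fn(model(x), y)
                for (x, y), loss_fn in zip(batches, loss_fns))
    total.backward()
    optimizer.step()
    return total.item()
```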

Multi-Task Learning Reinforcement Learning (RL)

Lagrangian Decomposition for Neural Network Verification

2 code implementations • 24 Feb 2020 • Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as those of previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers, and are therefore easily parallelizable, amenable to GPU implementation, and able to exploit the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.
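
In rough terms (the notation below is mine and simplified relative to the paper), a Lagrangian decomposition duplicates each pre-activation into two copies and dualizes their equality with multipliers ρ; weak duality is what makes the resulting bounds valid at any point of the ascent:

```latex
% Verification seeks a lower bound on  c^T \hat{x}_n  subject to
%   z_0 \in \mathcal{C},  \hat{x}_k = W_k z_{k-1} + b_k,  z_k = \sigma(\hat{x}_k).
% Duplicate \hat{x}_k into copies x_{A,k}, x_{B,k} and dualize x_{A,k} = x_{B,k}:
\begin{aligned}
q(\rho) \;=\; \min_{z,\, x_A,\, x_B}\;\;
  & c^\top x_{A,n} \;+\; \sum_{k} \rho_k^\top \bigl( x_{B,k} - x_{A,k} \bigr) \\
\text{s.t.}\;\;
  & z_0 \in \mathcal{C}, \qquad
    x_{A,k} = W_k z_{k-1} + b_k, \qquad
    z_k = \sigma(x_{B,k}).
\end{aligned}
% The objective now separates into per-layer subproblems, and by weak
% duality q(\rho) lower-bounds the true optimum for every \rho, so the
% ascent over \rho can be stopped anytime with a valid bound.
```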


Sampling Acquisition Functions for Batch Bayesian Optimization

no code implementations • 22 Mar 2019 • Alessandro De Palma, Celestine Mendler-Dünner, Thomas Parnell, Andreea Anghel, Haralampos Pozidis

We present Acquisition Thompson Sampling (ATS), a novel technique for batch Bayesian Optimization (BO) based on the idea of sampling multiple acquisition functions from a stochastic process.
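
A minimal sketch of that idea (the choice of UCB and all names here are my assumptions, not necessarily the paper's): each point in the batch maximizes a different random realization of the acquisition function, obtained by sampling its parameters.

```python
import numpy as np

def ats_batch(posterior_mean, posterior_std, candidates, batch_size, rng):
    """Pick a batch by maximizing one sampled acquisition per slot;
    here each realization is a UCB whose exploration parameter is
    drawn at random, so the batch points naturally diversify."""
    batch = []
    for _ in range(batch_size):
        beta = rng.lognormal(mean=0.0, sigma=1.0)  # sampled acquisition parameter
        scores = posterior_mean(candidates) + beta * posterior_std(candidates)
        batch.append(candidates[np.argmax(scores)])
    return batch
```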

Bayesian Optimization Thompson Sampling

Benchmarking and Optimization of Gradient Boosting Decision Tree Algorithms

no code implementations • 12 Sep 2018 • Andreea Anghel, Nikolaos Papandreou, Thomas Parnell, Alessandro De Palma, Haralampos Pozidis

Gradient boosting decision trees (GBDTs) have seen widespread adoption in academia, industry and competitive data science due to their state-of-the-art performance in many machine learning tasks.

Bayesian Optimization Benchmarking

Distributed Stratified Locality Sensitive Hashing for Critical Event Prediction in the Cloud

no code implementations • 1 Dec 2017 • Alessandro De Palma, Erik Hemberg, Una-May O'Reilly

The availability of massive healthcare data repositories calls for efficient tools for data-driven medicine.
