Search Results for author: Filipe Condessa

Found 11 papers, 0 papers with code

Performance measures for classification systems with rejection

no code implementations · 10 Apr 2015 · Filipe Condessa, Jelena Kovacevic, Jose Bioucas-Dias

Classifiers with rejection are essential in real-world applications where misclassifications and their effects are critical.

Classification, Decision Making, +1
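The excerpt above is only the opening sentence of the abstract. As a loose illustration of the kind of quantity such performance measures build on, here is a minimal sketch (my own, not taken from the paper) of a confidence-threshold rejection rule together with two basic measures: the rejection rate and the accuracy on the accepted samples. The threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def reject_and_score(probs, labels, threshold=0.8):
    """Apply a simple confidence-threshold rejection rule and report two
    basic performance measures for a classifier with rejection.

    probs  : (n_samples, n_classes) predicted class probabilities
    labels : (n_samples,) ground-truth class indices
    """
    confidence = probs.max(axis=1)       # max class probability per sample
    predictions = probs.argmax(axis=1)   # predicted class per sample
    accepted = confidence >= threshold   # samples the classifier keeps

    rejection_rate = 1.0 - accepted.mean()
    # Accuracy measured only on the samples that were not rejected.
    accepted_accuracy = (
        (predictions[accepted] == labels[accepted]).mean()
        if accepted.any() else float("nan")
    )
    return rejection_rate, accepted_accuracy

# Toy usage: 4 samples, 3 classes.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.4, 0.35, 0.25],
                  [0.1, 0.85, 0.05],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 1, 2])
print(reject_and_score(probs, labels, threshold=0.8))
```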

SegSALSA-STR: A convex formulation to supervised hyperspectral image segmentation using hidden fields and structure tensor regularization

no code implementations · 27 Apr 2015 · Filipe Condessa, Jose Bioucas-Dias, Jelena Kovacevic

We present a supervised hyperspectral image segmentation algorithm based on a convex formulation of a marginal maximum a posteriori segmentation with hidden fields and structure tensor regularization: Segmentation via the Constraint Split Augmented Lagrangian Shrinkage by Structure Tensor Regularization (SegSALSA-STR).

Hyperspectral Image Segmentation, Image Segmentation, +2
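The structure tensor regularization in SegSALSA-STR builds on the classical image structure tensor (Gaussian-smoothed outer products of the image gradient). The sketch below computes that classical tensor for a single-band image; it is background illustration only, not the SegSALSA-STR algorithm, and the smoothing scale is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(image, sigma=1.5):
    """Classical 2-D structure tensor: Gaussian-smoothed outer products of
    the image gradient. Returns the three distinct components (Jxx, Jxy, Jyy)
    of the symmetric 2x2 tensor at every pixel."""
    gy, gx = np.gradient(image.astype(float))   # gradients along rows, columns
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy

# Toy usage on a random single-band image; for hyperspectral data the tensor
# would typically be accumulated over spectral bands.
img = np.random.rand(64, 64)
jxx, jxy, jyy = structure_tensor(img)
print(jxx.shape, jxy.shape, jyy.shape)
```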

Robust hyperspectral image classification with rejection fields

no code implementations · 29 Apr 2015 · Filipe Condessa, Jose Bioucas-Dias, Jelena Kovacevic

We validate our method on real hyperspectral data and show that the performance gains obtained from the rejection fields are equivalent to an increase in the dimension of the training sets.

Classification, General Classification, +2

Provably robust deep generative models

no code implementations · 22 Apr 2020 · Filipe Condessa, Zico Kolter

In this paper, we propose a method for training provably robust generative models, specifically a provably robust version of the variational auto-encoder (VAE).
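For context, the sketch below shows the standard (non-robust) VAE training objective that a provably robust variant would build on: a reconstruction term plus a KL divergence to the Gaussian prior, with the reparameterization trick. Layer sizes are arbitrary, and this is not the paper's certified training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal Gaussian-latent VAE; sizes are arbitrary illustrative choices."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return torch.sigmoid(self.dec(z)), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Negative ELBO: reconstruction term + KL divergence to the N(0, I) prior.
    recon_term = F.binary_cross_entropy(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

model = TinyVAE()
x = torch.rand(8, 784)                 # toy batch of "images" in [0, 1]
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())
```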

You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries

no code implementations · 29 Jan 2021 · Devin Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe Condessa, Zico Kolter

In this work, we instead show that it is possible to craft (universal) adversarial perturbations in the black-box setting by querying a sequence of different images only once.

Empirical robustification of pre-trained classifiers

no code implementations · ICML Workshop AML 2021 · Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J. Zico Kolter

Most pre-trained classifiers, though they may work extremely well on the domain they were trained upon, are not trained in a robust fashion, and therefore are sensitive to adversarial attacks.

Denoising, Image Reconstruction, +1
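For readers unfamiliar with the attacks that make non-robust classifiers fail, the sketch below is a standard L-infinity PGD attack, included only to illustrate the threat model; it is not the paper's robustification method, and eps, alpha, and the step count are arbitrary illustrative choices.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD attack: take signed-gradient ascent steps on
    the cross-entropy loss and project back into the eps-ball around the
    clean input (assumed to lie in [0, 1])."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the L-infinity ball and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()

# Usage (hypothetical names): x_adv = pgd_linf(pretrained_model, images, labels)
```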

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness

no code implementations · 12 May 2022 · Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
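As background on randomized smoothing itself, the sketch below shows the basic prediction step of standard RS: classify many Gaussian-noised copies of an input and take the majority vote. This follows the textbook recipe rather than the Smooth-Reduce patch-based extension, and the noise level and sample count are arbitrary.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, batch_size=50):
    """Basic randomized-smoothing prediction: classify Gaussian-noised copies
    of a single input x (shape [C, H, W]) and return the majority-vote class
    together with the per-class vote counts."""
    counts = None
    remaining = n_samples
    with torch.no_grad():
        while remaining > 0:
            b = min(batch_size, remaining)
            noisy = x.unsqueeze(0).expand(b, *x.shape) + sigma * torch.randn(b, *x.shape)
            logits = model(noisy)
            votes = torch.bincount(logits.argmax(dim=1), minlength=logits.shape[1])
            counts = votes if counts is None else counts + votes
            remaining -= b
    return counts.argmax().item(), counts
```

A full certification step would additionally bound the top class probability with a binomial confidence test over the vote counts and convert it into a certified radius, as in the standard RS procedure.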

Defending Multimodal Fusion Models against Single-Source Adversaries

no code implementations · CVPR 2021 · Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter

Beyond achieving high performance across many vision tasks, multimodal models are expected to be robust to single-source faults due to the availability of redundant information between modalities.

Action Recognition, Object Detection, +2
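To make the multimodal setting concrete, here is a minimal sketch of a generic two-modality late-fusion classifier that concatenates per-modality features before classification; it is a plain fusion baseline of my own, not the paper's robust fusion defense, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class TwoModalFusion(nn.Module):
    """Generic late-fusion classifier: encode each modality separately,
    concatenate the features, and classify from the fused representation."""
    def __init__(self, dim_a=128, dim_b=64, hidden=64, n_classes=10):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_a, x_b):
        fused = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1)
        return self.head(fused)

model = TwoModalFusion()
logits = model(torch.randn(4, 128), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 10])
```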

Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning

no code implementations · 14 Nov 2023 · Xidong Wu, Wan-Yi Lin, Devin Willmott, Filipe Condessa, Yufei Huang, Zhenzhen Li, Madan Ravi Ganesh

Federated Learning (FL) is a distributed training paradigm that enables clients scattered across the world to cooperatively learn a global model without divulging confidential data.

Federated Learning
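The abstract describes the general FL setup; as a point of reference, the sketch below shows the standard FedAvg server aggregation step (a dataset-size-weighted average of client parameters), which is the textbook baseline rather than the foundation-model-assisted method proposed in this paper.

```python
import torch

def fedavg_aggregate(client_states, client_sizes):
    """Standard FedAvg server step: average client model parameters,
    weighting each client by its local dataset size.

    client_states : list of state_dicts (same keys/shapes across clients)
    client_sizes  : list of local dataset sizes, one per client
    """
    total = float(sum(client_sizes))
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# Toy usage with two "clients" sharing a single linear layer.
layer = torch.nn.Linear(4, 2)
states = [{k: v + 1.0 for k, v in layer.state_dict().items()},
          {k: v - 1.0 for k, v in layer.state_dict().items()}]
print(fedavg_aggregate(states, [100, 300])["weight"].shape)
```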

A Curious Case of Remarkable Resilience to Gradient Attacks via Fully Convolutional and Differentiable Front End with a Skip Connection

no code implementations · 26 Feb 2024 · Leonid Boytsov, Ameya Joshi, Filipe Condessa

By training these front-end-augmented models with a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient attacks, including the APGD and FAB-T attacks from the AutoAttack package, which we attributed to gradient masking.

Adversarial Robustness
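The title describes a fully convolutional, differentiable front end with a skip connection placed before a backbone classifier. The sketch below shows that general architectural pattern only; the layer sizes and the choice of backbone are my own illustrative assumptions, not the paper's exact design, and the sketch says nothing about the observed gradient masking.

```python
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    """Fully convolutional front end whose output is added back to its input
    (skip connection), keeping the image shape unchanged so it can be
    prepended to any image classifier."""
    def __init__(self, channels=3, hidden=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # skip connection around the conv block

class FrontEndedClassifier(nn.Module):
    """Front end prepended to an existing backbone classifier."""
    def __init__(self, backbone):
        super().__init__()
        self.front_end = ConvFrontEnd()
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(self.front_end(x))

# Usage (hypothetical backbone): model = FrontEndedClassifier(some_image_classifier)
```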
