Search Results for author: Philippe Burlina

Found 18 papers, 4 papers with code

PLeak: Prompt Leaking Attacks against Large Language Model Applications

1 code implementation10 May 2024 Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, Yinzhi Cao

As a result, a natural attack, called prompt leaking, is to steal the system prompt from an LLM application, which compromises the developer's intellectual property.

Language Modelling Large Language Model

Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility

no code implementations15 Feb 2023 William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina

This paper investigates to what degree and magnitude tradeoffs exist between utility, fairness and attribute privacy in computer vision.

Attribute Fairness

Classification Utility, Fairness, and Compactness via Tunable Information Bottleneck and Rényi Measures

1 code implementation20 Jun 2022 Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications.

Attribute Fairness +2

Renyi Fair Information Bottleneck for Image Classification

no code implementations9 Mar 2022 Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

We develop a novel method for ensuring fairness in machine learning which we term as the Renyi Fair Information Bottleneck (RFIB).

Classification Fairness +1

Robustness and Adaptation to Hidden Factors of Variation

no code implementations3 Mar 2022 William Paul, Philippe Burlina

We tackle here a specific, still not widely addressed aspect, of AI robustness, which consists of seeking invariance / insensitivity of model performance to hidden factors of variations in the data.

Data Augmentation

Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?

no code implementations16 Aug 2021 Max Lennon, Nathan Drenkow, Philippe Burlina

To this end, several contributions are made here: A) we develop a new metric called mean Attack Success over Transformations (mAST) to evaluate patch attack robustness and invariance; and B), we systematically assess robustness of patch attacks to 3D position and orientation for various conditions; in particular, we conduct a sensitivity analysis which provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera (rotation, translation) and sets forth some properties for patch attack 3D invariance; and C), we draw novel qualitative conclusions including: 1) we demonstrate that for some 3D transformations, namely rotation and loom, increasing the training distribution support yields an increase in patch success over the full range at test time.

Adaptation and Generalization for Unknown Sensitive Factors of Variations

no code implementations28 Jul 2021 William Paul, Philippe Burlina

We also demonstrate how adaptation to real factors of variations can be performed in the semi-supervised case where some target factor labels are known, via automated intervention selection.

Domain Generalization Fairness

Practical Blind Membership Inference Attack via Differential Comparisons

1 code implementation5 Jan 2021 Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

The success of the former heavily depends on the quality of the shadow model, i. e., the transferability between the shadow and the target; the latter, given only blackbox probing access to the target model, cannot make an effective inference of unknowns, compared with MI attacks using shadow models, due to the insufficient number of qualified samples labeled with ground truth membership information.

Inference Attack Membership Inference Attack

Addressing Visual Search in Open and Closed Set Settings

no code implementations11 Dec 2020 Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz

We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.

Object object-detection +1

Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis

no code implementations11 Dec 2020 Nathan Drenkow, Neil Fendley, Philippe Burlina

We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.

Adversarial Attack Detection

Least $k$th-Order and Rényi Generative Adversarial Networks

no code implementations3 Jun 2020 Himesh Bhatia, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina

Another novel GAN generator loss function is next proposed in terms of R\'{e}nyi cross-entropy functionals with order $\alpha >0$, $\alpha\neq 1$.


Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks

no code implementations1 May 2020 Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow

We focus on the development of effective adversarial patch attacks and -- for the first time -- jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.

Addressing Artificial Intelligence Bias in Retinal Disease Diagnostics

no code implementations28 Apr 2020 Philippe Burlina, Neil Joshi, William Paul, Katia D. Pacheco, Neil M. Bressler

Using novel generative methods for addressing missing subpopulation training data (DR-referable darker-skin) achieved instead accuracy, for lighter-skin, of 72. 0% (65. 8%, 78. 2%), and for darker-skin, of 71. 5% (65. 2%, 77. 8%), demonstrating closer parity (delta=0. 5%) in accuracy across subpopulations (Welch t-test t=0. 111, P=. 912).

Domain Generalization

Unsupervised Discovery, Control, and Disentanglement of Semantic Attributes with Applications to Anomaly Detection

no code implementations25 Feb 2020 William Paul, I-Jeng Wang, Fady Alajaji, Philippe Burlina

Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (b) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past, and (c) developing anomaly detection methods that leverage representations learned in (a).

Anomaly Detection Attribute +4

Where's Wally Now? Deep Generative and Discriminative Embeddings for Novelty Detection

no code implementations CVPR 2019 Philippe Burlina, Neil Joshi, I-Jeng Wang

We develop a framework for novelty detection (ND) methods relying on deep embeddings, either discriminative or generative, and also propose a novel framework for assessing their performance.

Novelty Detection

Occupancy Map Prediction Using Generative and Fully Convolutional Networks for Vehicle Navigation

no code implementations6 Mar 2018 Kapil Katyal, Katie Popek, Chris Paxton, Joseph Moore, Kevin Wolfe, Philippe Burlina, Gregory D. Hager

In these situations, the robot's ability to reason about its future motion is often severely limited by sensor field of view (FOV).

Navigate SSIM

Cannot find the paper you are looking for? You can Submit a new open access paper.