Search Results for author: Paul Montague

Found 18 papers, 5 papers with code

Removing Undesirable Concepts in Text-to-Image Generative Models with Learnable Prompts

no code implementations 18 Mar 2024 Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung

Generative models have demonstrated remarkable potential in generating visually impressive content from textual descriptions.

Transfer Learning

BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving

no code implementations 6 Mar 2024 Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague

We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically aware embeddings.

Automated Theorem Proving, Benchmarking

Adversarial Robustness on Image Classification with $k$-means

no code implementations 15 Dec 2023 Rollin Omari, Junae Kim, Paul Montague

In this paper we explore the challenges and strategies for enhancing the robustness of $k$-means clustering algorithms against adversarial manipulations.

Adversarial Robustness, Classification, +2
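
The snippet above stops at the problem statement; to make the vulnerability concrete, here is a minimal sketch (hypothetical data, not the paper's method) of the smallest perturbation that flips a point's $k$-means cluster assignment, by pushing it just past the bisecting hyperplane between its two nearest centroids:

```python
import numpy as np

def flip_assignment(x, centroids, margin=1e-3):
    """Minimally perturb x so that its nearest centroid changes.

    Moves x along the direction from its closest centroid c1 toward the
    runner-up c2, just past the bisecting hyperplane between them.
    """
    dists = np.linalg.norm(centroids - x, axis=1)
    c1, c2 = centroids[np.argsort(dists)[:2]]
    u = (c2 - c1) / np.linalg.norm(c2 - c1)        # attack direction
    midpoint = (c1 + c2) / 2                       # a point on the bisector
    step = (midpoint - x) @ u + margin             # distance to the bisector along u
    return x + step * u

rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 2))
x = centroids[0] + 0.1 * rng.normal(size=2)        # a point in cluster 0
x_adv = flip_assignment(x, centroids)
print(np.linalg.norm(x_adv - x))                   # small perturbation, new cluster
```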

It's Simplex! Disaggregating Measures to Improve Certified Robustness

no code implementations 20 Sep 2023 Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein

Certified robustness circumvents the fragility of defences against adversarial attacks, by endowing model predictions with guarantees of class invariance for attacks up to a calculated size.
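
For readers unfamiliar with the "guarantees of class invariance for attacks up to a calculated size", a minimal sketch of the standard randomized-smoothing certificate of Cohen et al. (2019), which this line of work builds on; the paper's disaggregated measures refine how such guarantees are reported, not this formula:

```python
from statistics import NormalDist

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """L2 radius within which a Gaussian-smoothed classifier's prediction
    provably cannot change: R = (sigma / 2) * (Phi^-1(p_a) - Phi^-1(p_b)).

    p_a: lower bound on the top-class probability under noise N(0, sigma^2 I)
    p_b: upper bound on the runner-up class probability
    """
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# top class seen 99% of the time under noise, runner-up at most 1%
print(certified_radius(0.99, 0.01, sigma=0.5))  # ~1.16
```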

Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

1 code implementation 26 Apr 2023 Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung

The key factor for the success of adversarial training is the capability to generate qualified and divergent adversarial examples that satisfy some objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models).
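
As a baseline for the multi-model objective described above, a hedged PyTorch sketch (hypothetical models and data) of PGD ascent on the summed loss of several models; the paper's task-oriented multi-objective optimization replaces this naive sum:

```python
import torch
import torch.nn.functional as F

def pgd_multi_model(models, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD that maximizes the summed cross-entropy over several models,
    seeking one perturbation that is adversarial to all of them."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project into L-inf ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid image
    return x_adv.detach()
```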

Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples

no code implementations 9 Feb 2023 Andrew C. Cullen, Shijie Liu, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein

In guaranteeing the absence of adversarial examples in an instance's neighbourhood, certification mechanisms play an important role in demonstrating neural net robustness.
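
The title hints at the double-edged nature of this guarantee: a published certificate also tells an attacker the minimum perturbation size needed. A minimal sketch under that reading (hypothetical model and attack direction, not the paper's actual procedure):

```python
import torch

def certificate_guided_probe(model, x, y, direction, r_cert, step=0.05, tries=20):
    """Probe for adversarial examples just outside the certified radius r_cert:
    the certificate guarantees none exist inside it, so the search starts there."""
    d = direction / direction.norm()
    for i in range(1, tries + 1):
        radius = r_cert + step * i                    # just beyond the certified ball
        x_try = (x + radius * d).clamp(0, 1)
        if model(x_try.unsqueeze(0)).argmax(dim=1).item() != y:
            return x_try, radius                      # misclassified probe found
    return None, None
```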

Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity

1 code implementation 12 Oct 2022 Andrew C. Cullen, Paul Montague, Shijie Liu, Sarah M. Erfani, Benjamin I. P. Rubinstein

In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution.

Open-Ended Question Answering
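
A minimal sketch of the transitivity in the title, via the triangle inequality alone (hypothetical names; the paper's construction is more refined): a certificate of radius r' held at a nearby point x' also certifies x up to r' - ||x - x'||, which can exceed x's own certified radius:

```python
import numpy as np

def transitive_radius(x, r_x, cert_points, cert_radii):
    """Improve x's certified radius using certificates at nearby points.

    If the classifier is provably constant on the ball B(x', r'), then any
    perturbation of x with norm below r' - ||x - x'|| stays inside that ball,
    so the prediction cannot change (all certificates assumed same-label).
    """
    best = r_x
    for x_p, r_p in zip(cert_points, cert_radii):
        best = max(best, r_p - np.linalg.norm(x - x_p))
    return best
```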

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations 29 Sep 2021 Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, by minimizing the Wasserstein (WS) distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label in the latent space, which helps to strengthen the generalization ability of the classifier on adversarial examples.
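
A hedged sketch of the mechanism described above (minimal NumPy Sinkhorn, not the paper's exact formulation): an entropic optimal-transport distance between adversarial and same-label benign latent batches, which, when minimized, pulls the adversarial representations toward the benign cluster:

```python
import numpy as np

def sinkhorn_distance(A, B, reg=0.1, iters=200):
    """Entropic-regularized optimal-transport cost between point clouds
    A (n, d) and B (m, d) with uniform marginals.

    For numerical stability, reg should be on the order of typical costs.
    """
    n, m = len(A), len(B)
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared L2 costs
    K = np.exp(-C / reg)                                 # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)      # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                               # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # transport plan
    return (P * C).sum()
```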

Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning

1 code implementation 25 Jan 2021 Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung

Central to this approach is the selection of positive (similar) and negative (dissimilar) sets to provide the model the opportunity to 'contrast' between data and class representation in the latent space.

Contrastive Learning
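
A minimal sketch of the positive/negative selection just described, as a supervised contrastive loss over latent features where same-label examples (e.g. clean and adversarial views) are positives and the rest are negatives; illustrative, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def sup_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: pull same-label features together,
    push different-label features apart in the latent space.

    z: (N, d) feature batch, e.g. clean plus adversarial views
    labels: (N,) integer class labels
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau                                    # scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))        # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid -inf * 0 = nan
    loss = -(log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```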

Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks

no code implementations 13 Oct 2020 He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack, Detection

Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness

1 code implementation 21 Sep 2020 Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

An important technique of this approach is to control the transferability of adversarial examples among ensemble members.

Adversarial Robustness
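
The controlled quantity can be made concrete with a hedged sketch (hypothetical models; any white-box attack can craft the inputs): the transfer rate of adversarial examples crafted on one ensemble member and evaluated on another, which the paper's training objective aims to demote:

```python
import torch

@torch.no_grad()
def transfer_rate(src_model, dst_model, x_adv, y):
    """Fraction of adversarial examples that fool the source member and also
    fool another member -- the cross-member transferability an ensemble
    defence wants to keep low. x_adv: examples crafted against src_model."""
    fools_src = src_model(x_adv).argmax(dim=1) != y
    fools_dst = dst_model(x_adv).argmax(dim=1) != y
    transferred = (fools_src & fools_dst).float().sum()
    return transferred / fools_src.float().sum().clamp(min=1)
```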

Improving Adversarial Robustness by Enforcing Local and Global Compactness

1 code implementation ECCV 2020 Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.

Adversarial Robustness, Clustering

Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions

no code implementations 3 Oct 2019 He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack, Translation

Maximal Divergence Sequential Autoencoder for Binary Software Vulnerability Detection

no code implementations ICLR 2019 Tue Le, Tuan Nguyen, Trung Le, Dinh Phung, Paul Montague, Olivier De Vel, Lizhen Qu

Due to the sharp increase in the severity of the threat posed by software vulnerabilities, detecting vulnerabilities in binary code has become an important concern in the software industry (for example, the embedded systems industry) and in the field of computer security.

Computer Security, Vulnerability Detection

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

no code implementations 25 Feb 2019 Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani

Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting.

Reinforcement Learning (RL)

Reinforcement Learning for Autonomous Defence in Software-Defined Networking

no code implementations 17 Aug 2018 Yi Han, Benjamin I. P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague

Despite the successful application of machine learning (ML) in a wide range of domains, adaptability, the very property that makes machine learning desirable, can be exploited by adversaries to contaminate training and evade classification.

BIG-bench Machine Learning, General Classification, +2
