Search Results for author: Lewis D. Griffin

Found 18 papers, 3 papers with code

Understanding the limitations of self-supervised learning for tabular anomaly detection

no code implementations15 Sep 2023 Kimberly T. Mai, Toby Davies, Lewis D. Griffin

While self-supervised learning has improved anomaly detection in computer vision and natural language processing, it is unclear whether tabular data can benefit from it.

Anomaly Detection Self-Supervised Learning

Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities

no code implementations24 Aug 2023 Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin

Spurred by the recent rapid increase in the development and distribution of large language models (LLMs) across industry and academia, much recent work has drawn attention to safety- and security-related threats and vulnerabilities of LLMs, including in the context of potentially criminal activities.

A Simple Approach for State-Action Abstraction using a Learned MDP Homomorphism

no code implementations14 Sep 2022 Augustine N. Mavor-Parker, Matthew J. Sargent, Andrea Banino, Lewis D. Griffin, Caswell Barry

Consequently, impressive improvements in sample efficiency have been achieved when a suitable MDP homomorphism can be constructed a priori -- usually by exploiting a practitioner's knowledge of environment symmetries.

Self-Supervised Losses for One-Class Textual Anomaly Detection

no code implementations12 Apr 2022 Kimberly T. Mai, Toby Davies, Lewis D. Griffin

The separability of anomalies and inliers signals that a representation is more effective for detecting semantic anomalies, whilst the presence of narrow feature directions signals a representation that is effective for detecting syntactic anomalies.

Anomaly Detection
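Both representation properties named in the abstract can be probed numerically. The sketch below rests on toy assumptions — the synthetic embeddings, the distance-to-mean anomaly score, and the min/max singular-value ratio are illustrative choices, not the paper's protocol: separability is summarised by a pairwise AUROC, and "narrow feature directions" by how small the smallest singular value of the centred inliers is.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy embeddings: a tight inlier cluster and a shifted anomaly cluster
inliers = rng.normal(0.0, 1.0, size=(200, 16))
anomalies = rng.normal(3.0, 1.0, size=(50, 16))

# separability: score each point by its distance to the inlier mean, then
# compute AUROC as the fraction of (anomaly, inlier) pairs ranked correctly
mu = inliers.mean(axis=0)
score = lambda X: np.linalg.norm(X - mu, axis=1)
auroc = (score(anomalies)[:, None] > score(inliers)[None, :]).mean()

# narrow feature directions: singular values of the centred inlier matrix;
# a small min/max ratio indicates directions along which inliers barely vary
sv = np.linalg.svd(inliers - mu, compute_uv=False)
narrowness = sv.min() / sv.max()
```

On this well-separated toy data the AUROC is near 1; on a real representation, the paper's finding is that the two diagnostics matter for different (semantic vs. syntactic) anomaly types.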

Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

1 code implementation EMNLP 2021 Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Research shows that natural language processing models are generally vulnerable to adversarial attacks; however, recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality).

Sentiment Analysis Sentiment Classification +3

Brittle Features May Help Anomaly Detection

no code implementations21 Apr 2021 Kimberly T. Mai, Toby Davies, Lewis D. Griffin

In addition, separability between anomalies and normal data is important but not the sole factor for a good representation, as anomaly detection performance is also correlated with more adversarially brittle features in the representation space.

Anomaly Detection Knowledge Distillation
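One way to make "adversarially brittle features" concrete is to measure how much a representation moves under tiny input perturbations. The sketch below is purely illustrative — the random-projection feature map `feats` and all constants are assumptions, not the paper's measure of brittleness:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy feature map: a fixed random projection followed by a ReLU
W = rng.normal(size=(32, 8))
feats = lambda x: np.maximum(x @ W, 0.0)

def brittleness(x, eps=1e-3, trials=100):
    """Average feature displacement per unit of input perturbation,
    estimated with small random perturbations of norm eps."""
    base = feats(x)
    changes = []
    for _ in range(trials):
        d = rng.normal(size=x.shape)
        d *= eps / np.linalg.norm(d)  # rescale to norm eps
        changes.append(np.linalg.norm(feats(x + d) - base) / eps)
    return float(np.mean(changes))
```

A larger value means the features react more sharply to imperceptible input changes — the property the abstract reports as correlated with anomaly detection performance.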

Escaping Stochastic Traps with Aleatoric Mapping Agents

1 code implementation8 Feb 2021 Augustine N. Mavor-Parker, Kimberly A. Young, Caswell Barry, Lewis D. Griffin

Exploration in environments with sparse rewards is difficult for artificial agents.
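The core idea — suppress curiosity where prediction error is explainable by irreducible environment noise — can be sketched as an intrinsic reward that subtracts a predicted aleatoric variance from the forward model's error. This is a minimal illustration, not the paper's exact formulation; the function name, reward shape, and numbers are assumptions:

```python
import numpy as np

def ama_intrinsic_reward(pred_next, true_next, pred_var):
    """Curiosity reward: forward-model squared error minus the
    predicted aleatoric variance, floored at zero."""
    err = np.sum((pred_next - true_next) ** 2, axis=-1)
    noise = np.sum(pred_var, axis=-1)
    return np.maximum(err - noise, 0.0)

# a learnable transition: real error, little predicted noise -> rewarded
r_learn = ama_intrinsic_reward(np.array([[0.0, 0.0]]),
                               np.array([[1.0, 0.0]]),
                               np.array([[0.01, 0.01]]))
# a "noisy TV": large error fully explained by predicted noise -> no reward
r_trap = ama_intrinsic_reward(np.array([[0.0, 0.0]]),
                              np.array([[2.0, 0.0]]),
                              np.array([[2.0, 2.0]]))
```

The agent thus keeps exploring transitions it can still learn, while stochastic traps stop being perpetually "interesting".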

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples

no code implementations EACL 2021 Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin

Recent efforts have shown that neural text processing models are vulnerable to adversarial examples, but the nature of these examples is poorly understood.

General Classification SST-2 +1
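The recipe suggested by the title — replace infrequent words with more frequent synonyms and watch how the model's prediction shifts — can be sketched as follows. The corpus frequencies, synonym map, and threshold `delta` below are toy assumptions, not the paper's resources:

```python
from collections import Counter

# toy corpus frequencies and synonym map (hypothetical)
freq = Counter({"good": 100, "fine": 80, "superb": 2, "movie": 90, "flick": 3})
synonyms = {"superb": ["good", "fine"], "flick": ["movie"]}

def fgws_transform(tokens, freq, synonyms, delta=10):
    """Replace each infrequent word with its most frequent synonym."""
    out = []
    for w in tokens:
        if freq[w] < delta and w in synonyms:
            best = max(synonyms[w], key=lambda s: freq[s])
            if freq[best] > freq[w]:
                w = best
        out.append(w)
    return out
```

In a full detector one would then flag an input as adversarial when the model's confidence on the transformed text drops by more than a calibrated threshold, exploiting the tendency of word-substitution attacks to introduce rare words.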

Conditional Adversarial Camera Model Anonymization

1 code implementation18 Feb 2020 Jerone T. A. Andrews, Yidan Zhang, Lewis D. Griffin

Model anonymization is the process of transforming these artifacts such that the apparent capture model is changed.

Multiple-Identity Image Attacks Against Face-based Identity Verification

no code implementations20 Jun 2019 Jerone T. A. Andrews, Thomas Tanay, Lewis D. Griffin

New quantitative results are presented that support an explanation in terms of the geometry of the representation spaces used by the verification systems.

A New Angle on L2 Regularization

no code implementations28 Jun 2018 Thomas Tanay, Lewis D. Griffin

Imagine two high-dimensional clusters and a hyperplane separating them.

General Classification L2 Regularization
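The hyperplane picture above can be made concrete on toy data: on anisotropic two-cluster data, increasing the L2 penalty tilts the learned hyperplane's normal towards the mean-difference (nearest-centroid) direction. A minimal numpy sketch — the data, optimiser, and all constants are illustrative, not the paper's experiment:

```python
import numpy as np

def train_logreg(X, y, lam, lr=0.1, steps=2000):
    """Plain gradient descent on logistic loss plus lam * ||w||^2 / 2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = np.clip(X @ w, -30, 30)          # avoid sigmoid overflow
        p = 1.0 / (1.0 + np.exp(-z))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
d, n = 10, 300
scales = np.linspace(0.2, 3.0, d)            # anisotropic within-cluster spread
m = np.ones(d)                               # mean-difference direction
Xpos = rng.normal(0, 1, (n, d)) * scales + m
Xneg = rng.normal(0, 1, (n, d)) * scales - m
X = np.vstack([Xpos, Xneg])
y = np.r_[np.ones(n), np.zeros(n)]

def angle_to_means(w):
    """Angle (degrees) between the hyperplane normal and the mean difference."""
    c = w @ m / (np.linalg.norm(w) * np.linalg.norm(m))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# stronger L2 pulls the normal towards the nearest-centroid direction
a_weak = angle_to_means(train_logreg(X, y, lam=0.0))
a_strong = angle_to_means(train_logreg(X, y, lam=10.0))
```

With no penalty the boundary rotates to exploit the low-variance directions; with a heavy penalty the solution stays close to the gradient at the origin, which points along the difference of the class means.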

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

no code implementations19 Jun 2018 Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin

Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult.


Automated detection of smuggled high-risk security threats using Deep Learning

no code implementations9 Sep 2016 Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin

In this contribution, we demonstrate for the first time the use of Convolutional Neural Networks (CNNs), a type of Deep Learning, to automate the detection of small metallic threats (SMTs) in full-size X-ray images of cargo containers.


Detection of concealed cars in complex cargo X-ray imagery using Deep Learning

no code implementations26 Jun 2016 Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin

In this contribution, we describe a method for the detection of cars in X-ray cargo images based on trained-from-scratch Convolutional Neural Networks.

Image Classification object-detection +1

Better Image Segmentation by Exploiting Dense Semantic Predictions

no code implementations5 Jun 2016 Qiyang Zhao, Lewis D. Griffin

The paper focuses on utilizing FCNN-based dense semantic predictions in bottom-up image segmentation, arguing that semantic cues should be taken into account from the very beginning.

Image Segmentation Segmentation +1
