1 code implementation • 13 Aug 2024 • Louis Kwok, Michal Bravansky, Lewis D. Griffin
The success of Large Language Models (LLMs) in multicultural environments hinges on their ability to understand users' diverse cultural backgrounds.
no code implementations • 15 Sep 2023 • Kimberly T. Mai, Toby Davies, Lewis D. Griffin
While self-supervised learning has improved anomaly detection in computer vision and natural language processing, it is unclear whether tabular data can benefit from it.
no code implementations • 24 Aug 2023 • Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis D. Griffin
Spurred by the recent rapid increase in the development and distribution of large language models (LLMs) across industry and academia, much recent work has drawn attention to safety- and security-related threats and vulnerabilities of LLMs, including in the context of potentially criminal activities.
no code implementations • 20 Oct 2022 • Maximilian Mozes, Bennett Kleinberg, Lewis D. Griffin
Adversarial examples in NLP are receiving increasing research attention.
no code implementations • 14 Sep 2022 • Augustine N. Mavor-Parker, Matthew J. Sargent, Christian Pehle, Andrea Banino, Lewis D. Griffin, Caswell Barry
Reinforcement learning agents must painstakingly learn through trial and error which sets of state-action pairs are value equivalent, requiring an often prohibitively large amount of environment experience.
no code implementations • 12 Apr 2022 • Kimberly T. Mai, Toby Davies, Lewis D. Griffin
The separability of anomalies and inliers signals that a representation is more effective for detecting semantic anomalies, whilst the presence of narrow feature directions signals a representation that is effective for detecting syntactic anomalies.
1 code implementation • EMNLP 2021 • Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
Natural language processing models are generally considered vulnerable to adversarial attacks, but recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality).
no code implementations • 21 Apr 2021 • Kimberly T. Mai, Toby Davies, Lewis D. Griffin
In addition, separability between anomalies and normal data is important but not the sole factor for a good representation, as anomaly detection performance is also correlated with more adversarially brittle features in the representation space.
2 code implementations • 8 Feb 2021 • Augustine N. Mavor-Parker, Kimberly A. Young, Caswell Barry, Lewis D. Griffin
Exploration in environments with sparse rewards is difficult for artificial agents.
no code implementations • EACL 2021 • Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
Recent efforts have shown that neural text processing models are vulnerable to adversarial examples, but the nature of these examples is poorly understood.
1 code implementation • 18 Feb 2020 • Jerone T. A. Andrews, Yidan Zhang, Lewis D. Griffin
Model anonymization is the process of transforming the model-specific artifacts in an image such that the apparent capture model is changed.
no code implementations • 20 Jun 2019 • Jerone T. A. Andrews, Thomas Tanay, Lewis D. Griffin
New quantitative results are presented that support an explanation in terms of the geometry of the representation spaces used by the verification systems.
no code implementations • IEEE Transactions on Information Forensics and Security 2018 • Lewis D. Griffin, Matthew Caldwell, Jerone T. A. Andrews, Helene Bohler
The algorithms are tested on X-ray parcel images using stream-of-commerce data as the normal class, and parcels with firearms present as examples of anomalies to be detected.
no code implementations • 28 Jun 2018 • Thomas Tanay, Lewis D. Griffin
Imagine two high-dimensional clusters and a hyperplane separating them.
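This thought experiment underlies the paper's "boundary tilting" perspective on adversarial examples, and can be sketched numerically. The sketch below is illustrative only (all names, dimensions, and constants are assumptions, not taken from the paper): two separable clusters admit many separating hyperplanes, and one that tilts to lie along a low-variance direction of the data still classifies every point correctly yet sits much closer to them, so tiny perturbations cross it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100  # dimensionality; purely illustrative

# Two well-separated Gaussian clusters along axis 0, plus one
# low-variance "nuisance" direction along axis 1.
scales = np.ones(d)
scales[1] = 0.01
pos = rng.normal(size=(200, d)) * scales
neg = rng.normal(size=(200, d)) * scales
pos[:, 0] += 5.0
neg[:, 0] -= 5.0

def classify(w, b, x):
    return np.sign(w @ x + b)

def boundary_distance(w, b, x):
    # Euclidean distance from x to the hyperplane w @ x + b = 0,
    # i.e. the norm of the smallest class-flipping perturbation.
    return abs(w @ x + b) / np.linalg.norm(w)

mid = (pos.mean(0) + neg.mean(0)) / 2

# A "nominal" hyperplane, orthogonal to the between-class direction.
w_nom = pos.mean(0) - neg.mean(0)
b_nom = -w_nom @ mid

# A "tilted" hyperplane: dominated by the low-variance nuisance
# direction, yet it still separates every training point.
w_tilt = w_nom + 100.0 * np.eye(d)[1]
b_tilt = -w_tilt @ mid

for w, b in [(w_nom, b_nom), (w_tilt, b_tilt)]:
    assert all(classify(w, b, x) == 1.0 for x in pos)
    assert all(classify(w, b, x) == -1.0 for x in neg)

# Same training accuracy, very different robustness: the tilted
# boundary lies an order of magnitude closer to the data, so much
# smaller perturbations (mostly along the nuisance axis) suffice
# to flip the predicted class.
x = pos[0]
print(boundary_distance(w_nom, b_nom, x))   # a few units
print(boundary_distance(w_tilt, b_tilt, x)) # roughly ten times smaller
```

The point of the sketch is that both hyperplanes are indistinguishable by accuracy alone; only their geometry relative to the data manifold differs, which is the explanation the paper develops for adversarial vulnerability.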
no code implementations • 19 Jun 2018 • Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin
Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult.
no code implementations • 9 Sep 2016 • Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin
In this contribution, we demonstrate for the first time the use of Convolutional Neural Networks (CNNs), a type of deep learning, to automate the detection of SMTs in full-size X-ray images of cargo containers.
no code implementations • 2 Aug 2016 • Thomas W. Rogers, Nicolas Jaccard, Edward J. Morton, Lewis D. Griffin
We review the relatively immature field of automated image analysis for X-ray cargo imagery.
no code implementations • 26 Jun 2016 • Nicolas Jaccard, Thomas W. Rogers, Edward J. Morton, Lewis D. Griffin
In this contribution, we describe a method for the detection of cars in X-ray cargo images based on trained-from-scratch Convolutional Neural Networks.
no code implementations • 5 Jun 2016 • Qiyang Zhao, Lewis D. Griffin
The paper focuses on utilizing FCNN-based dense semantic predictions in bottom-up image segmentation, arguing that semantic cues should be taken into account from the very beginning.
no code implementations • 16 Mar 2016 • Qiyang Zhao, Lewis D. Griffin
These units suppress signals of exceptional magnitude.