1 code implementation • 5 Jul 2017 • Reuben Binns, Michael Veale, Max Van Kleek, Nigel Shadbolt
This paper provides exploratory methods for measuring the normative biases of algorithmic content moderation systems, via a case study using an existing dataset of comments labelled for offence.
no code implementations • 20 Jan 2021 • Nitin Agrawal, Reuben Binns, Max Van Kleek, Kim Laine, Nigel Shadbolt
Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis.
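As a concrete illustration of one of the named technologies, here is a minimal sketch of the Laplace mechanism from differential privacy; the function name and parameters are illustrative and not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count of 100 whose sensitivity is 1
# (adding or removing one person changes the count by at most 1).
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller `epsilon` means stronger privacy and larger noise, which is the privacy/utility trade-off the paper's "common promise" alludes to.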
Human-Computer Interaction
no code implementations • 9 Oct 2021 • Siddhartha Datta, Giulio Lovisotto, Ivan Martinovic, Nigel Shadbolt
As collaborative learning and the outsourcing of data collection become more common, malicious actors (or agents) that attempt to manipulate the learning process face an additional obstacle: they compete with each other.
1 code implementation • 20 Dec 2021 • Siddhartha Datta, Konrad Kollnig, Nigel Shadbolt
Digital harms are widespread in the mobile ecosystem.
no code implementations • 24 Jan 2022 • Siddhartha Datta, Nigel Shadbolt
Attack vectors that compromise machine learning pipelines in the physical world have been demonstrated in recent research, from perturbations to architectural components.
no code implementations • 28 Jan 2022 • Siddhartha Datta, Nigel Shadbolt
Malicious agents in collaborative learning and outsourced data collection threaten the training of clean models.
no code implementations • 7 Mar 2022 • Siddhartha Datta, Nigel Shadbolt
This motivates the paper's work on constructing multi-agent backdoor defenses that maximize accuracy w.r.t. clean labels.
no code implementations • NAACL (WOAH) 2022 • Siddhartha Datta, Konrad Kollnig, Nigel Shadbolt
Digital harms can manifest across any interface.
no code implementations • 19 May 2022 • Siddhartha Datta, Nigel Shadbolt
Inspired by recent work on neural subspaces and mode connectivity, we revisit parameter subspace sampling for shifted and/or interpolatable input distributions (instead of a single, unshifted distribution).
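A rough sketch of the simplest case of parameter subspace sampling — drawing models from the line segment between two trained solutions, as in mode connectivity work. The helper below is illustrative, not the authors' method.

```python
import numpy as np

def interpolate_params(theta_a, theta_b, alpha):
    """Return a point on the line segment between two parameter vectors.

    alpha = 0 recovers theta_a, alpha = 1 recovers theta_b; intermediate
    alphas sample the one-dimensional subspace spanned by the endpoints.
    """
    return (1.0 - alpha) * theta_a + alpha * theta_b

# Sample a few models along the segment between two (toy) trained solutions.
theta_a = np.array([0.0, 2.0, -1.0])
theta_b = np.array([4.0, 0.0, 1.0])
samples = [interpolate_params(theta_a, theta_b, a) for a in (0.0, 0.5, 1.0)]
```

Evaluating the loss at such sampled points (here under shifted input distributions rather than a single one) is the kind of revisiting the snippet describes.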
no code implementations • 29 Sep 2022 • Siddhartha Datta, Nigel Shadbolt
Adapting model parameters to incoming streams of data is a crucial factor in deep learning scalability.
no code implementations • 27 Jan 2023 • Siddhartha Datta, Nigel Shadbolt
Large models exhibit strong zero-shot and few-shot capabilities.
no code implementations • 28 Sep 2023 • Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge
This paper has two goals: first, we delineate how these challenges impede the accessibility, replicability, reliability, and trustworthiness of LMaaS.
no code implementations • 17 Jan 2024 • Emanuele La Malfa, Christoph Weinhuber, Orazio Torre, Fangru Lin, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge
We investigate the extent to which Large Language Models (LLMs) can simulate the execution of computer code and algorithms.