Search Results for author: Aleksandar Petrov

Found 14 papers, 7 papers with code

On the Coexistence and Ensembling of Watermarks

1 code implementation • 29 Jan 2025 • Aleksandar Petrov, Shruti Agarwal, Philip H. S. Torr, Adel Bibi, John Collomosse

We perform the first study of coexistence of deep image watermarking methods and, contrary to intuition, we find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness.
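For intuition, here is a minimal toy sketch of what watermark coexistence means: two keyed watermarks are embedded in the same image one after the other, and each can still be decoded afterwards. The additive-pattern encoder and correlation decoder below are hypothetical stand-ins, not the open-source watermarking methods studied in the paper.

```python
import numpy as np

# Toy sketch of watermark "coexistence": two keyed additive watermarks are embedded in the
# same image in sequence, and each is decoded independently afterwards. The encode/decode
# functions are hypothetical stand-ins, not the open-source methods studied in the paper.

SIZE = 128

def make_watermarker(seed, strength=8.0):
    """Return (encode, decode) for a toy additive-pattern watermark keyed by `seed`."""
    pattern = np.random.default_rng(seed).standard_normal((SIZE, SIZE))

    def encode(image):
        return image + strength * pattern                  # embed by adding the keyed pattern

    def decode(image):
        # Correlate the mean-centred image with the keyed pattern; a large score means "present".
        score = np.mean((image - image.mean()) * pattern)
        return score > strength / 2

    return encode, decode

encode_a, decode_a = make_watermarker(seed=1)
encode_b, decode_b = make_watermarker(seed=2)

image = np.random.default_rng(0).uniform(0, 255, size=(SIZE, SIZE))
stacked = encode_b(encode_a(image))                        # apply both watermarks in sequence

print("watermark A decodes:", decode_a(stacked))           # both should still decode as True
print("watermark B decodes:", decode_b(stacked))
```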

Universal In-Context Approximation By Prompting Fully Recurrent Models

1 code implementation • 3 Jun 2024 • Aleksandar Petrov, Tom A. Lamb, Alasdair Paren, Philip H. S. Torr, Adel Bibi

We demonstrate that RNNs, LSTMs, GRUs, Linear RNNs, and linear gated architectures such as Mamba and Hawk/Griffin can also serve as universal in-context approximators.

In-Context Learning • Mamba
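As a rough illustration of the setting above, the sketch below prompts a small recurrent model and reads its prediction from the final hidden state. The tiny GRU is an assumed stand-in for a pretrained recurrent language model; the example shows only the prompting interface the paper analyzes, not its universal-approximation construction.

```python
import torch
import torch.nn as nn

# Minimal sketch of prompting a recurrent model: the prompt (in-context examples plus a
# query) is consumed token by token, and the answer is read out from the hidden state.
# The tiny GRU below is an assumed stand-in for a pretrained recurrent LM.

torch.manual_seed(0)

vocab_size, d_model = 16, 32
embed = nn.Embedding(vocab_size, d_model)
rnn = nn.GRU(d_model, d_model, batch_first=True)     # fully recurrent backbone
head = nn.Linear(d_model, vocab_size)                # next-token readout

def generate(prompt_ids, steps=3):
    """Feed a prompt through the frozen model and greedily decode `steps` tokens."""
    ids = list(prompt_ids)
    with torch.no_grad():
        for _ in range(steps):
            x = embed(torch.tensor([ids]))           # (1, seq_len, d_model)
            _, h = rnn(x)                            # final hidden state summarizes the prompt
            next_id = head(h[-1]).argmax(dim=-1).item()
            ids.append(next_id)
    return ids

# In-context exemplars followed by a query, expressed purely as prompt tokens.
prompt = [3, 7, 1, 9, 5, 2]
print(generate(prompt))
```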

Prompting a Pretrained Transformer Can Be a Universal Approximator

no code implementations • 22 Feb 2024 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi

Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited.

When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations

1 code implementation • 30 Oct 2023 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi

Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters.

In-Context Learning
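For context, here is a minimal sketch of soft prompting (prompt tuning) under simplified assumptions: a few learnable virtual-token embeddings are prepended to the input and are the only parameters that get trained, while the backbone (a tiny stand-in transformer here) and the readout head stay frozen. It illustrates the general family of context-based fine-tuning methods, not the specific models or constructions analyzed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of soft prompting (prompt tuning): a handful of learnable "virtual token"
# embeddings are prepended to the input; only those embeddings are trained while the
# backbone and readout stay frozen. The tiny transformer is an assumed stand-in.

torch.manual_seed(0)
d_model, n_virtual, n_classes = 32, 5, 2

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
classifier = nn.Linear(d_model, n_classes)
for p in list(backbone.parameters()) + list(classifier.parameters()):
    p.requires_grad = False                          # the pretrained weights are never updated

soft_prompt = nn.Parameter(torch.randn(n_virtual, d_model) * 0.02)  # only trainable parameters
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def forward(token_embeddings):
    """Prepend the soft prompt to (batch, seq, d_model) token embeddings and classify."""
    batch = token_embeddings.shape[0]
    prefix = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    hidden = backbone(torch.cat([prefix, token_embeddings], dim=1))
    return classifier(hidden[:, 0])                  # read the prediction from the first position

# One illustrative training step on random data.
x = torch.randn(8, 10, d_model)
y = torch.randint(0, n_classes, (8,))
loss = F.cross_entropy(forward(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```

Prefix-tuning differs in that the learned vectors are injected as keys and values at every attention layer rather than only at the input, but the frozen-backbone principle is the same.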

Language Models as a Service: Overview of a New Paradigm and its Challenges

no code implementations • 28 Sep 2023 • Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony G. Cohn, Nigel Shadbolt, Michael Wooldridge

This paper has two goals: on the one hand, we delineate how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LMaaS.

Benchmarking

Certifying Ensembles: A General Certification Theory with S-Lipschitzness

no code implementations • 25 Apr 2023 • Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi

Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.

Robustness of Unsupervised Representation Learning without Labels

1 code implementation • 8 Oct 2022 • Aleksandar Petrov, Marta Kwiatkowska

When used in adversarial training, they improve most unsupervised robustness measures, including certified robustness.

Representation Learning
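The snippet above refers to label-free adversarial training of a representation encoder. The sketch below shows one generic way to do this, assuming a feature-space PGD attack that pushes perturbed representations away from the clean ones; it is not the specific robustness measures or attacks proposed in the paper.

```python
import torch
import torch.nn as nn

# Generic sketch of label-free adversarial training for a representation encoder:
# the attack perturbs the input to maximize the feature-space distance from the clean
# representation, and the training step pulls the perturbed representation back.
# This illustrates unsupervised adversarial training in general, not the paper's measures.

torch.manual_seed(0)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def feature_pgd(x, eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD in input space that pushes representations away from the clean ones (no labels)."""
    clean_features = encoder(x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        distance = (encoder(x + delta) - clean_features).pow(2).sum(dim=1).mean()
        distance.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()       # ascend on the feature distance
            delta.clamp_(-eps, eps)                  # stay inside the L-infinity ball
        delta.grad.zero_()
    return (x + delta).detach()

x = torch.rand(4, 3, 32, 32)                         # a toy batch of unlabeled images
x_adv = feature_pgd(x)

optimizer.zero_grad()                                # discard gradients accumulated by the attack
loss = (encoder(x_adv) - encoder(x)).pow(2).sum(dim=1).mean()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```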

Learning Camera Miscalibration Detection

1 code implementation • 24 May 2020 • Andrei Cramariuc, Aleksandar Petrov, Rohit Suri, Mayank Mittal, Roland Siegwart, Cesar Cadena

Self-diagnosis and self-repair are some of the key challenges in deploying robotic platforms for long-term real-world applications.

Dataset Generation
