Search Results for author: Tu Bui

Found 19 papers, 6 papers with code

ProMark: Proactive Diffusion Watermarking for Causal Attribution

no code implementations • 14 Mar 2024 • Vishal Asnani, John Collomosse, Tu Bui, Xiaoming Liu, Shruti Agarwal

ProMark can maintain image quality whilst outperforming correlation-based attribution.

Attribute

VIXEN: Visual Text Comparison Network for Image Difference Captioning

no code implementations • 29 Feb 2024 • Alexander Black, Jing Shi, Yifei Fan, Tu Bui, John Collomosse

We present VIXEN - a technique that succinctly summarizes in text the visual differences between a pair of images in order to highlight any content manipulation present.

Language Modelling • Large Language Model • +1

TrustMark: Universal Watermarking for Arbitrary Resolution Images

no code implementations • 30 Nov 2023 • Tu Bui, Shruti Agarwal, John Collomosse

We propose TrustMark - a GAN-based watermarking method with a novel architecture and spatio-spectra losses that balance the trade-off between watermarked image quality and watermark recovery accuracy.

Misinformation
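
As a rough illustration of the quality-versus-recovery trade-off described above (not TrustMark's actual spatio-spectra losses), a watermarking objective can weight an image-fidelity term against a bit-recovery term; the weights and tensors below are placeholders.

```python
import torch
import torch.nn.functional as F

def watermark_loss(cover, stego, bits_true, bits_logits, alpha=1.0, beta=0.1):
    """Toy trade-off between image fidelity and watermark recovery.

    quality: how close the watermarked (stego) image stays to the cover.
    recovery: binary cross-entropy on the decoded watermark bits.
    alpha/beta are illustrative weights, not the paper's values.
    """
    quality = F.mse_loss(stego, cover)  # image fidelity term
    recovery = F.binary_cross_entropy_with_logits(bits_logits, bits_true)
    return alpha * quality + beta * recovery

# Example with random tensors standing in for an encoder/decoder pair.
cover = torch.rand(2, 3, 256, 256)
stego = cover + 0.01 * torch.randn_like(cover)   # pretend watermark residual
bits_true = torch.randint(0, 2, (2, 100)).float()
bits_logits = torch.randn(2, 100)                # pretend decoder output
print(watermark_loss(cover, stego, bits_true, bits_logits))
```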

RoSteALS: Robust Steganography using Autoencoder Latent Space

1 code implementation • 6 Apr 2023 • Tu Bui, Shruti Agarwal, Ning Yu, John Collomosse

Data hiding such as steganography and invisible watermarking has important applications in copyright protection, privacy-preserved communication and content provenance.

Denoising
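
A minimal sketch of the latent-space hiding idea, assuming a frozen autoencoder whose latent we perturb; the module names, sizes, and 0.1 scaling are illustrative stand-ins, not the RoSteALS implementation.

```python
import torch
import torch.nn as nn

class LatentStego(nn.Module):
    """Minimal sketch of hiding a secret in a (frozen) autoencoder latent.

    The autoencoder itself is assumed to exist elsewhere (encode/decode);
    here we only operate on stand-in latent vectors.
    """
    def __init__(self, latent_dim=64, secret_bits=100):
        super().__init__()
        self.secret_to_offset = nn.Linear(secret_bits, latent_dim)  # learned embedder
        self.bit_decoder = nn.Linear(latent_dim, secret_bits)       # learned extractor

    def hide(self, latent, secret):
        # Perturb the cover latent with a small, secret-dependent offset.
        return latent + 0.1 * self.secret_to_offset(secret)

    def reveal(self, latent):
        # Recover secret bits from the (possibly degraded) latent.
        return torch.sigmoid(self.bit_decoder(latent))

model = LatentStego()
cover_latent = torch.randn(4, 64)                 # stand-in for encoder output
secret = torch.randint(0, 2, (4, 100)).float()
stego_latent = model.hide(cover_latent, secret)
recovered = model.reveal(stego_latent)
print(recovered.shape)  # torch.Size([4, 100])
```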

VADER: Video Alignment Differencing and Retrieval

no code implementations • ICCV 2023 • Alexander Black, Simon Jenni, Tu Bui, Md. Mehrab Tanjim, Stefano Petrangeli, Ritwik Sinha, Viswanathan Swaminathan, John Collomosse

We propose VADER, a spatio-temporal matching, alignment, and change summarization method to help fight misinformation spread via manipulated videos.

Misinformation • Retrieval • +2

PARASOL: Parametric Style Control for Diffusion Image Synthesis

no code implementations • 11 Mar 2023 • Gemma Canet Tarrés, Dan Ruta, Tu Bui, John Collomosse

We propose PARASOL, a multi-modal synthesis model that enables disentangled, parametric control of the visual style of the image by jointly conditioning synthesis on both content and a fine-grained visual style embedding.

Image Generation

RepMix: Representation Mixing for Robust Attribution of Synthesized Images

1 code implementation • 5 Jul 2022 • Tu Bui, Ning Yu, John Collomosse

Uniquely, we present a solution to this task capable of 1) matching images invariant to their semantic content and 2) remaining robust to benign transformations (changes in quality, resolution, shape, etc.)
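
One plausible reading of "representation mixing" is mixup applied to intermediate features rather than pixels; the sketch below shows that idea with stand-in features and is not the paper's exact formulation.

```python
import torch

def repmix(features, labels, alpha=0.4):
    """Toy feature-level mixup (one reading of "representation mixing").

    Blends intermediate representations of pairs of samples and returns the
    mixed features plus both label sets and the mixing coefficient, so the
    attribution loss can be interpolated accordingly. alpha is illustrative.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(features.size(0))
    mixed = lam * features + (1.0 - lam) * features[perm]
    return mixed, labels, labels[perm], lam

feats = torch.randn(8, 256)            # stand-in for backbone features
labels = torch.randint(0, 5, (8,))     # source-attribution labels
mixed, y_a, y_b, lam = repmix(feats, labels)
print(mixed.shape, lam)
```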

SImProv: Scalable Image Provenance Framework for Robust Content Attribution

no code implementations • 28 Jun 2022 • Alexander Black, Tu Bui, Simon Jenni, Zhifei Zhang, Viswanathan Swaminathan, John Collomosse

We present SImProv - a scalable image provenance framework to match a query image back to a trusted database of originals and identify possible manipulations on the query.

Re-Ranking • Retrieval
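
The retrieve-then-re-rank pattern implied by the abstract can be sketched as a first-stage nearest-neighbour search over a trusted database of embeddings; the embedding dimension and database below are stand-ins, and the heavier re-ranking stage is only noted in a comment.

```python
import torch
import torch.nn.functional as F

def retrieve_top_k(query_emb, db_embs, k=5):
    """First stage of a retrieve-then-re-rank pipeline: cosine similarity
    against a trusted database of original-image embeddings, returning the
    top-k candidates for a second, finer-grained re-ranking step."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), db_embs, dim=-1)
    scores, idx = sims.topk(k)
    return idx, scores

db = F.normalize(torch.randn(10_000, 128), dim=-1)   # stand-in database embeddings
query = F.normalize(torch.randn(128), dim=-1)        # stand-in query embedding
idx, scores = retrieve_top_k(query, db)
print(idx.tolist(), scores.tolist())
# A second, heavier model would then re-rank these candidates and flag manipulations.
```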

CoGS: Controllable Generation and Search from Sketch and Style

1 code implementation • 17 Mar 2022 • Cusuh Ham, Gemma Canet Tarres, Tu Bui, James Hays, Zhe Lin, John Collomosse

CoGS enables exploration of diverse appearance possibilities for a given sketched object, enabling decoupled control over the structure and the appearance of the output.

Object

VPN: Video Provenance Network for Robust Content Attribution

no code implementations • 21 Sep 2021 • Alexander Black, Tu Bui, Simon Jenni, Vishy Swaminathan, John Collomosse

We present VPN - a content attribution method for recovering provenance information from videos shared online.

Contrastive Learning

Scene Designer: a Unified Model for Scene Search and Synthesis from Sketch

1 code implementation • 16 Aug 2021 • Leo Sampaio Ferraz Ribeiro, Tu Bui, John Collomosse, Moacir Ponti

Scene Designer is a novel method for searching and generating images using free-hand sketches of scene compositions; i.e. drawings that describe both the appearance and relative positions of objects.

Contrastive Learning • Object

OSCAR-Net: Object-centric Scene Graph Attention for Image Attribution

no code implementations • ICCV 2021 • Eric Nguyen, Tu Bui, Vishy Swaminathan, John Collomosse

Our key contribution is OSCAR-Net (Object-centric Scene Graph Attention for Image Attribution Network); a robust image hashing model inspired by recent successes of Transformers in the visual domain.

Contrastive Learning • Graph Attention
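
A very rough sketch of the object-centric attention hashing idea, assuming per-object features are already extracted: attend over objects, pool, and binarize into a compact hash. The layer sizes and single attention block are assumptions, not OSCAR-Net's architecture.

```python
import torch
import torch.nn as nn

class ObjectAttentionHash(nn.Module):
    """Toy object-centric hashing: per-object features are pooled with
    self-attention and binarized into a compact image hash."""
    def __init__(self, obj_dim=256, hash_bits=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(obj_dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(obj_dim, hash_bits)

    def forward(self, obj_feats):                  # (batch, num_objects, obj_dim)
        attended, _ = self.attn(obj_feats, obj_feats, obj_feats)
        pooled = attended.mean(dim=1)              # aggregate the scene
        return torch.sign(self.proj(pooled))       # binary image hash

hasher = ObjectAttentionHash()
print(hasher(torch.randn(2, 7, 256)).shape)        # torch.Size([2, 64])
```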

Compositional Sketch Search

1 code implementation • 15 Jun 2021 • Alexander Black, Tu Bui, Long Mai, Hailin Jin, John Collomosse

We present an algorithm for searching image collections using free-hand sketches that describe the appearance and relative positions of multiple objects.

Position • Quantization • +2

Sketchformer: Transformer-based Representation for Sketched Structure

1 code implementation • CVPR 2020 • Leo Sampaio Ferraz Ribeiro, Tu Bui, John Collomosse, Moacir Ponti

Sketchformer is a novel transformer-based representation for encoding free-hand sketch input in a vector form, i.e. as a sequence of strokes.

Cross-Modal Retrieval • Dictionary Learning • +3
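
In the spirit of a sequence-of-strokes representation, a tiny transformer encoder over (dx, dy, pen-state) tokens can stand in for the idea; all sizes below are illustrative, not Sketchformer's configuration.

```python
import torch
import torch.nn as nn

class TinySketchEncoder(nn.Module):
    """Minimal sketch-as-sequence encoder: each stroke point (dx, dy, pen-state)
    is embedded and fed through a transformer encoder; the mean token is used
    as the sketch embedding."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)         # (dx, dy, pen_state) -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, strokes):                    # (batch, seq_len, 3)
        tokens = self.embed(strokes)
        return self.encoder(tokens).mean(dim=1)    # (batch, d_model)

sketch = torch.randn(1, 200, 3)                    # a 200-point stroke sequence
print(TinySketchEncoder()(sketch).shape)           # torch.Size([1, 128])
```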

LiveSketch: Query Perturbations for Guided Sketch-based Visual Search

no code implementations • CVPR 2019 • John Collomosse, Tu Bui, Hailin Jin

LiveSketch is a novel algorithm for searching large image collections using hand-sketched queries.

Clustering

Sketching With Style: Visual Search With Sketches and Aesthetic Context

no code implementations • ICCV 2017 • John Collomosse, Tu Bui, Michael J. Wilber, Chen Fang, Hailin Jin

We propose a novel measure of visual similarity for image retrieval that incorporates both structural and aesthetic (style) constraints.

Image Retrieval • Retrieval

Generalisation and Sharing in Triplet Convnets for Sketch based Visual Search

no code implementations • 16 Nov 2016 • Tu Bui, Leonardo Ribeiro, Moacir Ponti, John Collomosse

We propose and evaluate several triplet CNN architectures for measuring the similarity between sketches and photographs, within the context of the sketch based image retrieval (SBIR) task.

Data Augmentation • Dimensionality Reduction • +3
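
The triplet objective named in the abstract can be illustrated directly with PyTorch's built-in loss: a sketch anchor is pulled toward a matching photo embedding and pushed from a non-matching one. The embeddings here are random stand-ins for the paper's CNN branches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative triplet objective for sketch-based image retrieval: pull a sketch
# (anchor) towards a photo of the same object (positive) and away from a photo
# of a different object (negative).
triplet = nn.TripletMarginLoss(margin=0.2)

sketch_emb = F.normalize(torch.randn(16, 128), dim=-1)   # anchor branch output
photo_pos  = F.normalize(torch.randn(16, 128), dim=-1)   # matching photos
photo_neg  = F.normalize(torch.randn(16, 128), dim=-1)   # non-matching photos

loss = triplet(sketch_emb, photo_pos, photo_neg)
print(loss.item())
```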
