Search Results for author: Manuel Brack

Found 15 papers, 12 papers with code

DeiSAM: Segment Anything with Deictic Prompting

1 code implementation • 21 Feb 2024 • Hikaru Shindo, Manuel Brack, Gopika Sudhakaran, Devendra Singh Dhami, Patrick Schramowski, Kristian Kersting

To remedy this issue, we propose DeiSAM -- a combination of large pre-trained neural networks with differentiable logic reasoners -- for deictic promptable segmentation.

Image Segmentation, Segmentation +1
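
As a rough illustration only, and not DeiSAM's actual implementation, the sketch below shows what a deictic-prompting pipeline of the shape described in the abstract could look like: a pre-trained perception model proposes objects and their relations, a simple relational rule stands in for the differentiable logic reasoner, and a promptable segmenter is queried with the selected region. All helper functions and names here are hypothetical placeholders.

```python
# Hypothetical sketch of a deictic-prompting pipeline in the spirit of DeiSAM.
# The helpers below are placeholders, not the paper's API: a real system would
# plug in a pre-trained scene-graph model, a differentiable logic reasoner, and
# a promptable segmentation model (e.g. SAM).
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    name: str
    box: tuple[int, int, int, int]                            # (x0, y0, x1, y1)
    relations: dict[str, str] = field(default_factory=dict)   # e.g. {"on": "table"}


def generate_scene_graph(image) -> list[SceneObject]:
    """Placeholder for a pre-trained object detector / scene-graph generator."""
    raise NotImplementedError


def segment_region(image, box):
    """Placeholder for a promptable segmenter queried with a box prompt."""
    raise NotImplementedError


def matches_deictic_rule(obj: SceneObject, rule: dict[str, str]) -> bool:
    # Toy, non-differentiable stand-in for the logic reasoner: a deictic prompt
    # like "the object on the table" would be parsed into {"on": "table"}.
    return all(obj.relations.get(rel) == target for rel, target in rule.items())


def deictic_segment(image, rule: dict[str, str]):
    """Select objects satisfying the deictic description, then segment them."""
    objects = generate_scene_graph(image)
    return [segment_region(image, o.box) for o in objects if matches_deictic_rule(o, rule)]
```

With real models plugged into the two placeholders, a call such as `deictic_segment(img, {"on": "table"})` would return one mask per object the rule selects.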

Distilling Adversarial Prompts from Safety Benchmarks: Report for the Adversarial Nibbler Challenge

no code implementations • 20 Sep 2023 • Manuel Brack, Patrick Schramowski, Kristian Kersting

Text-conditioned image generation models have recently achieved astonishing image quality and alignment results.

Image Generation

Mitigating Inappropriateness in Image Generation: Can there be Value in Reflecting the World's Ugliness?

no code implementations • 28 May 2023 • Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting

Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications.

Image Generation

Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations

1 code implementation • 16 Mar 2023 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting

Neural network-based image classifiers are powerful tools for computer vision tasks, but they inadvertently reveal sensitive attribute information about their classes, raising concerns about their privacy.

Attribute, Face Recognition +2

Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness

1 code implementation • 7 Feb 2023 • Felix Friedrich, Manuel Brack, Lukas Struppek, Dominik Hintersdorf, Patrick Schramowski, Sasha Luccioni, Kristian Kersting

Generative AI models have recently achieved astonishing results in quality and are consequently employed in a fast-growing number of applications.

Fairness, Text-to-Image Generation

AtMan: Understanding Transformer Predictions Through Memory Efficient Attention Manipulation

1 code implementation • NeurIPS 2023 • Björn Deiseroth, Mayukh Deb, Samuel Weinbach, Manuel Brack, Patrick Schramowski, Kristian Kersting

Generative transformer models have become increasingly complex, with large numbers of parameters and the ability to process multiple input modalities.

The Stable Artist: Steering Semantics in Diffusion Latent Space

2 code implementations • 12 Dec 2022 • Manuel Brack, Patrick Schramowski, Felix Friedrich, Dominik Hintersdorf, Kristian Kersting

Large, text-conditioned generative diffusion models have recently gained a lot of attention for their impressive performance in generating high-fidelity images from text alone.

Image Generation

Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models

2 code implementations • CVPR 2023 • Patrick Schramowski, Manuel Brack, Björn Deiseroth, Kristian Kersting

Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications.

Image Generation

Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis

2 code implementations • 19 Sep 2022 • Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, Patrick Schramowski, Kristian Kersting

Models for text-to-image synthesis, such as DALL-E 2 and Stable Diffusion, have recently drawn a lot of interest from academia and the general public.

Image Generation

Does CLIP Know My Face?

2 code implementations • 15 Sep 2022 • Dominik Hintersdorf, Lukas Struppek, Manuel Brack, Felix Friedrich, Patrick Schramowski, Kristian Kersting

Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy.

Inference Attack

ILLUME: Rationalizing Vision-Language Models through Human Interactions

1 code implementation • 17 Aug 2022 • Manuel Brack, Patrick Schramowski, Björn Deiseroth, Kristian Kersting

Bootstrapping from pre-trained language models has proven to be an efficient approach for building vision-language models (VLMs) for tasks such as image captioning or visual question answering.

Image Captioning, Question Answering +2
