Search Results for author: Angie Boggust

Found 9 papers, 9 papers with code

LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity

1 code implementation · 4 Apr 2024 · Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne

Vision Transformers (ViTs), with their ability to model long-range dependencies through self-attention mechanisms, have become a standard architecture in computer vision.

DiffusionWorldViewer: Exposing and Broadening the Worldview Reflected by Generative Text-to-Image Models

1 code implementation · 18 Sep 2023 · Zoe De Simone, Angie Boggust, Arvind Satyanarayan, Ashia Wilson

Generative text-to-image (TTI) models produce high-quality images from short textual descriptions and are widely used in academic and creative domains.

Fairness

VisText: A Benchmark for Semantically Rich Chart Captioning

1 code implementation · 28 Jun 2023 · Benny J. Tang, Angie Boggust, Arvind Satyanarayan

Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities.

Machine Translation · Text Generation

Saliency Cards: A Framework to Characterize and Compare Saliency Methods

1 code implementation · 7 Jun 2022 · Angie Boggust, Harini Suresh, Hendrik Strobelt, John V. Guttag, Arvind Satyanarayan

With saliency cards, we are able to analyze the research landscape in a more structured fashion, identifying opportunities for new methods and evaluation metrics that address unmet user needs.

Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior

1 code implementation · 20 Jul 2021 · Angie Boggust, Benjamin Hoover, Arvind Satyanarayan, Hendrik Strobelt

Saliency methods, techniques that identify the importance of input features to a model's output, are a common step in understanding neural network behavior.

Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples

1 code implementation · 10 Dec 2019 · Angie Boggust, Brandon Carter, Arvind Satyanarayan

Embeddings, which map high-dimensional discrete input to lower-dimensional continuous vector spaces, have been widely adopted in machine learning applications as a way to capture domain semantics.
