Search Results for author: Charles Godfrey

Found 13 papers, 5 papers with code

Understanding the Inner Workings of Language Models Through Representation Dissimilarity

no code implementations23 Oct 2023 Davis Brown, Charles Godfrey, Nicholas Konz, Jonathan Tu, Henry Kvinge

As language models are applied to an increasing number of real-world applications, understanding their inner workings has become an important issue in model trust, interpretability, and transparency.

Language Modelling

Attributing Learned Concepts in Neural Networks to Training Data

no code implementations4 Oct 2023 Nicholas Konz, Charles Godfrey, Madelyn Shapiro, Jonathan Tu, Henry Kvinge, Davis Brown

By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data.

Impact of architecture on robustness and interpretability of multispectral deep neural networks

1 code implementation21 Sep 2023 Charles Godfrey, Elise Bishoff, Myles Mckay, Eleanor Byler

At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels.
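The "early fusion" scheme described above can be sketched in a few lines: additional spectral bands are simply concatenated along the channel axis before the image enters a standard network. This is an illustrative sketch only; the image shapes, channel layout, and band names are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical RGB and near-infrared bands for one image, HWC layout.
rgb = np.random.rand(224, 224, 3)  # three visible-light channels
nir = np.random.rand(224, 224, 1)  # one assumed near-infrared band

# Early fusion: stack the extra band as an additional channel, giving a
# single input image with more than three channels.
fused = np.concatenate([rgb, nir], axis=-1)
print(fused.shape)  # (224, 224, 4)
```

The downstream model then only needs its first convolution widened to accept four input channels; everything else is unchanged.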

Quantifying the robustness of deep multispectral segmentation models against natural perturbations and data poisoning

1 code implementation18 May 2023 Elise Bishoff, Charles Godfrey, Myles Mckay, Eleanor Byler

In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations.

Adversarial Robustness · Data Poisoning · +2

How many dimensions are required to find an adversarial example?

no code implementations24 Mar 2023 Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis Brown, Tim Doster, Eleanor Byler

Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of model input.

Fast computation of permutation equivariant layers with the partition algebra

no code implementations10 Mar 2023 Charles Godfrey, Michael G. Rawson, Davis Brown, Henry Kvinge

The space of permutation equivariant linear layers is a generalization of the partition algebra, an object first discovered in statistical physics with deep connections to the representation theory of the symmetric group, and the basis described above generalizes the so-called orbit basis of the partition algebra.
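The simplest instance of a permutation equivariant linear layer is worth seeing concretely. For order-1 inputs in R^n, the space of such layers is two-dimensional, spanned by the identity and the summing map; this toy check is a sketch of that well-known special case, not the paper's partition-algebra construction, and the coefficients `a` and `b` are arbitrary.

```python
import numpy as np

def equivariant_layer(x, a=1.5, b=-0.25):
    # General order-1 permutation equivariant linear map on R^n:
    # a scaled copy of x plus a scaled copy of its sum broadcast to all entries.
    return a * x + b * x.sum() * np.ones_like(x)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
perm = rng.permutation(5)

# Equivariance: applying the layer commutes with permuting the input.
lhs = equivariant_layer(x[perm])
rhs = equivariant_layer(x)[perm]
print(np.allclose(lhs, rhs))  # True
```

Higher-order layers (acting on tensors with several permuted axes) enlarge this basis, which is where the orbit basis of the partition algebra enters.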

Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension

no code implementations16 Feb 2023 Henry Kvinge, Davis Brown, Charles Godfrey

We find that the choice of prompt has a substantial impact on the intrinsic dimension of representations at both layers of the model that we explored, but that the nature of this impact depends on the layer being considered.

Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds

no code implementations19 Nov 2022 Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey, Tegan Emerson

While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain.

Testing predictions of representation cost theory with CNNs

1 code implementation3 Oct 2022 Charles Godfrey, Elise Bishoff, Myles Mckay, Davis Brown, Grayson Jorgenson, Henry Kvinge, Eleanor Byler

It is widely acknowledged that trained convolutional neural networks (CNNs) have different levels of sensitivity to signals of different frequency.

On the Symmetries of Deep Learning Models and their Internal Representations

2 code implementations27 May 2022 Charles Godfrey, Davis Brown, Tegan Emerson, Henry Kvinge

In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data.

Fiber Bundle Morphisms as a Framework for Modeling Many-to-Many Maps

no code implementations15 Mar 2022 Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, Woongjo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula, Henry Kvinge

That is, a single input can potentially yield many different outputs (whether due to noise, imperfect measurement, or intrinsic stochasticity in the process) and many different inputs can yield the same output (that is, the map is not injective).
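The many-to-many behavior described above can be illustrated with a toy forward process (purely illustrative, not the paper's data or model): squaring is non-injective, so distinct inputs collide, while additive noise makes a single input yield many different outputs.

```python
import random

def noisy_square(x, rng):
    # Non-injective forward map (x and -x collide) plus measurement noise,
    # so repeated evaluations at the same x give different outputs.
    return x * x + rng.gauss(0.0, 0.1)

rng = random.Random(42)
# Many inputs -> same region of output space: +2 and -2 both land near 4.
print(noisy_square(2.0, rng), noisy_square(-2.0, rng))
# One input -> many outputs: two calls at x = 2.0 differ due to noise.
print(noisy_square(2.0, rng), noisy_square(2.0, rng))
```

An ordinary function learned by regression averages over these possibilities, which is the modeling gap the fiber-bundle framework is aimed at.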

Benchmarking · Sentiment Analysis
