no code implementations • 23 Oct 2023 • Davis Brown, Charles Godfrey, Nicholas Konz, Jonathan Tu, Henry Kvinge
As language models are applied to an increasing number of real-world applications, understanding their inner workings has become an important issue in model trust, interpretability, and transparency.
no code implementations • 4 Oct 2023 • Nicholas Konz, Charles Godfrey, Madelyn Shapiro, Jonathan Tu, Henry Kvinge, Davis Brown
By now there is substantial evidence that deep learning models learn certain human-interpretable features as part of their internal representations of data.
1 code implementation • 21 Sep 2023 • Charles Godfrey, Elise Bishoff, Myles Mckay, Eleanor Byler
At one extreme, known as "early fusion," additional bands are stacked as extra channels to obtain an input image with more than three channels.
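The channel-stacking described here is simple to picture in code. Below is a minimal illustrative sketch (not code from the paper): an RGB image and a near-infrared band are concatenated along the channel axis to form a four-channel early-fusion input.

```python
# Minimal sketch of "early fusion": stack a near-infrared (NIR) band
# onto an RGB image as an extra channel. Shapes are hypothetical.
import numpy as np

rgb = np.zeros((64, 64, 3), dtype=np.float32)  # H x W x 3 RGB image
nir = np.zeros((64, 64, 1), dtype=np.float32)  # H x W x 1 NIR band

# Concatenate along the channel axis to get an H x W x 4 input.
fused = np.concatenate([rgb, nir], axis=-1)
print(fused.shape)  # (64, 64, 4)
```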
1 code implementation • NeurIPS 2023 • Kelsey Lieberman, James Diffenderfer, Charles Godfrey, Bhavya Kailkhura
Our benchmarks, spectral inspection tools, and findings provide a crucial bridge to the real-world adoption of NIC.
1 code implementation • 18 May 2023 • Elise Bishoff, Charles Godfrey, Myles Mckay, Eleanor Byler
In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations.
no code implementations • 24 Mar 2023 • Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis Brown, Tim Doster, Eleanor Byler
Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of model input.
no code implementations • 10 Mar 2023 • Charles Godfrey, Michael G. Rawson, Davis Brown, Henry Kvinge
The space of permutation equivariant linear layers is a generalization of the partition algebra, an object first discovered in statistical physics with deep connections to the representation theory of the symmetric group, and the basis described above generalizes the so-called orbit basis of the partition algebra.
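A small special case makes the equivariance constraint concrete. For inputs in R^n acted on by the symmetric group S_n, the equivariant linear maps are spanned by the identity and the all-ones matrix; the sketch below (illustrative only, not the paper's code or its general orbit-basis construction) checks that such a map commutes with a random permutation.

```python
# Sketch: a two-dimensional space of S_n-equivariant linear maps on R^n,
# spanned by the identity matrix and the all-ones matrix.
import numpy as np

n = 5
rng = np.random.default_rng(0)
a, b = rng.normal(), rng.normal()
W = a * np.eye(n) + b * np.ones((n, n))  # general equivariant weight matrix

perm = rng.permutation(n)
P = np.eye(n)[perm]                      # permutation matrix for perm

x = rng.normal(size=n)
# Equivariance: permuting the input and then applying W gives the same
# result as applying W and then permuting the output.
assert np.allclose(W @ (P @ x), P @ (W @ x))
```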
no code implementations • 28 Feb 2023 • Davis Brown, Charles Godfrey, Cody Nizinski, Jonathan Tu, Henry Kvinge
The current trend toward ever-larger models makes standard retraining procedures an increasingly expensive burden.
no code implementations • 16 Feb 2023 • Henry Kvinge, Davis Brown, Charles Godfrey
We find that the choice of prompt has a substantial impact on the intrinsic dimension of representations at both of the model layers we explored, but that the nature of this impact depends on the layer being considered.
no code implementations • 19 Nov 2022 • Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey, Tegan Emerson
While the last five years have seen considerable progress in understanding the internal representations of deep learning models, many questions remain.
1 code implementation • 3 Oct 2022 • Charles Godfrey, Elise Bishoff, Myles Mckay, Davis Brown, Grayson Jorgenson, Henry Kvinge, Eleanor Byler
It is widely acknowledged that trained convolutional neural networks (CNNs) have different levels of sensitivity to signals of different frequency.
2 code implementations • 27 May 2022 • Charles Godfrey, Davis Brown, Tegan Emerson, Henry Kvinge
In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data.
no code implementations • 15 Mar 2022 • Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, Woongjo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula, Henry Kvinge
That is, a single input can potentially yield many different outputs (whether due to noise, imperfect measurement, or intrinsic stochasticity in the process) and many different inputs can yield the same output (that is, the map is not injective).