Search Results for author: Tegan Emerson

Found 10 papers, 2 papers with code

Neural frames: A Tool for Studying the Tangent Bundles Underlying Image Datasets and How Deep Learning Models Process Them

no code implementations • 19 Nov 2022 • Henry Kvinge, Grayson Jorgenson, Davis Brown, Charles Godfrey, Tegan Emerson

The assumption that many forms of high-dimensional data, such as images, actually live on low-dimensional manifolds, sometimes known as the manifold hypothesis, underlies much of our intuition for how and why deep learning works.
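
As an illustrative aside, one concrete way to probe the tangent structure the abstract above refers to is local PCA: the nearest neighbors of a point on a data manifold approximately span its tangent space. The sketch below is a minimal, assumed setup (synthetic data, neighborhood size k, tangent dimension d), not the paper's neural-frames construction.

```python
# Minimal sketch (assumed setup, not the paper's method): estimate the tangent
# space of a data manifold at a point by local PCA on its nearest neighbors.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image-like" data lying near a 2-D manifold embedded in 100-D.
t = rng.uniform(0, 2 * np.pi, size=(500, 2))
X = np.concatenate([np.cos(t), np.sin(t)], axis=1) @ rng.normal(size=(4, 100))

def local_tangent_basis(X, idx, k=25, d=2):
    """Approximate a d-dimensional tangent basis at X[idx] from its k nearest neighbors."""
    dists = np.linalg.norm(X - X[idx], axis=1)
    nbrs = X[np.argsort(dists)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Top right-singular vectors span the estimated tangent space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d]

basis = local_tangent_basis(X, idx=0)
print(basis.shape)  # (2, 100): two tangent directions in ambient space
```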

Do Neural Networks Trained with Topological Features Learn Different Internal Representations?

no code implementations • 14 Nov 2022 • Sarah McGuire, Shane Jackson, Tegan Emerson, Henry Kvinge

While this field, sometimes known as topological machine learning (TML), has seen some notable successes, an understanding of how the process of learning from topological features differs from the process of learning from raw data is still limited.

Topological Data Analysis
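
For context on what "learning from topological features" can mean in practice, the sketch below computes persistence diagrams with ripser (assumed installed) and reduces them to simple summary features; the synthetic point clouds and the particular summaries are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of "learning from topological features": compute persistence
# diagrams with ripser (assumed installed: pip install ripser) and summarize
# them into simple feature vectors that a downstream model could consume.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)

def sample_circle(n=100, noise=0.05):
    theta = rng.uniform(0, 2 * np.pi, n)
    return np.c_[np.cos(theta), np.sin(theta)] + noise * rng.normal(size=(n, 2))

def sample_blob(n=100):
    return 0.5 * rng.normal(size=(n, 2))

def topological_features(points):
    # H1 diagram: one (birth, death) pair per loop detected in the point cloud.
    dgm_h1 = ripser(points, maxdim=1)['dgms'][1]
    lifetimes = dgm_h1[:, 1] - dgm_h1[:, 0] if len(dgm_h1) else np.array([0.0])
    # Two crude summaries: longest lifetime and total persistence.
    return np.array([lifetimes.max(), lifetimes.sum()])

print(topological_features(sample_circle()))  # large max lifetime (one loop)
print(topological_features(sample_blob()))    # small lifetimes (no real loop)
```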

On the Symmetries of Deep Learning Models and their Internal Representations

1 code implementation • 27 May 2022 • Charles Godfrey, Davis Brown, Tegan Emerson, Henry Kvinge

In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data.
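
A minimal sketch of one such architecture-induced symmetry, assuming a one-hidden-layer ReLU network in NumPy: permuting the hidden units (and applying the matching permutation to the next layer's weights) leaves the model's outputs unchanged. This is only an illustration of the kind of symmetry the paper studies, not its full framework.

```python
# Minimal sketch (illustrative, not the paper's full framework): permuting the
# hidden units of a one-hidden-layer ReLU network, and permuting the columns of
# the following weight matrix to match, leaves the network's function unchanged.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(64, 10)), rng.normal(size=64)
W2, b2 = rng.normal(size=(3, 64)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

perm = rng.permutation(64)                  # an element of the symmetry group
x = rng.normal(size=10)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)

print(np.allclose(y_original, y_permuted))  # True: the symmetry acts trivially on outputs
```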

TopTemp: Parsing Precipitate Structure from Temper Topology

no code implementations • 1 Apr 2022 • Lara Kassab, Scott Howland, Henry Kvinge, Keerti Sahithi Kappagantula, Tegan Emerson

Technological advances are in part enabled by the development of novel manufacturing processes that give rise to new materials or material property improvements.

Fiber Bundle Morphisms as a Framework for Modeling Many-to-Many Maps

no code implementations • 15 Mar 2022 • Elizabeth Coda, Nico Courts, Colby Wight, Loc Truong, Woongjo Choi, Charles Godfrey, Tegan Emerson, Keerti Kappagantula, Henry Kvinge

That is, a single input can potentially yield many different outputs (whether due to noise, imperfect measurement, or intrinsic stochasticity in the process), and many different inputs can yield the same output (i.e., the map is not injective).

Sentiment Analysis
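
A minimal numerical illustration of this many-to-many setting, with an assumed toy process rather than the paper's fiber-bundle model:

```python
# Illustrative sketch of a many-to-many process (not the paper's fiber-bundle
# model): the forward map is noisy, so one input yields many outputs, and its
# deterministic part is non-injective, so distinct inputs can share an output.
import numpy as np

rng = np.random.default_rng(3)

def process(x):
    # Non-injective deterministic part (sin) plus measurement noise.
    return np.sin(x) + 0.05 * rng.normal()

x = 1.0
print([round(process(x), 3) for _ in range(3)])              # same input, different outputs
print(round(np.sin(0.5), 3), round(np.sin(np.pi - 0.5), 3))  # different inputs, same noiseless output
```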

Differential Property Prediction: A Machine Learning Approach to Experimental Design in Advanced Manufacturing

no code implementations • 3 Dec 2021 • Loc Truong, Woongjo Choi, Colby Wight, Lizzy Coda, Tegan Emerson, Keerti Kappagantula, Henry Kvinge

We show that by focusing on the experimenter's need to choose between multiple candidate experimental parameters, we can reframe the challenging regression task of predicting material properties from processing parameters into a classification task on which machine learning models can achieve good performance.

BIG-bench Machine Learning • Experimental Design
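
The sketch below illustrates the general reframing described in the entry above, assuming synthetic processing parameters, a hypothetical ground-truth property function, and an off-the-shelf scikit-learn classifier; it is not the paper's exact model or data.

```python
# Hedged sketch of the general idea: instead of regressing a material property
# from process parameters, classify which of two candidate parameter vectors
# yields the higher property. Data, model, and property function are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def property_of(params):                      # hypothetical ground-truth property
    return params @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=len(params))

params = rng.uniform(size=(400, 3))           # candidate processing parameters
prop = property_of(params)

# Build pairwise examples: features are (params_a, params_b), label answers
# "does candidate a achieve a higher property than candidate b?".
a, b = rng.integers(0, 400, 2000), rng.integers(0, 400, 2000)
X_pairs = np.hstack([params[a], params[b]])
y_pairs = (prop[a] > prop[b]).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_pairs[:1500], y_pairs[:1500])
print("held-out pairwise accuracy:", clf.score(X_pairs[1500:], y_pairs[1500:]))
```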

A Topological Approach for Motion Track Discrimination

no code implementations • 10 Feb 2021 • Tegan Emerson, Sarah Tymochko, George Stantchev, Jason A. Edelberg, Michael Wilson, Colin C. Olson

Detecting small targets at range is difficult because the image sub-region containing the target holds too little spatial information for correlation-based methods to differentiate it from dynamic confusers present in the scene.

Topological Data Analysis of Task-Based fMRI Data from Experiments on Schizophrenia

no code implementations • 22 Sep 2018 • Bernadette J. Stolz, Tegan Emerson, Satu Nahkuri, Mason A. Porter, Heather A. Harrington

With these tools, which allow one to characterize topological invariants such as loops in high-dimensional data, we are able to gain understanding into low-dimensional structures in networks in a way that complements traditional approaches that are based on pairwise interactions.

Community Detection • Time Series • +1
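
A hedged sketch of this general workflow, using synthetic signals in place of fMRI time series: correlations between regions are converted to a distance matrix, and persistent homology (via ripser, assumed installed) reports H1 classes, i.e. loops, in the resulting network. The preprocessing choices here are assumptions, not the paper's pipeline.

```python
# Hedged sketch of the general workflow (not the paper's pipeline): turn
# correlations between time series into a distance matrix and compute
# persistent homology, whose H1 classes correspond to loops in the network.
import numpy as np
from ripser import ripser  # assumed installed: pip install ripser

rng = np.random.default_rng(5)

# 20 synthetic "regions": phase-shifted sinusoids plus noise stand in for fMRI signals.
t = np.linspace(0, 10, 200)
signals = np.array([np.sin(t + rng.normal()) + 0.3 * rng.normal(size=t.size)
                    for _ in range(20)])

corr = np.corrcoef(signals)                      # pairwise correlation between regions
dist = np.sqrt(np.clip(1.0 - corr, 0.0, None))   # correlation-derived distance

dgms = ripser(dist, distance_matrix=True, maxdim=1)['dgms']
print("H1 classes (loops):", len(dgms[1]))
```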

Persistence Images: A Stable Vector Representation of Persistent Homology

4 code implementations • 22 Jul 2015 • Henry Adams, Sofya Chepushtanova, Tegan Emerson, Eric Hanson, Michael Kirby, Francis Motta, Rachel Neville, Chris Peterson, Patrick Shipman, Lori Ziegelmeier

We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs.

BIG-bench Machine Learning • Graph Classification • +1
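
A minimal from-scratch sketch of the construction described above: each (birth, death) point of a persistence diagram is mapped to (birth, persistence) coordinates, weighted, and smoothed by a Gaussian on a pixel grid, yielding a finite-dimensional vector. The resolution, bandwidth, and linear weighting below are assumed choices for illustration; maintained implementations of persistence images are available in packages such as persim.

```python
# Minimal from-scratch sketch of a persistence image: map each (birth, death)
# point to (birth, persistence), weight it, and sum Gaussians on a pixel grid.
# Resolution, bandwidth, and the linear weighting are assumed choices here.
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.1, grid_max=1.0):
    """diagram: array of (birth, death) pairs with finite deaths."""
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]         # birth-persistence coordinates
    xs = np.linspace(0.0, grid_max, resolution)
    ys = np.linspace(0.0, grid_max, resolution)
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros((resolution, resolution))
    for b, p in zip(birth, pers):
        weight = p                               # linear weighting vanishes on the diagonal
        img += weight * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img

dgm = np.array([[0.1, 0.4], [0.2, 0.9], [0.5, 0.6]])
pi_vector = persistence_image(dgm).ravel()       # finite-dimensional vector for ML
print(pi_vector.shape)                           # (400,)
```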
