Search Results for author: Luke McDermott

Found 6 papers, 1 paper with code

Neural Architecture Codesign for Fast Bragg Peak Analysis

no code implementations · 10 Dec 2023 · Luke McDermott, Jason Weitz, Dmitri Demler, Daniel Cummings, Nhan Tran, Javier Duarte

We develop an automated pipeline to streamline neural architecture codesign for fast, real-time Bragg peak analysis in high-energy diffraction microscopy. A hedged sketch of this kind of accuracy-latency search appears below.

Model Compression · Network Pruning · +2
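
The codesign pipeline itself is not reproduced on this page. As a rough illustration of the underlying idea of searching architectures under a joint accuracy/latency objective, the sketch below scores randomly sampled candidate configurations; the search space, the proxy functions estimate_accuracy and estimate_latency, and the trade-off weight are all illustrative assumptions, not the authors' pipeline.

```python
import random

# Hypothetical search space for a small Bragg-peak model; the real
# paper's space and objective are not shown on this page.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4],
    "width": [16, 32, 64],
    "kernel_size": [3, 5],
}

def estimate_latency(cfg):
    # Crude stand-in proxy: more and wider layers -> slower inference.
    return cfg["num_layers"] * cfg["width"] * cfg["kernel_size"] * 1e-4

def estimate_accuracy(cfg):
    # Placeholder for actual training + validation; a toy proxy that
    # rewards capacity with diminishing returns.
    return 1.0 - 1.0 / (cfg["num_layers"] * cfg["width"])

def codesign_score(cfg, latency_weight=0.5):
    # Codesign trades task accuracy against deployment latency.
    return estimate_accuracy(cfg) - latency_weight * estimate_latency(cfg)

# Random search over 100 sampled configurations.
best = max(
    ({k: random.choice(v) for k, v in SEARCH_SPACE.items()} for _ in range(100)),
    key=codesign_score,
)
print("best config:", best)
```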

Linear Mode Connectivity in Sparse Neural Networks

no code implementations · 28 Oct 2023 · Luke McDermott, Daniel Cummings

We find that distilled data, a synthetic summarization of the real data, paired with Iterative Magnitude Pruning (IMP) unveils a new class of sparse networks that are more stable to SGD noise on the real data than either the dense model or subnetworks found with real data in IMP. A sketch of the loss-barrier measurement behind this stability claim appears below.

Linear Mode Connectivity · Network Pruning
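
Stability to SGD noise is commonly quantified via the linear mode connectivity barrier: the loss along the straight line between two trained solutions, compared with the endpoint losses. A minimal sketch of that measurement, assuming PyTorch, is below; the toy model, data, and the perturbation standing in for a second SGD run are illustrative, not the paper's experimental setup.

```python
import copy
import torch
import torch.nn as nn

def interpolate_state(sd_a, sd_b, alpha):
    # Parameter-wise linear interpolation: (1 - alpha) * a + alpha * b.
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def loss_barrier(model, sd_a, sd_b, loss_fn, x, y, steps=11):
    # Evaluate the loss along the straight line between two solutions.
    # Linear mode connectivity holds when the path loss never rises far
    # above the endpoint losses (a small or zero barrier).
    losses = []
    for i in range(steps):
        model.load_state_dict(interpolate_state(sd_a, sd_b, i / (steps - 1)))
        with torch.no_grad():
            losses.append(loss_fn(model(x), y).item())
    return max(losses) - max(losses[0], losses[-1])

# Toy usage: perturbing one checkpoint stands in for a second SGD run.
torch.manual_seed(0)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
sd_a = copy.deepcopy(model.state_dict())
with torch.no_grad():
    for p in model.parameters():
        p.add_(0.05 * torch.randn_like(p))
sd_b = copy.deepcopy(model.state_dict())
print("loss barrier:", loss_barrier(model, sd_a, sd_b, nn.CrossEntropyLoss(), x, y))
```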

UniCat: Crafting a Stronger Fusion Baseline for Multimodal Re-Identification

no code implementations · 28 Oct 2023 · Jennifer Crawford, Haoli Yin, Luke McDermott, Daniel Cummings

Multimodal Re-Identification (ReID) is a popular retrieval task that aims to re-identify objects across diverse data streams, prompting many researchers to integrate multiple modalities into a unified representation. A minimal late-fusion illustration appears below.

Retrieval
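
As a minimal illustration of fusing modalities into one representation, the sketch below concatenates per-modality embeddings into a single retrieval vector, assuming PyTorch. The encoder names (rgb_enc, ir_enc), sizes, and input shapes are hypothetical; this shows a generic late-fusion baseline, not UniCat's architecture.

```python
import torch
import torch.nn as nn

class LateFusionReID(nn.Module):
    # Hypothetical late-fusion baseline: encode each modality
    # separately, then concatenate the embeddings into one
    # unified representation for retrieval.
    def __init__(self, dim=128):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))
        self.ir_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, rgb, ir):
        fused = torch.cat([self.rgb_enc(rgb), self.ir_enc(ir)], dim=-1)
        # L2-normalize so cosine similarity can rank gallery matches.
        return nn.functional.normalize(fused, dim=-1)

model = LateFusionReID()
emb = model(torch.randn(4, 3, 32, 32), torch.randn(4, 1, 32, 32))
print(emb.shape)  # torch.Size([4, 256])
```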

GraFT: Gradual Fusion Transformer for Multimodal Re-Identification

no code implementations · 25 Oct 2023 · Haoli Yin, Jiayao Li, Eva Schiller, Luke McDermott, Daniel Cummings

Object Re-Identification (ReID) is pivotal in computer vision, with escalating demand for adept multimodal representation learning.

Network Pruning · Representation Learning

A Generalization of Continuous Relaxation in Structured Pruning

no code implementations · 28 Aug 2023 · Brad Larson, Bishal Upadhyaya, Luke McDermott, Siddha Ganju

Structured pruning asserts that, while large networks enable us to find solutions to complex computer vision problems, a smaller, computationally efficient sub-network can be derived from the large neural network that retains model accuracy while significantly improving computational efficiency. A baseline channel-pruning sketch appears below.

Computational Efficiency
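
The paper's continuous-relaxation method is not shown on this page. For context, the sketch below implements a common structured-pruning baseline, assuming PyTorch: rank a convolution's output channels by filter L1 norm and rebuild a smaller dense layer from the survivors, so the pruned network runs efficiently without sparse kernels.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    # Rank output channels by the L1 norm of their filters and keep the
    # strongest ones. Removing whole filters (structured pruning) yields
    # a smaller dense layer that speeds up ordinary hardware, unlike
    # unstructured sparsity.
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

conv = nn.Conv2d(3, 32, 3, padding=1)
print(prune_conv_channels(conv, keep_ratio=0.5))  # Conv2d(3, 16, ...)
```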

Distilled Pruning: Using Synthetic Data to Win the Lottery

1 code implementation · 7 Jul 2023 · Luke McDermott, Daniel Cummings

This work introduces a novel approach to pruning deep learning models by using distilled data. A hedged sketch of IMP on distilled data appears below.

Efficient Neural Network · Model Compression · +2
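
A minimal sketch of the idea, assuming PyTorch: run Iterative Magnitude Pruning, but train each round on a tiny distilled dataset rather than the full real data. The hyperparameters, the global magnitude threshold, and the rewind-to-initialization variant of IMP are assumptions for illustration; the data distillation step itself is not shown, and distilled_x/y are taken as a pre-computed synthetic summary of the training set.

```python
import copy
import torch
import torch.nn as nn

def distilled_imp(model, distilled_x, distilled_y,
                  rounds=3, prune_frac=0.2, steps=100):
    # IMP with rewinding: train, prune the smallest surviving weights
    # globally, rewind the survivors to their initialization, repeat.
    # Training runs on the distilled set (biases are included in the
    # masks for brevity; typically only weights are pruned).
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(rounds):
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        for _ in range(steps):  # cheap: the distilled set is tiny
            opt.zero_grad()
            loss_fn(model(distilled_x), distilled_y).backward()
            opt.step()
            with torch.no_grad():  # keep pruned weights at zero
                for n, p in model.named_parameters():
                    p.mul_(masks[n])
        # Globally prune the smallest prune_frac of surviving weights.
        alive = torch.cat([p.detach().abs()[masks[n].bool()]
                           for n, p in model.named_parameters()])
        thresh = torch.quantile(alive, prune_frac)
        for n, p in model.named_parameters():
            masks[n] *= (p.detach().abs() > thresh).float()
        # Rewind unpruned weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.mul_(masks[n])
    return masks

# Toy usage with a hypothetical 10-example distilled set.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
dx, dy = torch.randn(10, 8), torch.randint(0, 2, (10,))
masks = distilled_imp(model, dx, dy)
print({n: float(m.mean()) for n, m in masks.items()})  # density per tensor
```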
