1 code implementation • 15 Jan 2024 • Dan Jacobellis, Daniel Cummings, Neeraja J. Yadwadkar
Our results indicate three key findings: (1) using generative compression, it is feasible to leverage highly compressed data while incurring a negligible impact on machine perceptual quality; (2) machine perceptual quality correlates strongly with deep similarity metrics, indicating a crucial role for these metrics in the development of machine-oriented codecs; and (3) using lossy compressed datasets (e.g., ImageNet) for pre-training can lead to counter-intuitive scenarios in which lossy compression increases, rather than degrades, machine perceptual quality.
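A minimal sketch of the kind of measurement behind finding (1): round-trip an image through a lossy codec and check whether a pretrained classifier's prediction survives. JPEG stands in here for the generative codecs studied in the paper, and top-1 agreement is an illustrative proxy for machine perceptual quality, not the authors' exact protocol; the test image path is a placeholder.

```python
import io

import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def compress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through JPEG at the given quality setting."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

@torch.no_grad()
def top1(img: Image.Image) -> int:
    return model(preprocess(img).unsqueeze(0)).argmax(dim=1).item()

img = Image.open("example.jpg").convert("RGB")  # placeholder test image
for q in (90, 50, 10):
    agree = top1(img) == top1(compress(img, q))
    print(f"quality={q}: prediction preserved = {agree}")
```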
no code implementations • 10 Dec 2023 • Luke McDermott, Jason Weitz, Dmitri Demler, Daniel Cummings, Nhan Tran, Javier Duarte
We develop an automated pipeline to streamline neural architecture codesign for fast, real-time Bragg peak analysis in high-energy diffraction microscopy.
no code implementations • 28 Oct 2023 • Luke McDermott, Daniel Cummings
We find that distilled data, a synthetic summarization of the real data, paired with Iterative Magnitude Pruning (IMP), unveils a new class of sparse networks that are more stable to SGD noise on the real data than either the dense model or subnetworks found with real data in IMP.
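For reference, a sketch of the IMP-with-rewinding procedure that the distilled data is paired with. Here `train` and `distilled_loader` are hypothetical stand-ins for an ordinary masked training loop and a DataLoader over the synthetic dataset; the global pruning rule below is the standard lottery-ticket recipe, not necessarily the paper's exact configuration.

```python
import copy

import torch

def imp(model, train, distilled_loader, rounds=5, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())      # weights to rewind to
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}                            # prune weight matrices only
    for _ in range(rounds):
        train(model, distilled_loader, masks)           # train with masks applied
        # Globally prune the smallest-magnitude surviving weights.
        scores = torch.cat([(p * masks[n]).abs().flatten()
                            for n, p in model.named_parameters() if n in masks])
        k = int(prune_frac * int((scores > 0).sum()))
        threshold = scores[scores > 0].sort().values[k]
        for n, p in model.named_parameters():
            if n in masks:
                masks[n] *= (p.abs() > threshold).float()
        model.load_state_dict(init_state)               # rewind weights, keep masks
    return masks
```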
no code implementations • 28 Oct 2023 • Jennifer Crawford, Haoli Yin, Luke McDermott, Daniel Cummings
Multimodal Re-Identification (ReID) is a popular retrieval task that aims to re-identify objects across diverse data streams, prompting many researchers to integrate multiple modalities into a unified representation.
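A toy sketch of what "a unified representation" can mean in practice: per-modality encoders projected into one shared embedding space. The feature dimensions, the two modalities (RGB and infrared), and the simple additive fusion are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedReIDEmbedder(nn.Module):
    def __init__(self, rgb_dim=2048, ir_dim=512, embed_dim=256):
        super().__init__()
        self.rgb_proj = nn.Linear(rgb_dim, embed_dim)   # project RGB features
        self.ir_proj = nn.Linear(ir_dim, embed_dim)     # project infrared features

    def forward(self, rgb_feat, ir_feat):
        z_rgb = F.normalize(self.rgb_proj(rgb_feat), dim=-1)
        z_ir = F.normalize(self.ir_proj(ir_feat), dim=-1)
        # Fuse the per-modality embeddings into one identity representation.
        return F.normalize(z_rgb + z_ir, dim=-1)

embedder = UnifiedReIDEmbedder()
query = embedder(torch.randn(1, 2048), torch.randn(1, 512))
gallery = embedder(torch.randn(100, 2048), torch.randn(100, 512))
ranks = (gallery @ query.T).squeeze(1).argsort(descending=True)  # cosine retrieval
```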
no code implementations • 25 Oct 2023 • Haoli Yin, Jiayao Li, Eva Schiller, Luke McDermott, Daniel Cummings
Object Re-Identification (ReID) is pivotal in computer vision, where demand for effective multimodal representation learning continues to escalate.
1 code implementation • 7 Jul 2023 • Luke McDermott, Daniel Cummings
This work introduces a novel approach to pruning deep learning models by using distilled data.
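A minimal sketch of the basic recipe, pruning then fine-tuning on synthetic data, using PyTorch's built-in pruning utilities. The tiny model and the random tensors standing in for distilled data are placeholders; the actual distillation method and pruning schedule come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Remove the 80% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Fine-tune on distilled data (here: random stand-ins for distilled examples).
x_syn, y_syn = torch.randn(100, 784), torch.randint(0, 10, (100,))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(50):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x_syn), y_syn).backward()
    opt.step()  # pruning masks keep the removed weights at zero during training
```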
no code implementations • 19 May 2022 • Daniel Cummings, Anthony Sarah, Sharath Nittur Sridhar, Maciej Szankin, Juan Pablo Munoz, Sairam Sundaresan
Recent advances in Neural Architecture Search (NAS) such as one-shot NAS offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network.
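To make the sub-network idea concrete, here is a toy sketch of extracting a width-reduced sub-network from a trained super-network by slicing its weight tensors. The two-layer network and the channel-selection rule (keep the first `hidden` units) are simplifying assumptions in the spirit of one-shot NAS, not the method's actual search space.

```python
import torch
import torch.nn as nn

class SuperNet(nn.Module):
    def __init__(self, max_hidden=512):
        super().__init__()
        self.fc1 = nn.Linear(784, max_hidden)
        self.fc2 = nn.Linear(max_hidden, 10)

    def forward(self, x, hidden=512):
        # Run only the first `hidden` units: this is the chosen sub-network.
        h = torch.relu(nn.functional.linear(
            x, self.fc1.weight[:hidden], self.fc1.bias[:hidden]))
        return nn.functional.linear(
            h, self.fc2.weight[:, :hidden], self.fc2.bias)

supernet = SuperNet()
for hidden in (128, 256, 512):       # candidate hardware-aware configurations
    logits = supernet(torch.randn(1, 784), hidden=hidden)
```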
no code implementations • 25 Feb 2022 • Daniel Cummings, Sharath Nittur Sridhar, Anthony Sarah, Maciej Szankin
Neural architecture search (NAS), the study of automating the discovery of optimal deep neural network architectures for tasks in domains such as computer vision and natural language processing, has seen rapid growth in the machine learning research community.
no code implementations • 25 Feb 2022 • Anthony Sarah, Daniel Cummings, Sharath Nittur Sridhar, Sairam Sundaresan, Maciej Szankin, Tristan Webb, J. Pablo Munoz
These methods decouple the super-network training from the sub-network search and thus decrease the computational burden of specializing to different hardware platforms.
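A sketch of the decoupled search stage: because candidates inherit super-network weights, evaluating a configuration needs no training, so a cheap evolutionary loop can score each one on validation accuracy and a hardware latency estimate. `evaluate_accuracy` and `estimate_latency` are hypothetical callables, and the width-only search space is a simplification.

```python
import random

def evolutionary_search(evaluate_accuracy, estimate_latency,
                        widths=(128, 256, 384, 512), budget_ms=5.0,
                        population=20, generations=10):
    pop = [random.choice(widths) for _ in range(population)]
    best = pop[0]
    for _ in range(generations):
        # Keep only configurations that meet the hardware latency budget.
        feasible = [w for w in pop if estimate_latency(w) <= budget_ms] or pop
        feasible.sort(key=evaluate_accuracy, reverse=True)
        best = max(best, feasible[0], key=evaluate_accuracy)
        parents = feasible[:max(2, len(feasible) // 4)]
        # Next generation: copies of good parents with occasional mutation.
        pop = [random.choice(widths) if random.random() < 0.3
               else random.choice(parents)
               for _ in range(population)]
    return best
```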
no code implementations • 6 Apr 2021 • Daniel Cummings, Marcel Nassar
Academic citation graphs represent citation relationships between publications across the full range of academic fields.
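A minimal sketch of how such a graph is typically represented: a directed graph in which an edge A -> B means "paper A cites paper B". The toy papers and metadata fields below are placeholders, not data from the paper.

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("paper_a", year=2019, field="ML")
G.add_node("paper_b", year=2020, field="ML")
G.add_node("paper_c", year=2021, field="physics")
G.add_edges_from([("paper_b", "paper_a"), ("paper_c", "paper_a"),
                  ("paper_c", "paper_b")])

citation_counts = dict(G.in_degree())   # times each paper is cited
pagerank = nx.pagerank(G)               # influence beyond raw citation counts
```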