Search Results for author: Thomas Flynn

Found 8 papers, 1 paper with code

An Evaluation of Real-time Adaptive Sampling Change Point Detection Algorithm using KCUSUM

no code implementations • 15 Feb 2024 • Vijayalakshmi Saravanan, Perry Siehien, Shinjae Yoo, Hubertus van Dam, Thomas Flynn, Christopher Kelly, Khaled Z Ibrahim

Detecting abrupt changes in real-time data streams from scientific simulations is a challenging task that demands accurate and efficient algorithms.

Tasks: Change Detection, Change Point Detection, +1
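The snippet above names KCUSUM (kernel CUSUM) but gives no algorithmic detail, so here is a minimal sketch of a CUSUM-style detector with a kernel-based score, for illustration only: the Gaussian kernel, the pairing of incoming samples, the drift term delta, and the threshold are all assumptions, not the paper's actual KCUSUM construction.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel between two scalar observations."""
    return np.exp(-((x - y) ** 2) / (2 * bandwidth ** 2))

def kcusum_detect(stream, reference, delta=0.1, threshold=5.0):
    """CUSUM-style detector with a kernel-based score (illustrative only).

    Consumes the stream in pairs: each pair's similarity to itself is
    compared against its similarity to a reference (pre-change) sample,
    minus a drift term delta, and the running statistic resets at zero.
    """
    rng = np.random.default_rng(0)
    s = 0.0
    for t in range(0, len(stream) - 1, 2):
        x1, x2 = stream[t], stream[t + 1]
        ref = rng.choice(reference)
        # Positive in expectation once new samples stop resembling the reference.
        score = gaussian_kernel(x1, x2) - gaussian_kernel(x1, ref) - delta
        s = max(0.0, s + score)
        if s > threshold:
            return t  # index at which a change is flagged
    return None

# Example: a mean shift halfway through the stream is flagged shortly after.
rng = np.random.default_rng(1)
pre = rng.normal(0, 1, 500)
stream = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
print(kcusum_detect(stream, pre))
```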

Learning Independent Program and Architecture Representations for Generalizable Performance Modeling

no code implementations • 25 Oct 2023 • Lingda Li, Thomas Flynn, Adolfy Hoisie

This paper proposes PerfVec, a novel deep learning-based performance modeling framework that learns high-dimensional, independent/orthogonal program and microarchitecture representations.
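As a rough illustration of the independent-representation idea described above (not PerfVec's actual design), the toy model below learns program and microarchitecture embeddings in separate encoders that interact only in the final predictor, so a program embedding can be reused across microarchitectures and vice versa; all dimensions are made up.

```python
import torch
import torch.nn as nn

class PerformancePredictor(nn.Module):
    """Toy model with independent program and microarchitecture encoders.

    The two representations are computed separately and only combined in
    the final head (illustrative dimensions, not PerfVec's architecture).
    """
    def __init__(self, prog_feat=64, arch_feat=16, dim=32):
        super().__init__()
        self.prog_encoder = nn.Sequential(nn.Linear(prog_feat, dim), nn.ReLU())
        self.arch_encoder = nn.Sequential(nn.Linear(arch_feat, dim), nn.ReLU())
        self.head = nn.Linear(2 * dim, 1)  # predicts e.g. cycle count

    def forward(self, prog_x, arch_x):
        p = self.prog_encoder(prog_x)
        a = self.arch_encoder(arch_x)
        return self.head(torch.cat([p, a], dim=-1))

model = PerformancePredictor()
pred = model(torch.randn(8, 64), torch.randn(8, 16))
print(pred.shape)  # torch.Size([8, 1])
```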

Stochastic Projective Splitting: Solving Saddle-Point Problems with Multiple Regularizers

no code implementations • 24 Jun 2021 • Patrick R. Johnstone, Jonathan Eckstein, Thomas Flynn, Shinjae Yoo

We present a new, stochastic variant of the projective splitting (PS) family of algorithms for monotone inclusion problems.

Tasks: regression
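Projective splitting itself is more involved than can be sketched here; the snippet below only illustrates the problem class (a stochastic saddle-point objective with a nonsmooth regularizer handled through its proximal operator) using plain stochastic gradient descent-ascent, a simpler baseline and explicitly not the paper's PS algorithm. The toy objective and all step sizes are assumptions.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def stochastic_gda(grad_x, grad_y, x0, y0, lam=0.1, step=0.01, iters=2000):
    """Stochastic gradient descent-ascent with an L1 prox on x."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = prox_l1(x - step * grad_x(x, y), step * lam)  # descent + prox
        y = y + step * grad_y(x, y)                        # ascent step
    return x, y

# Toy strongly-convex-strongly-concave objective:
#   f(x, y) = 0.5*||x||^2 + x.T A y - 0.5*||y||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
gx = lambda x, y: x + A @ y + 0.01 * rng.normal(size=5)    # noisy grad_x
gy = lambda x, y: A.T @ x - y + 0.01 * rng.normal(size=5)  # noisy grad_y
x, y = stochastic_gda(gx, gy, np.ones(5), np.ones(5))
print(np.round(x, 3), np.round(y, 3))  # both drift toward the saddle at 0
```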

SimNet: Accurate and High-Performance Computer Architecture Simulation using Deep Learning

1 code implementation • 12 May 2021 • Lingda Li, Santosh Pandey, Thomas Flynn, Hang Liu, Noel Wheeler, Adolfy Hoisie

While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation.

Tasks: BIG-bench Machine Learning, Vocal Bursts Intensity Prediction
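To illustrate the general learning-based simulation idea (not SimNet's actual architecture), the sketch below treats each instruction as a hypothetical fixed-length feature vector and replaces discrete-event simulation with repeated model inference. The feature width, network shape, and the naive summing of per-instruction latencies, which ignores overlap between in-flight instructions, are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical setup: each instruction is a fixed-length feature vector
# (opcode encoding, operand dependencies, cache-hit flags, ...) and the
# model predicts its latency, so simulation reduces to batched inference.
FEATURES = 50  # made-up feature width

latency_model = nn.Sequential(
    nn.Linear(FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),  # predicted latency in cycles
)

def simulate(trace):
    """Sum predicted per-instruction latencies over an instruction trace.

    A deliberate simplification: real pipelines overlap instructions, so a
    faithful simulator would model that interaction rather than summing.
    """
    with torch.no_grad():
        return latency_model(trace).clamp(min=1).sum().item()

print(simulate(torch.randn(1000, FEATURES)))  # random stand-in for a trace
```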

Bounding the expected run-time of nonconvex optimization with early stopping

no code implementations • 20 Feb 2020 • Thomas Flynn, Kwang Min Yu, Abid Malik, Nicholas D'Imperio, Shinjae Yoo

This work examines the convergence of stochastic gradient-based optimization algorithms that use early stopping based on a validation function.

On the expected running time of nonconvex optimization with early stopping

no code implementations • 25 Sep 2019 • Thomas Flynn, Kwang Min Yu, Abid Malik, Shinjae Yoo, Nicholas D'Imperio

This work examines the convergence of stochastic gradient algorithms that use early stopping based on a validation function, wherein optimization ends when the magnitude of a validation function gradient drops below a threshold.
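The stopping rule described above translates directly into code: run stochastic gradient steps on the training objective and halt once the validation gradient norm falls below a threshold, which makes the run-time a random variable. The sketch below is a minimal illustration on a toy quadratic; the step size, threshold, and noise model are arbitrary choices, not the paper's settings.

```python
import numpy as np

def sgd_with_early_stopping(grad_train, grad_val, w0, step=0.01,
                            eps=0.05, max_iters=100_000):
    """SGD that halts once the validation gradient norm drops below eps."""
    w = w0.copy()
    for t in range(max_iters):
        w -= step * grad_train(w)              # stochastic training step
        if np.linalg.norm(grad_val(w)) < eps:  # validation-based stopping rule
            return w, t                        # stopped early after t steps
    return w, max_iters

# Toy quadratic whose training and validation objectives share a minimizer.
rng = np.random.default_rng(0)
g_train = lambda w: w + 0.1 * rng.normal(size=w.shape)  # noisy train gradient
g_val = lambda w: w                                     # exact val gradient
w, t = sgd_with_early_stopping(g_train, g_val, np.ones(10))
print(t, np.linalg.norm(w))  # the stopping time is random; ~hundreds here
```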

Layered SGD: A Decentralized and Synchronous SGD Algorithm for Scalable Deep Neural Network Training

no code implementations • 13 Jun 2019 • Kwangmin Yu, Thomas Flynn, Shinjae Yoo, Nicholas D'Imperio

The efficiency of the algorithm is tested by training a deep network on the ImageNet classification task.
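As a rough, single-process illustration of two-level synchronization (the general idea behind decentralized, hierarchical SGD schemes; the paper's actual algorithm may differ), the sketch below averages gradients within worker groups first and then across group leaders.

```python
import numpy as np

def layered_average(worker_grads, group_size):
    """Two-level gradient averaging: within groups, then across groups."""
    grads = np.asarray(worker_grads)
    n = len(grads)
    assert n % group_size == 0, "workers must split evenly into groups"
    # Level 1: each group of `group_size` workers averages locally.
    group_means = grads.reshape(n // group_size, group_size, -1).mean(axis=1)
    # Level 2: one leader per group averages the group results globally.
    return group_means.mean(axis=0)

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 4))              # 8 workers, 4 parameters each
print(layered_average(grads, group_size=4))
print(grads.mean(axis=0))                    # identical flat average
```

With equal group sizes this reproduces the flat average exactly; the point is the communication structure, since only one worker per group needs to talk over the slower inter-group links.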

The duality structure gradient descent algorithm: analysis and applications to neural networks

no code implementations • 1 Aug 2017 • Thomas Flynn

The decision of which layer to update is made in a greedy fashion, based on a rigorous lower bound on the improvement of the objective function for each choice of layer.
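A hedged sketch of the greedy layer-update idea follows. The paper selects the layer via a rigorous lower bound on objective improvement; this illustration substitutes the per-layer gradient norm as a stand-in score, so it demonstrates the control flow rather than the duality structure bound itself.

```python
import torch
import torch.nn as nn

def greedy_layer_step(model, loss_fn, x, y, lr=0.1):
    """Update only the layer with the largest gradient norm.

    Stand-in rule: the per-layer gradient norm replaces the paper's lower
    bound on objective improvement when choosing which layer to update.
    """
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()
    # Score each child module by the total gradient norm of its parameters.
    scores = {
        name: sum(p.grad.norm() ** 2 for p in m.parameters()) ** 0.5
        for name, m in model.named_children()
        if any(True for _ in m.parameters())
    }
    best = max(scores, key=scores.get)
    with torch.no_grad():
        for p in getattr(model, best).parameters():
            p -= lr * p.grad  # gradient step on the chosen layer only
    return best, loss.item()

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
x, y = torch.randn(32, 10), torch.randn(32, 1)
print(greedy_layer_step(model, nn.MSELoss(), x, y))
```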
