Search Results for author: Edmond Chow

Found 4 papers, 1 paper with code

Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances

no code implementations · 3 Oct 2023 · Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar

For this method, we prove that a bandit online learning algorithm -- using only the number of iterations as feedback -- can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
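The snippet above describes choosing the relaxation parameter $\omega$ for a sequence of linear systems using only iteration counts as feedback. A minimal epsilon-greedy sketch of that idea is below; it is not the paper's algorithm, and the instance family (random diagonally dominant SPD matrices), grid of candidate $\omega$ values, and exploration rate are all illustrative assumptions.

```python
import numpy as np

def sor_iterations(A, b, omega, tol=1e-8, max_iter=500):
    """Run SOR on Ax = b; return the iteration count needed to reach tol."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return k
    return max_iter

rng = np.random.default_rng(0)
omegas = np.linspace(1.0, 1.9, 10)   # discretized candidate parameters (arms)
counts = np.zeros(len(omegas))       # number of pulls per arm
mean_cost = np.zeros(len(omegas))    # running mean iteration count per arm

total = 0
for t in range(50):                  # sequence of related instances
    # hypothetical instance family: SPD, diagonally dominant
    M = rng.standard_normal((30, 30))
    A = M @ M.T + 30 * np.eye(30)
    b = rng.standard_normal(30)
    # epsilon-greedy choice; force one pull of every arm first
    if rng.random() < 0.2 or counts.min() == 0:
        arm = int(rng.integers(len(omegas)))
    else:
        arm = int(np.argmin(mean_cost))
    cost = sor_iterations(A, b, omegas[arm])  # feedback: iterations only
    counts[arm] += 1
    mean_cost[arm] += (cost - mean_cost[arm]) / counts[arm]
    total += cost
```

The bandit setting matters here because the solver reveals only a scalar cost (iterations) for the single $\omega$ actually tried, never the cost of the alternatives.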

Data-Driven Linear Complexity Low-Rank Approximation of General Kernel Matrices: A Geometric Approach

no code implementations · 24 Dec 2022 · Difeng Cai, Edmond Chow, Yuanzhe Xi

Such rectangular kernel matrices may arise, for example, in Gaussian process regression where $X$ corresponds to the training data and $Y$ corresponds to the test data.
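To make the rectangular setting concrete, the sketch below forms the train/test kernel matrix $K(X, Y)$ from a Gaussian process view and compresses it with a truncated SVD as a generic rank-$r$ surrogate. The RBF kernel, the point sets, and the rank are illustrative assumptions; this is not the paper's geometric construction, which achieves linear complexity without forming the dense matrix.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 l^2)); rectangular shape (len(X), len(Y))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))   # training inputs
Y = rng.standard_normal((25, 2))    # test inputs
K = rbf_kernel(X, Y)                # rectangular 100 x 25 kernel matrix

# truncated SVD: rank-r approximation stored in O((m + n) r) instead of O(m n)
r = 5
U, s, Vt = np.linalg.svd(K, full_matrices=False)
K_r = (U[:, :r] * s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
```

The truncated SVD costs $O(mn\min(m,n))$ and serves only as a baseline; the point of linear-complexity methods is to obtain a comparable low-rank factorization without that cost.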

Integrating Deep Learning in Domain Sciences at Exascale

no code implementations · 23 Nov 2020 · Rick Archibald, Edmond Chow, Eduardo D'Azevedo, Jack Dongarra, Markus Eisenbach, Rocco Febbo, Florent Lopez, Daniel Nichols, Stanimire Tomov, Kwai Wong, Junqi Yin

This paper discusses the necessities of an HPC deep learning framework and how those needs can be provided (e.g., as in MagmaDNN) through a deep integration with existing HPC libraries, such as MAGMA and its modular memory management, MPI, cuBLAS, cuDNN, MKL, and HIP.

Management

SMASH: Structured matrix approximation by separation and hierarchy

1 code implementation · 15 May 2017 · Difeng Cai, Edmond Chow, Yousef Saad, Yuanzhe Xi

This paper presents an efficient method to perform Structured Matrix Approximation by Separation and Hierarchy (SMASH), when the original dense matrix is associated with a kernel function.
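The separation idea underlying hierarchical methods like SMASH is that a kernel matrix block between two well-separated point clusters has low numerical rank. The sketch below checks this for an assumed 1D example with the kernel $1/|x-y|$; the point sets, kernel, and tolerance are illustrative, and this is a property demonstration, not the SMASH algorithm.

```python
import numpy as np

# two well-separated 1D clusters: x in [0, 1], y in [3, 4]
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)

# kernel matrix K[i, j] = 1 / |x_i - y_j| (no singularity: clusters are disjoint)
K = 1.0 / np.abs(x[:, None] - y[None, :])

# numerical rank at relative tolerance 1e-10 via singular value decay
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s / s[0] > 1e-10))
```

Because the numerical rank stays small and roughly independent of the number of points, each far-field block can be stored in low-rank form, which is what drives the near-linear storage and matrix-vector cost of hierarchical approximations.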

Numerical Analysis
