no code implementations • 3 Oct 2023 • Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar
For this method, we prove that a bandit online learning algorithm -- using only the number of iterations as feedback -- can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
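The paper's analysis is not reproduced here, but the setup can be illustrated with a minimal sketch: a standard EXP3-style bandit choosing the SOR relaxation parameter $\omega$ from a grid, observing only the iteration count per instance. All constants, names, and the EXP3 choice are our own assumptions, not the paper's method.

```python
import numpy as np

def sor_iterations(A, b, omega, tol=1e-8, max_iter=1000):
    """Count SOR iterations needed to reduce the relative residual below tol."""
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            return k
    return max_iter

# Candidate relaxation parameters (the "arms" of the bandit).
omegas = np.linspace(0.5, 1.9, 8)
weights = np.ones(len(omegas))
eta = 0.3  # EXP3 learning rate (a tuning choice, not from the paper)

rng = np.random.default_rng(0)
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian, SPD

for t in range(20):  # a sequence of instances sharing the same matrix
    b = rng.standard_normal(n)
    p = weights / weights.sum()
    arm = rng.choice(len(omegas), p=p)
    iters = sor_iterations(A, b, omegas[arm])  # the only feedback used
    loss = iters / 1000.0                      # normalized to [0, 1]
    # Importance-weighted EXP3 update: arms needing fewer iterations gain weight.
    weights[arm] *= np.exp(-eta * loss / p[arm])

best_omega = omegas[np.argmax(weights)]
```

Over many instances, weight concentrates on values of $\omega$ that solve the systems in fewer iterations, which is the bandit regret guarantee the abstract refers to.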
no code implementations • 24 Dec 2022 • Difeng Cai, Edmond Chow, Yuanzhe Xi
Such rectangular kernel matrices may arise, for example, in Gaussian process regression where $X$ corresponds to the training data and $Y$ corresponds to the test data.
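As a concrete illustration (our own sketch, not code from the paper), the rectangular test-train block $K(Y, X)$ enters the Gaussian process posterior mean directly; kernel choice and sizes below are assumptions:

```python
import numpy as np

def gaussian_kernel(X, Y, length_scale=1.0):
    """Rectangular kernel matrix K[i, j] = k(X[i], Y[j]) for a Gaussian (RBF) kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale**2))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))   # training inputs
Y = rng.standard_normal((20, 2))   # test inputs
y = np.sin(X[:, 0])                # training targets

K_XX = gaussian_kernel(X, X)       # square train-train block
K_YX = gaussian_kernel(Y, X)       # rectangular test-train block
noise = 1e-6                       # small jitter for numerical stability

# GP posterior mean at the test points uses the rectangular block directly.
mean = K_YX @ np.linalg.solve(K_XX + noise * np.eye(len(X)), y)
```

Here `K_YX` is 20-by-50: its row and column point sets differ, which is exactly the rectangular setting the abstract describes.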
no code implementations • 23 Nov 2020 • Rick Archibald, Edmond Chow, Eduardo D'Azevedo, Jack Dongarra, Markus Eisenbach, Rocco Febbo, Florent Lopez, Daniel Nichols, Stanimire Tomov, Kwai Wong, Junqi Yin
This paper discusses the requirements of an HPC deep learning framework and how those needs can be met (e.g., as in MagmaDNN) through deep integration with existing HPC libraries, such as MAGMA and its modular memory management, MPI, cuBLAS, cuDNN, MKL, and HIP.
1 code implementation • 15 May 2017 • Difeng Cai, Edmond Chow, Yousef Saad, Yuanzhe Xi
This paper presents an efficient method to perform Structured Matrix Approximation by Separation and Hierarchy (SMASH), when the original dense matrix is associated with a kernel function.
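The separation idea underlying such methods can be sketched in a few lines: for two well-separated point clusters, interpolating the kernel in one variable at a few Chebyshev nodes yields an accurate low-rank factorization of the corresponding matrix block. This is a generic far-field illustration under our own assumptions, not the SMASH algorithm itself.

```python
import numpy as np

def cheb_nodes(a, b, r):
    """r Chebyshev nodes on the interval [a, b]."""
    k = np.arange(r)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k + 1) * np.pi / (2 * r))

def lagrange_basis(nodes, x):
    """Matrix of Lagrange basis polynomials at the nodes, evaluated at points x."""
    L = np.ones((len(x), len(nodes)))
    for j, nj in enumerate(nodes):
        for m, nm in enumerate(nodes):
            if m != j:
                L[:, j] *= (x - nm) / (nj - nm)
    return L

kernel = lambda x, y: 1.0 / np.abs(x[:, None] - y[None, :])

x = np.linspace(0.0, 1.0, 40)      # source cluster
y = np.linspace(3.0, 4.0, 35)      # well-separated target cluster
K = kernel(x, y)                   # dense 40-by-35 kernel block

r = 8                              # separation rank (an assumption)
nodes = cheb_nodes(0.0, 1.0, r)
U = lagrange_basis(nodes, x)       # 40-by-r interpolation matrix
K_hat = U @ kernel(nodes, y)       # rank-r separated approximation

err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

Because the kernel is smooth away from the diagonal, the relative error `err` decays rapidly in the rank `r`; hierarchical methods apply this blockwise across a cluster tree.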
Numerical Analysis