1 code implementation • 18 Oct 2022 • Edward Hutter, Edgar Solomonik
We consider alternative piecewise/grid-based models and supervised learning models for six applications and demonstrate that CP decomposition optimized using tensor completion offers higher prediction accuracy and memory efficiency for high-dimensional performance modeling.
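The core idea — fitting a CP model to a partially observed tensor — can be illustrated with a toy version. The sketch below is not the paper's implementation (which targets high-dimensional performance data); it recovers a rank-R 3-way tensor from a random subset of entries by gradient descent on the squared error over observed entries only. All sizes and the learning rate are illustrative.

```python
import numpy as np

# Minimal sketch of CP decomposition fit by tensor completion: only the
# entries selected by `mask` contribute to the loss and gradients.
rng = np.random.default_rng(0)
I, J, K, R = 8, 8, 8, 2

# Ground-truth rank-R tensor and a mask of observed entries (~50%).
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = rng.random((I, J, K)) < 0.5

# Small random initialization of the factor matrices.
A = 0.1 * rng.standard_normal((I, R))
B = 0.1 * rng.standard_normal((J, R))
C = 0.1 * rng.standard_normal((K, R))

def observed_error(A, B, C):
    resid = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
    return np.linalg.norm(resid) / np.linalg.norm(mask * T)

err0 = observed_error(A, B, C)
lr = 0.002
for _ in range(5000):
    # Gradient of 0.5 * ||mask * (CP(A,B,C) - T)||^2 w.r.t. each factor.
    resid = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
    A = A - lr * np.einsum('ijk,jr,kr->ir', resid, B, C)
    B = B - lr * np.einsum('ijk,ir,kr->jr', resid, A, C)
    C = C - lr * np.einsum('ijk,ir,jr->kr', resid, A, B)

err = observed_error(A, B, C)
print(f"observed-entry relative error: {err0:.3f} -> {err:.3f}")
```

In practice ALS or second-order solvers (as studied in the paper) replace plain gradient descent, but the masked-loss structure is the same.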
no code implementations • 26 May 2022 • Linjian Ma, Edgar Solomonik
We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low.
no code implementations • 14 Apr 2022 • Navjot Singh, Edgar Solomonik
Computing these critical points in an alternating manner motivates an alternating optimization algorithm, which corresponds to the alternating least squares (ALS) algorithm in the matrix case.
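The matrix-case connection can be sketched concretely: alternating least squares for a rank-R factorization M ≈ UVᵀ, where each subproblem with one factor held fixed is an ordinary linear least-squares solve. This is an illustrative baseline, not the paper's algorithm.

```python
import numpy as np

# ALS for low-rank matrix factorization: alternate exact least-squares
# solves for U and V, each of which is a convex subproblem.
rng = np.random.default_rng(1)
m, n, R = 20, 15, 3
M = rng.standard_normal((m, R)) @ rng.standard_normal((R, n))  # exactly rank R

U = rng.standard_normal((m, R))
for _ in range(50):
    # Fix U, solve min_V ||M - U V^T||_F: least squares U X = M, V = X^T.
    V = np.linalg.lstsq(U, M, rcond=None)[0].T
    # Fix V, solve min_U ||M - U V^T||_F: least squares V Y = M^T, U = Y^T.
    U = np.linalg.lstsq(V, M.T, rcond=None)[0].T

err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")
```

On exactly rank-R data the iteration converges to a numerically exact factorization; for tensors the analogous subproblems remain linear least squares even though the joint problem is nonconvex.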
1 code implementation • 15 Jun 2021 • Chaoqi Yang, Cheng Qian, Navjot Singh, Cao Xiao, M Brandon Westover, Edgar Solomonik, Jimeng Sun
This paper addresses the above challenges by proposing augmented tensor decomposition (ATD), which effectively incorporates data augmentations and self-supervised learning (SSL) to boost downstream classification.
1 code implementation • 14 Jun 2021 • Chaoqi Yang, Navjot Singh, Cao Xiao, Cheng Qian, Edgar Solomonik, Jimeng Sun
Our MTC model explores tensor mode properties and leverages the hierarchy of resolutions to recursively initialize an optimization setup, and optimizes on the coupled system using alternating least squares.
no code implementations • NeurIPS 2021 • Linjian Ma, Edgar Solomonik
Experimental results show that this new ALS algorithm, combined with a new initialization scheme based on the randomized range finder, yields up to $22.0\%$ relative decomposition residual improvement over the state-of-the-art sketched randomized algorithm for Tucker decomposition on various synthetic and real datasets.
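The randomized range finder used for initialization is a standard primitive: multiply the input by a Gaussian test matrix and orthonormalize the result to get an approximate basis for its range. A minimal matrix-case sketch (illustrative; the paper applies this idea to tensor unfoldings):

```python
import numpy as np

# Randomized range finder: Q approximates an orthonormal basis for range(A).
rng = np.random.default_rng(3)
m, n, k = 50, 40, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k matrix

Omega = rng.standard_normal((n, k + 5))   # Gaussian sketch with oversampling 5
Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)

# Quality check: how much of A survives projection onto range(Q)?
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(f"relative projection error: {err:.2e}")
```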
1 code implementation • 10 Jul 2020 • Ryan Levy, Edgar Solomonik, Bryan K. Clark
The Density Matrix Renormalization Group (DMRG) algorithm is a powerful tool for solving eigenvalue problems to model quantum systems.
Distributed, Parallel, and Cluster Computing Strongly Correlated Electrons Computational Physics
1 code implementation • 10 May 2020 • Linjian Ma, Jiayu Ye, Edgar Solomonik
High-order optimization methods, including Newton's method and its variants as well as alternating minimization methods, dominate the optimization algorithms for tensor decompositions and tensor networks.
Mathematical Software Numerical Analysis
2 code implementations • 26 Nov 2018 • Linjian Ma, Edgar Solomonik
The alternating least squares algorithm for CP and Tucker decomposition is dominated in cost by the tensor contractions necessary to set up the quadratic optimization subproblems.
Numerical Analysis
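In the 3-way CP case, the dominant contraction the abstract refers to is the matricized-tensor-times-Khatri-Rao-product (MTTKRP), which together with the factor Gram matrices sets up each quadratic ALS subproblem. A minimal NumPy sketch of one factor update (illustrative, not the paper's optimized kernels):

```python
import numpy as np

# One ALS update for factor A of a rank-R CP model of a 3-way tensor T:
# the MTTKRP M = T_(1) (C ⊙ B) dominates the cost; the remaining work is
# a small R x R solve against the Hadamard product of Gram matrices.
rng = np.random.default_rng(2)
I, J, K, R = 6, 5, 4, 3
T = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

M = np.einsum('ijk,jr,kr->ir', T, B, C)   # MTTKRP (the expensive contraction)
G = (B.T @ B) * (C.T @ C)                 # Hadamard product of Gram matrices
A = np.linalg.solve(G, M.T).T             # normal-equations update: A = M G^{-1}
```

The papers' dimension-tree and pairwise-perturbation techniques amortize or approximate exactly this contraction across ALS sweeps.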
5 code implementations • 16 Oct 2017 • Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, Thomas Magerlein, Edgar Solomonik, Robert Wisnieff
With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality.
Quantum Physics
3 code implementations • 22 Sep 2016 • Edgar Solomonik, Maciej Besta, Flavio Vella, Torsten Hoefler
Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it.
Distributed, Parallel, and Cluster Computing Discrete Mathematics Mathematical Software G.1.0; G.2.2
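As background for this entry, the sequential baseline for unweighted betweenness centrality is Brandes' algorithm: one BFS per source accumulates shortest-path counts, then a backward sweep accumulates dependencies. A compact Python sketch (the paper parallelizes and distributes this computation; this is the textbook serial form):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted graph given as an
    adjacency dict {vertex: [neighbors]} with vertices 0..n-1."""
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1       # number of shortest s-v paths
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]      # BFS predecessors on shortest paths
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # Backward sweep: accumulate dependencies in reverse BFS order.
        delta = [0.0] * n
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc  # for undirected graphs, halve the scores

# Path graph 0-1-2-3: the interior vertices carry all the shortest paths.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(betweenness(adj))
```

Running this on the 4-vertex path gives `[0.0, 4.0, 4.0, 0.0]` (ordered-pair counting); the distributed algorithms in the paper recast these sweeps as sparse matrix operations.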
3 code implementations • 30 Nov 2015 • Edgar Solomonik, Torsten Hoefler
Dense and sparse tensors allow the representation of most bulk data structures in computational science applications.
Mathematical Software