Search Results for author: Richard Peng

Found 15 papers, 4 papers with code

An Empirical Study on Challenging Math Problem Solving with GPT-4

1 code implementation • 2 Jun 2023 • Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, Chi Wang

Employing Large Language Models (LLMs) to address mathematical problems is an intriguing research endeavor, considering the abundance of math problems expressed in natural language across numerous science and engineering fields.

Elementary Mathematics · Math

Learning-Augmented B-Trees

no code implementations • 16 Nov 2022 • Xinyuan Cao, Jingbang Chen, Li Chen, Chris Lambert, Richard Peng, Daniel Sleator

We study learning-augmented binary search trees (BSTs) and B-Trees via Treaps with composite priorities.
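
A treap is the natural vehicle here: binary-search-tree order on keys, max-heap order on priorities. As a rough illustration of the idea (not the paper's construction), the sketch below assigns priorities from hypothetical predicted access frequencies, so frequently accessed keys sit near the root; the paper's composite priorities combine predictions with randomness more carefully.

```python
import random

class Node:
    def __init__(self, key, priority):
        self.key, self.priority = key, priority
        self.left = self.right = None

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def insert(t, key, priority):
    """Standard treap insert: BST order on keys, max-heap order on priorities."""
    if t is None:
        return Node(key, priority)
    if key < t.key:
        t.left = insert(t.left, key, priority)
        if t.left.priority > t.priority:
            t = rotate_right(t)
    else:
        t.right = insert(t.right, key, priority)
        if t.right.priority > t.priority:
            t = rotate_left(t)
    return t

# Hypothetical learned predictions: higher predicted access frequency
# -> higher priority -> key sits closer to the root. The random jitter
# breaks ties; the paper's composite priorities are more careful.
predicted_freq = {"a": 0.5, "b": 0.1, "c": 0.3, "d": 0.1}
root = None
for k, f in predicted_freq.items():
    root = insert(root, k, (f, random.random()))
```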

$\ell_2$-norm Flow Diffusion in Near-Linear Time

no code implementations • 30 May 2021 • Li Chen, Richard Peng, Di Wang

Diffusion is a fundamental graph procedure and has been a basic building block in a wide range of theoretical and empirical applications such as graph partitioning and semi-supervised learning on graphs.

Clustering · Graph Clustering · +3
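
For orientation only, the sketch below runs a textbook lazy-random-walk diffusion from a seed vertex; it is not the paper's $\ell_2$-norm flow-diffusion algorithm, which instead solves a flow optimization problem in near-linear time.

```python
import numpy as np

def diffuse(adj, seed_mass, steps=10, alpha=0.85):
    """Generic PageRank-style diffusion from seed vertices.

    adj: dense adjacency matrix; seed_mass: initial mass vector.
    Shown only to illustrate what a graph diffusion computes; the
    paper optimizes an l2-norm flow objective instead.
    """
    deg = adj.sum(axis=1)
    walk = adj / deg[:, None]          # row-stochastic random-walk matrix
    x = seed_mass.astype(float)
    for _ in range(steps):
        x = (1 - alpha) * seed_mass + alpha * (walk.T @ x)
    return x

# Tiny example: a 4-cycle with all mass starting at vertex 0.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
print(diffuse(adj, np.array([1.0, 0, 0, 0])))
```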

Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao

no code implementations • 18 Jan 2021 • Yu Gao, Yang P. Liu, Richard Peng

We give an algorithm for computing exact maximum flows on graphs with $m$ edges and integer capacities in the range $[1, U]$ in $\widetilde{O}(m^{\frac{3}{2} - \frac{1}{328}} \log U)$ time.

Data Structures and Algorithms
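
For context, a textbook exact max-flow computation on such a graph looks as follows (using networkx's built-in solver on a toy instance); the paper's contribution is the improved asymptotic running time, not a new interface.

```python
import networkx as nx

# Small directed graph with integer capacities in [1, U].
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("b", "t", capacity=3)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)  # 5
```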

A Matrix Chernoff Bound for Markov Chains and Its Application to Co-occurrence Matrices

no code implementations • NeurIPS 2020 • Jiezhong Qiu, Chi Wang, Ben Liao, Richard Peng, Jie Tang

Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.

Graph Learning · Graph Representation Learning
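
The co-occurrence matrix here is the one estimated by random walks in DeepWalk-style graph representation learning. A minimal sketch of that estimator, with walk length and window size as arbitrary illustrative parameters:

```python
import numpy as np

def cooccurrence_from_walk(adj, walk_len, window, rng=np.random.default_rng(0)):
    """Estimate a node co-occurrence matrix from one long random walk.

    Pairs of nodes visited within `window` steps of each other are
    counted; the paper bounds how fast this estimate converges to its
    expectation as the walk grows.
    """
    n = adj.shape[0]
    probs = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transitions
    walk = [rng.integers(n)]
    for _ in range(walk_len - 1):
        walk.append(rng.choice(n, p=probs[walk[-1]]))
    C = np.zeros((n, n))
    for i, u in enumerate(walk):
        for v in walk[i + 1 : i + 1 + window]:
            C[u, v] += 1
            C[v, u] += 1
    return C / C.sum()

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(cooccurrence_from_walk(adj, walk_len=1000, window=2))
```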

Faster Graph Embeddings via Coarsening

1 code implementation • ICML 2020 • Matthew Fahrbach, Gramoz Goranci, Richard Peng, Sushant Sachdeva, Chi Wang

As computing Schur complements is expensive, we give a nearly-linear-time algorithm that, in each iteration, generates a coarsened graph on the relevant vertices that provably matches the Schur complement in expectation.

Link Prediction · Node Classification
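
The object being matched is the Schur complement of the graph Laplacian onto the retained vertices. The direct dense computation below shows exactly what that is; the expensive solve step is what the paper's randomized coarsening replaces with a nearly-linear-time procedure.

```python
import numpy as np

def laplacian_schur_complement(L, keep):
    """Eliminate the vertices outside `keep` from the Laplacian L.

    Partition L = [[L_kk, L_kb], [L_bk, L_bb]] (k = kept, b = eliminated);
    the Schur complement L_kk - L_kb @ inv(L_bb) @ L_bk is again a graph
    Laplacian on the kept vertices. The dense solve is the expensive step
    the paper's coarsening avoids.
    """
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(L.shape[0]), keep)
    L_kk = L[np.ix_(keep, keep)]
    L_kb = L[np.ix_(keep, drop)]
    L_bb = L[np.ix_(drop, drop)]
    return L_kk - L_kb @ np.linalg.solve(L_bb, L_kb.T)

# Path graph on 4 vertices; keep the endpoints.
A = np.diag([1.0, 1, 1], 1); A += A.T
L = np.diag(A.sum(axis=1)) - A
print(laplacian_schur_complement(L, keep=[0, 3]))  # single edge of weight 1/3
```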

A Study of Performance of Optimal Transport

1 code implementation • 3 May 2020 • Yihe Dong, Yu Gao, Richard Peng, Ilya Razenshteyn, Saurabh Sawlani

We investigate the problem of efficiently computing optimal transport (OT) distances, which is equivalent to the node-capacitated minimum cost maximum flow problem in a bipartite graph.
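
The stated equivalence is concrete: discrete OT is a linear program that, on a bipartite graph, is a min-cost flow. A small sanity-check sketch using scipy's LP solver on toy data (not one of the paper's benchmarked solvers):

```python
import numpy as np
from scipy.optimize import linprog

def ot_distance(cost, a, b):
    """Solve min <cost, P> s.t. P @ 1 = a, P.T @ 1 = b, P >= 0.

    This is the LP form of discrete optimal transport; viewed on a
    bipartite graph it is a min-cost flow, which is the formulation
    the paper studies.
    """
    n, m = cost.shape
    # Row-sum constraints and column-sum constraints on the flattened plan.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1
    for j in range(m):
        A_eq[n + j, j::m] = 1
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

cost = np.array([[0.0, 1.0], [1.0, 0.0]])
print(ot_distance(cost, a=[0.5, 0.5], b=[0.5, 0.5]))  # 0.0: identity plan
```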

Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression

1 code implementation • NeurIPS 2019 • Deeksha Adil, Richard Peng, Sushant Sachdeva

However, these algorithms often diverge for p > 3, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that is guaranteed to converge rapidly for p > 3.

regression
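
For reference, plain IRLS for $\min_x \|Ax - b\|_p$ repeatedly solves a weighted least-squares problem with weights $|r_i|^{p-2}$. The sketch below is this textbook iteration (with a clamp to avoid zero weights), i.e. the scheme that can diverge for $p > 3$, not the paper's provably convergent variant:

```python
import numpy as np

def irls(A, b, p, iters=50, eps=1e-8):
    """Textbook IRLS for min_x ||Ax - b||_p (not the paper's variant).

    Each step solves the weighted normal equations A^T W A x = A^T W b
    with W = diag(|r_i|^{p-2}); for p > 3 this plain iteration can
    oscillate or diverge, the failure mode the paper's algorithm avoids.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # p = 2 warm start
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)
        Aw = A * w[:, None]                       # W A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
print(irls(A, b, p=2.5))  # p in (1.5, 3): plain IRLS is well behaved here
```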

Higher-Order Accelerated Methods for Faster Non-Smooth Optimization

no code implementations • 4 Jun 2019 • Brian Bullins, Richard Peng

We provide improved convergence rates for various \emph{non-smooth} optimization problems via higher-order accelerated methods.

Iterative Refinement for $\ell_p$-norm Regression

no code implementations • 21 Jan 2019 • Deeksha Adil, Rasmus Kyng, Richard Peng, Sushant Sachdeva

We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x} \|x\|_{p}$ such that $A x=b,$ for all $p \in (1, 2) \cup (2,\infty).$ Our algorithms obtain a high accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space.

regression
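
Note the formulation: minimize $\|x\|_p$ subject to $Ax = b$, an underdetermined system. On tiny instances this can be cross-checked with a generic constrained solver (a sanity check only, not the paper's iterative-refinement method):

```python
import numpy as np
from scipy.optimize import minimize

def lp_min_norm(A, b, p):
    """Solve min ||x||_p s.t. Ax = b with a generic solver (sanity check).

    The paper reaches high accuracy in ~O_p(m^{1/3}) linear-system
    solves; the SLSQP call here is just a small-scale reference point.
    """
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # feasible p = 2 start
    cons = {"type": "eq", "fun": lambda x: A @ x - b}
    res = minimize(lambda x: np.sum(np.abs(x) ** p), x0, constraints=cons)
    return res.x

A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
print(lp_min_norm(A, b, p=1.5))
```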

SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling

no code implementations • NeurIPS 2016 • Dehua Cheng, Richard Peng, Yan Liu, Ioakeim Perros

In this paper, we show ways of sampling intermediate steps of alternating minimization algorithms for computing low rank tensor CP decompositions, leading to the sparse alternating least squares (SPALS) method.
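
The baseline being accelerated is alternating least squares for CP decomposition: fix all but one factor and solve a least-squares problem against a Khatri-Rao design matrix. The unsampled 3-way baseline below shows those solves; SPALS speeds them up by sampling rows according to implicitly computed leverage scores.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: row (j*K + k) equals B[j] * C[k]."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(T, rank, iters=30, rng=np.random.default_rng(0)):
    """Plain (unsampled) ALS for a 3-way CP decomposition.

    Each mode update is a least-squares solve against a Khatri-Rao
    design matrix; SPALS accelerates exactly these solves by
    leverage-score row sampling.
    """
    I, J, K = T.shape
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C),
                            T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            T.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            T.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# Recover a random rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(d, 2)) for d in (4, 5, 6))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(T, rank=2)
print(np.abs(T - np.einsum("ir,jr,kr->ijk", A, B, C)).max())
```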

Spectral Sparsification of Random-Walk Matrix Polynomials

no code implementations • 12 Feb 2015 • Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng

Our work is particularly motivated by the algorithmic problems for speeding up the classic Newton's method in applications such as computing the inverse square-root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq1$) in a time-reversible Markov model.
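
To make the Newton's-method motivation concrete: the coupled Newton-Schulz iteration below computes $A^{-1/2}$ for a scaled positive-definite matrix, and every step evaluates a low-degree matrix polynomial, the kind of object whose random-walk analogue the paper sparsifies. This is a generic numerical sketch, not the paper's algorithm.

```python
import numpy as np

def inv_sqrt_newton(A, iters=30):
    """Coupled Newton-Schulz iteration for the inverse matrix square root.

    For SPD A scaled so that ||I - A|| < 1, Y_k -> A^{1/2} and
    Z_k -> A^{-1/2}. Each step applies a small matrix polynomial,
    the structure that matrix-polynomial sparsifiers target.
    """
    s = np.linalg.norm(A, 2)           # scale so the iteration converges
    Y, Z = A / s, np.eye(A.shape[0])
    for _ in range(iters):
        T = 0.5 * (3 * np.eye(A.shape[0]) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Z / np.sqrt(s)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
R = inv_sqrt_newton(A)
print(R @ A @ R)                        # ~ identity
```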

Partitioning Well-Clustered Graphs: Spectral Clustering Works!

no code implementations • 7 Nov 2014 • Richard Peng, He Sun, Luca Zanetti

In this paper we study variants of the widely used spectral clustering method, which partitions a graph into k clusters by (1) embedding the vertices of the graph into a low-dimensional space using the bottom eigenvectors of the Laplacian matrix, and (2) grouping the embedded points into k clusters via k-means algorithms.

Clustering
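
The two-step pipeline is short enough to write out. A dense, unnormalized sketch (scipy eigensolver plus scikit-learn k-means, suitable only for small graphs):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(adj, k):
    """(1) Embed vertices via the bottom k eigenvectors of the Laplacian,
    (2) run k-means on the embedded points. Dense and unnormalized for
    brevity; the paper analyzes when this recovers well-clustered graphs."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    _, vecs = eigh(L, subset_by_index=[0, k - 1])  # bottom k eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(vecs)

# Two triangles joined by one edge: a well-clustered graph with k = 2.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
print(spectral_clustering(adj, k=2))
```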

Uniform Sampling for Matrix Approximation

no code implementations • 21 Aug 2014 • Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford

In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.

regression
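
Coherence is the largest row leverage score, and the sketch below computes leverage scores via a thin QR factorization; spiking one row shows how a single heavy row drives coherence up, which is what the reweighting result tames. A generic illustration, not the paper's construction:

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of A via a thin QR factorization.

    score_i = ||Q[i, :]||^2; their maximum is the coherence. Uniform
    row sampling works when coherence is low, and the paper shows any
    matrix can reach low coherence by reweighting a few rows.
    """
    Q, _ = np.linalg.qr(A)
    return (Q ** 2).sum(axis=1)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
A[0] *= 100                         # one heavy row spikes the coherence
scores = leverage_scores(A)
print(scores.max(), scores.mean())  # max near 1, mean = d/n = 0.05
```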
