Search Results for author: Junru Shao

Found 10 papers, 5 papers with code

SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning

2 code implementations • 11 Jul 2022 • Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze

We propose SparseTIR, a sparse tensor compilation abstraction that offers composable formats and composable transformations for deep learning workloads.

TensorIR: An Abstraction for Automatic Tensorized Program Optimization

2 code implementations • 9 Jul 2022 • Siyuan Feng, Bohan Hou, Hongyi Jin, Wuwei Lin, Junru Shao, Ruihang Lai, Zihao Ye, Lianmin Zheng, Cody Hao Yu, Yong Yu, Tianqi Chen

Finally, we build an end-to-end framework on top of our abstraction to automatically optimize deep learning models for given tensor computation primitives.

BIG-bench Machine Learning

Tensor Program Optimization with Probabilistic Programs

no code implementations • 26 May 2022 • Junru Shao, Xiyou Zhou, Siyuan Feng, Bohan Hou, Ruihang Lai, Hongyi Jin, Wuwei Lin, Masahiro Masuda, Cody Hao Yu, Tianqi Chen

Experimental results show that MetaSchedule can cover the search space used in the state-of-the-art tensor program optimization frameworks in a modular way.

Probabilistic Programming

Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex

1 code implementation • 6 Jun 2018 • Hongyang Zhang, Junru Shao, Ruslan Salakhutdinov

We show that one cause of this success is that the multi-branch architecture is less non-convex in terms of the duality gap.

Accelerated Distance Computation with Encoding Tree for High Dimensional Data

no code implementations • 17 Sep 2015 • Shicong Liu, Junru Shao, Hongtao Lu

We propose a novel distance measure for high-dimensional vector pairs, computed from encodings generated by vector quantization.

Quantization • Vocal Bursts Intensity Prediction
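The abstract above describes computing distances directly from vector-quantization encodings. The paper's novel distance and its encoding-tree acceleration are not reproduced here; the sketch below shows the generic baseline this line of work builds on, product-quantization-style asymmetric distance computation (ADC), where per-subspace query-to-centroid tables are precomputed and distances to encoded vectors become table lookups. All function names are illustrative, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_codebooks(data, n_sub, n_centroids=16, iters=10):
    """Split dimensions into n_sub subspaces; run a tiny k-means in each."""
    sub_dim = data.shape[1] // n_sub
    codebooks = []
    for s in range(n_sub):
        chunk = data[:, s * sub_dim:(s + 1) * sub_dim]
        cent = chunk[rng.choice(len(chunk), n_centroids, replace=False)]
        for _ in range(iters):
            # assign each point to its nearest centroid, then recompute means
            assign = ((chunk[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
            for k in range(n_centroids):
                if (assign == k).any():
                    cent[k] = chunk[assign == k].mean(0)
        codebooks.append(cent)
    return codebooks

def encode(vecs, codebooks):
    """Replace each subvector by the index of its nearest centroid."""
    sub_dim = vecs.shape[1] // len(codebooks)
    codes = np.empty((len(vecs), len(codebooks)), dtype=np.int64)
    for s, cent in enumerate(codebooks):
        chunk = vecs[:, s * sub_dim:(s + 1) * sub_dim]
        codes[:, s] = ((chunk[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
    return codes

def adc(query, codes, codebooks):
    """Asymmetric distance: precompute query-to-centroid tables, sum lookups."""
    sub_dim = len(query) // len(codebooks)
    tables = [((query[s * sub_dim:(s + 1) * sub_dim] - cent) ** 2).sum(-1)
              for s, cent in enumerate(codebooks)]
    return sum(tables[s][codes[:, s]] for s in range(len(codebooks)))
```

With this layout, scoring a database of N encoded vectors costs only N table lookups per subspace after the one-time table build, which is what makes encoding-based distance computation fast for high-dimensional data.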

Improved Residual Vector Quantization for High-dimensional Approximate Nearest Neighbor Search

no code implementations • 17 Sep 2015 • Shicong Liu, Hongtao Lu, Junru Shao

In this paper, we propose an improved residual vector quantization (IRVQ) method. IRVQ learns the codebook at each stage with a hybrid of subspace clustering and warm-started k-means to keep the performance gain from dropping, and uses a multi-path encoding scheme to encode a vector with lower distortion.

Clustering • Quantization +1
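The hybrid subspace-clustering/warm-started k-means training and the multi-path encoder are the contributions of the IRVQ paper and are not reproduced here. As context, the sketch below is plain greedy residual vector quantization, the baseline that IRVQ improves on: each stage runs k-means on the residual left by the previous stages, and a vector's code is the sequence of per-stage centroid indices. Names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(x, k, iters=10):
    """Tiny k-means: seed from data points, alternate assign/update."""
    cent = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        assign = ((x[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                cent[j] = x[assign == j].mean(0)
    return cent

def train_rvq(data, stages=3, k=16):
    """Each stage quantizes the residual left by the previous stages."""
    codebooks, residual = [], data.copy()
    for _ in range(stages):
        cent = kmeans(residual, k)
        assign = ((residual[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
        residual = residual - cent[assign]
        codebooks.append(cent)
    return codebooks

def encode_rvq(vecs, codebooks):
    """Greedy encoding: one centroid index per stage, per vector."""
    codes, residual = [], vecs.copy()
    for cent in codebooks:
        assign = ((residual[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
        residual = residual - cent[assign]
        codes.append(assign)
    # the final residual's norm is the quantization error
    return np.stack(codes, 1), residual
```

The failure mode IRVQ targets is visible in this baseline: greedy per-stage encoding commits to one centroid at each level, so later stages face residuals that are increasingly noise-like and the per-stage gain shrinks; multi-path encoding keeps several candidate paths alive instead of one.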

HCLAE: High Capacity Locally Aggregating Encodings for Approximate Nearest Neighbor Search

no code implementations • 17 Sep 2015 • Shicong Liu, Junru Shao, Hongtao Lu

Further, we propose Aggregating-Tree (A-Tree), a non-exhaustive search method that uses HCLAE to perform efficient ANN search.

Quantization • Vocal Bursts Intensity Prediction
