Search Results for author: Sheng Di

Found 6 papers, 3 papers with code

Exploring Autoencoder-based Error-bounded Compression for Scientific Data

no code implementations25 May 2021 Jinyang Liu, Sheng Di, Kai Zhao, Sian Jin, Dingwen Tao, Xin Liang, Zizhong Chen, Franck Cappello

(1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based framework based on the SZ model.

Image Compression
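The SZ-style way to make a lossy predictor error-bounded is to quantize the residual between the original data and the reconstruction so that every corrected value lands within the bound. A minimal NumPy sketch, where `approx` is only a stand-in for an autoencoder's reconstruction (not a trained model):

```python
import numpy as np

def error_bounded_correct(original, reconstruction, eb):
    """Quantize the residual with bin size 2*eb so the corrected
    output is within eb of the original at every point."""
    residual = original - reconstruction
    codes = np.round(residual / (2 * eb)).astype(np.int64)  # small ints, entropy-codable
    corrected = reconstruction + codes * (2 * eb)
    return codes, corrected

rng = np.random.default_rng(0)
data = np.cumsum(rng.normal(size=1000))            # smooth 1-D stand-in for a scientific field
approx = data + rng.normal(scale=0.5, size=1000)   # placeholder for an AE reconstruction
eb = 1e-2
codes, out = error_bounded_correct(data, approx, eb)
assert np.max(np.abs(out - data)) <= eb + 1e-12    # absolute error bound holds
```

Because the rounding moves each residual by at most half a bin (i.e., at most `eb`), the bound is guaranteed regardless of how inaccurate the reconstruction is; a worse reconstruction only produces larger integer codes, hurting the compression ratio rather than the error bound.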

cuSZ: An Efficient GPU-Based Error-Bounded Lossy Compression Framework for Scientific Data

2 code implementations19 Jul 2020 Jiannan Tian, Sheng Di, Kai Zhao, Cody Rivera, Megan Hickman Fulp, Robert Underwood, Sian Jin, Xin Liang, Jon Calhoun, Dingwen Tao, Franck Cappello

To the best of our knowledge, cuSZ is the first error-bounded lossy compressor on GPUs for scientific data.

Distributed, Parallel, and Cluster Computing

FT-CNN: Algorithm-Based Fault Tolerance for Convolutional Neural Networks

no code implementations27 Mar 2020 Kai Zhao, Sheng Di, Sihuan Li, Xin Liang, Yujia Zhai, Jieyang Chen, Kaiming Ouyang, Franck Cappello, Zizhong Chen

(1) We propose several systematic ABFT schemes based on checksum techniques and analyze their fault protection ability and runtime thoroughly. Unlike traditional ABFT based on matrix-matrix multiplication, our schemes support any convolution implementations.
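The implementation-agnostic checksum idea can be sketched through the linearity of convolution: the per-output-channel results must sum to the convolution of the input with the summed filters, no matter how each convolution is computed. A toy NumPy sketch (naive convolution, single image, no bias; not the paper's actual schemes):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (any implementation would do)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8))
kernels = rng.normal(size=(4, 3, 3))              # 4 output channels

outputs = np.stack([conv2d_valid(x, k) for k in kernels])
checksum = conv2d_valid(x, kernels.sum(axis=0))   # one extra conv with summed filters

# By linearity, the per-channel outputs must sum to the checksum convolution.
assert np.allclose(outputs.sum(axis=0), checksum)

outputs[2, 1, 1] += 5.0                           # inject a silent data corruption
assert not np.allclose(outputs.sum(axis=0), checksum)  # the checksum exposes it
```

The verification costs one additional convolution rather than a redundant re-execution of all channels, which is the usual appeal of ABFT over replication.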

DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

1 code implementation26 Jan 2019 Sian Jin, Sheng Di, Xin Liang, Jiannan Tian, Dingwen Tao, Franck Cappello

In this paper, we propose DeepSZ: an accuracy-loss bounded neural network compression framework, which involves four key steps: network pruning, error bound assessment, optimization for error bound configuration, and compressed model generation, featuring a high compression ratio and low encoding time.

Ranked #1 on Neural Network Compression on ImageNet (using extra training data)

Network Pruning · Neural Network Compression
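The four steps can be illustrated on a toy linear layer. In this sketch, `lossy` is a stand-in uniform quantizer rather than DeepSZ's actual compressor, and the "accuracy loss" is proxied by the layer's output deviation; both are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 64))                     # toy layer weights
x = rng.normal(size=64)

# Step 1: network pruning -- drop the smallest-magnitude half of the weights.
mask = np.abs(W) >= np.quantile(np.abs(W), 0.5)
Wp = W * mask

def lossy(w, eb):
    """Stand-in for error-bounded lossy compression: uniform quantization."""
    return np.round(w / (2 * eb)) * (2 * eb) * mask

def degradation(eb):
    """Step 2: error-bound assessment -- output deviation vs. the pruned model."""
    return np.max(np.abs(lossy(Wp, eb) @ x - Wp @ x))

# Step 3: error-bound configuration -- largest bound whose degradation fits a budget.
budget = 0.1
candidates = [2.0 ** -k for k in range(1, 16)]    # largest to smallest
eb = next(e for e in candidates if degradation(e) <= budget)

# Step 4: compressed model generation with the chosen configuration.
W_compressed = lossy(Wp, eb)
assert np.max(np.abs(W_compressed @ x - Wp @ x)) <= budget
```

The trade-off being searched is the one the abstract describes: a looser error bound gives a higher compression ratio, so the optimization picks the loosest bound that still keeps the accuracy loss within the user's budget.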

Z-checker: A Framework for Assessing Lossy Compression of Scientific Data

1 code implementation12 Jun 2017 Dingwen Tao, Sheng Di, Hanqi Guo, Zizhong Chen, Franck Cappello

However, lossy compressor developers and users lack a tool for exploring the features of scientific datasets and understanding, in a systematic and reliable way, how compression alters the data.

Other Computer Science · Instrumentation and Methods for Astrophysics · Computational Engineering, Finance, and Science
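A sketch of the kind of post-compression assessment such a framework automates: comparing the original and decompressed fields and reporting distortion and ratio metrics. The metric names and toy data below are illustrative, not Z-checker's actual API:

```python
import numpy as np

def assess(original, decompressed, compressed_bytes):
    """Report a few metrics commonly used to judge lossy compression quality."""
    err = decompressed - original
    vrange = original.max() - original.min()
    mse = float(np.mean(err ** 2))
    return {
        "max_abs_err": float(np.max(np.abs(err))),
        # PSNR relative to the data's value range, in decibels.
        "psnr_db": 10 * np.log10(vrange ** 2 / mse) if mse > 0 else float("inf"),
        "compression_ratio": original.nbytes / compressed_bytes,
    }

rng = np.random.default_rng(3)
data = np.sin(np.linspace(0, 20, 4096))                     # toy scientific field
noisy = data + rng.normal(scale=1e-3, size=data.size)       # stand-in decompressed data
stats = assess(data, noisy, compressed_bytes=1024)
assert stats["compression_ratio"] == 32.0                   # 4096 doubles -> 1 KiB
assert stats["max_abs_err"] < 1e-2
```

Checking such metrics per dataset matters because the same compressor configuration can behave very differently across fields with different smoothness and value ranges.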

Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization

no code implementations12 Jun 2017 Dingwen Tao, Sheng Di, Zizhong Chen, Franck Cappello

One serious challenge is that, to guarantee the error bounds, the prediction during compression must be performed on the preceding decompressed values, which in turn may degrade the prediction accuracy.

Information Theory
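The constraint described above can be sketched in one dimension: each value is predicted from the preceding *decompressed* value (so the decompressor can reproduce the same prediction), and the prediction error is quantized with bin size 2×(error bound), which caps the point-wise error at the bound. A toy sketch, not the paper's multidimensional predictor:

```python
import numpy as np

def compress_1d(data, eb):
    """Predict from the previous reconstructed value; quantize the prediction
    error with bin size 2*eb so that |decompressed - original| <= eb."""
    codes = np.empty(len(data), dtype=np.int64)
    decompressed = np.empty_like(data)
    prev = 0.0
    for i, v in enumerate(data):
        pred = prev                           # 1-D predictor: previous reconstructed value
        codes[i] = round((v - pred) / (2 * eb))
        prev = pred + codes[i] * (2 * eb)     # reconstruct exactly as the decompressor will
        decompressed[i] = prev
    return codes, decompressed

data = np.sin(np.linspace(0, 10, 500))
codes, recon = compress_1d(data, eb=1e-3)
assert np.max(np.abs(recon - data)) <= 1e-3 + 1e-12
assert len(np.unique(codes)) < 50   # codes cluster near zero -> highly entropy-codable
```

Using the decompressed (rather than original) neighbors is what prevents quantization errors from accumulating across points, at the cost of a slightly noisier prediction, which is exactly the trade-off the abstract points out.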
