Search Results for author: Sheng Di

Found 10 papers, 4 papers with code

FT K-means: A High-Performance K-means on GPU with Fault Tolerance

1 code implementation • 2 Aug 2024 • Shixun Wu, Yitong Ding, Yujia Zhai, Jinyang Liu, Jiajun Huang, Zizhe Jian, Huangliang Dai, Sheng Di, Bryan M. Wong, Zizhong Chen, Franck Cappello

K-means is a widely used clustering algorithm; however, its efficiency is primarily constrained by the computational cost of distance computation.
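
To illustrate where that cost arises, here is a minimal NumPy sketch of the K-means assignment step (not the paper's fault-tolerant GPU implementation), in which the pairwise point-to-centroid distance computation dominates the runtime:

```python
import numpy as np

def assign_clusters(points, centroids):
    """One K-means assignment step: the pairwise distance computation
    below is the dominant cost that GPU K-means kernels accelerate."""
    # Squared Euclidean distances, shape (n_points, n_centroids).
    diffs = points[:, None, :] - centroids[None, :, :]
    dists = np.einsum('nkd,nkd->nk', diffs, diffs)
    return dists.argmin(axis=1)

# Illustrative usage with random data.
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 8))
centroids = points[rng.choice(1000, size=4, replace=False)]
labels = assign_clusters(points, centroids)
```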

Code Generation

FedFa: A Fully Asynchronous Training Paradigm for Federated Learning

no code implementations • 17 Apr 2024 • Haotian Xu, Zhaorui Zhang, Sheng Di, Benben Liu, Khalid Ayed Alharthi, Jiannong Cao

We propose a fully asynchronous training paradigm, called FedFa, which guarantees model convergence and completely eliminates waiting time in federated learning by using a few buffered results on the server for parameter updates.
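
A hypothetical sketch of the buffered, fully asynchronous update idea (the class, buffer size, and simple averaging rule below are illustrative assumptions, not FedFa's actual algorithm):

```python
from collections import deque
import numpy as np

class AsyncBufferServer:
    """Sketch of buffered fully asynchronous aggregation: the server
    updates the global model as soon as `buffer_size` client results
    arrive, so no client ever waits for stragglers."""

    def __init__(self, model, buffer_size=4, lr=1.0):
        self.model = model              # global parameter vector
        self.buffer = deque()
        self.buffer_size = buffer_size  # assumed hyperparameter
        self.lr = lr

    def receive(self, client_update):
        self.buffer.append(client_update)
        if len(self.buffer) >= self.buffer_size:
            # Average the buffered updates and apply them immediately.
            avg = np.mean(list(self.buffer), axis=0)
            self.model += self.lr * avg
            self.buffer.clear()
        return self.model  # client resumes from the latest global model
```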

Federated Learning

Understanding The Effectiveness of Lossy Compression in Machine Learning Training Sets

no code implementations • 23 Mar 2024 • Robert Underwood, Jon C. Calhoun, Sheng Di, Franck Cappello

We design a systematic methodology for evaluating data reduction techniques for ML/AI and use it to perform a comprehensive evaluation of 17 data reduction methods on 7 ML/AI applications, showing that modern lossy compression methods can achieve a 50-100x improvement in compression ratio with a quality loss of 1% or less.
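
The two headline metrics can be computed as in this small sketch (the function names are illustrative, not from the paper's methodology):

```python
def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original to compressed size, e.g. 50-100x in the paper."""
    return original_bytes / compressed_bytes

def quality_loss(metric_baseline, metric_compressed):
    """Relative drop in a task quality metric (e.g. accuracy);
    the paper reports 1% or less at 50-100x compression ratios."""
    return (metric_baseline - metric_compressed) / metric_baseline
```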

Data Compression

SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks

no code implementations • 7 Sep 2023 • Jinyang Liu, Sheng Di, Sian Jin, Kai Zhao, Xin Liang, Zizhong Chen, Franck Cappello

The fast growth of the computational power and scale of modern supercomputing systems has raised great challenges for the management of exascale scientific data.

Super-Resolution

Exploring Autoencoder-based Error-bounded Compression for Scientific Data

no code implementations • 25 May 2021 • Jinyang Liu, Sheng Di, Kai Zhao, Sian Jin, Dingwen Tao, Xin Liang, Zizhong Chen, Franck Cappello

(1) We provide an in-depth investigation of the characteristics of various autoencoder models and develop an error-bounded autoencoder-based compression framework built on the SZ model.
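
A generic sketch of how an error bound can be enforced on top of any learned predictor, in the spirit of the SZ model (the uniform residual quantizer below is an assumption for illustration, not the paper's exact design):

```python
import numpy as np

def error_bounded_residual(data, reconstruction, eps):
    """Any lossy predictor, e.g. an autoencoder reconstruction, can be
    made error-bounded by quantizing the residual with bin width 2*eps,
    so the final pointwise error never exceeds eps."""
    residual = data - reconstruction
    codes = np.round(residual / (2 * eps)).astype(np.int64)
    decompressed = reconstruction + codes * (2 * eps)
    assert np.max(np.abs(data - decompressed)) <= eps + 1e-12
    return codes, decompressed
```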

Image Compression

cuSZ: An Efficient GPU-Based Error-Bounded Lossy Compression Framework for Scientific Data

2 code implementations • 19 Jul 2020 • Jiannan Tian, Sheng Di, Kai Zhao, Cody Rivera, Megan Hickman Fulp, Robert Underwood, Sian Jin, Xin Liang, Jon Calhoun, Dingwen Tao, Franck Cappello

To the best of our knowledge, cuSZ is the first error-bounded lossy compressor on GPUs for scientific data.

Distributed, Parallel, and Cluster Computing

FT-CNN: Algorithm-Based Fault Tolerance for Convolutional Neural Networks

no code implementations • 27 Mar 2020 • Kai Zhao, Sheng Di, Sihuan Li, Xin Liang, Yujia Zhai, Jieyang Chen, Kaiming Ouyang, Franck Cappello, Zizhong Chen

(1) We propose several systematic ABFT schemes based on checksum techniques and thoroughly analyze their fault-protection ability and runtime. Unlike traditional ABFT, which is based on matrix-matrix multiplication, our schemes support any convolution implementation.
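
A simplified 1D sketch of the checksum idea (the paper's schemes target real CNN convolutions; the helper below is illustrative): by linearity of convolution, the sum of per-filter outputs must equal convolution with the summed checksum filter, and a mismatch signals a fault.

```python
import numpy as np

def checksum_verify_conv(x, filters, tol=1e-8):
    """Checksum-style ABFT sketch: recompute one extra convolution with
    the summed (checksum) filter and compare it against the sum of the
    per-filter outputs from the protected computation."""
    outputs = [np.convolve(x, f, mode='full') for f in filters]  # protected step
    checksum_out = np.convolve(x, np.sum(filters, axis=0), mode='full')
    fault = not np.allclose(np.sum(outputs, axis=0), checksum_out, atol=tol)
    return outputs, fault

# Illustrative usage.
rng = np.random.default_rng(1)
outputs, fault = checksum_verify_conv(rng.normal(size=64),
                                      rng.normal(size=(8, 3)))
assert not fault
```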

DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

1 code implementation • 26 Jan 2019 • Sian Jin, Sheng Di, Xin Liang, Jiannan Tian, Dingwen Tao, Franck Cappello

In this paper, we propose DeepSZ, an accuracy-loss-bounded neural network compression framework involving four key steps: network pruning, error bound assessment, optimization of the error bound configuration, and compressed model generation. It features a high compression ratio and low encoding time.
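
An illustrative sketch of such a pipeline on a single layer (the function names, pruning rule, and uniform quantizer are assumptions for illustration, not DeepSZ's code):

```python
import numpy as np

def deepsz_like_compress(weights, prune_thresh, error_bound):
    """Prune small weights, then lossy-compress the survivors within an
    error bound chosen so the accuracy loss stays within budget."""
    # Step 1: network pruning -- drop near-zero weights.
    mask = np.abs(weights) > prune_thresh
    survivors = weights[mask]
    # Steps 2-3: in the paper, the per-layer error bound is assessed and
    # optimized against the accuracy-loss budget; here it is given.
    codes = np.round(survivors / (2 * error_bound)).astype(np.int32)
    # Step 4: compressed model generation (sparsity mask + quantized codes).
    return mask, codes

def deepsz_like_decompress(mask, codes, error_bound):
    weights = np.zeros(mask.shape)
    weights[mask] = codes * (2 * error_bound)
    return weights
```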

Network Pruning • Neural Network Compression

Significantly Improving Lossy Compression for Scientific Data Sets Based on Multidimensional Prediction and Error-Controlled Quantization

no code implementations • 12 Jun 2017 • Dingwen Tao, Sheng Di, Zizhong Chen, Franck Cappello

One serious challenge is that, to guarantee the error bounds, the data prediction during compression has to be based on the preceding decompressed values, which may in turn degrade the prediction accuracy.
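
A minimal 1D sketch of this predict-from-decompressed-values loop with linear-scaling quantization (illustrative only; the paper uses a multidimensional predictor):

```python
import numpy as np

def sz_like_compress_1d(data, eps):
    """Each value is predicted from the *preceding decompressed* value
    (not the original), which is what guarantees the error bound
    end to end, at some cost in prediction accuracy."""
    codes = np.empty(len(data), dtype=np.int64)
    prev = 0.0  # decompressed predecessor
    for i, v in enumerate(data):
        codes[i] = int(np.round((v - prev) / (2 * eps)))
        prev = prev + codes[i] * (2 * eps)  # what the decompressor will see
    return codes

def sz_like_decompress_1d(codes, eps):
    out = np.empty(len(codes))
    prev = 0.0
    for i, c in enumerate(codes):
        prev = prev + c * (2 * eps)
        out[i] = prev
    return out
```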

Information Theory

Z-checker: A Framework for Assessing Lossy Compression of Scientific Data

1 code implementation • 12 Jun 2017 • Dingwen Tao, Sheng Di, Hanqi Guo, Zizhong Chen, Franck Cappello

However, lossy compressor developers and users lack a tool for exploring the features of scientific datasets and for understanding, in a systematic and reliable way, how the data are altered by compression.
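
A small sketch of the kind of data-alteration metrics such a framework reports (Z-checker's actual analyses are far more extensive; the helper below is illustrative):

```python
import numpy as np

def assess_compression(original, decompressed):
    """Report a few standard distortion metrics between the original
    data and its decompressed counterpart."""
    err = original - decompressed
    max_abs_err = np.max(np.abs(err))
    value_range = original.max() - original.min()
    mse = np.mean(err ** 2)
    psnr = 10 * np.log10(value_range ** 2 / mse) if mse > 0 else float('inf')
    pearson_r = np.corrcoef(original.ravel(), decompressed.ravel())[0, 1]
    return {'max_abs_err': max_abs_err, 'psnr_db': psnr,
            'pearson_r': pearson_r}
```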

Other Computer Science • Instrumentation and Methods for Astrophysics • Computational Engineering, Finance, and Science
