Search Results for author: Zheng Chai

Found 12 papers, 3 papers with code

Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization

no code implementations31 May 2022 Zheng Chai, Guangji Bai, Liang Zhao, Yue Cheng

We prove that the approximation error induced by the staleness of historical embeddings is upper bounded and does not affect the GNN model's expressiveness.

Graph Embedding Knowledge Graphs +1

LOF: Structure-Aware Line Tracking based on Optical Flow

no code implementations17 Sep 2021 Meixiang Quan, Zheng Chai, Xiao Liu

Lines provide significantly richer geometric structural information about the environment than points, so they are widely used in recent Visual Odometry (VO) work.

Line Detection Optical Flow Estimation +1

Asynchronous Federated Learning for Sensor Data with Concept Drift

no code implementations1 Sep 2021 Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala

We propose a novel approach, FedConD, to detect and handle concept drift on local devices and minimize its effect on model performance in asynchronous FL.

Ensemble Learning Federated Learning

Method Towards CVPR 2021 Image Matching Challenge

no code implementations10 Aug 2021 Xiaopeng Bi, Yu Chen, Xinyang Liu, Dehao Zhang, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu

This report describes Megvii-3D team's approach towards CVPR 2021 Image Matching Workshop.

Method Towards CVPR 2021 SimLocMatch Challenge

no code implementations10 Aug 2021 Xiaopeng Bi, Ran Yan, Zheng Chai, Haotian Zhang, Xiao Liu

This report describes Megvii-3D team's approach towards SimLocMatch Challenge @ CVPR 2021 Image Matching Workshop.

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on a Gradient-Free ADMM Framework

no code implementations20 May 2021 Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao

The Graph Augmented Multi-layer Perceptron (GA-MLP) model is an attractive alternative to Graph Neural Networks (GNNs).

Quantization

pdADMM: parallel deep learning Alternating Direction Method of Multipliers

1 code implementation1 Nov 2020 Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: parameters in each layer of neural networks can be updated independently in parallel.
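The layer-decoupling idea behind pdADMM can be illustrated with a toy sketch: once ADMM splitting introduces auxiliary input/output variables per layer, each layer's weight update depends only on its own (fixed) auxiliary pair, so all layers can be updated concurrently. The update rule below (a single gradient step on the quadratic penalty rho/2 * ||W a_in - z_out||^2) and all variable names are illustrative assumptions, not the paper's full pdADMM algorithm, which also involves dual variables and further subproblems.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def layer_subproblem(W, a_in, z_out, rho=1.0, lr=0.1):
    """One local update for a single layer: pull W toward its fixed
    auxiliary pair (a_in, z_out) from the previous ADMM iteration.
    Gradient step on rho/2 * ||W @ a_in - z_out||^2 w.r.t. W."""
    grad = rho * np.outer(W @ a_in - z_out, a_in)
    return W - lr * grad

def parallel_layer_updates(weights, acts_in, acts_out):
    """Each subproblem touches only its own layer's variables,
    so every layer can be updated in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(layer_subproblem, weights, acts_in, acts_out))
```

The key property is that no layer's update reads another layer's weights within an iteration; coordination happens only through the auxiliary variables exchanged between iterations, which is what makes layer parallelism possible.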

FedAT: A High-Performance and Communication-Efficient Federated Learning System with Asynchronous Tiers

no code implementations12 Oct 2020 Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala

By bridging the synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect with improved convergence speed and test accuracy.

Federated Learning

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

1 code implementation9 Sep 2020 Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

In this paper, we analyze the reason and propose to achieve a compelling trade-off between parallelism and accuracy via a reformulation called the Tunable Subnetwork Splitting Method (TSSM), which tunes the decomposition granularity of deep neural networks.

TiFL: A Tier-based Federated Learning System

no code implementations25 Jan 2020 Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng

To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.

Federated Learning
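TiFL's tiering step can be sketched in a few lines: profile each client's training latency, group clients of similar speed into tiers, and draw every participant in a round from a single tier so fast clients never wait on slow ones. This is a simplified sketch under assumed names and a uniform tier-selection policy; the actual system adaptively adjusts tier-selection probabilities based on observed accuracy.

```python
import random

def assign_tiers(client_latencies, num_tiers=3):
    """Group clients into tiers by measured per-round training latency,
    fastest tier first (TiFL's profiling step, simplified)."""
    ranked = sorted(client_latencies, key=client_latencies.get)
    tier_size = -(-len(ranked) // num_tiers)  # ceiling division
    return [ranked[i:i + tier_size] for i in range(0, len(ranked), tier_size)]

def select_round_clients(tiers, clients_per_round=2):
    """Pick all of a round's participants from one tier, so the round's
    duration is set by similar-speed clients (mitigates stragglers)."""
    tier = random.choice(tiers)
    return random.sample(tier, min(clients_per_round, len(tier)))

latencies = {"c1": 1.2, "c2": 0.4, "c3": 3.1, "c4": 0.9, "c5": 2.5, "c6": 1.8}
tiers = assign_tiers(latencies, num_tiers=3)
round_clients = select_round_clients(tiers)
```

Because each round's stragglers are bounded by the chosen tier's slowest member rather than the globally slowest client, rounds complete faster without discarding slow clients entirely.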

Federated Multi-task Hierarchical Attention Model for Sensor Analytics

no code implementations13 May 2019 Yujing Chen, Yue Ning, Zheng Chai, Huzefa Rangwala

The attention mechanism of the proposed model seeks to extract feature representations from the input and learn a shared representation focused on time dimensions across multiple sensors.

Activity Recognition General Classification

Characterizing Co-located Datacenter Workloads: An Alibaba Case Study

1 code implementation8 Aug 2018 Yue Cheng, Zheng Chai, Ali Anwar

Warehouse-scale cloud datacenters co-locate workloads with different and often complementary characteristics for improved resource utilization.

Distributed, Parallel, and Cluster Computing
