Search Results for author: Liang Luo

Found 12 papers, 3 papers with code

Disaggregated Multi-Tower: Topology-aware Modeling Technique for Efficient Large-Scale Recommendation

no code implementations • 1 Mar 2024 • Liang Luo, Buyun Zhang, Michael Tsang, Yinbin Ma, Ching-Hsiang Chu, Yuxin Chen, Shen Li, Yuchen Hao, Yanli Zhao, Guna Lakshminarayanan, Ellie Dingqiao Wen, Jongsoo Park, Dheevatsa Mudigere, Maxim Naumov

We study the mismatch between deep learning recommendation models' flat architecture, the common distributed training paradigm, and the hierarchical data center topology.
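
The core of the mismatch is that intra-host links are far faster than inter-host ones, so topology-oblivious placement pays cross-host cost for heavy traffic. A toy sketch of the topology-aware idea follows; the cost constants and the placements compared are illustrative assumptions, not the paper's algorithm:

```python
# Toy illustration of topology awareness: traffic between towers on the
# same host is much cheaper than across hosts, so co-locating the towers
# that exchange the most traffic cuts total cost. The costs and placements
# below are illustrative assumptions, not the paper's algorithm.
INTRA_HOST_COST = 1   # relative cost per unit of traffic within a host
INTER_HOST_COST = 10  # relative cost per unit of traffic across hosts

def placement_cost(placement, traffic):
    """placement: tower -> host id; traffic: {(a, b): volume}."""
    return sum(vol * (INTRA_HOST_COST if placement[a] == placement[b]
                      else INTER_HOST_COST)
               for (a, b), vol in traffic.items())

traffic = {("t0", "t1"): 100, ("t2", "t3"): 100, ("t0", "t2"): 5}
flat = {"t0": 0, "t1": 1, "t2": 0, "t3": 1}    # topology-oblivious
aware = {"t0": 0, "t1": 0, "t2": 1, "t3": 1}   # heavy pairs co-located
print(placement_cost(flat, traffic), placement_cost(aware, traffic))
# 2005 vs 250: topology-aware placement keeps heavy traffic intra-host
```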

Self-discipline on multiple channels

1 code implementation • 27 Apr 2023 • Jiutian Zhao, Liang Luo, Hao Wang

Comparative experimental results on both datasets show that SMC-2 outperforms Label Smoothing Regularization and Self-distillation From The Last Mini-batch on all models, and outperforms the state-of-the-art Sharpness-Aware Minimization method on 83% of the models. Compatibility experiments with data augmentation show that combining SMC-2 with data augmentation improves the model's generalization by between 0.28% and 1.80% over using data augmentation alone.
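
The abstract reports results only, not SMC-2's mechanism. For context, here is the Label Smoothing Regularization baseline it is compared against, as exposed directly in PyTorch (1.10+); this shows the baseline, not SMC-2 itself:

```python
import torch
import torch.nn as nn

# Label Smoothing Regularization: one of the baselines SMC-2 is compared
# against. PyTorch (>= 1.10) supports it directly in CrossEntropyLoss.
logits = torch.randn(8, 10)            # batch of 8, 10 classes (dummy data)
targets = torch.randint(0, 10, (8,))   # dummy integer labels

hard_loss = nn.CrossEntropyLoss()(logits, targets)
smoothed_loss = nn.CrossEntropyLoss(label_smoothing=0.1)(logits, targets)
print(f"hard-label CE: {hard_loss:.4f}  smoothed CE: {smoothed_loss:.4f}")
```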

Data Augmentation

DHEN: A Deep and Hierarchical Ensemble Network for Large-Scale Click-Through Rate Prediction

no code implementations • 11 Mar 2022 • Buyun Zhang, Liang Luo, Xi Liu, Jay Li, Zeliang Chen, Weilin Zhang, Xiaohan Wei, Yuchen Hao, Michael Tsang, Wenjun Wang, Yang Liu, Huayu Li, Yasmine Badr, Jongsoo Park, Jiyan Yang, Dheevatsa Mudigere, Ellie Wen

To overcome the training challenges posed by DHEN's deeper, multi-layer structure, we propose a novel co-designed training system that further improves DHEN's training efficiency.
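
A minimal PyTorch sketch of the "deep and hierarchical ensemble" idea the title names: stacked layers, each running several interaction modules in parallel and combining them. The module choices and combination by summation here are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class EnsembleLayer(nn.Module):
    """One DHEN-style layer: several interaction modules run in parallel,
    their outputs combined. The branches (MLP + elementwise interaction)
    and summation are illustrative assumptions."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.cross = nn.Linear(dim, dim)  # stand-in for a pairwise-interaction module

    def forward(self, x):
        return self.mlp(x) + x * self.cross(x)  # ensemble of two branches

class DHENSketch(nn.Module):
    """Stack ensemble layers to get the deep, hierarchical structure."""
    def __init__(self, dim, depth=3):
        super().__init__()
        self.layers = nn.Sequential(*[EnsembleLayer(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, 1)  # CTR logit

    def forward(self, x):
        return torch.sigmoid(self.head(self.layers(x)))

model = DHENSketch(dim=64)
ctr = model(torch.randn(16, 64))  # dummy batch of 16 concatenated embeddings
print(ctr.shape)                  # torch.Size([16, 1])
```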

Click-Through Rate Prediction

Characterizing and Taming Resolution in Convolutional Neural Networks

no code implementations • 28 Oct 2021 • Eddie Yan, Liang Luo, Luis Ceze

Image resolution has a significant effect on the accuracy and computational, storage, and bandwidth costs of computer vision model inference.
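
A back-of-the-envelope sketch of why resolution matters: a convolution's FLOPs scale with the number of output pixels, so halving resolution cuts compute roughly 4x. The layer shape below is an illustrative example, not a figure from the paper:

```python
# FLOPs of a single 3x3, stride-1 conv layer at different input resolutions
# (illustrative arithmetic, not numbers from the paper).
def conv_flops(h, w, c_in=64, c_out=64, k=3):
    # multiply-accumulates per output pixel times number of output pixels
    return 2 * h * w * c_in * c_out * k * k

for res in (224, 112, 56):
    print(f"{res}x{res}: {conv_flops(res, res) / 1e9:.2f} GFLOPs")
# Compute drops ~4x each time resolution halves; activation storage and
# bandwidth shrink by the same factor.
```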

Accelerating SpMM Kernel with Cache-First Edge Sampling for Graph Neural Networks

no code implementations • 21 Apr 2021 • Chien-Yu Lin, Liang Luo, Luis Ceze

To evaluate ES-SpMM's performance, we integrated it with a popular GNN framework, DGL, and tested it using representative GNN models and datasets.
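
For context, SpMM is the core of GNN neighbor aggregation: a sparse adjacency matrix times a dense node-feature matrix. A minimal sketch of that baseline operation in stock PyTorch follows; the paper's cache-first edge-sampling kernel itself is not reproduced here:

```python
import torch

# SpMM as used in GNN aggregation: sparse adjacency A times dense
# features X. This is the stock PyTorch baseline, not ES-SpMM.
num_nodes, feat_dim = 5, 4
edges = torch.tensor([[0, 1, 2, 3, 4, 0],   # source nodes
                      [1, 2, 3, 4, 0, 2]])  # destination nodes
values = torch.ones(edges.shape[1])
A = torch.sparse_coo_tensor(edges, values, (num_nodes, num_nodes))

X = torch.randn(num_nodes, feat_dim)  # dummy node features
H = torch.sparse.mm(A, X)             # aggregate neighbor features
print(H.shape)                        # torch.Size([5, 4])
```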

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training

no code implementations • 21 May 2018 • Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy

Distributed deep neural network (DDNN) training constitutes an increasingly important workload that frequently runs in the cloud.
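
As background, in the parameter-server pattern that Parameter Hub accelerates, workers push gradients to a server that aggregates them and serves updated weights back. A toy single-process sketch of that pattern follows; PHub's rack-scale, hardware-optimized design is not reproduced here:

```python
import numpy as np

# Minimal parameter-server loop: workers pull weights, push gradients,
# the server aggregates and applies them. A toy, single-process sketch
# of the communication pattern PHub optimizes at rack scale.
class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.weights = np.zeros(dim)
        self.lr = lr

    def push(self, grads):            # aggregate one round of gradients
        self.weights -= self.lr * np.mean(grads, axis=0)

    def pull(self):                   # workers fetch the latest weights
        return self.weights.copy()

server = ParameterServer(dim=4)
for step in range(3):
    w = server.pull()
    # two workers computing gradients of 0.5 * ||w - 1||^2, plus noise
    grads = [w - np.ones(4) + 0.01 * np.random.randn(4) for _ in range(2)]
    server.push(grads)
print(server.pull())  # weights drift toward the optimum at all-ones
```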
