Search Results for author: Tien-Ju Yang

Found 15 papers, 5 papers with code

Heterogeneous Federated Learning Using Knowledge Codistillation

no code implementations • 4 Oct 2023 • Jared Lichtarge, Ehsan Amid, Shankar Kumar, Tien-Ju Yang, Rohan Anil, Rajiv Mathews

Federated Averaging, and many federated learning algorithm variants which build upon it, have a limitation: all clients must share the same model architecture.

Federated Learning • Image Classification • +2
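Knowledge codistillation lets differently shaped models learn from each other by exchanging output distributions rather than weights, which is what removes the shared-architecture requirement noted above. Below is a minimal, hypothetical sketch of that general idea (not the paper's exact method); the function names and temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def codistillation_targets(server_logits, client_logits, temperature=2.0):
    """Soft targets each model distills from the other on a shared
    dataset; architectures need not match because only output
    distributions are exchanged, never weights."""
    return (softmax(client_logits, temperature),   # target for the server model
            softmax(server_logits, temperature))   # target for the client model

# Toy logits from two differently sized models over 3 classes.
server = np.array([[2.0, 0.5, -1.0]])
client = np.array([[1.0, 1.0, 0.0]])
t_server, t_client = codistillation_targets(server, client)
```

Each model would then minimize a divergence (e.g. cross-entropy) between its own temperature-scaled outputs and the received soft targets.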

Federated Pruning: Improving Neural Network Efficiency with Federated Learning

no code implementations • 14 Sep 2022 • Rongmei Lin, Yonghui Xiao, Tien-Ju Yang, Ding Zhao, Li Xiong, Giovanni Motta, Françoise Beaufays

Automatic Speech Recognition models require a large amount of speech data for training, and the collection of such data often leads to privacy concerns.

Automatic Speech Recognition (ASR) • +2

Online Model Compression for Federated Learning with Large Models

no code implementations • 6 May 2022 • Tien-Ju Yang, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, Mingqing Chen

This paper addresses the challenges of training large neural network models under federated learning settings: high on-device memory usage and communication cost.

Federated Learning • Model Compression • +3

Partial Variable Training for Efficient On-Device Federated Learning

no code implementations • 11 Oct 2021 • Tien-Ju Yang, Dhruv Guliani, Françoise Beaufays, Giovanni Motta

This paper aims to address the major challenges of Federated Learning (FL) on edge devices: limited memory and expensive communication.

Federated Learning • speech-recognition • +1

NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization

no code implementations • CVPR 2021 • Tien-Ju Yang, Yi-Lun Liao, Vivienne Sze

Neural architecture search (NAS) typically consists of three main steps: training a super-network, training and evaluating sampled deep neural networks (DNNs), and training the discovered DNN.

Neural Architecture Search
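The three NAS steps named in the abstract can be sketched as a toy skeleton. This is a hypothetical illustration of the generic pipeline, not NetAdaptV2's algorithm; the search space, the random accuracy proxy, and all names are assumptions standing in for real training and evaluation.

```python
import random

def neural_architecture_search(search_space, n_samples=8, seed=0):
    """Toy skeleton of the three generic NAS steps; a seeded random
    score stands in for actual training and evaluation."""
    rng = random.Random(seed)
    # Step 1: train a super-network covering the whole search space
    # (a no-op placeholder here).
    # Step 2: sample candidate sub-networks and evaluate each one.
    candidates = [rng.choice(search_space) for _ in range(n_samples)]
    scored = [(arch, rng.random()) for arch in candidates]  # proxy accuracy
    best, _ = max(scored, key=lambda pair: pair[1])
    # Step 3: train the discovered architecture from scratch.
    return best

space = ["mobilenet-0.5x", "mobilenet-1.0x", "resnet-18"]
best = neural_architecture_search(space)
```

NetAdaptV2's contribution, per the title, is making the super-network training and architecture-optimization steps of this pipeline fast, which the skeleton does not attempt to capture.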

Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators

no code implementations • 18 Dec 2019 • Tien-Ju Yang, Vivienne Sze

This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators.

SegSort: Segmentation by Discriminative Sorting of Segments

1 code implementation • ICCV 2019 • Jyh-Jing Hwang, Stella X. Yu, Jianbo Shi, Maxwell D. Collins, Tien-Ju Yang, Xiao Zhang, Liang-Chieh Chen

The proposed SegSort further produces an interpretable result, as each choice of label can be easily understood from the retrieved nearest segments.

Ranked #10 on Unsupervised Semantic Segmentation on PASCAL VOC 2012 val (using extra training data)

Clustering • Metric Learning • +2

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

1 code implementation • 10 Jul 2018 • Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, Vivienne Sze

In this work, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs.

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

4 code implementations • ECCV 2018 • Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam

This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget.

Image Classification
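The abstract describes adapting a pre-trained network to a platform under a resource budget. A minimal, hypothetical sketch of that loop is below; it is only loosely in the spirit of NetAdapt, not the paper's algorithm (which uses empirical platform measurements and per-layer accuracy/resource trade-off proposals). The greedy "shrink the widest layer" rule and the unit-cost model are stand-in assumptions.

```python
def adapt_to_budget(layer_widths, cost_per_unit, budget):
    """Hypothetical budget-driven adaptation sketch: repeatedly shrink
    one layer until an estimated resource cost (e.g. MACs or latency)
    meets the budget. Not NetAdapt's actual proposal-selection step."""
    widths = list(layer_widths)

    def cost(ws):
        return sum(w * cost_per_unit for w in ws)

    while cost(widths) > budget:
        # Shrink the widest layer by one unit, a crude stand-in for
        # picking the proposal with the best accuracy/cost trade-off.
        i = max(range(len(widths)), key=lambda j: widths[j])
        if widths[i] <= 1:
            break
        widths[i] -= 1
    return widths

# Three layers, one cost unit per channel, budget of 400 units.
adapted = adapt_to_budget([64, 128, 256], cost_per_unit=1, budget=400)
```

In the real algorithm each candidate simplification is measured directly on the target platform, and a short fine-tune recovers accuracy after every step.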

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

no code implementations • 27 Mar 2017 • Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel Emer

The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.

Benchmarking • speech-recognition • +1

Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning

no code implementations • CVPR 2017 • Tien-Ju Yang, Yu-Hsin Chen, Vivienne Sze

With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.
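As a rough illustration of energy-aware pruning, the sketch below prunes layers in decreasing order of an assumed per-layer energy estimate, keeping only the largest-magnitude weights in each. This heuristic is a hypothetical stand-in: the paper drives pruning with a hardware energy model, not magnitude ordering, and all names and numbers here are made up for illustration.

```python
import numpy as np

def energy_aware_prune(weights, layer_energy, keep_ratio=0.5):
    """Illustrative sketch: visit layers from highest to lowest
    (assumed) energy estimate and keep only the top-magnitude weights
    in each, zeroing the rest."""
    def prune_layer(w, ratio):
        k = max(int(w.size * ratio), 1)          # number of weights to keep
        flat = np.zeros(w.size)
        top = np.argsort(np.abs(w).ravel())[::-1][:k]
        flat[top] = w.ravel()[top]               # keep largest magnitudes
        return flat.reshape(w.shape)

    order = sorted(weights, key=lambda name: layer_energy[name], reverse=True)
    return {name: prune_layer(weights[name], keep_ratio) for name in order}

# Toy two-layer model with made-up energy estimates per layer.
layers = {"conv1": np.array([0.9, -0.1, 0.4, 0.05]),
          "fc": np.array([0.3, -0.8, 0.02, 0.6])}
energy = {"conv1": 5.0, "fc": 1.0}
pruned = energy_aware_prune(layers, energy, keep_ratio=0.5)
```

Processing the most energy-hungry layers first reflects the paper's observation that pruning effort should be spent where it saves the most energy, even though the actual selection criterion differs.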
