Search Results for author: Vivienne Sze

Found 17 papers, 4 papers with code

Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling

no code implementations 12 May 2022 Yannan Nellie Wu, Po-An Tsai, Angshuman Parashar, Vivienne Sze, Joel S. Emer

This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space.

Searching for Efficient Multi-Stage Vision Transformers

1 code implementation 1 Sep 2021 Yi-Lun Liao, Sertac Karaman, Vivienne Sze

This naturally raises the question of how the performance of ViTs can be improved with CNN design techniques.

Neural Architecture Search

NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization

no code implementations CVPR 2021 Tien-Ju Yang, Yi-Lun Liao, Vivienne Sze

Neural architecture search (NAS) typically consists of three main steps: training a super-network, training and evaluating sampled deep neural networks (DNNs), and training the discovered DNN.

Neural Architecture Search
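The three-step NAS pipeline summarized above can be sketched in miniature. All function bodies below are hypothetical stand-ins (a toy search space and an accuracy proxy), not NetAdaptV2's actual implementation:

```python
import random

def train_supernetwork(search_space):
    # Step 1: train one over-parameterized super-network that
    # contains every candidate architecture as a sub-network.
    return {"weights": "shared", "space": search_space}

def estimate_accuracy(arch):
    # Toy proxy for validation accuracy: prefer deeper candidates.
    return arch["depth"]

def sample_and_evaluate(supernet, num_samples):
    # Step 2: sample candidate DNNs from the super-network and
    # evaluate them using the inherited (shared) weights.
    candidates = random.sample(supernet["space"], num_samples)
    return max(candidates, key=estimate_accuracy)

def train_discovered_dnn(arch):
    # Step 3: train the selected architecture to convergence.
    return {"arch": arch, "trained": True}

search_space = [{"depth": d} for d in (8, 12, 16, 20)]
supernet = train_supernetwork(search_space)
best = sample_and_evaluate(supernet, num_samples=3)
model = train_discovered_dnn(best)
```

NetAdaptV2's contribution is making steps 1 and 2 fast and joint; this sketch only shows how the three stages hand off to one another.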

App-based saccade latency and error determination across the adult age spectrum

no code implementations 14 Dec 2020 Hsin-Yu Lai, Gladynel Saavedra-Pena, Charles G. Sodini, Thomas Heldt, Vivienne Sze

We aid in neurocognitive monitoring outside the hospital environment by enabling app-based measurements of visual reaction time (saccade latency) and error rate in a cohort of subjects spanning the adult age spectrum.

Depth Map Estimation of Dynamic Scenes Using Prior Depth Information

no code implementations 2 Feb 2020 James Noraky, Vivienne Sze

When evaluated using RGB-D datasets of various dynamic scenes, our approach estimates depth maps with a mean relative error of 2.5% while reducing the active depth sensor usage by over 90%.

Optical Flow Estimation

Design Considerations for Efficient Deep Neural Networks on Processing-in-Memory Accelerators

no code implementations 18 Dec 2019 Tien-Ju Yang, Vivienne Sze

This paper describes various design considerations for deep neural networks that enable them to operate efficiently and accurately on processing-in-memory accelerators.

Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices

no code implementations 10 Jul 2018 Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, Vivienne Sze

In this work, we present Eyeriss v2, a DNN accelerator architecture designed for running compact and sparse DNNs.

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

4 code implementations ECCV 2018 Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam

This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget.

Image Classification
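The budget-driven adaptation described above can be illustrated with a toy loop. The per-layer costs and the greedy layer choice here are invented for illustration; the real NetAdapt uses empirical on-device measurements and picks the simplification with the best accuracy/cost trade-off:

```python
def netadapt_sketch(layer_costs, budget, step=1.0):
    """Iteratively shrink a network until its total cost fits the budget.

    layer_costs: per-layer resource cost (e.g. latency in ms) -- illustrative
    budget: target total cost on the mobile platform
    step: cost reduction demanded per iteration
    """
    costs = dict(layer_costs)
    history = []
    while sum(costs.values()) > budget:
        # Stand-in heuristic: shrink the most expensive layer. NetAdapt
        # instead measures which simplification best preserves accuracy.
        layer = max(costs, key=costs.get)
        costs[layer] = max(costs[layer] - step, 0.0)
        history.append((layer, costs[layer]))
    return costs, history

costs, hist = netadapt_sketch({"conv1": 5.0, "conv2": 8.0, "fc": 3.0},
                              budget=10.0, step=1.0)
```

Each iteration tightens the network by one `step` of cost, mirroring NetAdapt's progressive, measurement-driven simplification until the resource budget is met.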

Efficient Processing of Deep Neural Networks: A Tutorial and Survey

no code implementations 27 Mar 2017 Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel Emer

The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.

Speech Recognition

Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision

no code implementations 17 Mar 2017 Amr Suleiman, Yu-Hsin Chen, Joel Emer, Vivienne Sze

Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics.

Self-Driving Cars

Hardware for Machine Learning: Challenges and Opportunities

1 code implementation 22 Dec 2016 Vivienne Sze, Yu-Hsin Chen, Joel Emer, Amr Suleiman, Zhengdong Zhang

Machine learning plays a critical role in extracting meaningful information out of the zettabytes of sensor data collected every day.

Self-Driving Cars

Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning

no code implementations CVPR 2017 Tien-Ju Yang, Yu-Hsin Chen, Vivienne Sze

With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet is reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss.
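The key idea of energy-aware pruning is to target the layers that dominate energy, not simply those with the most or smallest weights. A greedy sketch with invented numbers (not the data-driven energy model used in the paper):

```python
def energy_aware_prune(layers, energy_budget, prune_fraction=0.5):
    """Greedy sketch: prune the most energy-hungry layers first.

    layers: {name: (num_weights, energy_per_weight)} -- invented
    illustrative figures, not measured energy costs.
    """
    # Rank layers by estimated total energy, highest first.
    order = sorted(layers, key=lambda n: layers[n][0] * layers[n][1],
                   reverse=True)
    weights = {n: w for n, (w, _) in layers.items()}

    def total_energy():
        return sum(weights[n] * layers[n][1] for n in layers)

    for name in order:
        if total_energy() <= energy_budget:
            break
        # Remove a fixed fraction of this layer's weights.
        weights[name] = int(weights[name] * (1 - prune_fraction))
    return weights, total_energy()

layers = {"conv": (1000, 2.0), "fc": (4000, 0.5)}  # (weights, energy/weight)
pruned, energy = energy_aware_prune(layers, energy_budget=2500.0)
```

Note that the `fc` layer has four times the weights of `conv` but the same estimated energy, so a weight-count heuristic and an energy heuristic would prune them in a different order.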

A 58.6mW Real-Time Programmable Object Detector with Multi-Scale Multi-Object Support Using Deformable Parts Model on 1920x1080 Video at 30fps

no code implementations 27 Jul 2016 Amr Suleiman, Zhengdong Zhang, Vivienne Sze

This paper presents a programmable, energy-efficient and real-time object detection accelerator using deformable parts models (DPM), with 2x higher accuracy than traditional rigid body models.

Classification • General Classification • +2

FAST: A Framework to Accelerate Super-Resolution Processing on Compressed Videos

no code implementations 29 Mar 2016 Zhengdong Zhang, Vivienne Sze

State-of-the-art super-resolution (SR) algorithms require significant computational resources to achieve real-time throughput (e.g., 60 Mpixels/s for HD video).
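The quoted throughput requirement is simply the pixel rate of 1080p HD video:

```python
# 1080p HD video at 30 frames per second
width, height, fps = 1920, 1080, 30
pixels_per_second = width * height * fps
print(pixels_per_second)  # 62208000, i.e. ~60 Mpixels/s
```

Any SR pipeline that cannot sustain roughly this pixel rate end to end will fall behind real time on HD input, which is the bottleneck FAST targets by exploiting the structure of compressed video.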
