Search Results for author: Yiming Hu

Found 12 papers, 3 papers with code

ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA

no code implementations • 1 Dec 2016 Song Han, Junlong Kang, Huizi Mao, Yiming Hu, Xin Li, Yubin Li, Dongliang Xie, Hong Luo, Song Yao, Yu Wang, Huazhong Yang, William J. Dally

Evaluated on the LSTM speech recognition benchmark, ESE is 43x faster than a Core i7-5930K CPU implementation and 3x faster than a Pascal Titan X GPU implementation.

Quantization Speech Recognition +1
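The speedups above come from pruning the LSTM's weight matrices to be sparse. As a rough software-level illustration of why sparsity pays off (a sketch, not ESE's actual FPGA datapath), a compressed-sparse-row multiply touches only the stored nonzero weights:

```python
import numpy as np

def dense_to_csr(w, threshold=0.0):
    """Convert a dense weight matrix to CSR arrays, dropping near-zero entries."""
    vals, cols, row_ptr = [], [], [0]
    for row in w:
        for j, x in enumerate(row):
            if abs(x) > threshold:
                vals.append(x)
                cols.append(j)
        row_ptr.append(len(vals))
    return np.array(vals), np.array(cols), np.array(row_ptr)

def csr_matvec(vals, cols, row_ptr, x):
    """y = W @ x, computed using only the stored nonzero weights."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = vals[lo:hi] @ x[cols[lo:hi]]
    return y

# A ~90%-sparse weight matrix only pays for its ~10% nonzeros.
w = np.random.randn(256, 256) * (np.random.rand(256, 256) > 0.9)
vals, cols, row_ptr = dense_to_csr(w)
x = np.random.randn(256)
assert np.allclose(csr_matvec(vals, cols, row_ptr, x), w @ x)
```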

A novel channel pruning method for deep neural network compression

no code implementations • 29 May 2018 Yiming Hu, Siyang Sun, Jianquan Li, Xingang Wang, Qingyi Gu

In order to accelerate the selection process, the proposed method formulates it as a search problem, which can be solved efficiently by a genetic algorithm.

Combinatorial Optimization Knowledge Distillation +1
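A minimal sketch of genetic-algorithm channel selection, assuming a caller-supplied evaluate(mask) fitness function that scores a pruned network (the paper's exact encoding and fitness are not shown in the snippet above):

```python
import random

def genetic_channel_search(n_channels, keep_ratio, evaluate,
                           pop_size=20, generations=30, mutate_p=0.05):
    """Evolve binary masks over channels; 1 = keep the channel."""
    n_keep = int(n_channels * keep_ratio)

    def random_mask():
        mask = [0] * n_channels
        for i in random.sample(range(n_channels), n_keep):
            mask[i] = 1
        return mask

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_channels)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutate_p else g
                     for g in child]                  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate)
```

In practice, evaluate would combine the pruned network's accuracy with a penalty when a mask strays from the channel budget, since crossover and mutation do not preserve the exact number of kept channels.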

Action Machine: Rethinking Action Recognition in Trimmed Videos

no code implementations • 14 Dec 2018 Jiagang Zhu, Wei Zou, Liang Xu, Yiming Hu, Zheng Zhu, Manyu Chang, Jun-Jie Huang, Guan Huang, Dalong Du

On NTU RGB-D, Action Machine achieves state-of-the-art performance, with top-1 accuracies of 97.2% and 94.3% on cross-view and cross-subject respectively.

Action Recognition Multimodal Activity Recognition +3

Better Guider Predicts Future Better: Difference Guided Generative Adversarial Networks

no code implementations • 7 Jan 2019 Guohao Ying, Yingtian Zou, Lin Wan, Yiming Hu, Jiashi Feng

In this paper, we propose a novel GAN based on inter-frame difference to circumvent the difficulties.

Video Prediction
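A hypothetical sketch of the inter-frame-difference idea: conditioning the predictor on differences of consecutive frames so it attends to motion rather than static background (tensor names and shapes here are assumptions, not the paper's architecture):

```python
import torch

def difference_guidance(frames):
    """frames: (B, T, C, H, W) video clip.
    Returns per-step inter-frame differences to guide prediction."""
    return frames[:, 1:] - frames[:, :-1]  # (B, T-1, C, H, W)

# A generator could take the last frame plus its difference map as input,
# so the network focuses on moving regions rather than static background.
clip = torch.randn(4, 8, 3, 64, 64)
diff = difference_guidance(clip)
generator_input = torch.cat([clip[:, -1], diff[:, -1]], dim=1)  # (4, 6, 64, 64)
```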

Multi-loss-aware Channel Pruning of Deep Networks

no code implementations • 27 Feb 2019 Yiming Hu, Siyang Sun, Jianquan Li, Jiagang Zhu, Xingang Wang, Qingyi Gu

Particularly, we introduce an additional loss to encode the differences in the feature and semantic distributions within feature maps between the baseline model and the pruned one.

General Classification
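One plausible form of such an additional loss, sketched here under the assumption that pruned feature maps are projected to the baseline's shapes (the paper's exact distance terms may differ): match the pruned model's feature maps to the baseline's and align their output distributions.

```python
import torch
import torch.nn.functional as F

def multi_loss(pruned_feats, base_feats, pruned_logits, base_logits,
               labels, alpha=1.0, beta=1.0):
    """Task loss + feature reconstruction + semantic (soft-label) alignment.
    Assumes each pruned feature map has been projected to the baseline's shape."""
    task = F.cross_entropy(pruned_logits, labels)
    feat = sum(F.mse_loss(p, b) for p, b in zip(pruned_feats, base_feats))
    sem = F.kl_div(F.log_softmax(pruned_logits, dim=1),
                   F.softmax(base_logits, dim=1), reduction="batchmean")
    return task + alpha * feat + beta * sem
```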

Cluster Regularized Quantization for Deep Networks Compression

no code implementations • 27 Feb 2019 Yiming Hu, Jianquan Li, Xianlei Long, Shenhua Hu, Jiagang Zhu, Xingang Wang, Qingyi Gu

Deep neural networks (DNNs) have achieved great success in a wide range of computer vision areas, but their application to mobile devices is limited due to their high storage and computational cost.

Quantization
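The snippet above gives only the motivation; as a generic illustration of cluster-based weight quantization (plain k-means weight sharing, not necessarily the paper's cluster regularizer), weights can be snapped to a small shared codebook:

```python
import numpy as np

def kmeans_quantize(w, n_clusters=16, iters=20):
    """Quantize a weight matrix to an n_clusters-entry codebook via 1-D k-means."""
    flat = w.ravel()
    centers = np.linspace(flat.min(), flat.max(), n_clusters)  # init codebook
    for _ in range(iters):
        assign = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(n_clusters):
            members = flat[assign == k]
            if len(members):
                centers[k] = members.mean()
    assign = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
    # Weights snapped to a 16-entry (i.e. 4-bit-indexable) codebook.
    return centers[assign].reshape(w.shape), centers

w = np.random.randn(64, 64)
w_quantized, codebook = kmeans_quantize(w)
```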

Angle-based Search Space Shrinking for Neural Architecture Search

1 code implementation • ECCV 2020 Yiming Hu, Yuding Liang, Zichao Guo, Ruosi Wan, Xiangyu Zhang, Yichen Wei, Qingyi Gu, Jian Sun

Comprehensive experiments show that ABS can dramatically enhance existing NAS approaches by providing a promising shrunk search space.

Neural Architecture Search
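The angle metric behind ABS can be sketched as the angle between a candidate's flattened weights at initialization and after supernet training; this is a simplified reading of the title's metric, not the full shrinking procedure, which uses such scores to rank candidates and drop parts of the search space.

```python
import numpy as np

def angle_metric(w_init, w_trained):
    """Angle between the flattened weight vectors before and after training.
    A larger angle indicates the candidate's weights moved more during training,
    which can serve as a ranking signal when shrinking a NAS search space."""
    a, b = w_init.ravel(), w_trained.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```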

DPUV3INT8: A Compiler View to programmable FPGA Inference Engines

no code implementations • 8 Oct 2021 Paolo D'Alberto, Jiangsha Ma, Jintao Li, Yiming Hu, Manasa Bollavaram, Shaoxia Fang

We have an FPGA design; we make it fast, efficient, and tested on a few important examples.

Exploring the impact of weather on Metro demand forecasting using machine learning method

no code implementations • 24 Oct 2022 Yiming Hu, Yangchuan Huang, Shuying Liu, Yuanyang Qi, Danhui Bai

Urban rail transit offers significant comprehensive benefits, such as large traffic volume and high speed, making it one of the most important components of urban transportation management and a key means of relieving congestion.

Management Scheduling

Masked Autoencoders Are Robust Neural Architecture Search Learners

no code implementations • 20 Nov 2023 Yiming Hu, Xiangxiang Chu, Bo Zhang

Neural Architecture Search (NAS) currently relies heavily on labeled data, which is both expensive and time-consuming to acquire.

Image Reconstruction Neural Architecture Search
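A hedged sketch of the label-free masked-reconstruction objective suggested by the title, with the patch size, mask ratio, and autoencoder model all assumed here rather than taken from the paper:

```python
import torch

def masked_reconstruction_loss(model, images, mask_ratio=0.75, patch=16):
    """Mask random patches, reconstruct them, and score by pixel-wise MSE.
    Supernet candidates can be trained and ranked with this label-free loss."""
    B, C, H, W = images.shape
    n_h, n_w = H // patch, W // patch
    mask = torch.rand(B, 1, n_h, n_w) < mask_ratio            # True = hidden patch
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    mask = mask.to(images.device)
    recon = model(images * ~mask)                             # sees only visible pixels
    # Average the squared error over the masked region only.
    return ((recon - images) ** 2 * mask).sum() / mask.sum().clamp(min=1)
```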

MobileVLM V2: Faster and Stronger Baseline for Vision Language Model

1 code implementation • 6 Feb 2024 Xiangxiang Chu, Limeng Qiao, Xinyu Zhang, Shuang Xu, Fei Wei, Yang Yang, Xiaofei Sun, Yiming Hu, Xinyang Lin, Bo Zhang, Chunhua Shen

We introduce MobileVLM V2, a family of significantly improved vision language models upon MobileVLM, which proves that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs' performance.

AutoML Language Modelling
