Search Results for author: Lujun Li

Found 19 papers, 7 papers with code

ParZC: Parametric Zero-Cost Proxies for Efficient NAS

no code implementations · 3 Feb 2024 · Peijie Dong, Lujun Li, Xinglin Pan, Zimian Wei, Xiang Liu, Qiang Wang, Xiaowen Chu

Recent advancements in Zero-shot Neural Architecture Search (NAS) highlight the efficacy of zero-cost proxies in various NAS benchmarks.

Neural Architecture Search
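
As a hedged illustration of how zero-cost proxies are typically assessed on a NAS benchmark (not ParZC's own method), the sketch below scores a handful of randomly initialized candidate networks with a simple gradient-norm proxy and checks how well the proxy ranking agrees with ground-truth accuracies via Spearman correlation. The candidate networks, mini-batch, and accuracy values are all synthetic placeholders.

```python
# Hedged sketch: evaluating a zero-cost proxy by its rank correlation with
# ground-truth accuracies, in the spirit of zero-shot NAS benchmarking.
# All models, data, and "accuracies" below are synthetic placeholders.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def grad_norm_proxy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """Score a randomly initialized model by the L2 norm of its gradients
    on a single mini-batch (one simple, commonly used zero-cost proxy)."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

# Tiny synthetic "search space": MLPs of different widths and depths.
def make_candidate(width: int, depth: int) -> nn.Module:
    layers, in_dim = [], 32
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 10))
    return nn.Sequential(*layers)

candidates = [make_candidate(w, d) for w in (16, 64, 256) for d in (1, 2, 3)]
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))

proxy_scores = [grad_norm_proxy(m, x, y) for m in candidates]
true_accuracies = [0.42, 0.48, 0.55, 0.45, 0.51, 0.58, 0.47, 0.54, 0.61]  # placeholder accuracies

rho, _ = spearmanr(proxy_scores, true_accuracies)
print(f"Spearman rank correlation between proxy and accuracy: {rho:.3f}")
```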

TVT: Training-Free Vision Transformer Search on Tiny Datasets

no code implementations · 24 Nov 2023 · Zimian Wei, Hengyue Pan, Lujun Li, Peijie Dong, Zhiliang Tian, Xin Niu, Dongsheng Li

In this paper, for the first time, we investigate how to search in a training-free manner with the help of teacher models and devise an effective Training-free ViT (TVT) search framework.

EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization

1 code implementation · ICCV 2023 · Peijie Dong, Lujun Li, Zimian Wei, Xin Niu, Zhiliang Tian, Hengyue Pan

In particular, we devise an elaborate search space involving the existing proxies and perform an evolutionary search to discover the best-correlated mixed-precision quantization (MQ) proxy.

Quantization
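
EMQ's core idea, as summarized above, is to search a space of training-free proxies by evolution and keep the candidate whose scores correlate best with ground-truth quantization accuracies. The following sketch is a minimal, hedged illustration of that loop, not the paper's actual search space or operators: candidate proxies are weighted combinations of primitive statistics, mutated randomly, and ranked by Spearman correlation on a synthetic benchmark.

```python
# Hedged sketch of an evolutionary search over candidate proxies, ranked by how
# well their scores correlate with benchmark accuracies. The primitive
# statistics, benchmark, and mutation scheme are illustrative placeholders.
import random
import numpy as np
from scipy.stats import spearmanr

# Synthetic "benchmark": per-architecture primitive statistics and accuracies.
rng = np.random.default_rng(0)
n_archs = 50
stats = {name: rng.random(n_archs) for name in ("grad_norm", "snip", "fisher", "params")}
accuracy = 0.6 * stats["snip"] + 0.3 * stats["params"] + 0.1 * rng.random(n_archs)

PRIMS = list(stats)

def evaluate(proxy):
    """A proxy is a weighted sum of primitive statistics; its fitness is the
    Spearman correlation of its scores with the benchmark accuracies."""
    scores = sum(w * stats[p] for p, w in proxy)
    rho, _ = spearmanr(scores, accuracy)
    return rho

def mutate(proxy):
    """Perturb one term's weight, or swap in a different primitive."""
    proxy = list(proxy)
    i = random.randrange(len(proxy))
    p, w = proxy[i]
    if random.random() < 0.5:
        proxy[i] = (p, w + random.uniform(-0.5, 0.5))
    else:
        proxy[i] = (random.choice(PRIMS), w)
    return proxy

# Simple (1 + lambda)-style evolution loop: keep the best, mutate the rest.
population = [[(random.choice(PRIMS), 1.0), (random.choice(PRIMS), 1.0)] for _ in range(8)]
for _ in range(30):
    parent = max(population, key=evaluate)
    population = [parent] + [mutate(parent) for _ in range(7)]

best = max(population, key=evaluate)
print("best proxy:", best, "correlation:", round(evaluate(best), 3))
```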

NORM: Knowledge Distillation via N-to-One Representation Matching

1 code implementation · 23 May 2023 · Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao

By sequentially splitting the expanded student representation into N non-overlapping feature segments, each having the same number of feature channels as the teacher's, these segments can be readily forced to approximate the intact teacher representation simultaneously, formulating a novel many-to-one representation matching mechanism conditioned on a single teacher-student layer pair.

Knowledge Distillation
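
The NORM excerpt above describes expanding a student feature map to N times the teacher's channel count, splitting it into N segments, and matching every segment against the intact teacher feature. The PyTorch sketch below is a hedged, minimal rendition of that many-to-one matching loss; the 1x1 expansion layer, the MSE criterion, and all tensor shapes are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of N-to-one representation matching: a student feature map is
# expanded to N x (teacher channels), split into N segments, and every segment
# is pushed toward the same intact teacher feature. Shapes and the MSE
# criterion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NToOneMatching(nn.Module):
    def __init__(self, student_channels: int, teacher_channels: int, n: int):
        super().__init__()
        self.n = n
        self.teacher_channels = teacher_channels
        # 1x1 conv expands the student representation to N * teacher_channels.
        self.expand = nn.Conv2d(student_channels, n * teacher_channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        expanded = self.expand(student_feat)                            # (B, N*Ct, H, W)
        segments = torch.split(expanded, self.teacher_channels, dim=1)  # N x (B, Ct, H, W)
        # Each segment approximates the same (detached) teacher representation.
        return sum(F.mse_loss(seg, teacher_feat.detach()) for seg in segments) / self.n

# Toy usage with random feature maps of matching spatial size.
matcher = NToOneMatching(student_channels=64, teacher_channels=256, n=4)
s = torch.randn(2, 64, 8, 8)    # student feature map
t = torch.randn(2, 256, 8, 8)   # teacher feature map
print(matcher(s, t).item())
```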

Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling

1 code implementation · 18 May 2023 · Shitong Shao, Xu Dai, Shouyi Yin, Lujun Li, Huanran Chen, Yang Hu

On CIFAR-10, we obtain a FID of 2.80 by sampling in 15 steps under one-session training and the new state-of-the-art FID of 3.37 by sampling in one step with additional training.

Knowledge Distillation

DisWOT: Student Architecture Search for Distillation WithOut Training

no code implementations · CVPR 2023 · Peijie Dong, Lujun Li, Zimian Wei

In this way, our student architecture search for Distillation WithOut Training (DisWOT) significantly improves the performance of the model in the distillation stage with at least 180× training acceleration.

Knowledge Distillation

Progressive Meta-Pooling Learning for Lightweight Image Classification Model

no code implementations · 24 Jan 2023 · Peijie Dong, Xin Niu, Zhiliang Tian, Lujun Li, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li

Practical networks for edge devices adopt shallow depth and small convolutional kernels to save memory and computational cost, which leads to a restricted receptive field.

Classification · Image Classification

RD-NAS: Enhancing One-shot Supernet Ranking Ability via Ranking Distillation from Zero-cost Proxies

1 code implementation · 24 Jan 2023 · Peijie Dong, Xin Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li

In this paper, we propose Ranking Distillation one-shot NAS (RD-NAS) to enhance ranking consistency, which utilizes zero-cost proxies as the cheap teacher and adopts the margin ranking loss to distill the ranking knowledge.

Computational Efficiency · Neural Architecture Search
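
RD-NAS, as summarized above, uses zero-cost proxies as a cheap teacher and a margin ranking loss to distill ranking knowledge into the one-shot supernet. The sketch below illustrates only that loss term, in hedged form: for every pair of sampled subnets, whichever the proxy ranks higher is encouraged to receive a higher supernet score. The scores and the margin value are placeholders.

```python
# Hedged sketch of ranking distillation with a margin ranking loss: the
# zero-cost proxy acts as a cheap teacher that decides, for each pair of
# subnets, which one should score higher under the supernet.
import torch
import torch.nn.functional as F

def ranking_distillation_loss(supernet_scores: torch.Tensor,
                              proxy_scores: torch.Tensor,
                              margin: float = 0.1) -> torch.Tensor:
    """supernet_scores, proxy_scores: shape (num_subnets,)."""
    # All pairs (i, j) with i < j.
    i, j = torch.triu_indices(len(supernet_scores), len(supernet_scores), offset=1)
    # target = +1 if the proxy ranks subnet i above subnet j, else -1.
    target = (proxy_scores[i] > proxy_scores[j]).float() * 2 - 1
    return F.margin_ranking_loss(supernet_scores[i], supernet_scores[j],
                                 target, margin=margin)

# Toy usage: 5 sampled subnets; supernet scores carry gradients.
supernet_scores = torch.tensor([0.2, 0.5, 0.1, 0.9, 0.4], requires_grad=True)
proxy_scores = torch.tensor([0.3, 0.6, 0.2, 0.7, 0.1])
loss = ranking_distillation_loss(supernet_scores, proxy_scores)
loss.backward()
print(loss.item(), supernet_scores.grad)
```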

GP-NAS-ensemble: a model for NAS Performance Prediction

no code implementations · 23 Jan 2023 · Kunlong Chen, Liu Yang, Yitian Chen, Kunjin Chen, Yidan Xu, Lujun Li

In Neural Architecture Search (NAS), it is of great significance to estimate the performance of a given model architecture without training it, since evaluating an architecture by full training can take a lot of time.

Ensemble Learning · Neural Architecture Search
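
The GP-NAS-ensemble entry above is about predicting an architecture's performance without training it. As a generic, hedged illustration of such performance prediction (not the paper's GP-NAS-based method), the sketch below fits an ensemble of standard regressors on architecture encodings with known accuracies and averages their predictions; all encodings and accuracy values are synthetic.

```python
# Hedged sketch of NAS performance prediction: an ensemble of off-the-shelf
# regressors maps architecture encodings to accuracy, and their predictions are
# averaged. Encodings and accuracies are synthetic placeholders; this is a
# generic predictor ensemble, not the GP-NAS-ensemble method itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic benchmark: 200 architectures encoded as 16-dim binary vectors
# (e.g. operator choices), with noisy "accuracies".
X = rng.integers(0, 2, size=(200, 16)).astype(float)
y = 0.5 + 0.3 * X[:, :4].mean(axis=1) + 0.05 * rng.standard_normal(200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

models = [
    GradientBoostingRegressor(random_state=0),
    RandomForestRegressor(n_estimators=100, random_state=0),
    Ridge(alpha=1.0),
]
for m in models:
    m.fit(X_train, y_train)

# Ensemble prediction = simple average of the individual predictors.
pred = np.mean([m.predict(X_test) for m in models], axis=0)
mae = np.abs(pred - y_test).mean()
print(f"mean absolute error of the ensemble predictor: {mae:.4f}")
```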

Prior-Guided One-shot Neural Architecture Search

1 code implementation · 27 Jun 2022 · Peijie Dong, Xin Niu, Lujun Li, Linzhen Xie, Wenbin Zou, Tian Ye, Zimian Wei, Hengyue Pan

In this paper, we present Prior-Guided One-shot NAS (PGONAS) to strengthen the ranking correlation of supernets.

Neural Architecture Search

Activation Modulation and Recalibration Scheme for Weakly Supervised Semantic Segmentation

1 code implementation · 16 Dec 2021 · Jie Qin, Jie Wu, Xuefeng Xiao, Lujun Li, Xingang Wang

Extensive experiments show that AMR establishes a new state-of-the-art performance on the PASCAL VOC 2012 dataset, surpassing not only current methods trained with image-level supervision but also some methods relying on stronger supervision, such as saliency labels.

Feature Importance · Scene Understanding · +3

Adversarial Joint Training with Self-Attention Mechanism for Robust End-to-End Speech Recognition

no code implementations · 3 Apr 2021 · Lujun Li, Yikai Kang, Yuchen Shi, Ludwig Kürzinger, Tobias Watzel, Gerhard Rigoll

Inspired by the extensive applications of the generative adversarial networks (GANs) in speech enhancement and ASR tasks, we propose an adversarial joint training framework with the self-attention mechanism to boost the noise robustness of the ASR system.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +2
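
The excerpt above couples a GAN-style enhancement front-end with the ASR objective. The sketch below is a heavily simplified, hedged illustration of one joint training step with toy placeholder networks: a discriminator learns to separate clean from enhanced features, while the enhancer and a small acoustic model are updated with a CTC loss plus a weighted adversarial term. Architectures, shapes, and the 0.1 weighting are assumptions, not the paper's configuration.

```python
# Hedged sketch of adversarial joint training for noise-robust ASR: an enhancer
# G cleans noisy features, a discriminator D scores clean vs. enhanced features,
# and G is trained jointly on a CTC ASR loss plus an adversarial term.
# All networks, shapes, and loss weights below are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, vocab = 40, 30
G = nn.GRU(feat_dim, feat_dim, batch_first=True)                         # enhancement front-end
D = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # discriminator
asr = nn.Linear(feat_dim, vocab)                                         # toy acoustic model

opt_g = torch.optim.Adam(list(G.parameters()) + list(asr.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

noisy = torch.randn(2, 50, feat_dim)        # (batch, time, features)
clean = torch.randn(2, 50, feat_dim)
targets = torch.randint(1, vocab, (2, 12))  # token ids (0 reserved for the CTC blank)

# Discriminator step: clean features are "real", enhanced features are "fake".
enhanced, _ = G(noisy)
d_loss = (F.binary_cross_entropy_with_logits(D(clean), torch.ones(2, 50, 1))
          + F.binary_cross_entropy_with_logits(D(enhanced.detach()), torch.zeros(2, 50, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Joint enhancer/ASR step: CTC loss plus adversarial loss on the enhanced features.
enhanced, _ = G(noisy)
log_probs = asr(enhanced).log_softmax(-1).transpose(0, 1)   # (time, batch, vocab) for CTC
ctc = F.ctc_loss(log_probs, targets,
                 input_lengths=torch.full((2,), 50, dtype=torch.long),
                 target_lengths=torch.full((2,), 12, dtype=torch.long))
adv = F.binary_cross_entropy_with_logits(D(enhanced), torch.ones(2, 50, 1))
g_loss = ctc + 0.1 * adv                                    # 0.1: assumed adversarial weight
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
print(float(d_loss), float(g_loss))
```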

Explicit Connection Distillation

no code implementations · 1 Jan 2021 · Lujun Li, Yikai Wang, Anbang Yao, Yi Qian, Xiao Zhou, Ke He

In this paper, we present Explicit Connection Distillation (ECD), a new KD framework that addresses knowledge distillation from the novel perspective of bridging dense intermediate feature connections between a student network and a corresponding teacher generated automatically during training. Knowledge transfer is achieved via direct cross-network layer-to-layer gradient propagation, without the need to define complex distillation losses or to assume that a pre-trained teacher model is available.

Image Classification · Knowledge Distillation · +1

CTC-Segmentation of Large Corpora for German End-to-end Speech Recognition

11 code implementations · 17 Jul 2020 · Ludwig Kürzinger, Dominik Winkelbauer, Lujun Li, Tobias Watzel, Gerhard Rigoll

In this work, we combine freely available corpora for German speech recognition, including as-yet-unlabeled speech data, into a large dataset of over 1700 hours of speech.

Ranked #5 on Speech Recognition on TUDA (using extra training data)

Speech Recognition · Audio and Speech Processing
