Search Results for author: Luming Liang

Found 18 papers, 15 papers with code

OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators

1 code implementation 15 Dec 2023 Tianyi Chen, Tianyu Ding, Zhihui Zhu, Zeyu Chen, HsiangTao Wu, Ilya Zharkov, Luming Liang

Compressing a predefined deep neural network (DNN) into a compact sub-network with competitive performance is crucial in the efficient machine learning realm.

Neural Architecture Search

The Efficiency Spectrum of Large Language Models: An Algorithmic Survey

1 code implementation 1 Dec 2023 Tianyu Ding, Tianyi Chen, Haidong Zhu, Jiachen Jiang, Yiqi Zhong, Jinxin Zhou, Guangzhi Wang, Zhihui Zhu, Ilya Zharkov, Luming Liang

The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains, reshaping the artificial general intelligence landscape.

Model Compression

DREAM: Diffusion Rectification and Estimation-Adaptive Models

1 code implementation 30 Nov 2023 Jinxin Zhou, Tianyu Ding, Tianyi Chen, Jiachen Jiang, Ilya Zharkov, Zhihui Zhu, Luming Liang

We present DREAM, a novel training framework representing Diffusion Rectification and Estimation Adaptive Models, requiring minimal code changes (just three lines) yet significantly enhancing the alignment of training with sampling in diffusion models.

Image Super-Resolution
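The DREAM abstract above describes a small change that better aligns diffusion training with sampling. The sketch below shows one plausible reading of such a rectification step on top of standard DDPM noising: an extra gradient-free forward pass obtains the model's own noise estimate, which is mixed back into the training target. The mixing rule and the weight `lam` are illustrative assumptions, not the paper's exact three-line change.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x_t):
    # Stand-in for a trained noise-prediction network eps_theta(x_t, t).
    return 0.5 * x_t

def rectified_training_pair(x0, abar_t):
    eps = rng.standard_normal(x0.shape)
    # Standard DDPM forward noising: x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps.
    x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
    # Extra forward pass (treated as stop-gradient) for the model's estimate.
    eps_hat = toy_denoiser(x_t)
    # Hypothetical rectification: nudge the target noise toward what the
    # model will actually produce at sampling time.
    lam = np.sqrt(1.0 - abar_t)
    eps_bar = eps + lam * eps_hat
    x_bar_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps_bar
    # Train the network to predict eps_bar from x_bar_t instead of eps from x_t.
    return x_bar_t, eps_bar

x0 = rng.standard_normal((4, 8, 8))
x_bar, target = rectified_training_pair(x0, abar_t=0.9)
```

Because only the noising step and the regression target change, a modification like this really can live in a handful of lines inside an existing training loop.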

CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering

1 code implementation 27 Nov 2023 Haidong Zhu, Tianyu Ding, Tianyi Chen, Ilya Zharkov, Ram Nevatia, Luming Liang

CaesarNeRF explicitly models pose differences of reference views to combine scene-level semantic representations, providing a calibrated holistic understanding.

Few-Shot Learning · Neural Rendering

LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery

1 code implementation 24 Oct 2023 Tianyi Chen, Tianyu Ding, Badal Yadav, Ilya Zharkov, Luming Liang

Large Language Models (LLMs) have transformed the landscape of artificial intelligence, while their enormous size presents significant challenges in terms of computational costs.

Language Modelling · Large Language Model +1

MMVP: Motion-Matrix-based Video Prediction

1 code implementation ICCV 2023 Yiqi Zhong, Luming Liang, Ilya Zharkov, Ulrich Neumann

A central challenge of video prediction lies in requiring the system to reason about objects' future motions from image frames while simultaneously maintaining the consistency of their appearances across frames.

Motion Prediction · Video Prediction

Automated Search-Space Generation Neural Architecture Search

1 code implementation 25 May 2023 Tianyi Chen, Luming Liang, Tianyu Ding, Ilya Zharkov

To search an optimal sub-network within a general deep neural network (DNN), existing neural architecture search (NAS) methods typically rely on handcrafting a search space beforehand.

Neural Architecture Search

OTOV2: Automatic, Generic, User-Friendly

1 code implementation 13 Mar 2023 Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, Ilya Zharkov

We propose the second generation of Only-Train-Once (OTOv2), which first automatically trains and compresses a general DNN only once from scratch to produce a more compact model with competitive performance without fine-tuning.

Model Compression

Sparsity-guided Network Design for Frame Interpolation

1 code implementation 9 Sep 2022 Tianyu Ding, Luming Liang, Zhihui Zhu, Tianyi Chen, Ilya Zharkov

As a result, we achieve a considerable performance gain with a quarter of the size of the original AdaCoF.

RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution

1 code implementation CVPR 2022 Zhicheng Geng, Luming Liang, Tianyu Ding, Ilya Zharkov

Space-time video super-resolution (STVSR) is the task of interpolating videos with both Low Frame Rate (LFR) and Low Resolution (LR) to produce High-Frame-Rate (HFR) and also High-Resolution (HR) counterparts.

Space-time Video Super-resolution · Video Frame Interpolation +1

Only Train Once: A One-Shot Neural Network Training And Pruning Framework

1 code implementation NeurIPS 2021 Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu

Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices.

CDFI: Compression-Driven Network Design for Frame Interpolation

1 code implementation CVPR 2021 Tianyu Ding, Luming Liang, Zhihui Zhu, Ilya Zharkov

DNN-based frame interpolation, which generates the intermediate frames given two consecutive frames, typically relies on heavy model architectures with a huge number of features, preventing their deployment on systems with limited resources, e.g., mobile devices.

Ranked #1 on Video Frame Interpolation on Middlebury (LPIPS metric)

Video Frame Interpolation
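The CDFI description above defines the frame-interpolation task itself: two consecutive frames in, an intermediate frame out. As a point of contrast with learned models like AdaCoF/CDFI, which predict spatially adaptive kernels and offsets per output pixel, the trivial baseline is a per-pixel linear blend. The helper name below is illustrative, not from the paper's code.

```python
import numpy as np

def linear_blend(frame_a, frame_b, t=0.5):
    """Trivial frame-interpolation baseline: per-pixel linear blend.

    Captures only the task interface (two frames -> one intermediate
    frame); it cannot model motion, which is why learned interpolators
    are needed at all.
    """
    return (1.0 - t) * frame_a + t * frame_b

a = np.zeros((4, 4))   # dark frame
b = np.ones((4, 4))    # bright frame
mid = linear_blend(a, b)          # halfway blend
quarter = linear_blend(a, b, t=0.25)
```

Any moving object simply ghosts under this baseline, since the blend averages appearances instead of transporting pixels along motion.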

DRD-Net: Detail-recovery Image Deraining via Context Aggregation Networks

1 code implementation 27 Aug 2019 Sen Deng, Mingqiang Wei, Jun Wang, Luming Liang, Haoran Xie, Meng Wang

We have validated our approach on four recognized datasets (three synthetic and one real-world).

Rain Removal

Convolutional Neural Network with Median Layers for Denoising Salt-and-Pepper Contaminations

1 code implementation 18 Aug 2019 Luming Liang, Sen Deng, Lionel Gueguen, Mingqiang Wei, Xinming Wu, Jing Qin

We propose a deep fully convolutional neural network with a new type of layer, named median layer, to restore images contaminated by the salt-and-pepper (s&p) noise.

Salt-And-Pepper Noise Removal
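The abstract above introduces a parameter-free "median layer" for restoring salt-and-pepper corruption. A minimal numpy sketch of such a layer (a sliding-window median over a 2-D feature map) is below; the real model interleaves layers like this with learned convolutions, which this sketch omits.

```python
import numpy as np

def median_layer(x, k=3):
    """Parameter-free median layer: k x k sliding-window median.

    Edge-pads the input so the output keeps the spatial size,
    mirroring how a 'same'-padded conv layer behaves.
    """
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    # Stack all k*k shifted views, then take the per-pixel median.
    windows = np.stack(
        [xp[i:i + x.shape[0], j:j + x.shape[1]]
         for i in range(k) for j in range(k)]
    )
    return np.median(windows, axis=0)

# Isolated salt-and-pepper outliers vanish in a single pass:
img = np.full((8, 8), 0.5)
img[2, 3], img[5, 6] = 0.0, 1.0   # one pepper pixel, one salt pixel
restored = median_layer(img)       # back to the constant 0.5 image
```

Because the median is a rank statistic, a single extreme value in a 3x3 window never survives, which is exactly why median filtering suits impulse noise better than averaging.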
