Search Results for author: Tianyu Ding

Found 24 papers, 17 papers with code

ONNXPruner: ONNX-Based General Model Pruning Adapter

no code implementations • 10 Apr 2024 • Dongdong Ren, Wenbin Li, Tianyu Ding, Lei Wang, Qi Fan, Jing Huo, Hongbing Pan, Yang Gao

However, the practical application of these algorithms across various models and platforms remains a significant challenge.
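
Such an adapter works at the level of the exported computation graph rather than any single framework's modules. As a rough illustration (not ONNXPruner's algorithm), the real onnx package exposes the graph nodes and weight tensors such a tool would traverse; the model path below is a placeholder.

```python
# Sketch: inspecting an exported ONNX graph, the framework-agnostic level at
# which an ONNX-based pruning adapter operates ("model.onnx" is a placeholder).
import onnx

model = onnx.load("model.onnx")
# Each node is a framework-agnostic operator (Conv, Gemm, Relu, ...).
for node in model.graph.node:
    print(node.op_type, node.name, list(node.input), list(node.output))
# Weights live in graph.initializer; pruning rewrites these tensors.
for init in model.graph.initializer:
    print(init.name, list(init.dims))
```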

Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation

1 code implementation • 31 Mar 2024 • Wenxiao Deng, Wenbin Li, Tianyu Ding, Lei Wang, Hongguang Zhang, Kuihua Huang, Jing Huo, Yang Gao

However, these methods face two primary limitations: dispersed feature distributions within the same class of synthetic datasets, which reduce class discrimination, and an exclusive focus on mean feature consistency, which lacks precision and comprehensiveness.
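
The criticized baseline, matching only per-class feature means between real and synthetic batches, can be written in a few lines. A minimal sketch with a hypothetical `encoder`; the paper's point is that this objective alone says nothing about within-class spread or inter-sample relations.

```python
# Hedged sketch of the mean-feature-consistency objective the excerpt
# criticizes: match per-class feature means of real and synthetic data.
# `encoder` is a hypothetical feature extractor.
import torch

def mean_feature_loss(encoder, real_x, syn_x):
    # real_x, syn_x: batches of images from the SAME class
    f_real = encoder(real_x).mean(dim=0)   # mean feature of real samples
    f_syn = encoder(syn_x).mean(dim=0)     # mean feature of synthetic samples
    return torch.sum((f_real - f_syn) ** 2)
```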

OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators

1 code implementation • 15 Dec 2023 • Tianyi Chen, Tianyu Ding, Zhihui Zhu, Zeyu Chen, HsiangTao Wu, Ilya Zharkov, Luming Liang

Compressing a predefined deep neural network (DNN) into a compact sub-network with competitive performance is crucial in the efficient machine learning realm.

Neural Architecture Search

The Efficiency Spectrum of Large Language Models: An Algorithmic Survey

1 code implementation • 1 Dec 2023 • Tianyu Ding, Tianyi Chen, Haidong Zhu, Jiachen Jiang, Yiqi Zhong, Jinxin Zhou, Guangzhi Wang, Zhihui Zhu, Ilya Zharkov, Luming Liang

The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains, reshaping the artificial general intelligence landscape.

Model Compression

DREAM: Diffusion Rectification and Estimation-Adaptive Models

1 code implementation • 30 Nov 2023 • Jinxin Zhou, Tianyu Ding, Tianyi Chen, Jiachen Jiang, Ilya Zharkov, Zhihui Zhu, Luming Liang

We present DREAM, a novel training framework representing Diffusion Rectification and Estimation Adaptive Models, requiring minimal code changes (just three lines) yet significantly enhancing the alignment of training with sampling in diffusion models.

Image Super-Resolution
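
The abstract's three-line change can be pictured as feeding the model's own (detached) noise estimate back into both the noisy input and the regression target. The sketch below is an assumption-laden paraphrase of that idea: the blend weight `lam`, its schedule, and the `model(x_t, t)` signature are placeholders, not the paper's exact formulation.

```python
# Hedged sketch of the diffusion-rectification idea from the abstract: fold the
# network's own (detached) noise estimate back into the noisy input and the
# regression target so training better matches sampling. `lam` and its schedule
# are placeholders, not DREAM's exact formulation.
import torch

def dream_step(model, x0, t, alpha_bar, lam=0.5):
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    # Standard forward noising
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    with torch.no_grad():                           # extra line 1: self-estimate
        eps_hat = model(x_t, t)
    eps_mix = eps + lam * eps_hat                   # extra line 2: rectified noise
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps_mix  # extra line 3: re-noise
    return ((model(x_t, t) - eps_mix) ** 2).mean()  # regress onto mixed target
```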

CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering

1 code implementation • 27 Nov 2023 • Haidong Zhu, Tianyu Ding, Tianyi Chen, Ilya Zharkov, Ram Nevatia, Luming Liang

CaesarNeRF explicitly models pose differences of reference views to combine scene-level semantic representations, providing a calibrated holistic understanding.

Few-Shot Learning • Neural Rendering
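
Explicitly modeling pose differences typically starts from the relative transform between reference cameras. Below is a generic sketch of that computation (standard multi-view geometry, not CaesarNeRF's pipeline), with 4x4 camera-to-world matrices.

```python
# Standard relative-pose computation between two reference views, the kind of
# pose difference the abstract says is modeled explicitly. Generic sketch only.
import numpy as np

def relative_pose(c2w_i, c2w_j):
    # Transform mapping camera-j coordinates into camera-i coordinates
    return np.linalg.inv(c2w_i) @ c2w_j

pose_i = np.eye(4)
pose_j = np.eye(4); pose_j[:3, 3] = [0.1, 0.0, 0.0]  # camera shifted 0.1 along x
print(relative_pose(pose_i, pose_j))
```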

LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery

1 code implementation • 24 Oct 2023 • Tianyi Chen, Tianyu Ding, Badal Yadav, Ilya Zharkov, Luming Liang

Large Language Models (LLMs) have transformed the landscape of artificial intelligence, while their enormous size presents significant challenges in terms of computational costs.

Language Modelling • Large Language Model +1

InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules

1 code implementation • 26 Aug 2023 • Yanqi Bao, Tianyu Ding, Jing Huo, Wenbin Li, Yuxin Li, Yang Gao

By utilizing multiple plug-and-play HyperNet modules, InsertNeRF dynamically tailors NeRF's weights to specific reference scenes, transforming multi-scale sampling-aware features into scene-specific representations.
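
The HyperNet pattern behind such plug-and-play modules is a small network that emits the weights of a host layer from a scene embedding, re-parameterizing the host per reference scene. A minimal, self-contained sketch of that pattern, not InsertNeRF's actual module design:

```python
# Generic hypernetwork: a weight generator produces the parameters of a target
# linear layer from a scene embedding. Sketch of the pattern only.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, emb_dim, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.weight_gen = nn.Linear(emb_dim, in_dim * out_dim + out_dim)

    def forward(self, x, scene_emb):
        params = self.weight_gen(scene_emb)
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim :]
        return x @ W.t() + b

layer = HyperLinear(emb_dim=16, in_dim=8, out_dim=4)
out = layer(torch.randn(2, 8), torch.randn(16))   # weights depend on the scene
```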

Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs

1 code implementation • 5 Aug 2023 • Yanqi Bao, Yuxin Li, Jing Huo, Tianyu Ding, Xinyue Liang, Wenbin Li, Yang Gao

Neural Radiance Fields from Sparse inputs (NeRF-S) have shown great potential in synthesizing novel views with a limited number of observed viewpoints.

Attribute

Automated Search-Space Generation Neural Architecture Search

1 code implementation • 25 May 2023 • Tianyi Chen, Luming Liang, Tianyu Ding, Ilya Zharkov

To search for an optimal sub-network within a general deep neural network (DNN), existing neural architecture search (NAS) methods typically rely on handcrafting a search space beforehand.

Neural Architecture Search

OTOV2: Automatic, Generic, User-Friendly

1 code implementation • 13 Mar 2023 • Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, Ilya Zharkov

We propose the second generation of Only-Train-Once (OTOv2), the first to automatically train and compress a general DNN only once from scratch, producing a more compact model with competitive performance and no fine-tuning.

Model Compression
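
The zero-invariant-group idea underlying Only-Train-Once can be illustrated on a single convolution: every parameter tied to one output channel is zeroed jointly, so the channel can later be erased without fine-tuning. A toy sketch of that structured zeroing, not OTOv2's automated graph partitioning or its training algorithm:

```python
# Toy structured zeroing: rank each conv output channel's group (filter + bias)
# by norm and zero the weakest groups jointly, so they become removable.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, bias=True)
# l2 norm of each group: one output filter plus its bias term
norms = (conv.weight.flatten(1).pow(2).sum(1) + conv.bias.pow(2)).sqrt()
prune = norms.argsort()[:4]            # zero the 4 weakest groups
with torch.no_grad():
    conv.weight[prune] = 0.0
    conv.bias[prune] = 0.0
# Channels in `prune` now contribute nothing and can be structurally removed.
```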

Sparsity-guided Network Design for Frame Interpolation

1 code implementation • 9 Sep 2022 • Tianyu Ding, Luming Liang, Zhihui Zhu, Tianyi Chen, Ilya Zharkov

As a result, we achieve a considerable performance gain with a quarter of the size of the original AdaCoF.

RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution

1 code implementation • CVPR 2022 • Zhicheng Geng, Luming Liang, Tianyu Ding, Ilya Zharkov

Space-time video super-resolution (STVSR) is the task of interpolating videos with both Low Frame Rate (LFR) and Low Resolution (LR) to produce High Frame Rate (HFR) and High Resolution (HR) counterparts.

Space-time Video Super-resolution • Video Frame Interpolation +1
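
The task signature can be shown with the trivial baseline that a learned STVSR model is designed to beat: naive trilinear upsampling in both time and space. This only illustrates the input/output shapes, not RSTT's transformer.

```python
# Naive STVSR baseline: trilinear interpolation doubles the frame count and
# quadruples the spatial resolution of a clip. Shape illustration only.
import torch
import torch.nn.functional as F

video = torch.randn(1, 3, 8, 64, 64)   # (batch, channels, frames, H, W)
out = F.interpolate(video, scale_factor=(2, 4, 4),
                    mode="trilinear", align_corners=False)
print(out.shape)                        # torch.Size([1, 3, 16, 256, 256])
```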

On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

no code implementations • 2 Mar 2022 • Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, Zhihui Zhu

When training deep neural networks for classification tasks, an intriguing empirical phenomenon has been widely observed in the last-layer classifiers and features, where (i) the class means and the last-layer classifiers all collapse to the vertices of a Simplex Equiangular Tight Frame (ETF) up to scaling, and (ii) cross-example within-class variability of last-layer activations collapses to zero.
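
For reference, the Simplex ETF in (i) has a standard closed form: for K classes, the (rescaled, rotated) class means align with the columns of M below, which have equal norms and pairwise inner products of -1/(K-1).

```latex
% Standard K-simplex Equiangular Tight Frame: P has K orthonormal columns in
% R^d, and the K columns of M have equal norms with pairwise inner products
% equal to -1/(K-1).
M \;=\; \sqrt{\frac{K}{K-1}}\; P \Bigl( I_K - \frac{1}{K}\,\mathbf{1}_K \mathbf{1}_K^{\top} \Bigr),
\qquad P \in \mathbb{R}^{d \times K},\quad P^{\top} P = I_K .
```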

Only Train Once: A One-Shot Neural Network Training And Pruning Framework

1 code implementation • NeurIPS 2021 • Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, Xiao Tu

Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices.

A Geometric Analysis of Neural Collapse with Unconstrained Features

1 code implementation • NeurIPS 2021 • Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu

In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kind of features are learned in the last layer, but also shows why they can be efficiently optimized in the simplified settings, matching the empirical observations in practical deep network architectures.

CDFI: Compression-Driven Network Design for Frame Interpolation

1 code implementation • CVPR 2021 • Tianyu Ding, Luming Liang, Zhihui Zhu, Ilya Zharkov

DNN-based frame interpolation, which generates the intermediate frames given two consecutive frames, typically relies on heavy model architectures with a huge number of features, preventing them from being deployed on systems with limited resources, e.g., mobile devices.

Ranked #1 on Video Frame Interpolation on Middlebury (LPIPS metric)

Video Frame Interpolation

A Half-Space Stochastic Projected Gradient Method for Group Sparsity Regularization

no code implementations • 1 Jan 2021 • Tianyi Chen, Guanyi Wang, Tianyu Ding, Bo Ji, Sheng Yi, Zhihui Zhu

Optimizing with group sparsity is significant in enhancing model interpretability in machine learning applications, e.g., feature selection, compressed sensing and model compression.

feature selection • Model Compression +1
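
For context, the standard proximal operator for group-sparse regularization is group soft-thresholding: a whole group shrinks and vanishes once its norm falls below the threshold. HSPG pursues group sparsity with half-space projection steps instead; the sketch below shows only the classical baseline mapping.

```python
# Group soft-thresholding: the proximal operator of the group-lasso penalty.
# A group is scaled toward zero and set exactly to zero below the threshold.
import torch

def group_soft_threshold(g, lam):
    norm = g.norm()
    scale = torch.clamp(1.0 - lam / norm, min=0.0) if norm > 0 else 0.0
    return scale * g

g = torch.tensor([0.3, -0.4])          # one parameter group, norm 0.5
print(group_soft_threshold(g, 0.2))    # shrunk: scale = 1 - 0.2/0.5 = 0.6
print(group_soft_threshold(g, 0.7))    # zeroed: norm below threshold
```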

Neural Network Compression Via Sparse Optimization

no code implementations • 10 Nov 2020 • Tianyi Chen, Bo Ji, Yixin Shi, Tianyu Ding, Biyi Fang, Sheng Yi, Xiao Tu

The compression of deep neural networks (DNNs) to reduce inference cost is becoming increasingly important for meeting the realistic deployment requirements of various applications.

Neural Network Compression • Stochastic Optimization

Early Detection of Sepsis using Ensemblers

no code implementations • 20 Oct 2020 • Shailesh Nirgudkar, Tianyu Ding

This paper describes a methodology to detect sepsis ahead of time by analyzing hourly patient records.

Imputation
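
The Imputation tag suggests filling gaps in the hourly records before feature extraction; forward-filling the last observed value is a common baseline for vital signs. A sketch with pandas; the column names are hypothetical, not the paper's schema.

```python
# Forward-fill imputation of hourly patient records: carry the last observed
# measurement forward through each gap. Hypothetical columns, real pandas API.
import pandas as pd
import numpy as np

records = pd.DataFrame(
    {"heart_rate": [80.0, np.nan, np.nan, 95.0],
     "temp_c": [36.6, 37.1, np.nan, np.nan]},
    index=pd.date_range("2020-01-01", periods=4, freq="h"),
)
print(records.ffill())
```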

Orthant Based Proximal Stochastic Gradient Method for $\ell_1$-Regularized Optimization

1 code implementation • 7 Apr 2020 • Tianyi Chen, Tianyu Ding, Bo Ji, Guanyi Wang, Jing Tian, Yixin Shi, Sheng Yi, Xiao Tu, Zhihui Zhu

Sparsity-inducing regularization problems are ubiquitous in machine learning applications, ranging from feature selection to model compression.

feature selection • Model Compression
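
The basic building block of proximal stochastic gradient methods for $\ell_1$-regularized problems is the soft-thresholding operator; the paper's orthant-based step is a refinement that promotes sparser iterates than this baseline typically achieves under stochastic gradients.

```python
# Soft-thresholding: the proximal operator of the l1 penalty, which shrinks
# each coordinate toward zero and truncates small ones exactly to zero.
import torch

def soft_threshold(x, lam):
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

x = torch.tensor([0.9, -0.05, 0.2])
print(soft_threshold(x, 0.1))   # tensor([ 0.8000, -0.0000,  0.1000])
```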
