Search Results for author: Hongxu Yin

Found 33 papers, 13 papers with code

LITA: Language Instructed Temporal-Localization Assistant

1 code implementation • 27 Mar 2024 • De-An Huang, Shijia Liao, Subhashree Radhakrishnan, Hongxu Yin, Pavlo Molchanov, Zhiding Yu, Jan Kautz

In addition to leveraging existing video datasets with timestamps, we propose a new task, Reasoning Temporal Localization (RTL), along with the dataset, ActivityNet-RTL, for learning and evaluating this task.

RegionGPT: Towards Region Understanding Vision Language Model

no code implementations • 4 Mar 2024 • Qiushan Guo, Shalini De Mello, Hongxu Yin, Wonmin Byeon, Ka Chun Cheung, Yizhou Yu, Ping Luo, Sifei Liu

Vision language models (VLMs) have advanced rapidly through the integration of large language models (LLMs) with image-text pairs, yet they struggle with detailed regional visual understanding due to the limited spatial awareness of the vision encoder and the use of coarse-grained training data that lacks detailed, region-specific captions.

Language Modelling

DoRA: Weight-Decomposed Low-Rank Adaptation

3 code implementations • 14 Feb 2024 • Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen

By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead.
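
DoRA decomposes a pretrained weight into a magnitude component and a direction component, fine-tuning the direction with a LoRA-style low-rank update while learning the magnitude directly. A minimal PyTorch sketch of the decomposition, assuming a linear layer and an illustrative rank of 8 (names, normalization axis, and initialization are simplifications, not the authors' implementation):

```python
# Hypothetical DoRA-style linear layer: frozen W0, low-rank direction update,
# learnable per-column magnitude.
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, weight: torch.Tensor, rank: int = 8):
        super().__init__()
        out_f, in_f = weight.shape
        self.weight = nn.Parameter(weight.clone(), requires_grad=False)  # frozen W0
        self.lora_A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_f, rank))
        # magnitude initialized to the column-wise norm of W0
        self.m = nn.Parameter(weight.norm(dim=0, keepdim=True))

    def forward(self, x):
        v = self.weight + self.lora_B @ self.lora_A   # direction V = W0 + BA
        v = v / v.norm(dim=0, keepdim=True)           # normalize each column
        return x @ (self.m * v).t()                   # rescale by magnitude
```

Because the magnitude and normalized direction can be folded back into a single dense weight after training, this kind of decomposition is consistent with the "no additional inference overhead" claim above.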

FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models

no code implementations • 2 Oct 2023 • Jingwei Sun, Ziyue Xu, Hongxu Yin, Dong Yang, Daguang Xu, Yiran Chen, Holger R. Roth

However, applying FL to finetune PLMs is hampered by challenges, including restricted model parameter access, high computational requirements, and communication overheads.

Federated Learning Privacy Preserving
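
FedBPT sidesteps restricted parameter access by tuning prompts with gradient-free optimization, so clients only need forward queries against the frozen PLM. A toy single-client sketch of black-box prompt tuning with a simple evolution strategy (the paper builds on CMA-ES plus federated aggregation, both omitted here; the objective below is a stand-in for querying the model):

```python
# Gradient-free prompt tuning sketch: the loss oracle is a placeholder for a
# forward-only query to a frozen PLM; no gradients are ever requested.
import numpy as np

def black_box_loss(prompt_vec):
    # stand-in objective; in practice: run the PLM with this soft prompt
    return float(np.sum((prompt_vec - 0.5) ** 2))

def es_prompt_tuning(dim=16, pop=8, sigma=0.1, lr=0.05, steps=100):
    prompt = np.zeros(dim)
    for _ in range(steps):
        noise = np.random.randn(pop, dim)
        losses = np.array([black_box_loss(prompt + sigma * n) for n in noise])
        # estimate a descent direction from loss-weighted perturbations
        advantage = (losses - losses.mean()) / (losses.std() + 1e-8)
        grad_est = (advantage[:, None] * noise).mean(axis=0) / sigma
        prompt -= lr * grad_est
    return prompt
```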

Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection

no code implementations • 29 Aug 2023 • Yazhou Xing, Amrita Mazumdar, Anjul Patney, Chao Liu, Hongxu Yin, Qifeng Chen, Jan Kautz, Iuri Frosio

We present a learning-based system to reduce these artifacts without resorting to complex acquisition mechanisms like alternating exposures or the costly processing typical of high dynamic range (HDR) imaging.

Hallucination

Adaptive Sharpness-Aware Pruning for Robust Sparse Networks

no code implementations • 25 Jun 2023 • Anna Bair, Hongxu Yin, Maying Shen, Pavlo Molchanov, Jose Alvarez

Robustness and compactness are two essential attributes of deep learning models that are deployed in the real world.

Image Classification Object Detection +2

Heterogeneous Continual Learning

no code implementations • CVPR 2023 • Divyam Madaan, Hongxu Yin, Wonmin Byeon, Jan Kautz, Pavlo Molchanov

We propose a novel framework and a solution to tackle the continual learning (CL) problem with changing network architectures.

Continual Learning Knowledge Distillation +1

Recurrence without Recurrence: Stable Video Landmark Detection with Deep Equilibrium Models

1 code implementation • CVPR 2023 • Paul Micaelli, Arash Vahdat, Hongxu Yin, Jan Kautz, Pavlo Molchanov

Our Landmark DEQ (LDEQ) achieves state-of-the-art performance on the challenging WFLW facial landmark dataset, reaching $3.92$ NME with fewer parameters and a training memory cost of $\mathcal{O}(1)$ in the number of recurrent modules.

Face Alignment
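
A deep equilibrium model defines its output implicitly as the fixed point $z^* = f(z^*, x)$, so recurrence comes from a solver rather than an unrolled stack of layers, which is why training memory stays $\mathcal{O}(1)$ in the number of recurrent modules. A toy fixed-point forward pass (the implicit-function-theorem backward pass that makes this trainable is omitted):

```python
# Minimal DEQ-style forward pass: iterate f until z stops changing.
import torch

def fixed_point(f, x, z0, iters=30, tol=1e-4):
    z = z0
    for _ in range(iters):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol:
            break
        z = z_next
    return z

# toy contraction: f(z, x) = 0.5*z + x has fixed point z* = 2x
f = lambda z, x: 0.5 * z + x
x = torch.tensor([1.0, -2.0])
z_star = fixed_point(f, x, torch.zeros_like(x))  # approx [2.0, -4.0]
```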

Structural Pruning via Latency-Saliency Knapsack

1 code implementation • 13 Oct 2022 • Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, Jose M. Alvarez

We propose Hardware-Aware Latency Pruning (HALP), which formulates structural pruning as a global resource allocation optimization problem, aiming to maximize accuracy while constraining latency under a predefined budget on the target device.
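
The knapsack view: each prunable unit contributes a saliency (importance) and a latency cost, and the goal is to keep the most salient set that fits the latency budget. A greedy ratio heuristic sketch of that trade-off (HALP's actual solver uses per-layer latency lookup tables and an augmented knapsack formulation; the numbers below are illustrative):

```python
# Greedy latency-saliency knapsack sketch: keep channels with the best
# saliency-per-millisecond until the latency budget is exhausted.
def greedy_latency_knapsack(channels, budget_ms):
    """channels: list of (saliency, latency_ms) for prunable channels."""
    order = sorted(range(len(channels)),
                   key=lambda i: channels[i][0] / max(channels[i][1], 1e-9),
                   reverse=True)
    kept, total = [], 0.0
    for i in order:
        sal, lat = channels[i]
        if total + lat <= budget_ms:
            kept.append(i)
            total += lat
    return kept  # indices of channels to keep; the rest are pruned

print(greedy_latency_knapsack([(0.9, 2.0), (0.5, 0.5), (0.2, 1.5)], budget_ms=2.5))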

Global Context Vision Transformers

8 code implementations • 20 Jun 2022 • Ali Hatamizadeh, Hongxu Yin, Greg Heinrich, Jan Kautz, Pavlo Molchanov

Pre-trained GC ViT backbones consistently outperform prior work on the downstream tasks of object detection and instance segmentation on MS COCO, and semantic segmentation on ADE20K.

Image Classification Inductive Bias +4

GradViT: Gradient Inversion of Vision Transformers

no code implementations • CVPR 2022 • Ali Hatamizadeh, Hongxu Yin, Holger Roth, Wenqi Li, Jan Kautz, Daguang Xu, Pavlo Molchanov

In this work we demonstrate the vulnerability of vision transformers (ViTs) to gradient-based inversion attacks.

Scheduling

A-ViT: Adaptive Tokens for Efficient Vision Transformer

1 code implementation • CVPR 2022 • Hongxu Yin, Arash Vahdat, Jose Alvarez, Arun Mallya, Jan Kautz, Pavlo Molchanov

A-ViT achieves this by automatically reducing the number of tokens processed in the network as inference proceeds.

Image Classification
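
Concretely, A-ViT attaches a halting mechanism to each token: tokens accumulate a halting score across blocks and stop being processed once the cumulative score crosses a threshold, in the spirit of adaptive computation time. A toy PyTorch sketch where the transformer blocks and halting heads are stand-in linear layers (shapes, the sigmoid head, and eps are illustrative, not the paper's exact formulation):

```python
# ACT-style token halting sketch: halted tokens are frozen and skip later blocks.
import torch
import torch.nn as nn

def adaptive_forward(blocks, halting_heads, tokens, eps=0.01):
    B, N, D = tokens.shape
    cum = torch.zeros(B, N)
    active = torch.ones(B, N, dtype=torch.bool)
    for block, head in zip(blocks, halting_heads):
        tokens = torch.where(active.unsqueeze(-1), block(tokens), tokens)
        cum = cum + torch.sigmoid(head(tokens)).squeeze(-1) * active
        active = active & (cum < 1 - eps)  # halted tokens skip later blocks
    return tokens

blocks = nn.ModuleList([nn.Linear(32, 32) for _ in range(4)])
heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(4)])
out = adaptive_forward(blocks, heads, torch.randn(2, 16, 32))
```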

When to Prune? A Policy towards Early Structural Pruning

no code implementations • CVPR 2022 • Maying Shen, Pavlo Molchanov, Hongxu Yin, Jose M. Alvarez

Through extensive experiments on ImageNet, we show that EPI enables quick identification of early training epochs suitable for pruning, offering the same efficacy as an otherwise ``oracle'' grid search that scans through epochs and requires orders of magnitude more compute.

Network Pruning

HALP: Hardware-Aware Latency Pruning

1 code implementation • 20 Oct 2021 • Maying Shen, Hongxu Yin, Pavlo Molchanov, Lei Mao, Jianna Liu, Jose M. Alvarez

We propose Hardware-Aware Latency Pruning (HALP), which formulates structural pruning as a global resource allocation optimization problem, aiming to maximize accuracy while constraining latency under a predefined budget.

Global Vision Transformer Pruning with Hessian-Aware Saliency

1 code implementation • CVPR 2023 • Huanrui Yang, Hongxu Yin, Maying Shen, Pavlo Molchanov, Hai Li, Jan Kautz

This work challenges the common design philosophy of the Vision Transformer (ViT) model, which uses a uniform dimension across all stacked blocks in a model stage: we redistribute parameters both across transformer blocks and between different structures within each block, via the first systematic attempt at global structural pruning.

Philosophy
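
One building block of a Hessian-aware saliency score is estimating curvature without ever forming the Hessian. A hedged sketch of a Hutchinson-style Hessian-trace estimator for one parameter tensor (the paper's full per-group scoring and redistribution scheme differ in detail):

```python
# Hutchinson estimator: E[v^T H v] = tr(H) for Rademacher-distributed v,
# computed with two backward passes and no explicit Hessian.
import torch

def hessian_trace_estimate(loss, param, samples=8):
    grad = torch.autograd.grad(loss, param, create_graph=True)[0]
    trace = 0.0
    for _ in range(samples):
        v = (torch.rand_like(param) < 0.5).to(param.dtype) * 2 - 1  # Rademacher +-1
        hv = torch.autograd.grad(grad, param, grad_outputs=v, retain_graph=True)[0]
        trace += (v * hv).sum().item()   # v^T (H v)
    return trace / samples
```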

Hardware-Aware Network Transformation

no code implementations • 29 Sep 2021 • Pavlo Molchanov, Jimmy Hall, Hongxu Yin, Jan Kautz, Nicolo Fusi, Arash Vahdat

In the second phase, it solves the combinatorial selection of efficient operations using a novel constrained integer linear optimization approach.

Neural Architecture Search
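
The selection step amounts to: pick exactly one candidate operation per layer to maximize a predicted accuracy score subject to a total latency budget. The paper solves this as a constrained integer linear program; a small dynamic program over a discretized budget, sketched below with made-up scores and latencies, illustrates the same combinatorial structure:

```python
# DP sketch of one-op-per-layer selection under a latency budget.
def select_ops(layers, budget_ms, step=0.1):
    """layers: for each layer, a list of (score, latency_ms) candidate ops."""
    slots = int(budget_ms / step) + 1
    best = [float("-inf")] * slots
    best[0] = 0.0
    for ops in layers:
        nxt = [float("-inf")] * slots
        for used, val in enumerate(best):
            if val == float("-inf"):
                continue
            for score, lat in ops:
                u2 = used + int(round(lat / step))
                if u2 < slots:
                    nxt[u2] = max(nxt[u2], val + score)
        best = nxt  # every layer must commit to exactly one op
    return max(best)

print(select_ops([[(1.0, 0.5), (0.6, 0.2)], [(0.8, 0.4), (0.3, 0.1)]], 0.7))
```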

Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks

no code implementations • 13 Jul 2021 • Xin Dong, Hongxu Yin, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov, H. T. Kung

Prior works usually assume that SC offers privacy benefits, as only intermediate features, rather than private data, are shared from devices to the cloud.

LANA: Latency Aware Network Acceleration

no code implementations • 12 Jul 2021 • Pavlo Molchanov, Jimmy Hall, Hongxu Yin, Jan Kautz, Nicolo Fusi, Arash Vahdat

We analyze three popular network architectures: EfficientNetV1, EfficientNetV2 and ResNeST, and achieve accuracy improvement for all models (up to $3.0\%$) when compressing larger models to the latency level of smaller models.

Neural Architecture Search Quantization

Optimal Quantization Using Scaled Codebook

no code implementations • CVPR 2021 • Yerlan Idelbayev, Pavlo Molchanov, Maying Shen, Hongxu Yin, Miguel A. Carreira-Perpinan, Jose M. Alvarez

We study the problem of quantizing N sorted, scalar datapoints with a fixed codebook containing K entries that are allowed to be rescaled.

Quantization
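
The objective is to map $N$ scalars onto a fixed $K$-entry codebook that may be rescaled by a single factor $s$, minimizing squared error. The paper gives an algorithm for the globally optimal solution; the alternating heuristic below (nearest-entry assignment, then the closed-form least-squares scale) only illustrates the objective and finds a local optimum:

```python
# Alternating minimization sketch for quantization with a rescalable codebook.
import numpy as np

def scaled_codebook_quantize(x, codebook, iters=20):
    c = np.asarray(codebook, dtype=float)
    s = 1.0
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - s * c[None, :]), axis=1)  # assignment
        denom = np.sum(c[idx] ** 2)
        if denom == 0:
            break
        s = np.sum(x * c[idx]) / denom  # optimal scale for a fixed assignment
    return s, s * c[idx]

x = np.array([0.1, 0.4, 0.35, 0.9])
print(scaled_codebook_quantize(x, [-1.0, 0.0, 0.5, 1.0]))
```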

See through Gradients: Image Batch Recovery via GradInversion

2 code implementations • CVPR 2021 • Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov

In this work, we introduce GradInversion, with which input images from a larger batch (8-48 images) can be recovered for large networks such as ResNets (50 layers) on complex datasets such as ImageNet (1000 classes, 224x224 px).

Federated Learning Inference Attack +1
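
At its core, gradient inversion optimizes randomly initialized dummy inputs so that the gradients they induce match the captured ones. A bare-bones PyTorch sketch of that loop (GradInversion's actual contributions, batch-norm priors, label restoration, and group consistency regularization, are omitted; all names here are illustrative):

```python
# Basic gradient-matching inversion loop: only the dummy inputs x are optimized.
import torch

def invert(model, loss_fn, target_grads, labels, shape, steps=200, lr=0.1):
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(x), labels),
                                    model.parameters(), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()  # differentiate the gradient mismatch w.r.t. x
        opt.step()
    return x.detach()
```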

MHDeep: Mental Health Disorder Detection System based on Body-Area and Deep Neural Networks

no code implementations • 20 Feb 2021 • Shayan Hassantabar, Joe Zhang, Hongxu Yin, Niraj K. Jha

At the patient level, MHDeep DNNs achieve an accuracy of 100%, 100%, and 90.0% for the three mental health disorders, respectively.

Synthetic Data Generation

Fully Dynamic Inference with Deep Neural Networks

no code implementations • 29 Jul 2020 • Wenhan Xia, Hongxu Yin, Xiaoliang Dai, Niraj K. Jha

Modern deep neural networks are powerful and widely applicable models that extract task-relevant information through multi-level abstraction.

Computational Efficiency Self-Driving Cars

Efficient Synthesis of Compact Deep Neural Networks

no code implementations • 18 Apr 2020 • Wenhan Xia, Hongxu Yin, Niraj K. Jha

These large, deep models are often unsuitable for real-world applications due to their massive computational cost, high memory bandwidth requirements, and long latency.

Autonomous Driving

DiabDeep: Pervasive Diabetes Diagnosis based on Wearable Medical Sensors and Efficient Neural Networks

no code implementations • 11 Oct 2019 • Hongxu Yin, Bilal Mukadam, Xiaoliang Dai, Niraj K. Jha

For server (edge) side inference, we achieve a 96.3% (95.3%) accuracy in classifying diabetics against healthy individuals, and a 95.7% (94.6%) accuracy in distinguishing among type-1 diabetic, type-2 diabetic, and healthy individuals.

Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

no code implementations • 27 May 2019 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

Deep neural networks (DNNs) have become a widely deployed model for numerous machine learning applications.

Incremental Learning

Hardware-Guided Symbiotic Training for Compact, Accurate, yet Execution-Efficient LSTM

no code implementations • 30 Jan 2019 • Hongxu Yin, Guoyang Chen, Yingmin Li, Shuai Che, Weifeng Zhang, Niraj K. Jha

In this work, we propose a hardware-guided symbiotic training methodology for compact, accurate, yet execution-efficient inference models.

Language Modelling Neural Network Compression +2

ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation

1 code implementation • CVPR 2019 • Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Hongxu Yin, Fei Sun, Yanghan Wang, Marat Dukhan, Yunqing Hu, Yiming Wu, Yangqing Jia, Peter Vajda, Matt Uyttendaele, Niraj K. Jha

We formulate platform-aware NN architecture search in an optimization framework and propose a novel algorithm to search for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors.

Bayesian Optimization Efficient Neural Network +1
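
The adaptation loop never trains candidate networks; it scores them with cheap predictors. A hedged sketch with stand-in predictors and plain random search (ChamNet itself uses predictors learned offline, such as a Gaussian-process accuracy predictor, together with a more sophisticated search):

```python
# Predictor-driven architecture adaptation sketch: filter by a resource
# predictor, rank by an accuracy predictor, never train candidates.
import random

def search(candidates, acc_predictor, lat_predictor, budget_ms, trials=1000):
    best, best_acc = None, float("-inf")
    for _ in range(trials):
        arch = {k: random.choice(v) for k, v in candidates.items()}
        if lat_predictor(arch) > budget_ms:
            continue  # resource constraint filters before any scoring
        acc = acc_predictor(arch)
        if acc > best_acc:
            best, best_acc = arch, acc
    return best, best_acc
```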

Grow and Prune Compact, Fast, and Accurate LSTMs

no code implementations • 30 May 2018 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

To address these problems, we propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to the LSTM's original one-level non-linear control gates.

Image Captioning Speech Recognition +1
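
A minimal sketch of the H-LSTM idea described above: each control gate becomes a small MLP (an extra hidden level) instead of a single affine map. Sizes and activations here are illustrative, not the paper's configuration:

```python
# H-LSTM-style cell sketch: gates computed by two-layer MLPs.
import torch
import torch.nn as nn

class HLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        def gate():
            return nn.Sequential(
                nn.Linear(input_size + hidden_size, hidden_size), nn.ReLU(),
                nn.Linear(hidden_size, hidden_size))  # extra level vs. vanilla LSTM
        self.f, self.i, self.o, self.g = gate(), gate(), gate(), gate()

    def forward(self, x, state):
        h, c = state
        z = torch.cat([x, h], dim=-1)
        c = torch.sigmoid(self.f(z)) * c + torch.sigmoid(self.i(z)) * torch.tanh(self.g(z))
        h = torch.sigmoid(self.o(z)) * torch.tanh(c)
        return h, (h, c)

cell = HLSTMCell(10, 20)
h = c = torch.zeros(1, 20)
y, (h, c) = cell(torch.randn(1, 10), (h, c))
```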

NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm

no code implementations • 6 Nov 2017 • Xiaoliang Dai, Hongxu Yin, Niraj K. Jha

To address these problems, we introduce a network growth algorithm that complements network pruning to learn both weights and compact DNN architectures during training.

Network Pruning
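
The growth half of the grow-and-prune paradigm can be sketched as: among currently inactive (masked) connections, activate those whose loss gradient magnitude is largest, since enabling them would reduce the loss fastest. A hedged PyTorch sketch (the mask convention and growth fraction are illustrative, not NeST's exact policy):

```python
# Gradient-based connection growth sketch, the complement to magnitude pruning.
import torch

def grow_connections(mask, grad, grow_frac=0.01):
    """Activate the top-|gradient| fraction of currently masked-out weights."""
    dormant = (mask == 0)
    k = max(1, int(grow_frac * int(dormant.sum())))
    scores = grad.abs() * dormant            # only dormant positions compete
    idx = torch.topk(scores.view(-1), k).indices
    mask.view(-1)[idx] = 1.0                 # grown weights start training from zero
    return mask
```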
