Search Results for author: Yuhang Li

Found 56 papers, 23 papers with code

Graph Neural Networks for Wireless Networks: Graph Representation, Architecture and Evaluation

no code implementations • 18 Apr 2024 • Yang Lu, Yuhang Li, Ruichen Zhang, Wei Chen, Bo Ai, Dusit Niyato

Graph neural networks (GNNs) have been regarded as a fundamental model for enabling deep learning (DL) to revolutionize resource allocation in wireless networks.

Boosting Visual Recognition for Autonomous Driving in Real-world Degradations with Deep Channel Prior

1 code implementation • 2 Apr 2024 • Zhanwen Liu, Yuhang Li, Yang Wang, Bolin Gao, Yisheng An, Xiangmo Zhao

The environmental perception of autonomous vehicles under normal conditions has achieved considerable success in the past decade.

Autonomous Driving

Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization

no code implementations • 28 Mar 2024 • Yuhang Li, Xin Dong, Chen Chen, Jingtao Li, Yuxin Wen, Michael Spranger, Lingjuan Lyu

Synthetic image data generation represents a promising avenue for training deep learning models, particularly in the realm of transfer learning, where obtaining real images within a specific domain can be prohibitively expensive due to privacy and intellectual property considerations.

Transfer Learning

Multiplane Quantitative Phase Imaging Using a Wavelength-Multiplexed Diffractive Optical Processor

no code implementations • 16 Mar 2024 • Che-Yung Shen, Jingxi Li, Tianyi Gan, Yuhang Li, Langxing Bai, Mona Jarrahi, Aydogan Ozcan

These wavelength-multiplexed patterns are projected onto a single field-of-view (FOV) at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor.

One-stage Prompt-based Continual Learning

no code implementations • 25 Feb 2024 • Youngeun Kim, Yuhang Li, Priyadarshini Panda

With the QR loss, our approach maintains a ~50% computational cost reduction during inference and outperforms prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks, including CIFAR-100, ImageNet-R, and DomainNet.

Continual Learning

Multiplexed all-optical permutation operations using a reconfigurable diffractive optical network

no code implementations • 4 Feb 2024 • Guangdong Ma, Xilin Yang, Bijie Bai, Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Yijie Zhang, Yuzhu Li, Mona Jarrahi, Aydogan Ozcan

We demonstrated the feasibility of this reconfigurable multiplexed diffractive design by approximating 256 randomly selected permutation matrices using K=4 rotatable diffractive layers.

All-optical complex field imaging using diffractive processors

no code implementations • 30 Jan 2024 • Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Mona Jarrahi, Aydogan Ozcan

Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing.

Image Reconstruction

Subwavelength Imaging using a Solid-Immersion Diffractive Optical Processor

no code implementations • 17 Jan 2024 • Jingtian Hu, Kun Liao, Niyazi Ulas Dinc, Carlo Gigli, Bijie Bai, Tianyi Gan, Xurong Li, Hanlong Chen, Xilin Yang, Yuhang Li, Cagatay Isil, Md Sadman Sakib Rahman, Jingxi Li, Xiaoyong Hu, Mona Jarrahi, Demetri Psaltis, Aydogan Ozcan

To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air.

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations.

Tensor Decomposition

Information hiding cameras: optical concealment of object information into ordinary images

no code implementations • 15 Jan 2024 • Bijie Bai, Ryan Lee, Yuhang Li, Tianyi Gan, Yuntian Wang, Mona Jarrahi, Aydogan Ozcan

This information hiding transformation is valid for infinitely many combinations of secret messages, all of which are transformed into ordinary-looking output patterns, achieved all-optically through passive light-matter interactions within the optical processor.

GenQ: Quantization in Low Data Regimes with Generative Synthetic Data

no code implementations • 7 Dec 2023 • Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda

In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.

Computational Efficiency · Quantization +1

Rethinking Skip Connections in Spiking Neural Networks with Time-To-First-Spike Coding

no code implementations • 1 Dec 2023 • Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda

In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.
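
For readers unfamiliar with TTFS coding, the sketch below is a minimal, hypothetical illustration of time-to-first-spike encoding, where larger input intensities fire earlier; the linear latency rule and parameter names are illustrative assumptions, not this paper's exact scheme.

```python
import numpy as np

def ttfs_encode(x, num_steps=16):
    """Illustrative TTFS coding: each input value fires exactly once,
    with higher intensities mapped to earlier spike times."""
    x = np.clip(x, 0.0, 1.0)
    spike_times = np.round((1.0 - x) * (num_steps - 1)).astype(int)
    spikes = np.zeros((num_steps,) + x.shape)
    for t in range(num_steps):
        spikes[t][spike_times == t] = 1.0
    return spikes

spikes = ttfs_encode(np.array([0.9, 0.2, 0.5]))
print(spikes.argmax(axis=0))  # one spike per input; larger values fire earlier
```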

Improving the Generation Quality of Watermarked Large Language Models via Word Importance Scoring

no code implementations • 16 Nov 2023 • Yuhang Li, Yihan Wang, Zhouxing Shi, Cho-Jui Hsieh

In this work, we propose Watermarking with Importance Scoring (WIS) to improve the quality of text generated by a watermarked language model.

Language Modelling
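
Since only the method name appears above, here is a heavily hedged sketch of the general idea the title suggests: apply a green-list-style logit bias only at positions judged unimportant, so that important tokens are generated unwatermarked. The `importance` score, threshold, and all parameters are unspecified assumptions, not the authors' algorithm.

```python
import torch

def watermarked_logits(logits, green_mask, importance, delta=2.0, tau=0.5):
    """Illustrative importance-gated watermarking: bias the logits of
    'green-list' tokens only when the next-token position is deemed
    unimportant, leaving important positions unbiased to preserve quality."""
    if importance < tau:              # low-importance position: watermark
        return logits + delta * green_mask
    return logits                     # important position: leave untouched

vocab = 8
logits = torch.randn(vocab)
green_mask = (torch.arange(vocab) % 2 == 0).float()  # toy green list
print(watermarked_logits(logits, green_mask, importance=0.2))
```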

GNN-Based Beamforming for Sum-Rate Maximization in MU-MISO Networks

no code implementations • 7 Nov 2023 • Yuhang Li, Yang Lu, Bo Ai, Octavia A. Dobre, Zhiguo Ding, Dusit Niyato

This paper studies the GNN-based learning approach for the sum-rate maximization in multiple-user multiple-input single-output (MU-MISO) networks subject to the users' individual data rate requirements and the power budget of the base station.
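
The sum-rate objective named above has a simple closed form; below is a small NumPy sketch for a toy MU-MISO downlink, with randomly generated placeholder channels and beamformers (not the paper's GNN).

```python
import numpy as np

def sum_rate(H, W, noise_power=1.0):
    """Sum rate of a MU-MISO downlink: H[k] is user k's 1 x N channel,
    W[:, k] is the N x 1 beamformer for user k."""
    K = H.shape[0]
    rates = []
    for k in range(K):
        signal = np.abs(H[k] @ W[:, k]) ** 2
        interference = sum(np.abs(H[k] @ W[:, j]) ** 2 for j in range(K) if j != k)
        sinr = signal / (interference + noise_power)
        rates.append(np.log2(1.0 + sinr))
    return sum(rates)

rng = np.random.default_rng(0)
K, N = 4, 8                      # users, transmit antennas
H = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
W = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
print(sum_rate(H, W))
```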

Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning

no code implementations • 31 Aug 2023 • Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda

We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).

Computational Efficiency

FinVis-GPT: A Multimodal Large Language Model for Financial Chart Analysis

1 code implementation • 31 Jul 2023 • Ziao Wang, Yuhang Li, Junda Wu, Jaehyeon Soon, Xiaofeng Zhang

In this paper, we propose FinVis-GPT, a novel multimodal large language model (LLM) specifically designed for financial chart analysis.

Language Modelling · Large Language Model

Diffusion Models for Probabilistic Deconvolution of Galaxy Images

1 code implementation • 20 Jul 2023 • Zhiwei Xue, Yuhang Li, Yash Patel, Jeffrey Regier

As an alternative, we propose a classifier-free conditional diffusion model for PSF deconvolution of galaxy images.

SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

no code implementations • 1 Jul 2023 • Yan Wang, Yuhang Li, Ruihao Gong, Aishan Liu, Yanfei Wang, Jian Hu, Yongqiang Yao, Yunchen Zhang, Tianzi Xiao, Fengwei Yu, Xianglong Liu

Extensive studies have shown that deep learning models are vulnerable to adversarial and natural noise, yet little is known about model robustness against noise caused by different system implementations.

Benchmarking · Data Augmentation +5

A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning

no code implementations • 13 Jun 2023 • Kihyuk Hong, Yuhang Li, Ambuj Tewari

Offline constrained reinforcement learning (RL) aims to learn a policy that maximizes the expected cumulative reward subject to constraints on expected cumulative cost using an existing dataset.

reinforcement-learning · Reinforcement Learning (RL)
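
As a minimal illustration of the primal-dual structure behind this setting (not the paper's actual algorithm), the dual variable on the cost constraint can be updated by projected gradient ascent on the Lagrangian; the environment, policy update, and threshold below are placeholders.

```python
import numpy as np

# Illustrative primal-dual loop for constrained RL:
#   max_pi  E[reward]  s.t.  E[cost] <= tau
# via the Lagrangian  L(pi, lam) = E[reward] - lam * (E[cost] - tau).
tau, lam, lr_dual = 0.3, 0.0, 0.05

def evaluate(policy):
    """Placeholder: returns (expected reward, expected cost) of a policy."""
    return np.random.rand(), np.random.rand()

policy = None  # placeholder policy object
for step in range(100):
    reward, cost = evaluate(policy)
    # A primal step would improve the policy on reward - lam * cost (omitted).
    lam = max(0.0, lam + lr_dual * (cost - tau))  # dual ascent on the constraint
print("final dual variable:", lam)
```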

Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing

1 code implementation • 27 May 2023 • Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda

Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.
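
The linear scaling with timesteps noted above follows directly from the per-timestep update of spiking neurons; below is a generic leaky-integrate-and-fire forward loop (leak, threshold, and shapes are illustrative).

```python
import torch

def lif_forward(x_seq, weight, thresh=1.0, leak=0.9):
    """Generic LIF layer: one matmul and membrane update per timestep,
    so compute/energy grows linearly with the number of timesteps T."""
    T, batch, _ = x_seq.shape
    mem = torch.zeros(batch, weight.shape[0])
    out = []
    for t in range(T):                       # T iterations -> linear cost in T
        mem = leak * mem + x_seq[t] @ weight.t()
        spike = (mem >= thresh).float()
        mem = mem - spike * thresh           # soft reset
        out.append(spike)
    return torch.stack(out)

x = torch.rand(8, 4, 16)                     # (T, batch, features)
w = torch.randn(32, 16)
print(lif_forward(x, w).shape)               # torch.Size([8, 4, 32])
```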

Do We Really Need a Large Number of Visual Prompts?

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

Due to increasing interest in adapting models on resource-constrained edges, parameter-efficient transfer learning has been widely explored.

Transfer Learning · Visual Prompt Tuning

Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.

Human Activity Recognition

MINT: Multiplier-less INTeger Quantization for Energy Efficient Spiking Neural Networks

1 code implementation • 16 May 2023 • Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs).

Quantization
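
As a loose sketch of the uniform quantization described above, the snippet below maps weights and membrane potentials onto one shared integer grid; the 4-bit setting and the specific scale sharing shown are illustrative assumptions, not MINT's exact scheme.

```python
import torch

def uniform_quantize(x, scale, bits=4):
    """Uniform symmetric quantization onto a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q  # integer levels; dequantize as q * scale

w = torch.randn(16, 16)
mem = torch.randn(16)
scale = w.abs().max() / (2 ** 3 - 1)   # one scale shared by both tensors
w_q, mem_q = uniform_quantize(w, scale), uniform_quantize(mem, scale)
print(w_q.unique().numel(), mem_q.unique().numel())
```

Sharing one scale between the two tensors is what removes the need for rescaling multipliers when the quantized membrane potential accumulates quantized weight contributions.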

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

1 code implementation • 25 Apr 2023 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

However, some essential questions pertaining to SNNs remain little studied: do SNNs trained with surrogate gradients learn different representations from traditional Artificial Neural Networks (ANNs)?

Universal Polarization Transformations: Spatial programming of polarization scattering matrices using a deep learning-designed diffractive polarization transformer

no code implementations • 12 Apr 2023 • Yuhang Li, Jingxi Li, Yifan Zhao, Tianyi Gan, Jingtian Hu, Mona Jarrahi, Aydogan Ozcan

We demonstrate universal polarization transformers based on an engineered diffractive volume, which can synthesize a large set of arbitrarily-selected, complex-valued polarization scattering matrices between the polarization states at different positions within its input and output field-of-views (FOVs).

SEENN: Towards Temporal Spiking Early-Exit Neural Networks

1 code implementation • 2 Apr 2023 • Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda

However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff.

Workload-Balanced Pruning for Sparse Spiking Neural Networks

no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, this high sparsity brings a workload-imbalance problem.

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.

AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies

1 code implementation • 10 Nov 2022 • Li SiYao, Yuhang Li, Bo Li, Chao Dong, Ziwei Liu, Chen Change Loy

Existing correspondence datasets for two-dimensional (2D) cartoons suffer from simple frame composition and monotonic movements, making them insufficient to simulate real animations.

Optical Flow Estimation

All-optical image classification through unknown random diffusers using a single-pixel diffractive network

no code implementations • 8 Aug 2022 • Yi Luo, Bijie Bai, Yuhang Li, Ege Cetintas, Aydogan Ozcan

Classification of an object behind a random and unknown scattering medium sets a challenging task for computational imaging and machine vision fields.

Autonomous Driving · Image Classification +1

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve comparable performance to the dense networks.
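
For context, winning tickets are typically found by iterative magnitude pruning; the sketch below shows that generic procedure with the training run stubbed out, not this paper's SNN-specific method.

```python
import torch

def iterative_magnitude_pruning(weight, rounds=3, prune_frac=0.2):
    """Generic IMP: repeatedly train, prune the smallest surviving weights,
    then rewind the survivors to their initial values."""
    init = weight.clone()
    mask = torch.ones_like(weight)
    for _ in range(rounds):
        trained = weight * mask          # stand-in for an actual training run
        alive = trained[mask.bool()].abs()
        thresh = alive.quantile(prune_frac)
        mask = mask * (trained.abs() > thresh).float()
        weight = init * mask             # rewind survivors to initialization
    return weight, mask

w, m = iterative_magnitude_pruning(torch.randn(64, 64))
print("sparsity:", 1.0 - m.mean().item())
```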

An Optimization-based Algorithm for Non-stationary Kernel Bandits without Prior Knowledge

no code implementations • 29 May 2022 • Kihyuk Hong, Yuhang Li, Ambuj Tewari

Moreover, when applied to the non-stationary linear bandit setting by using a linear kernel, our algorithm is nearly minimax optimal, solving an open problem in the non-stationary linear bandit literature.

To image, or not to image: Class-specific diffractive cameras with all-optical erasure of undesired objects

no code implementations • 26 May 2022 • Bijie Bai, Yi Luo, Tianyi Gan, Jingtian Hu, Yuhang Li, Yifan Zhao, Deniz Mengu, Mona Jarrahi, Aydogan Ozcan

Here, we demonstrate a camera design that performs class-specific imaging of target objects with instantaneous all-optical erasure of other classes of objects.

Privacy Preserving

Converting Artificial Neural Networks to Spiking Neural Networks via Parameter Calibration

1 code implementation • 6 May 2022 • Yuhang Li, Shikuang Deng, Xin Dong, Shi Gu

We demonstrate that our method can handle SNN conversion with batch normalization layers and effectively preserves high accuracy even with 32 time steps.

Analysis of Diffractive Neural Networks for Seeing Through Random Diffusers

no code implementations • 1 May 2022 • Yuhang Li, Yi Luo, Bijie Bai, Aydogan Ozcan

During its training, random diffusers with a range of correlation lengths were used to improve the diffractive network's generalization performance.

Autonomous Driving · Image Reconstruction

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.

Continual Learning · Federated Learning +1

Neuromorphic Data Augmentation for Training Spiking Neural Networks

1 code implementation • 11 Mar 2022 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda

In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with the goal of significantly stabilizing the SNN training and reducing the generalization gap between training and test performance.

 Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)

Contrastive Learning · Data Augmentation +1
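
To make "geometric augmentations for event data" concrete, here is a small hypothetical sketch; the specific transforms and magnitudes are illustrative choices, not NDA's published policy.

```python
import torch

def augment_event_frames(frames):
    """Apply one random geometric transform to a (T, C, H, W) event clip.
    Geometric ops keep event statistics valid, unlike photometric ops
    (e.g., color jitter), which are meaningless for event data."""
    T, C, H, W = frames.shape
    op = torch.randint(0, 3, (1,)).item()
    if op == 0:                                   # horizontal flip
        return frames.flip(-1)
    if op == 1:                                   # random rolling (translation)
        dx = torch.randint(-W // 8, W // 8 + 1, (1,)).item()
        return frames.roll(dx, dims=-1)
    y = torch.randint(0, H - 8, (1,)).item()      # cutout an 8x8 patch
    x = torch.randint(0, W - 8, (1,)).item()
    out = frames.clone()
    out[..., y:y + 8, x:x + 8] = 0.0
    return out

clip = torch.rand(10, 2, 48, 48)                  # (T, polarity, H, W)
print(augment_event_frames(clip).shape)
```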

QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization

2 code implementations • 11 Mar 2022 • Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu

With QDrop, the limit of PTQ is pushed to 2-bit activations for the first time, and the accuracy boost can be up to 51.49%.

Image Classification · object-detection +5
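
The title's core trick can be paraphrased in a few lines: during PTQ calibration, each activation element randomly keeps its full-precision value instead of its quantized one. The sketch below is a hedged rendition of that idea, with made-up scale and drop probability, not the authors' implementation.

```python
import torch

def fake_quant(x, scale=0.05, bits=2):
    """Round-and-clamp fake quantization for a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

def qdrop_activation(x, drop_prob=0.5, training=True):
    """Randomly drop quantization per element during calibration so the
    reconstruction sees a mix of quantized and full-precision activations."""
    xq = fake_quant(x)
    if not training:
        return xq
    keep_fp = (torch.rand_like(x) < drop_prob).float()
    return keep_fp * x + (1.0 - keep_fp) * xq

x = torch.randn(4, 8)
print(qdrop_activation(x))
```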

Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting

1 code implementation • ICLR 2022 • Shikuang Deng, Yuhang Li, Shanghang Zhang, Shi Gu

Then we introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in the gradient descent with SG so that the training process can converge into flatter minima with better generalizability.
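
As a compact illustration of the idea described above, the sketch below supervises the network output at every timestep instead of only the time-averaged output; the plain per-timestep averaging shown here is a simplification of the full TET objective.

```python
import torch
import torch.nn.functional as F

def tet_loss(outputs, target):
    """outputs: (T, batch, classes) per-timestep logits from an SNN.
    TET-style objective: average the cross-entropy over timesteps,
    instead of applying it once to the time-averaged logits."""
    T = outputs.shape[0]
    return sum(F.cross_entropy(outputs[t], target) for t in range(T)) / T

logits = torch.randn(4, 8, 10)       # T=4 timesteps, batch=8, 10 classes
labels = torch.randint(0, 10, (8,))
print(tet_loss(logits, labels))
```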

Neural Architecture Search for Spiking Neural Networks

1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.

Neural Architecture Search

Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks

no code implementations • NeurIPS 2021 • Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, Shi Gu

Based on the introduced finite difference gradient, we propose a new family of Differentiable Spike (Dspike) functions that can adaptively evolve during training to find the optimal shape and smoothness for gradient estimation.

Event data classification · Image Classification
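
For context on what a differentiable spike function does, here is a generic surrogate-gradient sketch: a hard threshold in the forward pass and a smooth surrogate derivative in the backward pass, with a temperature controlling its sharpness. The sigmoid surrogate used below is a common stand-in, not the Dspike family itself.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside forward, smooth surrogate derivative backward."""
    temp = 4.0  # temperature controlling surrogate sharpness

    @staticmethod
    def forward(ctx, mem):
        ctx.save_for_backward(mem)
        return (mem >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (mem,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.temp * mem)
        return grad_out * SurrogateSpike.temp * sig * (1.0 - sig)

mem = torch.randn(5, requires_grad=True)
SurrogateSpike.apply(mem).sum().backward()
print(mem.grad)
```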

MQBench: Towards Reproducible and Deployable Model Quantization Benchmark

1 code implementation • 5 Nov 2021 • Yuhang Li, Mingzhu Shen, Jian Ma, Yan Ren, Mingxin Zhao, Qi Zhang, Ruihao Gong, Fengwei Yu, Junjie Yan

Surprisingly, no existing algorithm wins every challenge in MQBench, and we hope this work could inspire future research directions.

Quantization

Real World Robustness from Systematic Noise

no code implementations • 2 Sep 2021 • Yan Wang, Yuhang Li, Ruihao Gong

Systematic error, which is not determined by chance, often refers to the inaccuracy (involving either the observation or measurement process) inherent to a system.

A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural Networks Calibration

1 code implementation • 13 Jun 2021 • Yuhang Li, Shikuang Deng, Xin Dong, Ruihao Gong, Shi Gu

Moreover, our calibration algorithm can produce SNN with state-of-the-art architecture on the large-scale ImageNet dataset, including MobileNet and RegNet.

Diversifying Sample Generation for Accurate Data-Free Quantization

no code implementations • CVPR 2021 • Xiangguo Zhang, Haotong Qin, Yifu Ding, Ruihao Gong, Qinghua Yan, Renshuai Tao, Yuhang Li, Fengwei Yu, Xianglong Liu

Unfortunately, we find that, in practice, synthetic data identically constrained by BN statistics suffers from serious homogenization at both the distribution and sample levels, which further causes a significant performance drop in the quantized model.

Data Free Quantization · Image Classification
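
The BN-statistics constraint that the abstract critiques is the usual data-free synthesis objective: optimize noise images so that each BN layer's batch statistics match its stored running statistics. A minimal single-layer sketch follows (layer shapes and names are illustrative).

```python
import torch

def bn_stat_loss(feat, running_mean, running_var):
    """Match the batch statistics of synthetic-data features to the
    BN layer's stored running statistics (the usual data-free objective)."""
    mu = feat.mean(dim=(0, 2, 3))
    var = feat.var(dim=(0, 2, 3), unbiased=False)
    return ((mu - running_mean) ** 2).sum() + ((var - running_var) ** 2).sum()

conv = torch.nn.Conv2d(3, 8, 3, padding=1)
bn = torch.nn.BatchNorm2d(8)
x = torch.randn(16, 3, 32, 32, requires_grad=True)   # synthetic images
loss = bn_stat_loss(conv(x), bn.running_mean, bn.running_var)
loss.backward()                                      # gradients flow to the images
print(loss.item())
```

Because every synthetic batch is pulled toward the same statistics, samples tend to look alike, which is exactly the homogenization problem the paper targets.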

BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction

3 code implementations • ICLR 2021 • Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, Shi Gu

To further employ the power of quantization, the mixed precision technique is incorporated in our framework by approximating the inter-layer and intra-layer sensitivity.

Image Classification · object-detection +2
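
The block reconstruction in the title can be stated compactly: calibrate each quantized block so that it reproduces the full-precision block's outputs on a small calibration set. The sketch below is a generic rendition with placeholder blocks; BRECQ's actual pipeline additionally uses sensitivity estimates for mixed precision.

```python
import torch

def reconstruct_block(fp_block, q_block, calib_loader, steps=100, lr=1e-3):
    """Minimize || q_block(x) - fp_block(x) ||^2 over the quantized
    block's parameters, one block at a time."""
    opt = torch.optim.Adam(q_block.parameters(), lr=lr)
    for step, x in zip(range(steps), calib_loader):
        with torch.no_grad():
            target = fp_block(x)
        loss = ((q_block(x) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q_block

fp = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
q = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())  # stand-in "quantized" block
calib = [torch.randn(8, 16) for _ in range(100)]
reconstruct_block(fp, q, calib)
```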

DeepFacePencil: Creating Face Images from Freehand Sketches

1 code implementation • 31 Aug 2020 • Yuhang Li, Xuejin Chen, Binxin Yang, Zihan Chen, Zhihua Cheng, Zheng-Jun Zha

In this paper, we explore the task of generating photo-realistic face images from hand-drawn sketches.

Image-to-Image Translation · Translation

Efficient Bitwidth Search for Practical Mixed Precision Neural Network

no code implementations • 17 Mar 2020 • Yuhang Li, Wei Wang, Haoli Bai, Ruihao Gong, Xin Dong, Fengwei Yu

Network quantization has rapidly become one of the most widely used methods to compress and accelerate deep neural networks.

Quantization

RTN: Reparameterized Ternary Network

no code implementations • 4 Dec 2019 • Yuhang Li, Xin Dong, Sai Qian Zhang, Haoli Bai, Yuanpeng Chen, Wei Wang

We first highlight three overlooked issues in extremely low-bit networks: the squashing range of quantized values, gradient vanishing during backpropagation, and the unexploited hardware acceleration of ternary networks.

Quantization

LinesToFacePhoto: Face Photo Generation from Lines with Conditional Self-Attention Generative Adversarial Network

no code implementations • 20 Oct 2019 • Yuhang Li, Xuejin Chen, Feng Wu, Zheng-Jun Zha

The large-scale discriminator enforces the completeness of global structures and the small-scale discriminator encourages fine details, thereby enhancing the realism of generated face images.

Generative Adversarial Network

Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks

1 code implementation • ICLR 2020 • Yuhang Li, Xin Dong, Wei Wang

We propose Additive Powers-of-Two~(APoT) quantization, an efficient non-uniform quantization scheme for the bell-shaped and long-tailed distribution of weights and activations in neural networks.

Computational Efficiency · Quantization
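
To make the non-uniform grid concrete, the sketch below enumerates additive-powers-of-two levels for a 4-bit quantizer built from two 2-bit power-of-two terms, following the construction style of the paper's examples; treat it as an illustration rather than a reference implementation.

```python
import numpy as np

# Each 4-bit level is the sum of one value from each power-of-two set,
# giving a grid that is dense near zero and sparse in the tails --
# a better fit for bell-shaped weight distributions than uniform steps.
p0 = [0.0, 2 ** 0, 2 ** -2, 2 ** -4]   # first 2-bit PoT term
p1 = [0.0, 2 ** -1, 2 ** -3, 2 ** -5]  # second 2-bit PoT term
levels = np.sort([a + b for a in p0 for b in p1])
levels = levels / levels.max()          # normalize to [0, 1]
print(len(levels), levels.round(4))

def apot_quantize(x, levels):
    """Project each value onto the nearest APoT level (positive side only)."""
    idx = np.abs(x[..., None] - levels).argmin(-1)
    return levels[idx]

print(apot_quantize(np.random.rand(5), levels))
```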
