Search Results for author: Haoxuan Wang

Found 6 papers, 1 paper with code

Seed Optimization with Frozen Generator for Superior Zero-shot Low-light Enhancement

no code implementations • 15 Feb 2024 • Yuxuan Gu, Yi Jin, Ben Wang, Zhixiang Wei, Xiaoxiao Ma, Pengyang Ling, Haoxuan Wang, Huaian Chen, Enhong Chen

In this work, we observe that generators pre-trained on massive natural images inherently hold promising potential for superior low-light image enhancement across varying scenarios. Specifically, we embed a pre-trained generator into the Retinex model to produce reflectance maps with enhanced detail and vividness, thereby recovering features degraded by low-light conditions. Going one step further, we introduce a novel optimization strategy that backpropagates the gradients to the input seeds rather than to the parameters of the low-light enhancement model, thus keeping the generative knowledge learned from natural images intact and achieving faster convergence.

Low-Light Image Enhancement
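
The key idea, optimizing the input seed of a frozen pre-trained generator rather than the enhancement model's parameters, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' released code: `generator`, its `latent_dim` attribute, and `loss_fn` are placeholders.

```python
import torch

# Illustrative sketch only: "generator" stands in for any pre-trained image
# generator; names and shapes are assumptions, not the paper's implementation.
def optimize_seed(generator, low_light_img, loss_fn, steps=200, lr=0.05):
    """Backpropagate to the input seed while keeping the generator frozen."""
    for p in generator.parameters():
        p.requires_grad_(False)                  # freeze pre-trained weights

    seed = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([seed], lr=lr)  # only the seed is optimized

    for _ in range(steps):
        reflectance = generator(seed)            # candidate reflectance map
        loss = loss_fn(reflectance, low_light_img)  # Retinex-style fidelity loss
        optimizer.zero_grad()
        loss.backward()                          # gradients flow to the seed only
        optimizer.step()
    return generator(seed).detach()
```

Because the generator's parameters never change, the knowledge learned from natural images stays intact, and the search happens only in the low-dimensional seed space.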

QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning

1 code implementation • 6 Feb 2024 • Haoxuan Wang, Yuzhang Shang, Zhihang Yuan, Junyi Wu, Yan Yan

Diffusion models have achieved remarkable success in image generation tasks, yet their practical deployment is constrained by their high memory and time consumption.

Image Generation • Model Compression +1

Learning Gradient Fields for Scalable and Generalizable Irregular Packing

no code implementations • 18 Oct 2023 • Tianyang Xue, Mingdong Wu, Lin Lu, Haoxuan Wang, Hao Dong, Baoquan Chen

In this work, we delve deeper into a novel machine learning-based approach that formulates the packing problem as conditional generative modeling.

Collision Avoidance • Layout Design +1

Leveraging Angular Information Between Feature and Classifier for Long-tailed Learning: A Prediction Reformulation Approach

no code implementations • 3 Dec 2022 • Haoxuan Wang, Junchi Yan

Deep neural networks still struggle on long-tailed image datasets; one reason is that the imbalance of training data across categories leads to imbalanced trained model parameters.

Long-tail Learning
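
The title suggests reformulating predictions around the angle between a feature and each classifier weight vector, so that imbalanced weight norms no longer dominate the scores. The sketch below is an assumption made for illustration (cosine-based logits in place of raw dot products), not the paper's exact formulation; the scale factor is a hypothetical hyperparameter.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: score each class by the cosine of the angle between the
# feature and the classifier weight vector, removing the effect of weight
# norms that grow unevenly under long-tailed training.
def angular_logits(features, classifier_weight, scale=16.0):
    f = F.normalize(features, dim=1)            # (batch, dim), unit norm
    w = F.normalize(classifier_weight, dim=1)   # (num_classes, dim), unit norm
    return scale * f @ w.t()                    # cosine similarities as logits
```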

Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach

no code implementations • 8 Oct 2020 • Haoxuan Wang, Zhiding Yu, Yisong Yue, Anima Anandkumar, Anqi Liu, Junchi Yan

We propose a framework for learning calibrated uncertainties under domain shifts, where the source (training) distribution differs from the target (test) distribution.

Density Ratio Estimation • Unsupervised Domain Adaptation

Distributionally Robust Learning for Unsupervised Domain Adaptation

no code implementations • 28 Sep 2020 • Haoxuan Wang, Anqi Liu, Zhiding Yu, Yisong Yue, Anima Anandkumar

This formulation motivates the use of two neural networks that are jointly trained: a discriminative network that distinguishes the source and target domains for density-ratio estimation, in addition to the standard classification network.

Density Ratio Estimation • Unsupervised Domain Adaptation
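
A rough sketch of the two-network setup described above is given below: a domain discriminator yields density ratios that reweight the source classification loss. The network definitions, the ratio-to-weight mapping, and the training-loop details are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of joint training: the discriminator separates source (0)
# from target (1); its output gives an estimated density ratio used to
# importance-weight the classification loss on labeled source data.
def train_step(classifier, discriminator, opt, src_x, src_y, tgt_x):
    # Domain discrimination loss.
    dom_logits_src = discriminator(src_x)
    dom_logits_tgt = discriminator(tgt_x)
    dom_loss = F.binary_cross_entropy_with_logits(
        dom_logits_src, torch.zeros_like(dom_logits_src)
    ) + F.binary_cross_entropy_with_logits(
        dom_logits_tgt, torch.ones_like(dom_logits_tgt)
    )

    # Density ratio p_target(x) / p_source(x): with sigma(d) = p(target | x),
    # the ratio is sigma(d) / (1 - sigma(d)).
    with torch.no_grad():
        p_tgt = torch.sigmoid(dom_logits_src)
        ratio = p_tgt / (1.0 - p_tgt + 1e-8)

    # Importance-weighted classification loss on source samples.
    per_sample = F.cross_entropy(classifier(src_x), src_y, reduction="none")
    cls_loss = (ratio.squeeze() * per_sample).mean()

    loss = cls_loss + dom_loss   # opt is assumed to hold both networks' params
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```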
