Search Results for author: Hong-Min Chu

Found 10 papers, 5 papers with code

Universal Guidance for Diffusion Models

1 code implementation · 14 Feb 2023 · Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, Tom Goldstein

Typical diffusion models are trained to accept a particular form of conditioning, most commonly text, and cannot be conditioned on other modalities without retraining.

Face Recognition · Object Detection +1
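A minimal sketch of the paper's forward-guidance idea, assuming a pretrained noise-prediction network `eps_model`, its noise schedule `alphas_cumprod`, and a differentiable off-the-shelf guidance loss `guidance_loss` (e.g. a face-recognition or detection loss); all three names are placeholders, not the authors' code:

```python
import torch

def guided_ddim_step(x_t, t, t_prev, alphas_cumprod, eps_model, guidance_loss, scale=1.0):
    """One DDIM step steered by the gradient of an external guidance loss."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # Predict the clean image and evaluate the guidance loss on it, so any
    # off-the-shelf network trained on clean images can supply the signal.
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    grad = torch.autograd.grad(guidance_loss(x0_hat), x_t)[0]
    eps = eps + scale * (1 - a_t).sqrt() * grad   # nudge the noise estimate
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
```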

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

2 code implementations · NeurIPS 2023 · Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S. Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, Tom Goldstein

We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice.

Image Restoration · Variational Inference
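A hedged sketch of the paper's improved sampling rule, assuming a learned restoration network `restore(x, s)` and a fixed degradation operator `degrade(x0, s)` (blur, masking, downsampling, and so on); both callables are placeholders:

```python
import torch

@torch.no_grad()
def cold_diffusion_sample(x_T, T, restore, degrade):
    """Invert an arbitrary degradation by walking its severity down from T to 0."""
    x = x_T                          # fully degraded input
    for s in range(T, 0, -1):
        x0_hat = restore(x, s)       # model's estimate of the clean image
        # Swap the level-s degradation for the level-(s-1) one; this update is
        # exact whenever `degrade` is linear in its first argument.
        x = x - degrade(x0_hat, s) + degrade(x0_hat, s - 1)
    return x
```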

Active Learning at the ImageNet Scale

1 code implementation · 25 Nov 2021 · Zeyad Ali Sami Emam, Hong-Min Chu, Ping-Yeh Chiang, Wojciech Czaja, Richard Leapman, Micah Goldblum, Tom Goldstein

Active learning (AL) algorithms aim to identify an optimal subset of data for annotation, such that deep neural networks (DNNs) can achieve better performance when trained on this labeled subset.

Active Learning
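For context, a generic pool-based selection round using predictive entropy; this is an illustrative baseline rather than any specific algorithm benchmarked in the paper, and `model` / `unlabeled_loader` are placeholders:

```python
import torch

@torch.no_grad()
def select_for_annotation(model, unlabeled_loader, budget):
    """Return indices of the `budget` most-uncertain unlabeled examples."""
    model.eval()
    all_scores, all_indices = [], []
    for idx, x in unlabeled_loader:                    # yields (index, image)
        probs = torch.softmax(model(x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        all_scores.append(entropy)
        all_indices.append(idx)
    scores, indices = torch.cat(all_scores), torch.cat(all_indices)
    return indices[scores.topk(budget).indices]        # send these to annotators
```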

WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic

no code implementations · ICLR 2021 · Renkun Ni, Hong-Min Chu, Oscar Castaneda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-precision neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization
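As a refresher on the setting the abstract describes, a minimal fake-quantization routine for discretizing weights or activations to `bits` bits; WrapNet's own contribution concerns low-resolution accumulators, sketched under the next entry:

```python
import torch

def fake_quantize(x, bits=4):
    """Uniform symmetric quantization: round to a signed `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed
    scale = x.abs().max().clamp_min(1e-8) / qmax
    q = (x / scale).round().clamp(-qmax - 1, qmax)
    return q * scale                              # dequantized value used in training
```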

WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic

no code implementations · 26 Jul 2020 · Renkun Ni, Hong-Min Chu, Oscar Castañeda, Ping-Yeh Chiang, Christoph Studer, Tom Goldstein

Low-resolution neural networks represent both weights and activations with few bits, drastically reducing the multiplication complexity.

Quantization
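A hedged sketch of the overflow behavior this paper trains networks to tolerate: sums of integer products held in a low-resolution accumulator wrap around modulo 2^`acc_bits` instead of saturating:

```python
import numpy as np

def wraparound_dot(w_int, x_int, acc_bits=8):
    """Integer dot product with a low-resolution, wrapping accumulator."""
    lo, span = -(2 ** (acc_bits - 1)), 2 ** acc_bits
    acc = 0
    for wi, xi in zip(w_int, x_int):
        acc = (acc + int(wi) * int(xi) - lo) % span + lo  # wrap into [lo, lo+span)
    return acc

# 100*2 + 100*2 = 400 overflows an 8-bit accumulator and wraps to -112.
print(wraparound_dot(np.array([100, 100]), np.array([2, 2])))
```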

Deep Generative Models for Weakly-Supervised Multi-Label Classification

no code implementations · ECCV 2018 · Hong-Min Chu, Chih-Kuan Yeh, Yu-Chiang Frank Wang

In order to train learning models for multi-label classification (MLC), it is typically desirable to have a large amount of fully annotated multi-label data.

Classification · General Classification +1
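To make the weakly-supervised setting concrete, a common way to train when only some labels per example are annotated is to mask the loss to the observed entries; this shows the setup only, not the paper's generative-model approach, and all names are placeholders:

```python
import torch
import torch.nn.functional as F

def masked_bce(logits, labels, observed_mask):
    """Binary cross-entropy over observed labels only (weak supervision)."""
    # labels: 0/1 targets; observed_mask: 1 where a label was actually annotated
    loss = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    return (loss * observed_mask).sum() / observed_mask.sum().clamp_min(1)
```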

Can Active Learning Experience Be Transferred?

no code implementations · 2 Aug 2016 · Hong-Min Chu, Hsuan-Tien Lin

Empirical studies demonstrate that the learned experience is not only competitive with existing strategies on most individual datasets, but can also be transferred across datasets to improve performance on future learning tasks.

Active Learning
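In the same spirit, a loose sketch of "learned experience" as multiplicative weights over several base query strategies, where the learned weight vector is what carries over to a new dataset; this mirrors the idea rather than the paper's exact algorithm:

```python
import numpy as np

class StrategyBlender:
    """Blend base active-learning strategies with multiplicative-weight updates."""

    def __init__(self, n_strategies, lr=0.1):
        self.w = np.full(n_strategies, 1.0 / n_strategies)
        self.lr = lr

    def choose(self, scores):
        # scores: (n_strategies, n_pool) preference of each strategy per example
        return int(np.argmax(self.w @ scores))

    def update(self, rewards):
        # rewards: per-strategy credit, e.g. accuracy gain after the last query
        self.w *= np.exp(self.lr * np.asarray(rewards))
        self.w /= self.w.sum()     # the normalized weights are the "experience"
```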
