no code implementations • Findings (EMNLP) 2021 • Tao Huang, Hong Chen
To improve the privacy guarantee and efficiency, we combine a subsampling method with CGS and propose a novel LDA training algorithm with differential privacy, SUB-LDA.
1 code implementation • 21 May 2022 • Tao Huang, Shan You, Fei Wang, Chen Qian, Chang Xu
In this paper, we show that simply preserving the relations between the predictions of teacher and student would suffice, and propose a correlation-based loss to capture the intrinsic inter-class relations from the teacher explicitly.
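The snippet suggests matching the inter-class relations of the predictions rather than their exact values. A minimal sketch of what such a correlation-based distillation loss could look like, using the Pearson correlation between teacher and student class probabilities (the function name and exact formulation are illustrative assumptions, not necessarily the paper's definition):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def correlation_loss(student_logits, teacher_logits, eps=1e-12):
    """1 - Pearson correlation between student and teacher class
    probabilities, averaged over the batch: it is zero whenever the
    student's prediction is a positive affine transform of the
    teacher's, i.e. the inter-class relations are preserved."""
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    p_s = p_s - p_s.mean(axis=1, keepdims=True)  # center per sample
    p_t = p_t - p_t.mean(axis=1, keepdims=True)
    corr = (p_s * p_t).sum(axis=1) / (
        np.linalg.norm(p_s, axis=1) * np.linalg.norm(p_t, axis=1) + eps)
    return float(np.mean(1.0 - corr))
```

Unlike a KL-divergence term, this loss is invariant to per-sample shifts and positive rescalings of the probabilities, which is one way to encode "only the relations matter".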
no code implementations • 13 May 2022 • Jianan Liu, Hao Li, Tao Huang, Euijoon Ahn, Adeel Razi, Wei Xiang
Most previous studies therefore perform SR reconstruction using authentic HR images paired with synthetic LR images downsampled from them; however, the difference in degradation representations between synthetic and authentic LR images suppresses the performance of SR reconstruction on authentic LR images.
1 code implementation • 24 Mar 2022 • Tao Huang, Shan You, Bohan Zhang, Yuxuan Du, Fei Wang, Chen Qian, Chang Xu
Structural re-parameterization (Rep) methods achieve noticeable improvements on simple VGG-style networks.
no code implementations • 13 Mar 2022 • Weiyi Xiong, Jianan Liu, Yuxuan Xia, Tao Huang, Bing Zhu, Wei Xiang
Deep learning-based instance segmentation enables real-time object identification from the radar detection points.
1 code implementation • ICLR 2022 • Tao Huang, Zekang Li, Hua Lu, Yong Shan, Shusheng Yang, Yang Feng, Fei Wang, Shan You, Chang Xu
Evaluation metrics in machine learning can rarely be used directly as loss functions, as they may be non-differentiable and non-decomposable, e.g., average precision and F1 score.
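One common way to make a metric like F1 usable as a training signal (shown here only to illustrate the non-decomposability problem; this is not necessarily the surrogate the paper proposes) is to replace the hard true/false-positive counts with soft, probability-weighted ones:

```python
import numpy as np

def soft_f1_loss(probs, labels, eps=1e-8):
    """Differentiable surrogate for 1 - F1 on binary labels:
    the hard TP/FP/FN counts, which require thresholding, are
    replaced by their probabilistic (soft) versions."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    tp = (probs * labels).sum()          # soft true positives
    fp = (probs * (1 - labels)).sum()    # soft false positives
    fn = ((1 - probs) * labels).sum()    # soft false negatives
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - f1
```

With confident, correct probabilities the loss approaches 0; with completely wrong ones it approaches 1, so its gradient pushes the model in the same direction the hard metric would reward.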
no code implementations • 18 Jan 2022 • Tao Huang, Jiachen Wang, Xiao Chen
Learning informative representations from image-based observations is of fundamental concern in deep Reinforcement Learning (RL).
no code implementations • 24 Nov 2021 • Tao Huang, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter out weak ones, so that the search can be implemented more greedily and efficiently on the shrunken space.
no code implementations • 5 Oct 2021 • Jianan Liu, Weiyi Xiong, Liping Bai, Yuxuan Xia, Tao Huang, Wanli Ouyang, Bing Zhu
Automotive radar provides reliable environmental perception in all-weather conditions at affordable cost, but it supplies little semantic and geometric information due to the sparsity of radar detection points.
no code implementations • 29 Sep 2021 • Tao Huang, Xiao Chen, Jiachen Wang
Learning informative representations from image-based observations is a fundamental problem in deep Reinforcement Learning (RL).
no code implementations • 3 Jun 2021 • Hanyuan Hang, Tao Huang, Yuchao Cai, Hanfang Yang, Zhouchen Lin
In this paper, we propose a gradient boosting algorithm for large-scale regression problems called \textit{Gradient Boosted Binary Histogram Ensemble} (GBBHE) based on binary histogram partition and ensemble learning.
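As a rough 1-D illustration of the two ingredients named in the snippet, gradient boosting over histogram-partition base learners with randomly shifted bin grids, one can boost equal-width histogram regressors against the current residuals (a toy sketch under those assumptions; class name, bin scheme, and hyperparameters are illustrative, and everything else specific to GBBHE is omitted):

```python
import numpy as np

class BoostedHistogramRegressor:
    """Gradient boosting for squared loss where each base learner is
    an equal-width histogram with a random bin offset: every round
    fits per-bin means of the residuals, then takes a shrunken step."""
    def __init__(self, n_estimators=20, n_bins=16, lr=0.5, seed=0):
        self.n_estimators = n_estimators
        self.n_bins = n_bins
        self.lr = lr
        self.seed = seed

    def fit(self, x, y):
        self.lo, self.hi = x.min(), x.max()
        width = (self.hi - self.lo) / self.n_bins
        self.base = float(np.mean(y))
        residual = np.asarray(y, dtype=float) - self.base
        rng = np.random.default_rng(self.seed)
        self.models = []
        for _ in range(self.n_estimators):
            offset = rng.uniform(0, width)           # random grid shift
            edges = self.lo - offset + width * np.arange(self.n_bins + 2)
            idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                          0, self.n_bins)
            means = np.zeros(self.n_bins + 1)        # empty bins predict 0
            for b in np.unique(idx):
                means[b] = residual[idx == b].mean()
            residual -= self.lr * means[idx]         # boosting update
            self.models.append((edges, means))
        return self

    def predict(self, x):
        pred = np.full(len(x), self.base)
        for edges, means in self.models:
            idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                          0, len(means) - 1)
            pred += self.lr * means[idx]
        return pred
```

The random offsets play the ensemble role: grids shifted across rounds let later learners correct the within-bin error left by earlier ones.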
1 code implementation • CVPR 2021 • Xiu Su, Tao Huang, Yanxi Li, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
One-shot neural architecture search (NAS) methods significantly reduce the search cost by considering the whole search space as one network, which only needs to be trained once.
1 code implementation • CVPR 2021 • Tao Huang, Weisheng Dong, Xin Yuan, Jinjian Wu, Guangming Shi
Different from existing GSM models using hand-crafted scale priors (e.g., the Jeffreys prior), we propose to learn the scale prior through a deep convolutional neural network (DCNN).
no code implementations • ICLR 2021 • Xiu Su, Shan You, Tao Huang, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
In this paper, to better evaluate each width, we propose a locally free weight-sharing strategy, dubbed CafeNet.
5 code implementations • CVPR 2021 • Tao Huang, Songjiang Li, Xu Jia, Huchuan Lu, Jianzhuang Liu
In this paper, we present a simple yet effective method, named Neighbor2Neighbor, for training an image denoising model with only noisy images.
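The key trick in Neighbor2Neighbor is a random neighbor sub-sampler: every 2x2 cell of a noisy image contributes one pixel to each of two half-resolution sub-images, producing a pair with nearly identical content but independent noise, so one sub-image can serve as the training target for the other. A simplified sketch of that sub-sampler (the paper additionally restricts each pair to adjacent pixels and adds a regularization term, both omitted here):

```python
import numpy as np

def neighbor_subsample(img, rng):
    """Split an HxW noisy image (H, W even) into two half-resolution
    sub-images: in every 2x2 cell, pick two distinct pixels at random,
    one for each sub-image."""
    h, w = img.shape
    # group pixels into (h/2, w/2) cells of 4 pixels each
    cells = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    cells = cells.reshape(h // 2, w // 2, 4)
    choice = rng.integers(0, 4, size=(h // 2, w // 2))
    # adding 1..3 mod 4 guarantees a different pixel of the same cell
    other = (choice + rng.integers(1, 4, size=choice.shape)) % 4
    rows = np.arange(h // 2)[:, None]
    cols = np.arange(w // 2)[None, :]
    g1 = cells[rows, cols, choice]
    g2 = cells[rows, cols, other]
    return g1, g2
```

A denoising network f would then be trained with a loss like `||f(g1) - g2||^2`, never seeing a clean image.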
no code implementations • 1 Jan 2021 • Zhuozhuo Tu, Shan You, Tao Huang, DaCheng Tao
Wasserstein distributionally robust optimization (DRO) has recently received significant attention in machine learning due to its connection to generalization, robustness and regularization.
no code implementations • 1 Jan 2021 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
Differentiable neural architecture search (NAS) has gained much success in discovering more flexible and diverse cell types.
no code implementations • 18 Nov 2020 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
However, even with this consistent search, the searched cells often suffer from poor performance, especially for supernets with fewer layers: current DARTS methods are prone to wide and shallow cells, and this topology collapse induces sub-optimal searched cells.
no code implementations • 17 Nov 2020 • Tao Huang, Yihan Zhang, Jiajing Wu, Junyuan Fang, Zibin Zheng
To tackle the dilemma between accuracy and efficiency, we propose to use aggregators with different granularities to gather neighborhood information in different layers.
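A minimal sketch of layer-dependent aggregation granularity, the trade-off the snippet describes (the concrete aggregator choices and the layer rule here are illustrative assumptions, not the paper's design):

```python
import numpy as np

def aggregate(features, neighbors, layer, rng, sample_size=2):
    """Layer-dependent aggregation granularity: the first layer
    averages all neighbor features (fine-grained but costly), while
    deeper layers average only a sampled subset (coarse but cheap)."""
    nbr = np.asarray(neighbors)
    if layer == 0 or len(nbr) <= sample_size:
        chosen = nbr                                        # fine
    else:
        chosen = rng.choice(nbr, size=sample_size, replace=False)  # coarse
    return features[chosen].mean(axis=0)
```

Spending the exact computation in shallow layers, where raw neighborhood detail matters most, and sampling in deeper layers is one way to trade a little accuracy for efficiency.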
no code implementations • 28 Oct 2020 • Xiu Su, Shan You, Tao Huang, Hongyan Xu, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
To deploy a well-trained CNN model on low-end computation edge devices, one usually has to compress or prune the model under a certain computation budget (e.g., FLOPs).
1 code implementation • 20 Oct 2020 • Yuxuan Du, Tao Huang, Shan You, Min-Hsiu Hsieh, DaCheng Tao
Quantum error mitigation techniques are at the heart of quantum hardware implementation, and are the key to performance improvement of the variational quantum learning scheme (VQLS).
no code implementations • CVPR 2020 • Shan You, Tao Huang, Mingmin Yang, Fei Wang, Chen Qian, Chang-Shui Zhang
The training efficiency is thus boosted since the training space has been greedily shrunk from all paths to those potentially-good ones.
Ranked #46 on Neural Architecture Search on ImageNet
no code implementations • ICLR 2020 • Tao Huang, Zhen Han, Xu Jia, Hanyuan Hang
In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical performance of MMD GAN.
no code implementations • 20 Dec 2019 • Shujie Han, Jun Wu, Erci Xu, Cheng He, Patrick P. C. Lee, Yi Qiang, Qixing Zheng, Tao Huang, Zixi Huang, Rui Li
To provide proactive fault tolerance for modern cloud data centers, extensive studies have proposed machine learning (ML) approaches to predict imminent disk failures for early remedy and evaluated their approaches directly on public datasets (e.g., Backblaze SMART logs).