1 code implementation • 24 May 2022 • Shangyu Wu, Yufei Cui, Jinghuan Yu, Xuan Sun, Tei-Wei Kuo, Chun Jason Xue
Based on the characteristics of the transformed keys, we propose a robust After-Flow Learned Index (AFLI).
1 code implementation • 30 Mar 2022 • Yu Mao, Yufei Cui, Tei-Wei Kuo, Chun Jason Xue
To ease this problem, this paper targets cutting down the execution time of deep-learning-based compressors.
no code implementations • 6 Feb 2021 • Ziquan Liu, Yufei Cui, Jia Wan, Yu Mao, Antoni B. Chan
On the one hand, when a non-adaptive optimizer, e.g., SGD with momentum, is used, the effective learning rate continues to increase even after the initial training stage, which leads to an overfitting effect in many neural architectures.
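A minimal sketch of the phenomenon described above (illustrative, not the paper's experiment): for a scale-invariant layer, SGD's effective step size is lr / ||w||², so as weight decay shrinks ||w|| during training, the effective learning rate keeps growing even though the nominal lr is fixed. The hyperparameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(64)          # weights of a (scale-invariant) layer
v = np.zeros_like(w)                 # momentum buffer
lr, momentum, weight_decay = 0.1, 0.9, 5e-4

effective_lr = []
for _ in range(500):
    grad = weight_decay * w          # gradient of the weight-decay term only
    v = momentum * v + grad          # heavy-ball momentum accumulation
    w = w - lr * v                   # SGD update shrinks ||w|| each step
    effective_lr.append(lr / np.dot(w, w))  # effective lr = lr / ||w||^2

# the nominal lr never changed, yet the effective lr rose monotonically
assert effective_lr[-1] > effective_lr[0]
```

With the data-fitting gradient omitted, only weight decay acts on w, which isolates the norm-shrinking effect; in a real network the same drift appears whenever the loss is invariant to the scale of w (e.g., layers followed by batch normalization).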
1 code implementation • CVPR 2021 • Yufei Cui, Yu Mao, Ziquan Liu, Qiao Li, Antoni B. Chan, Xue Liu, Tei-Wei Kuo, Chun Jason Xue
Nested dropout is a variant of the dropout operation that orders network parameters or features by a pre-defined importance during training.
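The ordering mechanism can be sketched as follows (a simplified illustration, not the paper's implementation): sample a truncation index from a geometric distribution and keep only the units before it. Earlier units survive far more often than later ones, so training under these masks concentrates importance in the leading units.

```python
import numpy as np

def nested_dropout_mask(num_units, p=0.1, rng=None):
    """Sample a nested dropout mask: keep a prefix of units, drop the tail.

    The truncation index b ~ Geometric(p) (support 1, 2, ...), capped at
    num_units; units 0..b-1 are kept, the rest are zeroed.
    """
    rng = rng or np.random.default_rng()
    b = min(rng.geometric(p), num_units)   # truncation index in 1..num_units
    mask = np.zeros(num_units)
    mask[:b] = 1.0                         # nested structure: always a prefix
    return mask

rng = np.random.default_rng(0)
features = rng.standard_normal(8)
masked = features * nested_dropout_mask(8, p=0.3, rng=rng)
```

Because the kept set is always a prefix, unit 0 is never dropped while the last unit is dropped most often, which is what induces the importance ordering.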
no code implementations • ICML Workshop AML 2021 • Ziquan Liu, Yufei Cui, Antoni B. Chan
The derived regularizer is an upper bound on the input gradient of the network, so minimizing the improved regularizer also benefits adversarial robustness.
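A toy illustration of why a weight-norm quantity can upper-bound the input gradient (this is a generic linear-layer argument, not the paper's exact derivation): for f(x) = Wx, the input gradient of each output is a row of W, and every row norm is at most the spectral norm of W, so penalizing the weight norm also caps the input gradient tied to adversarial robustness.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 16))     # linear layer f(x) = W @ x
x = rng.standard_normal(16)

# input gradient of output i of f(x) = W @ x is simply row i of W
input_grads = [W[i] for i in range(W.shape[0])]

# row norm <= spectral norm: ||e_i^T W|| <= sigma_max(W)
spectral_norm = np.linalg.norm(W, 2)
assert all(np.linalg.norm(g) <= spectral_norm + 1e-9 for g in input_grads)
```

The same logic chains through layers with 1-Lipschitz activations, which is why weight-based regularizers can control input-gradient magnitude end to end.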
no code implementations • 25 Sep 2019 • Yufei Cui, Wuguannan Yao, Qiao Li, Antoni Chan, Chun Jason Xue
In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework that approximates, with a parameterized model and in an amortized fashion, the output probability distribution induced by the model posterior.
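The target of such an amortized approximator can be sketched as follows (a Monte Carlo illustration with hypothetical names, not the paper's framework): given samples from the model posterior, the induced output distribution is the average of each sample's predictive distribution, and a single parameterized model would be trained to match this average.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# 10 hypothetical posterior samples of a 3-class linear model on 5 features
posterior_weights = rng.standard_normal((10, 3, 5))
x = rng.standard_normal(5)

# output distribution induced by the posterior: E_w[ p(y | x, w) ]
per_sample = softmax(posterior_weights @ x)   # shape (10, 3)
predictive = per_sample.mean(axis=0)          # distillation target (3,)
assert np.isclose(predictive.sum(), 1.0)
```

The amortized model then avoids drawing and evaluating posterior samples at test time: a single forward pass approximates this averaged distribution.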
1 code implementation • 29 May 2019 • Yufei Cui, Wuguannan Yao, Qiao Li, Antoni B. Chan, Chun Jason Xue
In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework that approximates, with a parameterized model and in an amortized fashion, the output probability distribution induced by the model posterior.