no code implementations • 16 Jul 2024 • Yunling Zheng, Zeyi Xu, Fanghui Xue, Biao Yang, Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
We propose and demonstrate an alternating Fourier and image domain filtering approach to feature extraction as an efficient way to build a vision backbone without computationally intensive attention.
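A minimal sketch of one such alternating block, assuming a learnable pointwise filter in the Fourier domain followed by a lightweight depthwise convolution in the image domain; the shapes and layer choices are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class FourierImageBlock(nn.Module):
    """One alternating block: Fourier-domain filtering, then image-domain filtering."""
    def __init__(self, channels, height, width):
        super().__init__()
        # Learnable complex-valued filter over rFFT frequencies (assumed form).
        self.freq_filter = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02)
        # Lightweight depthwise convolution for the image-domain step.
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)

    def forward(self, x):
        # Fourier-domain filtering: FFT -> elementwise multiply -> inverse FFT.
        f = torch.fft.rfft2(x, norm="ortho")
        w = torch.view_as_complex(self.freq_filter)
        x = torch.fft.irfft2(f * w, s=x.shape[-2:], norm="ortho")
        # Image-domain filtering.
        return self.spatial(x)
```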
no code implementations • 14 Jun 2023 • Zhiyuan Hu, Jiancheng Lyu, Dashan Gao, Nuno Vasconcelos
We show that a foundation model equipped with POP learning outperforms classic continual learning (CL) methods by a significant margin.
no code implementations • 30 Mar 2023 • Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang
To address those issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices.
no code implementations • CVPR 2023 • Zhiyuan Hu, Yunsheng Li, Jiancheng Lyu, Dashan Gao, Nuno Vasconcelos
This is accomplished by introducing dense connections between the intermediate layers of the task expert networks, which enable knowledge transfer from old to new tasks through feature sharing and reuse.
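An illustrative sketch of the idea, assuming a new-task expert that concatenates its own features with the frozen old-task expert's features at each depth; the layer widths and the concatenation scheme are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DenselyConnectedExpert(nn.Module):
    """New-task expert that densely reuses a frozen old expert's intermediate features."""
    def __init__(self, dim=128, depth=3):
        super().__init__()
        # Each new-task layer consumes its own features plus the old expert's
        # features at the same depth (hence 2 * dim inputs).
        self.layers = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(depth))

    def forward(self, x_new, old_feats):
        # old_feats: intermediate activations of the frozen old-task expert,
        # one tensor of width `dim` per layer.
        h = x_new
        for layer, f_old in zip(self.layers, old_feats):
            h = torch.relu(layer(torch.cat([h, f_old], dim=-1)))
        return h
```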
no code implementations • 12 Sep 2019 • Jiancheng Lyu, Spencer Sheen
We show an effective three-stage procedure to balance accuracy and sparsity in network training.
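The abstract does not spell out the three stages. A common pattern for trading accuracy against sparsity is (1) dense training, (2) magnitude pruning, (3) fine-tuning with the pruning masks fixed; the sketch below shows only the pruning step of that generic pattern and should not be read as the paper's actual procedure:

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Stage-two sketch (generic, assumed): zero out the smallest-magnitude weights
    globally and return binary masks to keep them zero during fine-tuning."""
    scores = torch.cat([p.detach().abs().flatten()
                        for p in model.parameters() if p.dim() > 1])
    k = max(1, int(sparsity * scores.numel()))
    thresh = scores.kthvalue(k).values
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:
                masks[name] = (p.abs() > thresh).to(p.dtype)
                p.mul_(masks[name])  # apply mask in place
    return masks
```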
no code implementations • ICLR 2019 • Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin
We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is unavailable during training), and its negation is a descent direction for minimizing the population loss.
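A minimal sketch of a straight-through estimator: the forward pass applies a non-differentiable sign activation, and the backward pass substitutes a proxy derivative (here the clipped/identity-on-[-1, 1] choice) to form the coarse gradient; the specific proxy is one common choice, not necessarily the one analyzed in the paper:

```python
import torch

class SignSTE(torch.autograd.Function):
    """Sign activation with a straight-through (coarse) gradient."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Proxy derivative: pass the gradient through only where |x| <= 1.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)
```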
no code implementations • 24 Jan 2019 • Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
In addition, we find experimentally that the standard convex relaxation of permutation matrices into doubly stochastic matrices leads to poor performance.
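For reference, the standard relaxation maps an unconstrained score matrix into the Birkhoff polytope (the convex hull of permutation matrices, i.e. doubly stochastic matrices), typically via Sinkhorn row/column normalization; this sketch only illustrates that relaxation, which the paper reports performs poorly in training:

```python
import torch

def sinkhorn_relax(logits, n_iters=20):
    """Project unconstrained scores to an (approximately) doubly stochastic matrix."""
    m = torch.exp(logits)
    for _ in range(n_iters):
        m = m / m.sum(dim=1, keepdim=True)   # normalize rows
        m = m / m.sum(dim=0, keepdim=True)   # normalize columns
    return m
```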
no code implementations • 7 Nov 2018 • Spencer Sheen, Jiancheng Lyu
We propose and study a new projection formula for training binary weight convolutional neural networks.
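For context, the usual least-squares projection of real-valued weights onto binary values {±α} uses α = mean(|w|) and sign(w); whether the paper's new formula coincides with this is not stated in the abstract, so treat the following as a generic baseline projection rather than the proposed one:

```python
import torch

def project_binary(w):
    """Least-squares projection of a weight tensor onto {+alpha, -alpha}."""
    alpha = w.abs().mean()
    return alpha * torch.sign(w)
```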
no code implementations • 15 Aug 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm for training fully quantized neural networks.
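A sketch of one blended update, assuming the form w ← (1 − ρ)·w + ρ·Q(w) − lr·g, where Q is the weight quantizer and g is the coarse (straight-through) gradient evaluated at the quantized weights; the exact formula and hyperparameters are assumptions based on the abstract:

```python
import torch

def bcgd_step(w, coarse_grad, quantize, lr=0.1, rho=1e-4):
    """One assumed blended coarse gradient descent update on the float weights."""
    with torch.no_grad():
        # Blend the float weights toward their quantization, then take a
        # descent step along the negated coarse gradient.
        return (1 - rho) * w + rho * quantize(w) - lr * coarse_grad
```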
2 code implementations • 19 Jan 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We propose BinaryRelax, a simple two-phase algorithm for training deep neural networks with quantized weights.
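A hedged sketch of the phase-one relaxed projection: instead of snapping the float weights straight to their quantized values, blend the two with a relaxation parameter that grows over training, then switch to the exact projection in phase two; the blending formula below is an assumption following the abstract's two-phase description:

```python
import torch

def binaryrelax_project(u, quantize, lam):
    """Assumed phase-one relaxed projection; phase two would use quantize(u) directly."""
    with torch.no_grad():
        # As lam -> infinity, the result approaches the exact quantization Q(u).
        return (u + lam * quantize(u)) / (1.0 + lam)
```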