Search Results for author: Jiancheng Lyu

Found 9 papers, 1 paper with code

POP: Prompt Of Prompts for Continual Learning

no code implementations14 Jun 2023 Zhiyuan Hu, Jiancheng Lyu, Dashan Gao, Nuno Vasconcelos

We show that a foundation model equipped with POP learning is able to outperform classic CL methods by a significant margin.

Continual Learning, Open-Ended Question Answering

MobileInst: Video Instance Segmentation on the Mobile

no code implementations30 Mar 2023 Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang

To address those issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices.

Instance Segmentation, Segmentation, +2

Dense Network Expansion for Class Incremental Learning

no code implementations CVPR 2023 Zhiyuan Hu, Yunsheng Li, Jiancheng Lyu, Dashan Gao, Nuno Vasconcelos

This is accomplished by introducing dense connections between the intermediate layers of the task expert networks, which enable the transfer of knowledge from old to new tasks via feature sharing and reuse.

Class Incremental Learning, Incremental Learning
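The abstract above describes a new-task expert whose intermediate layers are densely connected to the corresponding layers of the frozen old-task experts. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the class name, layer sizes, and concatenation scheme are assumptions, not the paper's architecture.

```python
# Sketch of a densely connected new-task expert: its second layer also
# consumes the (frozen) layer-1 features of the old experts, enabling
# feature sharing and reuse across tasks. Shapes are illustrative.
import torch
import torch.nn as nn

class DenselyConnectedExpert(nn.Module):
    def __init__(self, dim, num_old_experts):
        super().__init__()
        in_dim = dim * (1 + num_old_experts)   # own features + old experts' features
        self.layer1 = nn.Linear(dim, dim)
        self.layer2 = nn.Linear(in_dim, dim)

    def forward(self, x, old_feats_layer1):
        # old_feats_layer1: list of frozen layer-1 features from the old experts
        h = torch.relu(self.layer1(x))
        h = torch.cat([h] + [f.detach() for f in old_feats_layer1], dim=-1)
        return self.layer2(h)
```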

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations ICLR 2019 Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is not available during training), and its negation is a descent direction for minimizing the population loss.

Negation
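For context, the straight-through estimator (STE) backpropagates through a non-differentiable activation quantizer by substituting a surrogate gradient in the backward pass. Below is a minimal PyTorch sketch of the general idea, not the paper's specific estimator; the clipped-identity proxy and the number of quantization levels are illustrative assumptions.

```python
# Minimal straight-through estimator (STE) sketch.
# Forward: hard activation quantizer on [0, 1]; backward: gradient of a
# clipped-identity proxy (pass-through inside the clipping range).
import torch

class QuantActSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, levels=3):
        ctx.save_for_backward(x)
        # quantize activations to {0, 1/levels, ..., 1} after clipping to [0, 1]
        return torch.round(torch.clamp(x, 0.0, 1.0) * levels) / levels

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # STE: let the gradient through wherever the proxy is non-flat
        pass_through = ((x >= 0.0) & (x <= 1.0)).to(grad_out.dtype)
        return grad_out * pass_through, None

x = torch.randn(4, requires_grad=True)
y = QuantActSTE.apply(x)
y.sum().backward()
print(x.grad)  # nonzero only where 0 <= x <= 1
```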

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

no code implementations24 Jan 2019 Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.

Graph Matching
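One Lipschitz-continuous penalty that is exact in the sense suggested by the title is the l1 - l2 gap of each row and column: for a doubly stochastic matrix it is zero precisely when the matrix is a permutation. The sketch below assumes this form; whether it matches the paper's penalty exactly is an assumption inferred from the title.

```python
# l1 - l2 penalty on the rows and columns of a relaxed permutation matrix P.
# For a doubly stochastic P the penalty vanishes exactly when P is a
# permutation matrix. Assumed form, inferred from the paper title.
import torch

def permutation_penalty(P: torch.Tensor) -> torch.Tensor:
    row = (P.abs().sum(dim=1) - P.norm(dim=1)).sum()
    col = (P.abs().sum(dim=0) - P.norm(dim=0)).sum()
    return row + col

P = torch.full((4, 4), 0.25)              # doubly stochastic, far from a permutation
print(permutation_penalty(P))              # strictly positive
print(permutation_penalty(torch.eye(4)))   # ~0 for a true permutation
```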

Median Binary-Connect Method and a Binary Convolutional Neural Network for Word Recognition

no code implementations7 Nov 2018 Spencer Sheen, Jiancheng Lyu

We propose and study a new projection formula for training binary weight convolutional neural networks.

General Classification
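The projection formula itself is not quoted in the snippet above. A common shape for binary-weight projections, and the one the title suggests, scales the sign of the weights by a statistic of their magnitudes; the sketch below uses the median of the absolute weights as that scale, which is an assumption based on the paper's name rather than a quoted formula.

```python
# Hypothetical median-scaled binary projection: W -> alpha * sign(W) with
# alpha = median(|W|). The use of the median is inferred from the title
# "Median Binary-Connect"; the paper's exact formula may differ.
import torch

def median_binary_project(w: torch.Tensor) -> torch.Tensor:
    alpha = w.abs().median()
    return alpha * torch.sign(w)

w = torch.randn(3, 3)
print(median_binary_project(w))  # entries are +/- median(|w|)
```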

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations15 Aug 2018 Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm for training fully quantized neural networks.

Binarization, Quantization
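As a rough illustration of the blended update: the float weights are pulled slightly toward their quantization while the descent direction is a coarse gradient evaluated at the quantized weights. The quantizer, the blending parameter rho, and the step size below are illustrative assumptions, not the paper's exact recipe.

```python
# One blended coarse-gradient step (sketch). quantize() is a placeholder
# uniform quantizer and coarse_grad stands for whatever surrogate gradient
# the forward pass at quantize(w) yields. The blend
#   w <- (1 - rho) * w + rho * quantize(w) - lr * coarse_grad
# follows the general idea; the paper's exact update may differ.
import torch

def quantize(w, step=0.05):
    return torch.round(w / step) * step

def bcgd_step(w, coarse_grad, lr=0.1, rho=1e-5):
    return (1.0 - rho) * w + rho * quantize(w) - lr * coarse_grad

w = torch.randn(5)
g = torch.randn(5)        # surrogate (coarse) gradient computed at quantize(w)
w = bcgd_step(w, g)
```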

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

2 code implementations19 Jan 2018 Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.

Quantization
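Since this entry has code implementations, the linked repositories are the authoritative reference. Purely as an illustration of the two-phase idea described in the abstract, the sketch below blends the float weights with their quantization in phase one, tightens the blend over time, and switches to the hard quantization in phase two; the blend formula and schedule are assumptions, not quoted from the paper.

```python
# Two-phase relaxed quantization (sketch). Phase 1: forward weights are a
# blend (w + lam * Q(w)) / (1 + lam) with lam growing each epoch; phase 2:
# switch to the hard quantization Q(w). Illustrative only -- see the
# paper's released code for the actual method.
import torch

def hard_quantize(w):
    return torch.sign(w) * w.abs().mean()     # placeholder binary quantizer

def relaxed_weights(w, lam):
    return (w + lam * hard_quantize(w)) / (1.0 + lam)

w = torch.randn(4)
for epoch in range(10):
    lam = 1.02 ** epoch                       # slowly tighten the relaxation
    w_fwd = relaxed_weights(w, lam) if epoch < 8 else hard_quantize(w)
    # ... use w_fwd in the forward pass, then update the float w with its gradient
```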
