no code implementations • 4 Feb 2025 • Dong Zhang, Rui Yan, Pingcheng Dong, Kwang-Ting Cheng
While current Vision Transformer (ViT) adapter methods have shown promising accuracy, their inference speed is implicitly hindered by inefficient memory access operations, e.g., standard normalization and frequent reshaping.
no code implementations • 2 Jan 2025 • Dong Zhang, Kwang-Ting Cheng
Theoretically, we prove that the optimization direction of the image enhancement model is not biased by the auxiliary visual recognition model under GradProm.
no code implementations • 12 Dec 2024 • Dongting Hu, Jierun Chen, Xijie Huang, Huseyin Coskun, Arpit Sahni, Aarush Gupta, Anujraaj Goyal, Dishani Lahiri, Rajesh Singh, Yerlan Idelbayev, Junli Cao, Yanyu Li, Kwang-Ting Cheng, S. -H. Gary Chan, Mingming Gong, Sergey Tulyakov, Anil Kag, Yanwu Xu, Jian Ren
For the first time, our model, SnapGen, demonstrates the generation of 1024x1024 px images on a mobile device in around 1.4 seconds.
no code implementations • 28 Oct 2024 • Shih-Yang Liu, Huck Yang, Chien-Yi Wang, Nai Chit Fung, Hongxu Yin, Charbel Sakr, Saurav Muralidharan, Kwang-Ting Cheng, Jan Kautz, Yu-Chiang Frank Wang, Pavlo Molchanov, Min-Hung Chen
In this work, we re-formulate the model compression problem into the customized compensation problem: Given a compressed model, we aim to introduce residual low-rank paths to compensate for compression errors under customized requirements from users (e.g., tasks, compression ratios), resulting in greater flexibility in adjusting overall capacity without being constrained by specific compression formats.
1 code implementation • 26 Sep 2024 • Yi Gu, Yi Lin, Kwang-Ting Cheng, Hao Chen
To tackle these issues, we propose D2UE, a Diversified Dual-space Uncertainty Estimation framework for medical anomaly detection.
1 code implementation • 5 Sep 2024 • Xixi Jiang, Dong Zhang, Xiang Li, Kangyi Liu, Kwang-Ting Cheng, Xin Yang
However, the limited availability of labeled foreground organs and the absence of supervision to distinguish unlabeled foreground organs from the background pose a significant challenge, which leads to a distribution mismatch between labeled and unlabeled pixels.
1 code implementation • 31 Aug 2024 • Xiao Fang, Yi Lin, Dong Zhang, Kwang-Ting Cheng, Hao Chen
Pre-trained large vision-language models (VLMs) like CLIP have revolutionized visual representation learning using natural language as supervision, and demonstrated promising generalization ability.
1 code implementation • 26 Jul 2024 • Jiabo Ma, Zhengrui Guo, Fengtao Zhou, Yihui Wang, Yingxue Xu, Yu Cai, Zhengjie Zhu, Cheng Jin, Yi Lin, Xinrui Jiang, Anjia Han, Li Liang, Ronald Cheong Kin Chan, Jiguang Wang, Kwang-Ting Cheng, Hao Chen
To address this gap, we established the most comprehensive benchmark to date for evaluating the performance of off-the-shelf foundation models across six distinct clinical task types, encompassing a total of 39 specific tasks.
1 code implementation • 17 Jul 2024 • Jeffry Wicaksana, Zengqiang Yan, Kwang-Ting Cheng
Limited training data and severe class imbalance pose significant challenges to developing clinically robust deep learning models.
no code implementations • 12 Jul 2024 • Yue Zhang, Woyu Zhang, Shaocong Wang, Ning Lin, Yifei Yu, Yangu He, Bo Wang, Hao Jiang, Peng Lin, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing.
1 code implementation • 10 Jul 2024 • Xijie Huang, Zechun Liu, Shih-Yang Liu, Kwang-Ting Cheng
Low-Rank Adaptation (LoRA), as a representative Parameter-Efficient Fine-Tuning (PEFT) method, significantly enhances the training efficiency by updating only a small portion of the weights in Large Language Models (LLMs).
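As a rough illustration of the LoRA mechanism referenced above, the following minimal PyTorch sketch freezes a pretrained linear layer and adds a trainable low-rank update; the layer names, rank, and scaling are illustrative assumptions, not this paper's configuration.

```python
# A minimal LoRA sketch, assuming a PyTorch linear layer; rank r and scaling
# alpha are illustrative hyperparameters, not this paper's configuration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
_ = layer(torch.randn(4, 768))  # only lora_A and lora_B receive gradients
```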
1 code implementation • 2 Jul 2024 • Yangyang Xiang, Nannan Wu, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
We begin by evaluating the completeness of annotations at the client level using a designed indicator.
1 code implementation • 27 Jun 2024 • Zhaobin Sun, Nannan Wu, Junjie Shi, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
Experiments on two publicly available medical datasets validate the superiority of FedMLP over state-of-the-art federated semi-supervised and noisy-label learning approaches under task heterogeneity.
1 code implementation • 12 Jun 2024 • Hegan Chen, Jichang Yang, Jia Chen, Songqi Wang, Shaocong Wang, Dingchen Wang, Xinyu Tian, Yifei Yu, Xi Chen, Yinan Lin, Yangu He, Xiaoshan Wu, Xinyuan Zhang, Ning Lin, Meng Xu, Yi Li, Xumeng Zhang, Zhongrui Wang, Han Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
We experimentally validate our approach by developing a digital twin of the HP memristor, which accurately extrapolates its nonlinear dynamics, achieving a 4.2-fold projected speedup and a 41.4-fold projected decrease in energy consumption compared to state-of-the-art digital hardware, while maintaining an acceptable error margin.
no code implementations • 15 Apr 2024 • Yifei Yu, Shaocong Wang, Woyu Zhang, Xinyuan Zhang, Xiuzhe Wu, Yangu He, Jichang Yang, Yue Zhang, Ning Lin, Bo Wang, Xi Chen, Songqi Wang, Xumeng Zhang, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
The GE harnesses the intrinsic stochasticity of resistive memory for efficient input encoding, while the PE achieves precise weight mapping through a Hardware-Aware Quantization (HAQ) circuit.
1 code implementation • 8 Apr 2024 • Jichang Yang, Hegan Chen, Jia Chen, Songqi Wang, Shaocong Wang, Yifei Yu, Xi Chen, Bo Wang, Xinyuan Zhang, Binbin Cui, Ning Lin, Meng Xu, Yi Li, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Xumeng Zhang, Dashan Shang, Han Wang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Demonstrating equivalent generative quality to the software baseline, our system achieved remarkable enhancements in generative speed for both unconditional and conditional generation tasks, by factors of 64.8 and 156.5, respectively.
1 code implementation • 6 Apr 2024 • Yu Cai, Weiwen Zhang, Hao Chen, Kwang-Ting Cheng
Anomaly detection (AD) aims at detecting abnormal samples that deviate from the expected normal patterns.
1 code implementation • 28 Mar 2024 • Pingcheng Dong, Yonghao Tan, Dong Zhang, Tianwei Ni, Xuejiao Liu, Yu Liu, Peng Luo, Luhong Liang, Shih-Yang Liu, Xijie Huang, Huaiyu Zhu, Yun Pan, Fengwei An, Kwang-Ting Cheng
Non-linear functions are prevalent in Transformers and their lightweight variants, incurring substantial and frequently underestimated hardware costs.
1 code implementation • 20 Mar 2024 • Xian Lin, Yangyang Xiang, Zhehao Wang, Kwang-Ting Cheng, Zengqiang Yan, Li Yu
Specifically, based on SAM, SAMCT is further equipped with a U-shaped CNN image encoder, a cross-branch interaction module, and a task-indicator prompt encoder.
no code implementations • 19 Mar 2024 • Yi Lin, Zhengjie Zhu, Kwang-Ting Cheng, Hao Chen
To address this issue, we propose PAMT, a novel Prompt-guided Adaptive Model Transformation framework that enhances MIL classification performance by seamlessly adapting pre-trained models to the specific characteristics of histopathology data.
1 code implementation • 14 Mar 2024 • Yu Cai, Hao Chen, Kwang-Ting Cheng
To the best of our knowledge, this is the first effort to theoretically clarify the principles and design philosophy of AE for anomaly detection.
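As a hedged illustration of the reconstruction-based paradigm analyzed above, the sketch below scores anomalies by autoencoder reconstruction error; the architecture and input size are assumptions for illustration, not part of the paper's theory.

```python
# A minimal sketch of reconstruction-based anomaly scoring with an autoencoder;
# the architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, dim: int = 784, bottleneck: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

def anomaly_score(model: TinyAE, x: torch.Tensor) -> torch.Tensor:
    # Trained on normal data only, the AE reconstructs anomalies poorly,
    # so a higher per-sample reconstruction error means "more anomalous".
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```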
no code implementations • 13 Mar 2024 • Shuhan Li, Yi Lin, Hao Chen, Kwang-Ting Cheng
In this paper, we introduce an Iterative Online Image Synthesis (IOIS) framework to address the class imbalance problem in medical image classification.
5 code implementations • 14 Feb 2024 • Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen
By employing DoRA, we enhance both the learning capacity and training stability of LoRA while avoiding any additional inference overhead; a minimal sketch follows below.
Ranked #2 on parameter-efficient fine-tuning on WinoGrande (using extra training data)
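The weight decomposition behind DoRA can be sketched as follows: the pretrained weight is split into a magnitude vector and a direction matrix, and only the direction receives a low-rank update. Shapes, init, and the column-wise norm are assumptions drawn from the general idea, not a faithful reproduction of the official implementation.

```python
# A hedged DoRA-style sketch: per-column magnitude times a unit-norm,
# low-rank-updated direction matrix.
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.weight = base.weight.detach()                 # frozen W0, shape (out, in)
        self.m = nn.Parameter(self.weight.norm(dim=0))     # trainable per-column magnitude
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.weight + self.B @ self.A                  # low-rank directional update
        v = v / v.norm(dim=0, keepdim=True)                # unit-norm columns
        return x @ (self.m.unsqueeze(0) * v).T             # rescale by learned magnitude
```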
no code implementations • 24 Jan 2024 • Dong Zhang, Pingcheng Dong, Xinting Hu, Long Chen, Kwang-Ting Cheng
In this work, we propose a complementary boundary and context distillation (BCD) method within the KD framework for EDIPs, which facilitates the targeted knowledge transfer from large accurate teacher models to compact efficient student models.
1 code implementation • 15 Jan 2024 • Yi Lin, Zeyu Wang, Dong Zhang, Kwang-Ting Cheng, Hao Chen
To alleviate this problem, in this paper, we propose a weakly-supervised nuclei segmentation method that only requires partial point labels of nuclei.
no code implementations • 14 Dec 2023 • Shaocong Wang, Yizhao Gao, Yi Li, Woyu Zhang, Yifei Yu, Bo Wang, Ning Lin, Hegan Chen, Yue Zhang, Yang Jiang, Dingchen Wang, Jia Chen, Peng Dai, Hao Jiang, Peng Lin, Xumeng Zhang, Xiaojuan Qi, Xiaoxin Xu, Hayden So, Zhongrui Wang, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Our random resistive memory-based deep extreme point learning machine may pave the way for energy-efficient and training-friendly edge AI across various data modalities and tasks.
no code implementations • 14 Dec 2023 • Xijie Huang, Li Lyna Zhang, Kwang-Ting Cheng, Fan Yang, Mao Yang
In this work, we propose CoT-Influx, a novel approach that pushes the boundary of few-shot Chain-of-Thoughts (CoT) learning to improve LLM mathematical reasoning.
Ranked #113 on Arithmetic Reasoning on GSM8K
no code implementations • 14 Dec 2023 • Chi-Hsuan Wu, Shih-Yang Liu, Xijie Huang, Xingbo Wang, Rong Zhang, Luca Minciullo, Wong Kai Yiu, Kenny Kwan, Kwang-Ting Cheng
However, a major concern about online learning is whether students are as engaged as they are in face-to-face classes.
no code implementations • 13 Nov 2023 • Yi Li, Songqi Wang, Yaping Zhao, Shaocong Wang, Woyu Zhang, Yangu He, Ning Lin, Binbin Cui, Xi Chen, Shiming Zhang, Hao Jiang, Peng Lin, Xumeng Zhang, Xiaojuan Qi, Zhongrui Wang, Xiaoxin Xu, Dashan Shang, Qi Liu, Kwang-Ting Cheng, Ming Liu
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning to optimize the topology of a randomly weighted analogue resistive memory neural network.
1 code implementation • 25 Oct 2023 • Shih-Yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, Kwang-Ting Cheng
Our method, for the first time, can quantize both weights and activations in the LLaMA-13B to only 4-bit and achieves an average score of 63.1 on the common sense zero-shot reasoning tasks, which is only 5.8 lower than the full-precision model, significantly outperforming the previous state-of-the-art by 12.7 points.
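The 4-bit weight-and-activation result above builds on uniform quantization; a minimal fake-quantization sketch of the generic idea (per-tensor symmetric scaling, not the paper's actual scheme) is:

```python
# A minimal symmetric per-tensor fake-quantization sketch for weights or
# activations; treat it only as an illustration of the generic W4A4 idea.
import torch

def fake_quant(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** (n_bits - 1) - 1                     # 7 for signed 4-bit
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = (x / scale).round().clamp(-qmax - 1, qmax)   # snap to the integer grid
    return q * scale                                 # dequantize back to float

w = torch.randn(256, 256)
print((w - fake_quant(w)).abs().mean())              # mean quantization error
```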
no code implementations • 27 Aug 2023 • Xin Yang, Yi Lin, Zhiwei Wang, Xin Li, Kwang-Ting Cheng
A method for measuring the synthesis complexity is proposed to automatically determine the synthesis order in our sequential GAN.
1 code implementation • 14 Aug 2023 • Weihang Dai, Xiaomeng Li, Taihui Yu, Di Zhao, Jun Shen, Kwang-Ting Cheng
Furthermore, we design a novel feature de-correlation loss to ensure that deep and radiomic features learn complementary information.
no code implementations • 12 Jul 2023 • Xiaomeng Wang, Fengshi Tian, Xizi Chen, Jiakun Zheng, Xuejiao Liu, Fengbin Tu, Jie Yang, Mohamad Sawan, Kwang-Ting Cheng, Chi-Ying Tsui
In this paper, we propose a high-precision SRAM-based CIM macro that can perform 4x4-bit MAC operations and yield 9-bit signed output.
2 code implementations • 1 Jul 2023 • Xijie Huang, Zhiqiang Shen, Pingcheng Dong, Kwang-Ting Cheng
We explore the best practices to alleviate the variation's influence during low-bit transformer QAT and propose a variation-aware quantization scheme for both vision and language transformers.
1 code implementation • 12 Jun 2023 • Xijie Huang, Zechun Liu, Shih-Yang Liu, Kwang-Ting Cheng
Quantization-aware training (QAT) is a representative model compression method to reduce redundancy in weights and activations.
3 code implementations • 9 May 2023 • Nannan Wu, Li Yu, Xuefeng Jiang, Kwang-Ting Cheng, Zengqiang Yan
Federated noisy label learning (FNLL) is emerging as a promising tool for privacy-preserving multi-source decentralized learning.
1 code implementation • 1 May 2023 • Yi Lin, Dong Zhang, Xiao Fang, Yufan Chen, Kwang-Ting Cheng, Hao Chen
Medical image segmentation is a fundamental task in medical image analysis.
1 code implementation • 1 May 2023 • Jeffry Wicaksana, Zengqiang Yan, Kwang-Ting Cheng
To overcome this, we propose federated classifier anchoring (FCA) by adding a personalized classifier at each client to guide and debias the federated model through consistency learning.
2 code implementations • 15 Apr 2023 • Huimin Wu, Xiaomeng Li, Yiqun Lin, Kwang-Ting Cheng
This study investigates barely-supervised medical image segmentation, where only a few labeled cases, i.e., a single-digit number, are available.
no code implementations • 13 Apr 2023 • Luyang Luo, Xi Wang, Yi Lin, Xiaoqi Ma, Andong Tan, Ronald Chan, Varut Vardhanabhuti, Winnie CW Chu, Kwang-Ting Cheng, Hao Chen
Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020.
1 code implementation • 24 Mar 2023 • Yi Lin, Yufan Chen, Kwang-Ting Cheng, Hao Chen
Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information and boosting the representation capacity of both the support prototype and query features.
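A common backbone for this support-query correlation is prototype matching; the hedged sketch below builds a foreground prototype via masked average pooling and scores query pixels by cosine similarity. Function names and shapes are illustrative assumptions, not this paper's architecture.

```python
# A minimal prototype-based few-shot segmentation sketch.
import torch
import torch.nn.functional as F

def masked_avg_pool(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # feat: (C, H, W) support features; mask: (H, W) binary foreground mask
    mask = mask.unsqueeze(0).float()
    return (feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)  # (C,)

def query_similarity(support_feat, support_mask, query_feat):
    proto = masked_avg_pool(support_feat, support_mask)               # support prototype
    q = F.normalize(query_feat, dim=0)                                # (C, H, W)
    p = F.normalize(proto, dim=0).view(-1, 1, 1)
    return (q * p).sum(dim=0)                                         # (H, W) cosine map
```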
no code implementations • 23 Mar 2023 • Yi Lin, Xiao Fang, Dong Zhang, Kwang-Ting Cheng, Hao Chen
Recently, the advent of vision Transformer (ViT) has brought substantial advancements in 3D dataset benchmarks, particularly in 3D volumetric medical image segmentation (Vol-MedSeg).
no code implementations • 23 Mar 2023 • Yi Lin, Zhongchen Zhao, Zhengjie Zhu, Lisheng Wang, Kwang-Ting Cheng, Hao Chen
Multiple instance learning (MIL) has emerged as a popular method for classifying histopathology whole slide images (WSIs).
1 code implementation • 13 Mar 2023 • Shuhan Li, Dong Zhang, Xiaomeng Li, Chubin Ou, Lin An, Yanwu Xu, Kwang-Ting Cheng
Our TransPro method is primarily driven by two novel ideas that have been overlooked by prior work.
no code implementations • 24 Feb 2023 • Jiajun Zhou, Jiajun Wu, Yizhao Gao, Yuhao Ding, Chaofan Tao, Boyu Li, Fengbin Tu, Kwang-Ting Cheng, Hayden Kwok-Hay So, Ngai Wong
To accelerate the inference of deep neural networks (DNNs), quantization with low-bitwidth numbers is actively researched.
1 code implementation • 15 Feb 2023 • Weihang Dai, Xiaomeng Li, Kwang-Ting Cheng
In this work, we propose a novel approach to semi-supervised regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME), which improves training by generating high-quality pseudo-labels and uncertainty estimates for heteroscedastic regression.
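A hedged sketch of the UCVME idea: two co-trained models each predict a mean and a log-variance, their ensembled outputs serve as pseudo-labels, and the heteroscedastic loss down-weights high-uncertainty samples. This simplifies the paper's objective and is meant only as an illustration.

```python
# A simplified UCVME-style unlabeled loss; not the paper's exact objective.
import torch

def hetero_loss(mean, log_var, target):
    # Negative Gaussian log-likelihood: residuals scaled by predicted variance.
    return (0.5 * torch.exp(-log_var) * (mean - target) ** 2 + 0.5 * log_var).mean()

def unlabeled_loss(out1, out2):
    (mu1, lv1), (mu2, lv2) = out1, out2
    with torch.no_grad():
        pseudo_y = (mu1 + mu2) / 2    # ensembled pseudo-label
        pseudo_lv = (lv1 + lv2) / 2   # ensembled uncertainty estimate
    return hetero_loss(mu1, pseudo_lv, pseudo_y) + hetero_loss(mu2, pseudo_lv, pseudo_y)
```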
1 code implementation • 4 Feb 2023 • Shih-Yang Liu, Zechun Liu, Kwang-Ting Cheng
In addition, we found that the interdependence between quantized weights in the query and key of a self-attention layer makes ViT vulnerable to oscillation.
4 code implementations • ICCV 2023 • Huimin Wu, Chenyang Lei, Xiao Sun, Peng-Shuai Wang, Qifeng Chen, Kwang-Ting Cheng, Stephen Lin, Zhirong Wu
Self-supervised representation learning follows a paradigm of withholding some part of the data and tasking the network to predict it from the remaining part.
1 code implementation • 20 Oct 2022 • Weihang Dai, Xiaomeng Li, Xinpeng Ding, Kwang-Ting Cheng
We also introduce teacher-student distillation to distill the information from LV segmentation masks into an end-to-end LVEF regression model that only requires video inputs.
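The distillation step can be sketched as follows for this regression setting: a student that sees only video inputs is trained both on ground-truth LVEF values and to match a mask-informed teacher. The loss weighting is an illustrative assumption.

```python
# A minimal teacher-student distillation sketch for regression.
import torch.nn.functional as F

def distill_loss(student_pred, teacher_pred, target, alpha: float = 0.5):
    sup = F.mse_loss(student_pred, target)                  # ground-truth supervision
    kd = F.mse_loss(student_pred, teacher_pred.detach())    # follow the teacher
    return alpha * sup + (1 - alpha) * kd
```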
1 code implementation • 9 Oct 2022 • Yu Cai, Hao Chen, Xin Yang, Yu Zhou, Kwang-Ting Cheng
Due to the high-cost annotations of abnormal images, most methods utilize only known normal images during training and identify samples deviating from the normal profile as anomalies in the testing phase.
no code implementations • 20 Sep 2022 • Dong Zhang, Jinhui Tang, Kwang-Ting Cheng
In this paper, we propose a novel Graph Reasoning Transformer (GReaT) for image parsing to enable image patches to interact following a relation reasoning pattern.
1 code implementation • 3 Jul 2022 • Shuhan Li, Xiaomeng Li, Xiaowei Xu, Kwang-Ting Cheng
To achieve the objective of the second branch, we present a cluster loss to learn image similarities via unsupervised clustering.
1 code implementation • 30 Jun 2022 • Yi Lin, Zeyu Wang, Kwang-Ting Cheng, Hao Chen
Nuclei segmentation from histology images is a fundamental task in digital pathology analysis.
2 code implementations • 29 Jun 2022 • Xian Lin, Li Yu, Kwang-Ting Cheng, Zengqiang Yan
Specifically, to fully explore the benefits of transformers in long-range dependency establishment, a cross-scale global transformer (CGT) module is introduced to jointly utilize multiple small-scale feature maps for richer global features with lower computational complexity.
1 code implementation • 29 Jun 2022 • Xian Lin, Li Yu, Kwang-Ting Cheng, Zengqiang Yan
To the best of our knowledge, this is the first work on transformer pruning for medical image analysis tasks.
1 code implementation • 28 Jun 2022 • Nannan Wu, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
In this paper, we present a privacy-preserving FL method named FedIIC to combat class imbalance from two perspectives: feature learning and classifier learning.
no code implementations • 9 Jun 2022 • Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Xianghong Hu, Jeffry Wicaksana, Eric Xing, Kwang-Ting Cheng
In order to deploy deep models in a computationally efficient manner, model quantization approaches have been frequently used.
1 code implementation • 8 Jun 2022 • Yu Cai, Hao Chen, Xin Yang, Yu Zhou, Kwang-Ting Cheng
Subsequently, inter-discrepancy between the two modules, and intra-discrepancy inside the module that is trained on only normal images are designed as anomaly scores to indicate anomalies.
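A hedged sketch of those discrepancy scores: given reconstruction ensembles from a module trained only on normal images (recs_a) and a module that also saw unlabeled images (recs_b), pixels are scored by inter-module and intra-module disagreement. This is an illustrative reading, not the paper's exact formulation.

```python
# Inter- and intra-module discrepancy scores over reconstruction ensembles.
import torch

def discrepancy_scores(recs_a: torch.Tensor, recs_b: torch.Tensor):
    # recs_a, recs_b: (K, H, W) stacks of K reconstructions per module
    inter = (recs_a.mean(0) - recs_b.mean(0)).abs()  # disagreement across modules
    intra = recs_a.std(0)                            # disagreement inside the normal-only module
    return inter, intra
```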
1 code implementation • 4 May 2022 • Jeffry Wicaksana, Zengqiang Yan, Dong Zhang, Xijie Huang, Huimin Wu, Xin Yang, Kwang-Ting Cheng
To relax this assumption, in this work, we propose a label-agnostic unified federated learning framework, named FedMix, for medical image segmentation based on mixed image labels.
1 code implementation • 21 Mar 2022 • Shichao Li, Zechun Liu, Zhiqiang Shen, Kwang-Ting Cheng
We propose a new object-centric framework for learning-based stereo 3D object detection.
1 code implementation • 16 Feb 2022 • Yi Lin, Zhiyong Qu, Hao Chen, Zhongke Gao, Yuexiang Li, Lili Xia, Kai Ma, Yefeng Zheng, Kwang-Ting Cheng
Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images, transforming hematoxylin component images into H&E-stained images to better capture the relationship between the nuclei and cytoplasm.
1 code implementation • CVPR 2022 • Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric Xing
This paper explores the feasibility of finding an optimal sub-model from a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework.
no code implementations • 3 Dec 2021 • Zechun Liu, Zhiqiang Shen, Yun Long, Eric Xing, Kwang-Ting Cheng, Chas Leichner
We identify that the NAS task requires synthesized data (we target the image domain here) with enough semantics, diversity, and a minimal domain gap from natural images.
1 code implementation • CVPR 2022 • Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric Xing, Zhiqiang Shen
The nonuniform quantization strategy for compressing neural networks usually achieves better performance than its counterpart, i.e., uniform strategy, due to its superior representational capacity.
1 code implementation • 25 Nov 2021 • Shichao Li, Xijie Huang, Zechun Liu, Kwang-Ting Cheng
We present a new learning-based framework S-3D-RCNN that can recover accurate object orientation in SO(3) and simultaneously predict implicit rigid shapes from stereo RGB images.
1 code implementation • 22 Nov 2021 • Huimin Wu, Xiaomeng Li, Kwang-Ting Cheng
A stage-adaptive contrastive learning method is proposed, containing a boundary-aware contrastive loss that takes advantage of the labeled images in the first stage, as well as a prototype-aware contrastive loss to optimize both labeled and pseudo labeled images in the second stage.
no code implementations • 10 Nov 2021 • Yi Lin, Jianchao Su, Xiang Wang, Xiang Li, Jingen Liu, Kwang-Ting Cheng, Xin Yang
We have evaluated our approach using the 20 CTPA test dataset from the PE challenge, achieving a sensitivity of 78.9%, 80.7% and 80.7% at 2 false positives per volume at 0mm, 2mm and 5mm localization error, which is superior to the state-of-the-art methods.
no code implementations • 7 Jul 2021 • Dawen Xu, Meng He, Cheng Liu, Ying Wang, Long Cheng, Huawei Li, Xiaowei Li, Kwang-Ting Cheng
It incorporates the remote AIoT processor with soft errors into the training loop, such that the on-site computing errors can be learned with the application data on the server and the retrained models can be made resilient to the soft errors.
no code implementations • 21 Jun 2021 • Zechun Liu, Zhiqiang Shen, Shichao Li, Koen Helwegen, Dong Huang, Kwang-Ting Cheng
We show that the regularization effect of second-order momentum in Adam is crucial to revitalize the weights that are dead due to the activation saturation in BNNs.
1 code implementation • 12 Jun 2021 • Yi Lin, Yanfei Liu, Hao Chen, Xin Yang, Kai Ma, Yefeng Zheng, Kwang-Ting Cheng
To mitigate the complexity introduced by the model ensemble, we adopt the teacher-student paradigm, leveraging the diverse outputs from multiple learned networks as supervisory signals to guide the training of the student network.
no code implementations • ICLR 2021 • Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, Marios Savvides
This work aims to empirically clarify a recently discovered perspective that label smoothing is incompatible with knowledge distillation.
1 code implementation • CVPR 2021 • Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
In this paper, we focus on this more difficult scenario: learning networks where both weights and activations are binary, meanwhile, without any human annotated labels.
no code implementations • 8 Feb 2021 • Zhiqiang Shen, Zechun Liu, Jie Qin, Marios Savvides, Kwang-Ting Cheng
A common practice for this task is to train a model on the base set first and then transfer to novel classes through fine-tuning (here, the fine-tuning procedure is defined as transferring knowledge from base to novel data, i.e., learning to transfer in the few-shot scenario).
1 code implementation • CVPR 2021 • Shichao Li, Zengqiang Yan, Hongyang Li, Kwang-Ting Cheng
The latter question motivates us to incorporate geometry knowledge with a new loss function based on a projective invariant.
Ranked #1 on Vehicle Pose Estimation on KITTI Cars Hard
1 code implementation • CVPR 2020 • Shichao Li, Lei Ke, Kevin Pratama, Yu-Wing Tai, Chi-Keung Tang, Kwang-Ting Cheng
End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail for unseen poses with limited and fixed training data.
Ranked #13 on Weakly-supervised 3D Human Pose Estimation on Human3.6M
no code implementations • 18 May 2020 • Zechun Liu, Xiangyu Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun
To tackle these three naturally different dimensions, we propose a general framework by defining pruning as seeking the best pruning vector (i.e., the numerical values of layer-wise channel number, spatial size, and depth) and constructing a unique mapping from the pruning vector to the pruned network structures.
no code implementations • CVPR 2020 • Hai Phan, Zechun Liu, Dang Huynh, Marios Savvides, Kwang-Ting Cheng, Zhiqiang Shen
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs), assuming an approximately optimal trade-off between computational cost and model accuracy.
4 code implementations • ECCV 2020 • Zechun Liu, Zhiqiang Shen, Marios Savvides, Kwang-Ting Cheng
In this paper, we propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
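One such idea in this line of work is a binary activation with a learnable channel-wise shift before the sign, trained with a clipped straight-through estimator; the sketch below follows the RSign activation associated with this family of methods, with shapes and details as illustrative assumptions.

```python
# A hedged RSign-style binary activation sketch.
import torch
import torch.nn as nn

class BinarySign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # clipped straight-through gradient

class RSign(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))  # learnable shift

    def forward(self, x):
        return BinarySign.apply(x - self.beta)
```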
2 code implementations • 28 Aug 2019 • Shichao Li, Kwang-Ting Cheng
Residual representation learning simplifies the optimization problem of learning complex functions and has been widely used by traditional convolutional neural networks; a minimal sketch follows below.
Ranked #12 on Age Estimation on CACD
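A minimal residual block sketch: the convolutional body learns a correction F(x) added to an identity shortcut, which is the residual representation the entry above builds on. This is a generic block, not the paper's architecture.

```python
# A generic residual block: output = ReLU(x + F(x)).
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity shortcut + learned residual
```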
3 code implementations • NeurIPS 2019 • Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, Roeland Nusselder
Together, the redefinition of latent weights as inertia and the introduction of Bop enable a better understanding of BNN optimization and open up the way for further improvements in training methodologies for BNNs.
1 code implementation • 19 Apr 2019 • Shichao Li, Kwang-Ting Cheng
Deep neural decision forest (NDF) achieved remarkable performance on various vision tasks via combining decision tree and deep representation learning.
no code implementations • 17 Dec 2018 • Zhiwei Wang, Yi Lin, Kwang-Ting Cheng, Xin Yang
Experimental results show that our method can effectively synthesize a large variety of mpMRI images which contain meaningful CS PCa lesions, display a good visual quality and have the correct paired relationship.
Task: Synthesizing Multi-Parameter Magnetic Resonance Imaging (mpMRI) Data
1 code implementation • 4 Nov 2018 • Zechun Liu, Wenhan Luo, Baoyuan Wu, Xin Yang, Wei Liu, Kwang-Ting Cheng
To address the training difficulty, we propose a training algorithm using a tighter approximation to the derivative of the sign function, a magnitude-aware gradient for weight updating, a better initialization method, and a two-step scheme for training a deep network.
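The tighter approximation to the sign derivative mentioned above can be sketched as follows: the forward pass remains sign(x), while the backward pass uses a piecewise-polynomial gradient (2 + 2x on [-1, 0) and 2 - 2x on [0, 1), zero elsewhere), following the Bi-Real Net formulation.

```python
# Sign forward with a piecewise-polynomial surrogate gradient in backward.
import torch

class ApproxSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        grad = torch.zeros_like(x)
        grad = torch.where((x >= -1) & (x < 0), 2 + 2 * x, grad)
        grad = torch.where((x >= 0) & (x < 1), 2 - 2 * x, grad)
        return grad_out * grad
```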
4 code implementations • ECCV 2018 • Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, Kwang-Ting Cheng
In this work, we study the 1-bit convolutional neural networks (CNNs), of which both the weights and activations are binary.