no code implementations • 5 Oct 2023 • Shaotian Yan, Chen Shen, Junjie Liu, Jieping Ye
When presented with concise and well-organized proofs, LLMs exhibit stronger deductive reasoning, and the risk of errors introduced by excessive reasoning stages is mitigated.
1 code implementation • 11 Aug 2023 • Yang Zhang, Chenyun Xiong, Junjie Liu, Xuhui Ye, Guodong Sun
Efficient RGB-D semantic segmentation has received considerable attention in mobile robotics, where it plays a vital role in analyzing and recognizing environmental information.
Ranked #57 on Semantic Segmentation on NYU Depth v2
1 code implementation • ICCV 2023 • Tao Ma, Xuemeng Yang, Hongbin Zhou, Xin Li, Botian Shi, Junjie Liu, Yuchen Yang, Zhizheng Liu, Liang He, Yu Qiao, Yikang Li, Hongsheng Li
Extensive experiments on Waymo Open Dataset show our DetZero outperforms all state-of-the-art onboard and offboard 3D detection methods.
no code implementations • 12 May 2023 • Junjie Liu, Junlong Liu, Rongxin Jiang, Yaowu Chen, Chen Shen, Jieping Ye
Then, SLS-MPC proposes a novel self-learning probability function, requiring no prior knowledge or hyper-parameters, to learn each view's individual distribution.
no code implementations • 25 Mar 2023 • Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, Xianglong Liu
We find the main reasons lie in (1) the existing calibration metric is inaccurate in measuring the quantization influence for extremely low-bit representation, and (2) the existing quantization paradigm is unfriendly to the power-law distribution of Softmax.
1 code implementation • CVPR 2022 • Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Zhenhua Chai, Jianxin Wu
Hence, previous methods optimize the compressed model layer by layer, trying to make every layer produce the same outputs as the corresponding layer in the teacher model, which is cumbersome.
no code implementations • CVPR 2022 • Junjie Liu, Junlong Liu, Shaotian Yan, Rongxin Jiang, Xiang Tian, Boxuan Gu, Yaowu Chen, Chen Shen, Jianqiang Huang
Despite promising progress, two challenges of multi-view clustering (MVC) still await better solutions: i) most existing methods are either unqualified for, or require additional steps to handle, incomplete multi-view clustering; and ii) noise or outliers may significantly degrade the overall clustering performance.
1 code implementation • 2 Aug 2021 • Kaiming Cui, Junjie Liu, Fabo Feng, Jifeng Liu
In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field.
no code implementations • 29 Jul 2021 • Jing Ma, Yiyang Sun, Junjie Liu, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu
The experimental results showed that the QIDNN model with 7 interactive features achieves state-of-the-art accuracy of $83.25\%$.
no code implementations • 29 May 2021 • Junjie Liu, Yiyang Sun, Jing Ma, Jiachen Tu, Yuhui Deng, Ping He, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu
Evaluating the risk of stroke is important for the prevention and treatment of stroke in China.
no code implementations • 29 Apr 2021 • Tse-Wei Chen, Motoki Yoshinaga, Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Kinya Osa, Masami Kato
Lightweight convolutional neural networks are an important research topic in the field of embedded vision.
Ranked #1 on Face Detection on FDDB (Accuracy metric)
no code implementations • 29 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
In this paper, we provide an explicit convex optimization example in which training BNNs with traditional adaptive optimization methods still risks non-convergence, and we identify that constraining the range of gradients is critical for optimizing deep binary models and avoiding highly suboptimal solutions.
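The idea of constraining the gradient range can be sketched as a clipping step inserted before an Adam-style update of the latent (real-valued) weights of a BNN. This is a minimal NumPy illustration under our own assumptions; the function names and the scalar `clip` knob are hypothetical and not taken from the paper:

```python
import numpy as np

def adam_step_with_clip(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999,
                        eps=1e-8, clip=1.0):
    """One Adam-style update where the raw gradient is first clipped
    to [-clip, clip]; the clip range is the (hypothetical) knob that
    constrains gradient magnitude for the latent binary weights."""
    g = np.clip(g, -clip, clip)
    m = b1 * m + (1 - b1) * g          # first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def binarize(w):
    """Forward-time binarization: sign of the latent weight."""
    return np.where(w >= 0, 1.0, -1.0)
```

Without the `np.clip` line, a single outlier gradient can swing the adaptive second-moment estimate and push the latent weights far from the binarization boundary; clipping bounds each step regardless of the raw gradient scale.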
no code implementations • 10 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
Despite the achievements of recent binarization methods in reducing the performance degradation of Binary Neural Networks (BNNs), the gradient mismatch caused by the Straight-Through Estimator (STE) still dominates quantized networks.
Ranked #930 on Image Classification on ImageNet
1 code implementation • ICLR 2020 • Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. -H. So
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.
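The core mechanism of pruning with a trainable threshold can be sketched as follows. This is a simplified hard-mask illustration under our own assumptions (a single scalar threshold per layer; the actual algorithm trains its thresholds by backpropagation through a step-function approximation):

```python
import numpy as np

def prune_mask(w, threshold):
    """Binary mask keeping weights whose magnitude exceeds the
    layer's threshold; in Dynamic Sparse Training the threshold
    itself is a trainable parameter rather than a fixed constant."""
    return (np.abs(w) > threshold).astype(w.dtype)

def masked_forward(w, threshold):
    """Forward pass with pruned weights: masked entries contribute
    zero, so sparsity emerges jointly with weight optimization."""
    return w * prune_mask(w, threshold)
```

Because the mask is recomputed from the current weights and threshold at every step, a weight pruned early can recover later if its magnitude grows, which is what makes the sparse structure dynamic rather than fixed after a one-shot pruning pass.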
no code implementations • 13 Nov 2019 • Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Tse-Wei Chen, Kinya Osa, Masami Kato
Firstly, we employ weights with duplicated channels for the weight-intensive layers to reduce the model size.
Ranked #1 on Face Detection on WIDER Face (GFLOPs metric)
no code implementations • 13 Nov 2019 • Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato
Although recent works on knowledge distillation (KD) have achieved further improvements by elaborately modeling the decision boundary as posterior knowledge, their performance still depends on the hypothesis that the target network has a powerful capacity (representation ability).
Ranked #182 on Image Classification on CIFAR-10 (using extra training data)