Search Results for author: Junjie Liu

Found 16 papers, 5 papers with code

Concise and Organized Perception Facilitates Large Language Models for Deductive Reasoning

no code implementations • 5 Oct 2023 • Shaotian Yan, Chen Shen, Junjie Liu, Jieping Ye

By perceiving concise and organized proofs, the deductive reasoning abilities of LLMs can be better elicited, and the risk of acquiring errors caused by excessive reasoning stages is mitigated.

LAMBADA

Spatial-information Guided Adaptive Context-aware Network for Efficient RGB-D Semantic Segmentation

1 code implementation • 11 Aug 2023 • Yang Zhang, Chenyun Xiong, Junjie Liu, Xuhui Ye, Guodong Sun

Efficient RGB-D semantic segmentation has received considerable attention in mobile robots, which plays a vital role in analyzing and recognizing environmental information.

Segmentation • Semantic Segmentation

Self-Learning Symmetric Multi-view Probabilistic Clustering

no code implementations • 12 May 2023 • Junjie Liu, Junlong Liu, Rongxin Jiang, Yaowu Chen, Chen Shen, Jieping Ye

SLS-MPC then proposes a novel self-learning probability function, requiring no prior knowledge or hyper-parameters, to learn each view's individual distribution.

Clustering • Incomplete multi-view clustering • +1

Towards Accurate Post-Training Quantization for Vision Transformer

no code implementations • 25 Mar 2023 • Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, Xianglong Liu

We find the main reasons lie in (1) the existing calibration metric is inaccurate in measuring the quantization influence for extremely low-bit representation, and (2) the existing quantization paradigm is unfriendly to the power-law distribution of Softmax.
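The second observation, that uniform quantization handles the power-law output of Softmax poorly, can be illustrated with a minimal numpy sketch (not the paper's method): post-Softmax attention values cluster near zero with a long tail, so a uniform grid wastes most of its levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy attention logits: after Softmax the values follow a long-tailed,
# power-law-like distribution concentrated near zero.
logits = rng.normal(0.0, 3.0, size=(1000, 64))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def uniform_quantize(x, bits):
    """Uniform affine quantization over [x.min(), x.max()]."""
    levels = 2 ** bits - 1
    scale = (x.max() - x.min()) / levels
    return np.round((x - x.min()) / scale) * scale + x.min()

# At extremely low bit-widths, most of the small attention weights
# collapse onto a single quantization level, inflating the error.
err4 = np.abs(uniform_quantize(attn, 4) - attn).mean()
err8 = np.abs(uniform_quantize(attn, 8) - attn).mean()
```

A calibration metric fit on such data at 4 bits behaves very differently than at 8 bits, which is consistent with the paper's claim that low-bit calibration needs its own treatment.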

Model Compression • Quantization

Compressing Models with Few Samples: Mimicking then Replacing

1 code implementation • CVPR 2022 • Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Zhenhua Chai, Jianxin Wu

Hence, previous methods optimize the compressed model layer-by-layer and try to make every layer have the same outputs as the corresponding layer in the teacher model, which is cumbersome.
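The layer-by-layer objective the snippet criticizes is simply a per-layer feature-matching loss. A minimal sketch of that baseline (not the mimicking-then-replacing method itself):

```python
import numpy as np

def layerwise_mimic_loss(student_feat, teacher_feat):
    """Per-layer mimicking objective: force a compressed (student) layer's
    output to match the corresponding teacher layer's output via MSE.
    The paper argues optimizing every layer this way is cumbersome and
    proposes mimicking-then-replacing instead."""
    return float(np.mean((student_feat - teacher_feat) ** 2))
```

In the few-sample setting, this loss would be evaluated on the handful of available inputs for each layer in turn.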

MPC: Multi-View Probabilistic Clustering

no code implementations • CVPR 2022 • Junjie Liu, Junlong Liu, Shaotian Yan, Rongxin Jiang, Xiang Tian, Boxuan Gu, Yaowu Chen, Chen Shen, Jianqiang Huang

Despite promising progress, two challenges of multi-view clustering (MVC) remain open: i) most existing methods are either not qualified for, or require additional steps to handle, incomplete multi-view clustering, and ii) noise or outliers can significantly degrade overall clustering performance.

Clustering • Incomplete multi-view clustering

Identify Light-Curve Signals with Deep Learning Based Object Detection Algorithm. I. Transit Detection

1 code implementation • 2 Aug 2021 • Kaiming Cui, Junjie Liu, Fabo Feng, Jifeng Liu

In this work, we develop a novel detection algorithm based on a well-proven object-detection framework from the computer vision field.

Object Detection

Multi-objective optimization and explanation for stroke risk assessment in Shanxi province

no code implementations • 29 Jul 2021 • Jing Ma, Yiyang Sun, Junjie Liu, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu

The experimental results showed that the QIDNN model with 7 interactive features achieves a state-of-the-art accuracy of 83.25%.

BAMSProd: A Step towards Generalizing the Adaptive Optimization Methods to Deep Binary Model

no code implementations • 29 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

In this paper, we provide an explicit convex-optimization example in which training BNNs with traditional adaptive optimization methods still risks non-convergence, and we identify that constraining the range of gradients is critical for optimizing deep binary models and avoiding highly suboptimal solutions.
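The idea of constraining the gradient range inside an adaptive update can be sketched as follows; this is an illustrative Adam-style step with clipping, not BAMSProd itself, and the bound `g_bound` is a hypothetical value:

```python
import numpy as np

def adam_step_clipped(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
                      eps=1e-8, g_bound=1.0):
    """One Adam-style update in which the gradient range is constrained
    before the moment estimates are formed. g_bound is an illustrative
    clipping range, not a value taken from the paper."""
    g = np.clip(g, -g_bound, g_bound)   # constrain the gradient range
    m = b1 * m + (1 - b1) * g           # first-moment estimate
    v = b2 * v + (1 - b2) * g * g       # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

For binary models, the intuition is that unbounded adaptive steps can repeatedly flip weight signs; bounding the gradients keeps the effective step size under control.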

Quantization

QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework

no code implementations • 10 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Despite the achievements of recent binarization methods on reducing the performance degradation of Binary Neural Networks (BNNs), gradient mismatching caused by the Straight-Through-Estimator (STE) still dominates quantized networks.
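The STE mismatch the snippet refers to is easy to see in a sketch: the forward pass uses a non-differentiable sign(), while the backward pass substitutes a surrogate gradient. This is the standard STE baseline the paper argues against, not the QuantNet method:

```python
import numpy as np

def binarize_forward(w):
    """Forward pass: real-valued weights are binarized to {-1, +1}."""
    return np.sign(np.where(w == 0, 1.0, w))

def ste_backward(w, grad_out, clip=1.0):
    """Straight-Through Estimator backward pass: the zero-almost-everywhere
    true gradient of sign() is replaced by the incoming gradient, typically
    passed through only where |w| <= clip. The gap between this surrogate
    and the true gradient is the mismatch QuantNet's fully differentiable
    framework is designed to avoid."""
    return grad_out * (np.abs(w) <= clip)
```

Note that weights with |w| > clip receive zero gradient and stop updating, one of several known pathologies of the STE.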

Binarization • Image Classification • +1

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

1 code implementation • ICLR 2020 • Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. -H. So

We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.
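The masking mechanism described above can be sketched in a few lines; this is a simplified illustration in the spirit of the paper, with gradient flow through the threshold (handled by a straight-through estimator in the actual method) omitted:

```python
import numpy as np

class MaskedLayer:
    """Trainable-threshold masked layer sketch: weights whose magnitude
    falls below a per-layer threshold are zeroed out. In Dynamic Sparse
    Training the threshold itself is a trainable parameter updated
    jointly with the weights; here it is a fixed illustrative value."""

    def __init__(self, weight, threshold=0.1):
        self.weight = weight
        self.threshold = threshold  # trainable in the real algorithm

    def effective_weight(self):
        mask = (np.abs(self.weight) > self.threshold).astype(self.weight.dtype)
        return self.weight * mask

    def sparsity(self):
        w = self.effective_weight()
        return 1.0 - np.count_nonzero(w) / w.size
```

Because the threshold is learned per layer, each layer can settle on a different sparsity level during the single unified optimization process.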

Network Pruning

DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection

no code implementations • 13 Nov 2019 • Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Tse-Wei Chen, Kinya Osa, Masami Kato

Firstly, we employ weights with duplicated channels for the weight-intensive layers to reduce the model size.
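The channel-duplication idea can be sketched as follows; the duplication factor and the choice of layers are illustrative assumptions, not values from the paper:

```python
import numpy as np

def duplicate_channels(stored, factor=2):
    """DupNet-style weight-sharing sketch: only `stored` unique output
    channels are kept in memory, and each is repeated `factor` times to
    rebuild the full layer, so the layer stores 1/factor of its weights.
    factor=2 is an illustrative choice."""
    return np.repeat(stored, factor, axis=0)
```

At inference time the full weight tensor is reconstructed on the fly, trading a small amount of compute for a smaller stored model.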

 Ranked #1 on Face Detection on WIDER Face (GFLOPs metric)

Face Detection • Quantization

Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation

no code implementations • 13 Nov 2019 • Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Although recent works on knowledge distillation (KD) have achieved further improvements by elaborately modeling the decision boundary as posterior knowledge, their performance still depends on the hypothesis that the target network has a powerful capacity (representation ability).
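For context, the standard Hinton-style distillation term that such works build on is sketched below; the paper replaces this dense posterior with a sparse representation of the prior knowledge, which is not reproduced here:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Classic KD objective: KL divergence between the teacher's and the
    student's temperature-softened output distributions, scaled by T^2.
    T=4.0 is an illustrative temperature."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return float(kl * T * T)
```

The capacity hypothesis the snippet mentions arises because the student must be expressive enough to match the teacher's full softened distribution.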

Ranked #182 on Image Classification on CIFAR-10 (using extra training data)

Image Classification • Knowledge Distillation
