Search Results for author: Junjie Liu

Found 23 papers, 8 papers with code

Diffusion-4K: Ultra-High-Resolution Image Synthesis with Latent Diffusion Models

1 code implementation • CVPR 2025 • Jinjin Zhang, Qiuyu Huang, Junjie Liu, Xiefan Guo, Di Huang

In this paper, we present Diffusion-4K, a novel framework for direct ultra-high-resolution image synthesis using text-to-image diffusion models.

4k • Image Generation

Don't Take Things Out of Context: Attention Intervention for Enhancing Chain-of-Thought Reasoning in Large Language Models

no code implementations • 14 Mar 2025 • Shaotian Yan, Chen Shen, Wenxiao Wang, Liang Xie, Junjie Liu, Jieping Ye

Few-shot Chain-of-Thought (CoT) significantly enhances the reasoning capabilities of large language models (LLMs), functioning as a whole to guide these models in generating reasoning steps toward final answers.

Knowledge Distillation with Adapted Weight

no code implementations • 6 Jan 2025 • Sirong Wu, Xi Luo, Junjie Liu, Yuhui Deng

Although large models have shown a strong capacity to solve large-scale problems in many areas including natural language and computer vision, their voluminous parameters are hard to deploy in a real-time system due to computational and energy constraints.

4k • Fairness +1
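The snippet above gives only the motivation, so here is a minimal, hypothetical sketch of what per-sample weighted distillation can look like. The confidence-based weighting below is an illustrative assumption, not the paper's actual scheme.

```python
# Hypothetical sketch of per-sample weighted knowledge distillation.
# The weighting rule (teacher confidence on the true label) is an
# assumption for illustration, not the paper's adapted-weight method.
import torch
import torch.nn.functional as F

def weighted_kd_loss(student_logits, teacher_logits, labels, T=4.0):
    # Soft targets from the teacher, smoothed by temperature T.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # Per-sample KL divergence (reduction deferred so we can weight it).
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=1)
    # Assumed weighting: emphasize samples the teacher is confident on.
    weights = p_teacher.gather(1, labels.unsqueeze(1)).squeeze(1).detach()
    ce = F.cross_entropy(student_logits, labels)
    return ce + (weights * kl).mean() * T * T

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(weighted_kd_loss(student_logits, teacher_logits, labels))
```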

NeutraSum: A Language Model can help a Balanced Media Diet by Neutralizing News Summaries

no code implementations • 2 Jan 2025 • Xi Luo, Junjie Liu, Sirong Wu, Yuhui Deng

Media bias in news articles arises from the political polarisation of media outlets, which can reinforce societal stereotypes and beliefs.

Language Modeling • Language Modelling

Seeking the Sufficiency and Necessity Causal Features in Multimodal Representation Learning

no code implementations • 29 Aug 2024 • BoYu Chen, Junjie Liu, Zhu Li, Mengyue Yang

We address these challenges by first conceptualizing multimodal representations as comprising modality-invariant and modality-specific components.

Representation Learning
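As a toy illustration of the decomposition described above, the sketch below splits each modality's embedding into an invariant and a specific component and aligns the invariant parts of paired samples. The architecture and alignment loss are assumptions, not the paper's model.

```python
# Minimal sketch: encode each modality into a modality-invariant part
# (aligned across modalities) plus a modality-specific part. Both the
# module and the MSE alignment loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPartEncoder(nn.Module):
    def __init__(self, in_dim, inv_dim=32, spec_dim=32):
        super().__init__()
        self.invariant = nn.Linear(in_dim, inv_dim)  # shared semantics
        self.specific = nn.Linear(in_dim, spec_dim)  # modality detail

    def forward(self, x):
        return self.invariant(x), self.specific(x)

img_enc, txt_enc = TwoPartEncoder(512), TwoPartEncoder(300)
img, txt = torch.randn(4, 512), torch.randn(4, 300)
img_inv, img_spec = img_enc(img)
txt_inv, txt_spec = txt_enc(txt)
# Pull the invariant components of paired samples together.
align_loss = F.mse_loss(img_inv, txt_inv)
print(align_loss)
```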

Face Clustering via Early Stopping and Edge Recall

1 code implementation • 24 Aug 2024 • Junjie Liu

An efficient and effective neighbor-based edge probability and a novel early stopping strategy are proposed in FC-ES, guaranteeing the accuracy and recall of large-scale face clustering simultaneously.

Clustering • Face Clustering
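A rough sketch of a neighbor-based edge probability of the kind named above: the chance that two faces share an identity is scored by the overlap of their k-nearest-neighbor sets. The scoring rule and the threshold are illustrative assumptions, not FC-ES itself.

```python
# Sketch: estimate the edge probability between two faces from the
# overlap of their k-NN sets. Scoring rule and threshold are assumed
# for illustration, not FC-ES's actual formulation.
import numpy as np

def knn_edge_probability(feats, k=5):
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = feats @ feats.T
    # Each sample's k nearest neighbors, excluding itself (column 0).
    nbrs = np.argsort(-sims, axis=1)[:, 1:k + 1]
    n = len(feats)
    prob = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            shared = len(set(nbrs[i]) & set(nbrs[j]))
            prob[i, j] = prob[j, i] = shared / k
    return prob

feats = np.random.randn(10, 128)
prob = knn_edge_probability(feats)
edges = np.argwhere(prob > 0.6)  # keep high-probability edges only
print(len(edges))
```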

Multiple Prior Representation Learning for Self-Supervised Monocular Depth Estimation via Hybrid Transformer

1 code implementation • 13 Jun 2024 • Guodong Sun, Junjie Liu, Mingxuan Liu, Moyun Liu, Yang Zhang

To address these challenges, we introduce a novel self-supervised monocular depth estimation model that leverages multiple priors to bolster representation capabilities across spatial, context, and semantic dimensions.

Decoder • Monocular Depth Estimation +1

Spatial-information Guided Adaptive Context-aware Network for Efficient RGB-D Semantic Segmentation

1 code implementation • 11 Aug 2023 • Yang Zhang, Chenyun Xiong, Junjie Liu, Xuhui Ye, Guodong Sun

Efficient RGB-D semantic segmentation has received considerable attention in mobile robotics, where it plays a vital role in analyzing and recognizing environmental information.

Decoder • Segmentation +1

Self-Learning Symmetric Multi-view Probabilistic Clustering

no code implementations • 12 May 2023 • Junjie Liu, Junlong Liu, Rongxin Jiang, Yaowu Chen, Chen Shen, Jieping Ye

SLS-MPC then proposes a novel self-learning probability function that learns each view's individual distribution without any prior knowledge or hyper-parameters.

Clustering • Incomplete multi-view clustering +1

Towards Accurate Post-Training Quantization for Vision Transformer

no code implementations • 25 Mar 2023 • Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, Xianglong Liu

We find the main reasons lie in (1) the existing calibration metric is inaccurate in measuring the quantization influence for extremely low-bit representation, and (2) the existing quantization paradigm is unfriendly to the power-law distribution of Softmax.

Model Compression • Quantization
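To make the calibration idea concrete, the sketch below quantizes one linear layer at a time and measures the resulting output error over a small calibration batch. The uniform quantizer and plain MSE metric are exactly the kind of generic choices the finding above says become inaccurate at extremely low bit-widths; they are illustrative, not the paper's method.

```python
# Sketch of a calibration pass: quantize each layer's weights in
# isolation and score its influence on the network output over a small
# calibration set. Quantizer and metric are generic illustrations.
import torch
import torch.nn as nn

def uniform_quantize(w, bits=4):
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
calib = torch.randn(64, 16)  # small calibration batch
with torch.no_grad():
    ref = model(calib)
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            original = module.weight.data.clone()
            module.weight.data = uniform_quantize(original)
            err = (model(calib) - ref).pow(2).mean().item()
            module.weight.data = original  # restore full precision
            print(f"{name}: output MSE under 4-bit weights = {err:.6f}")
```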

Compressing Models with Few Samples: Mimicking then Replacing

1 code implementation • CVPR 2022 • Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Zhenhua Chai, Jianxin Wu

Hence, previous methods optimize the compressed model layer-by-layer and try to make every layer have the same outputs as the corresponding layer in the teacher model, which is cumbersome.
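A toy sketch of the mimic-then-replace idea suggested by the title: a cheaper block is first trained to reproduce a teacher block's output features, then swapped in. The modules and training schedule below are assumptions for illustration, not the paper's recipe.

```python
# Toy "mimicking then replacing": train a small block to match a teacher
# block's features, then substitute it into the network. All modules and
# the schedule here are assumed for illustration.
import torch
import torch.nn as nn

teacher_block = nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
student_block = nn.Conv2d(8, 8, 3, padding=1)  # cheaper replacement

opt = torch.optim.Adam(student_block.parameters(), lr=1e-3)
for _ in range(100):  # mimic: match the teacher block's features
    x = torch.randn(4, 8, 16, 16)
    with torch.no_grad():
        target = teacher_block(x)
    loss = (student_block(x) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Replace: the trained block now stands in for the teacher block.
compressed = nn.Sequential(student_block)
print(loss.item())
```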

MPC: Multi-View Probabilistic Clustering

no code implementations • CVPR 2022 • Junjie Liu, Junlong Liu, Shaotian Yan, Rongxin Jiang, Xiang Tian, Boxuan Gu, Yaowu Chen, Chen Shen, Jianqiang Huang

Despite promising progress, two challenges of multi-view clustering (MVC) still await better solutions: i) most existing methods are either not qualified for, or require additional steps to handle, incomplete multi-view clustering, and ii) noise or outliers might significantly degrade the overall clustering performance.

Clustering • Incomplete multi-view clustering

Identify Light-Curve Signals with Deep Learning Based Object Detection Algorithm. I. Transit Detection

1 code implementation • 2 Aug 2021 • Kaiming Cui, Junjie Liu, Fabo Feng, Jifeng Liu

In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field.

object-detection • Object Detection
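One hedged way to picture this framing: render the light curve as a 2D image and hand it to an off-the-shelf detector that localizes transit dips as boxes. The rendering scheme and the choice of Faster R-CNN below are assumptions, not the paper's pipeline, and the untrained model would need fine-tuning on labeled transits before its boxes mean anything.

```python
# Sketch: treat transit search as object detection by rasterizing the
# light curve into an image. Rendering and detector choice are assumed.
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Synthetic light curve with a box-shaped transit dip.
time = np.linspace(0, 1, 512)
flux = 1.0 + 0.001 * np.random.randn(512)
flux[200:230] -= 0.01

# Render flux vs. time as a sparse 2D image (rows = flux bins).
img = np.zeros((128, 512), dtype=np.float32)
rows = np.clip(((1.02 - flux) / 0.04 * 127).astype(int), 0, 127)
img[rows, np.arange(512)] = 1.0
x = torch.from_numpy(img).unsqueeze(0).repeat(3, 1, 1)  # 3-channel

model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                num_classes=2)  # background + transit
model.eval()
with torch.no_grad():
    out = model([x])[0]  # untrained here; fine-tuning is required
print(out["boxes"].shape)
```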

Multi-objective optimization and explanation for stroke risk assessment in Shanxi province

no code implementations • 29 Jul 2021 • Jing Ma, Yiyang Sun, Junjie Liu, Huaxiong Huang, Xiaoshuang Zhou, Shixin Xu

The experimental results showed that the QIDNN model with 7 interactive features achieves a state-of-the-art accuracy of 83.25%.

Prediction

BAMSProd: A Step towards Generalizing the Adaptive Optimization Methods to Deep Binary Model

no code implementations • 29 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

In this paper, we provide an explicit convex optimization example where training the BNNs with the traditionally adaptive optimization methods still faces the risk of non-convergence, and identify that constraining the range of gradients is critical for optimizing the deep binary model to avoid highly suboptimal solutions.

Quantization
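The finding above suggests bounding gradients before the adaptive update; a minimal sketch with a toy binarized linear layer follows. The clipping bound and the STE-style forward pass are illustrative assumptions, not BAMSProd.

```python
# Sketch: when training a binary model with an adaptive optimizer,
# constrain the gradient range before the update. Bound and setup are
# assumptions for illustration, not the BAMSProd algorithm.
import torch
import torch.nn as nn

model = nn.Linear(32, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))
# Forward with binarized weights via a straight-through identity.
w = model.weight
w_bin = torch.sign(w.detach()) + w - w.detach()
loss = nn.functional.cross_entropy(x @ w_bin.t() + model.bias, y)

opt.zero_grad()
loss.backward()
# Constrain the gradient range so the adaptive step stays bounded.
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.1)
opt.step()
print(loss.item())
```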

QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework

no code implementations • 10 Sep 2020 • Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Despite the achievements of recent binarization methods on reducing the performance degradation of Binary Neural Networks (BNNs), gradient mismatching caused by the Straight-Through-Estimator (STE) still dominates quantized networks.

Binarization • Image Classification +1
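A minimal sketch of sidestepping the STE with a fully differentiable binarizer: tanh(beta * w) approaches sign(w) as beta grows, so gradients flow at every step. The annealing schedule is an assumption for illustration, not QuantNet itself.

```python
# Sketch: a smooth, fully differentiable surrogate for sign(w).
# Annealing beta hardens the binarization without ever needing the
# Straight-Through Estimator. Schedule is assumed, not QuantNet's.
import torch

def soft_binarize(w, beta):
    return torch.tanh(beta * w)  # differentiable surrogate of sign(w)

w = torch.randn(5, requires_grad=True)
for beta in [1.0, 5.0, 25.0]:  # anneal toward a hard sign function
    b = soft_binarize(w, beta)
    b.sum().backward()          # gradients exist at every beta
    print(beta, b.detach().round(decimals=2), w.grad.norm().item())
    w.grad = None               # reset between illustrative steps
```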

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

1 code implementation • ICLR 2020 • Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. -H. So

We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.

Network Pruning
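In the spirit of the description above, here is a minimal masked layer with a trainable per-row pruning threshold. The straight-through mask and layer details are assumptions, not the paper's exact formulation.

```python
# Sketch: a linear layer whose pruning threshold is itself trainable.
# Weights below the learned threshold are masked; a straight-through
# trick keeps the mask differentiable. Details are assumed.
import torch
import torch.nn as nn

class ThresholdMaskedLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.threshold = nn.Parameter(torch.zeros(out_dim, 1))  # trainable

    def forward(self, x):
        hard = (self.weight.abs() > self.threshold).float()
        # Straight-through: hard mask forward, sigmoid gradient backward.
        soft = torch.sigmoid(self.weight.abs() - self.threshold)
        mask = hard.detach() + soft - soft.detach()
        return x @ (self.weight * mask).t()

layer = ThresholdMaskedLinear(16, 8)
out = layer(torch.randn(4, 16))
out.sum().backward()
print(layer.threshold.grad.shape)  # thresholds receive gradients too
```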

DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection

no code implementations • 13 Nov 2019 • Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Tse-Wei Chen, Kinya Osa, Masami Kato

Firstly, we employ weights with duplicated channels for the weight-intensive layers to reduce the model size.

Ranked #1 on Face Detection on WIDER Face (GFLOPs metric)

Face Detection • Quantization
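A small sketch of the duplicated-channel trick quoted above: store only half the output channels of a weight-heavy convolution and duplicate them at forward time, roughly halving that layer's weight storage. The shapes are illustrative assumptions, not DupNet's architecture.

```python
# Sketch: a convolution that stores half its output channels and
# duplicates them at forward time to cut weight storage. Shapes are
# illustrative, not DupNet's actual layers.
import torch
import torch.nn as nn

class DupChannelConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        assert out_ch % 2 == 0
        self.conv = nn.Conv2d(in_ch, out_ch // 2, k, padding=k // 2)

    def forward(self, x):
        y = self.conv(x)
        return torch.cat([y, y], dim=1)  # duplicate channels to out_ch

layer = DupChannelConv(16, 32)
full = nn.Conv2d(16, 32, 3, padding=1)
n_dup = sum(p.numel() for p in layer.parameters())
n_full = sum(p.numel() for p in full.parameters())
print(n_dup, n_full)  # roughly half the parameters
print(layer(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
```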

Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation

no code implementations • 13 Nov 2019 • Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Although recent works on knowledge distillation (KD) have achieved further improvements by elaborately modeling the decision boundary as posterior knowledge, their performance still depends on the hypothesis that the target network has a powerful capacity (representation ability).

Ranked #200 on Image Classification on CIFAR-10 (using extra training data)

Image Classification • Knowledge Distillation
