Search Results for author: Chunrui Han

Found 12 papers, 4 papers with code

Face Recognition with Contrastive Convolution

no code implementations ECCV 2018 Chunrui Han, Shiguang Shan, Meina Kan, Shuzhe Wu, Xilin Chen

In current face recognition approaches based on convolutional neural networks (CNNs), the two faces of a pair to be compared are fed into the CNN independently for feature extraction.
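The title's contrastive convolution hints at kernels that are generated from the pair itself rather than fixed. Below is a minimal PyTorch sketch of one such pair-conditioned convolution; the class name, the depthwise grouping trick, and all shapes are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairConditionedConv(nn.Module):
    """Toy pair-conditioned convolution: kernels are generated from one face's
    features and applied to the other's, making the comparison pair-specific."""

    def __init__(self, channels: int = 64, ksize: int = 3):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        # Kernel generator: pools a feature map into per-channel conv weights.
        self.generator = nn.Linear(channels, channels * ksize * ksize)

    def _kernels(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = feat.mean(dim=(2, 3))                  # (B, C) global context
        w = self.generator(pooled)                      # (B, C*k*k)
        return w.view(-1, 1, self.ksize, self.ksize)    # (B*C, 1, k, k) depthwise

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, c, h, w = feat_a.shape
        ka, kb = self._kernels(feat_a), self._kernels(feat_b)
        # Grouped conv applies each sample's partner-derived kernel per channel.
        out_a = F.conv2d(feat_a.reshape(1, b * c, h, w), kb, padding=1, groups=b * c)
        out_b = F.conv2d(feat_b.reshape(1, b * c, h, w), ka, padding=1, groups=b * c)
        return out_a.view(b, c, h, w), out_b.view(b, c, h, w)
```

The point of the sketch is only the conditioning: unlike the independent-extraction baseline the snippet describes, each face is filtered by kernels derived from its partner.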

Face Recognition · Face Verification

Meta-Learning with Individualized Feature Space for Few-Shot Classification

no code implementations 27 Sep 2018 Chunrui Han, Shiguang Shan, Meina Kan, Shuzhe Wu, Xilin Chen

Specifically, we introduce a kernel generator as the meta-learner, which learns to construct feature embeddings for query images.
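A hedged sketch of that idea, assuming a simple hypernetwork that pools the support set into a generated 1x1 kernel for re-embedding queries; the paper's individualized feature space is surely richer than this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryKernelGenerator(nn.Module):
    """Toy meta-learner: summarize the support set into a generated 1x1 conv
    kernel that re-embeds query features into a task-specific space."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.channels = channels
        self.fc = nn.Linear(channels, channels * channels)

    def forward(self, support_feat: torch.Tensor, query_feat: torch.Tensor):
        # support_feat: (N_support, C, H, W); query_feat: (N_query, C, H, W)
        ctx = support_feat.mean(dim=(0, 2, 3))                     # task summary (C,)
        w = self.fc(ctx).view(self.channels, self.channels, 1, 1)  # generated kernel
        return F.conv2d(query_feat, w)                             # task-adapted queries
```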

Classification · Meta-Learning · +1

Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception

no code implementations 10 Mar 2023 Chunrui Han, Jianjian Sun, Zheng Ge, Jinrong Yang, Runpei Dong, HongYu Zhou, Weixin Mao, Yuang Peng, Xiangyu Zhang

In this paper, we explore an embarrassingly simple long-term recurrent fusion strategy built upon LSS-based methods and find that it already enjoys the merits of both sides, i.e., rich long-term information and an efficient fusion pipeline.
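A minimal sketch of what one recurrent fusion step could look like, assuming a single running BEV state that each new frame is gated into; the gate design is an assumption, and the ego-motion alignment a real LSS-based pipeline needs is only noted as a comment:

```python
from typing import Optional

import torch
import torch.nn as nn

class RecurrentBEVFusion(nn.Module):
    """Toy recurrent fusion: keep one running BEV state and gate each new
    frame's BEV features into it, instead of stacking many past frames."""

    def __init__(self, channels: int = 80):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, bev_t: torch.Tensor, state: Optional[torch.Tensor]) -> torch.Tensor:
        if state is None:                       # first frame initializes the memory
            return bev_t
        # A real pipeline would first warp `state` into frame t's ego coordinates.
        g = torch.sigmoid(self.gate(torch.cat([state, bev_t], dim=1)))
        return g * bev_t + (1.0 - g) * state    # fused long-term BEV memory
```

Because the state is reused frame after frame, the cost per step stays constant while the memory can span an arbitrarily long history.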

motion prediction · object-detection · +1

Triplet Knowledge Distillation

no code implementations 25 May 2023 Xijun Wang, Dongyang Liu, Meina Kan, Chunrui Han, Zhongqin Wu, Shiguang Shan

Distillation then begins in an online manner, and the teacher is only allowed to express solutions within the aforementioned subspace.
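For orientation only, here is a generic temperature-scaled online-distillation step; the triplet structure and the subspace constraint on the teacher are not modeled, and the loss weights are placeholders rather than the paper's objective:

```python
import torch.nn.functional as F

def distill_step(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.5):
    """Generic online distillation: cross-entropy on the labels plus a
    temperature-scaled KL term pulling the student toward the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    # Whether teacher_logits are detached depends on the online scheme used.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```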

Face Recognition · Image Classification · +1

ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning

no code implementations 18 Jul 2023 Liang Zhao, En Yu, Zheng Ge, Jinrong Yang, Haoran Wei, HongYu Zhou, Jianjian Sun, Yuang Peng, Runpei Dong, Chunrui Han, Xiangyu Zhang

Based on precise referring instructions, we propose ChatSpot, a unified end-to-end multimodal large language model that supports diverse forms of interactivity, including mouse clicks, drag-and-drop, and drawn boxes, providing a more flexible and seamless interactive experience.
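A precise referring instruction presumably couples a query with explicit region coordinates. The template below is purely hypothetical, meant only to illustrate the data shape; ChatSpot's actual instruction format may differ:

```python
def referring_instruction(question: str, box) -> str:
    """Hypothetical prompt template: attach a normalized (x1, y1, x2, y2) region
    to the question so the model grounds its answer in the referred area."""
    x1, y1, x2, y2 = box
    return f"<image> Region: [{x1:.3f}, {y1:.3f}, {x2:.3f}, {y2:.3f}]. {question}"

print(referring_instruction("What is the object in this region?", (0.21, 0.35, 0.48, 0.62)))
```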

Instruction Following · Language Modelling · +1

GroupLane: End-to-End 3D Lane Detection with Channel-wise Grouping

no code implementations 18 Jul 2023 Zhuoling Li, Chunrui Han, Zheng Ge, Jinrong Yang, En Yu, Haoqian Wang, Hengshuang Zhao, Xiangyu Zhang

Besides, GroupLane with ResNet18 still surpasses PersFormer by 4.9% F1 score, while its inference is nearly 7x faster and its FLOPs are only 13.3% of PersFormer's.
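The snippet reports only efficiency figures; taking the title literally, channel-wise grouping might look roughly like the sketch below, where each channel group gets its own lane head doing row-wise classification. Every shape and the head design are assumptions, not GroupLane's actual architecture:

```python
import torch
import torch.nn as nn

class ChannelGroupHeads(nn.Module):
    """Toy sketch: split feature channels into groups and let each group's
    head predict one lane candidate via per-row column logits."""

    def __init__(self, channels: int = 64, num_groups: int = 8, rows: int = 20, cols: int = 50):
        super().__init__()
        assert channels % num_groups == 0
        self.num_groups, self.rows, self.cols = num_groups, rows, cols
        self.heads = nn.ModuleList(
            nn.Linear(channels // num_groups * cols, cols) for _ in range(num_groups)
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, rows, cols) bird's-eye-view features
        outs = []
        for head, chunk in zip(self.heads, feat.chunk(self.num_groups, dim=1)):
            x = chunk.permute(0, 2, 1, 3).flatten(2)   # (B, rows, C/g * cols)
            outs.append(head(x))                       # (B, rows, cols) logits
        return torch.stack(outs, dim=1)                # (B, groups, rows, cols)
```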

3D Lane Detection

SCSC: Spatial Cross-scale Convolution Module to Strengthen both CNNs and Transformers

no code implementations 14 Aug 2023 Xijun Wang, Xiaojie Chu, Chunrui Han, Xiangyu Zhang

This paper presents a module, Spatial Cross-scale Convolution (SCSC), which is verified to be effective in improving both CNNs and Transformers.
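The snippet names the module without detailing it; as a purely speculative reading of "spatial cross-scale convolution", the sketch below convolves a feature map at two spatial scales and fuses the two paths. The real SCSC design may differ substantially:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleConv(nn.Module):
    """Speculative sketch: convolve at the native and a downsampled scale,
    then fuse both paths back at full resolution."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.fine = nn.Conv2d(channels, channels, 3, padding=1)
        self.coarse = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.fine(x)                                    # full-resolution path
        c = F.interpolate(self.coarse(F.avg_pool2d(x, 2)),  # half-resolution path
                          size=x.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([f, c], dim=1))          # cross-scale fusion
```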

Face Recognition

DreamLLM: Synergistic Multimodal Comprehension and Creation

1 code implementation 20 Sep 2023 Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, HongYu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi

This paper presents DreamLLM, a learning framework that first achieves versatile Multimodal Large Language Models (MLLMs) empowered with the frequently overlooked synergy between multimodal comprehension and creation.

 Ranked #1 on Visual Question Answering on MMBench (GPT-3.5 score metric)

multimodal generation · Visual Question Answering · +2

Small Language Model Meets with Reinforced Vision Vocabulary

no code implementations 23 Jan 2024 Haoran Wei, Lingyu Kong, Jinyue Chen, Liang Zhao, Zheng Ge, En Yu, Jianjian Sun, Chunrui Han, Xiangyu Zhang

In Vary-toy, we introduce an improved vision vocabulary, allowing the model not only to possess all the features of Vary but also to generalize more broadly.

Language Modelling · Large Language Model · +3

ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

3 code implementations 27 Feb 2024 Zekun Qi, Runpei Dong, Shaochen Zhang, Haoran Geng, Chunrui Han, Zheng Ge, He Wang, Li Yi, Kaisheng Ma

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and language.

3D Point Cloud Linear Classification · 3D Question Answering (3D-QA) · +8
