Search Results for author: Kunyu Peng

Found 45 papers, 37 papers with code

Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler

1 code implementation • 26 Sep 2024 • Kunyu Peng, Di Wen, Kailun Yang, Ao Luo, Yufan Chen, Jia Fu, M. Saquib Sarfraz, Alina Roitberg, Rainer Stiefelhagen

In this paper, we observe that an adaptive domain scheduler benefits OSDG more than prefixed sequential and random domain schedulers do.

Data Augmentation Domain Generalization +1

Probing Fine-Grained Action Understanding and Cross-View Generalization of Foundation Models

no code implementations • 22 Jul 2024 • Thinesh Thiyakesan Ponbagavathi, Kunyu Peng, Alina Roitberg

This is the first systematic study of different foundation models and specific design choices for human activity recognition from unknown views, conducted with the goal of providing guidance for the selection of backbones and temporal-fusion schemes.

Action Understanding Human Activity Recognition

Occlusion-Aware Seamless Segmentation

1 code implementation • 2 Jul 2024 • Yihong Cao, Jiaming Zhang, Hao Shi, Kunyu Peng, Yuhongxuan Zhang, HUI ZHANG, Rainer Stiefelhagen, Kailun Yang

Our method achieves state-of-the-art performance on the BlendPASS dataset, reaching a remarkable mAPQ of 26.58% and mIoU of 43.66%.

Benchmarking Domain Adaptation +2

Open Panoramic Segmentation

1 code implementation • 2 Jul 2024 • Junwei Zheng, Ruiping Liu, Yufan Chen, Kunyu Peng, Chengzhi Wu, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

To tackle this problem, in this work we define a new task termed Open Panoramic Segmentation (OPS): models are trained on FoV-restricted pinhole images in the source domain under an open-vocabulary setting and evaluated on FoV-open panoramic images in the target domain, enabling zero-shot open panoramic semantic segmentation.

Open-Vocabulary Panoramic Semantic Segmentation

Referring Atomic Video Action Recognition

1 code implementation • 2 Jul 2024 • Kunyu Peng, Jia Fu, Kailun Yang, Di Wen, Yufan Chen, Ruiping Liu, Junwei Zheng, Jiaming Zhang, M. Saquib Sarfraz, Rainer Stiefelhagen, Alina Roitberg

Since these existing methods underperform on RAVAR, we introduce RefAtomNet -- a novel cross-stream attention-driven method specialized for the unique challenges of RAVAR: interpreting a textual referring expression for the targeted individual, using this reference to guide spatial localization, and predicting the atomic actions of the referred person.
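As a rough illustration of the general idea, the hedged sketch below shows a text-conditioned cross-attention head in PyTorch, where a pooled embedding of the referring expression queries the video tokens; the module name, dimensions, and class count are illustrative assumptions, not the actual RefAtomNet design.

```python
# Minimal sketch of text-guided cross-attention for referring action
# recognition; illustrative only, not the actual RefAtomNet architecture.
import torch
import torch.nn as nn

class TextGuidedCrossAttention(nn.Module):
    def __init__(self, dim=256, num_heads=4, num_actions=80):
        super().__init__()
        # Query comes from the referring-expression embedding,
        # keys/values come from spatio-temporal video tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_actions)

    def forward(self, text_emb, video_tokens):
        # text_emb:      (B, 1, D)   pooled sentence embedding
        # video_tokens:  (B, T*N, D) frame/patch tokens
        attended, _ = self.attn(text_emb, video_tokens, video_tokens)
        return self.classifier(attended.squeeze(1))  # (B, num_actions)

# Toy usage with random features standing in for real encoders.
model = TextGuidedCrossAttention()
logits = model(torch.randn(2, 1, 256), torch.randn(2, 16 * 49, 256))
print(logits.shape)  # torch.Size([2, 80])
```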

Action Recognition Question Answering +4

Position: Quo Vadis, Unsupervised Time Series Anomaly Detection?

1 code implementation • 4 May 2024 • M. Saquib Sarfraz, Mei-Yen Chen, Lukas Layer, Kunyu Peng, Marios Koulakis

The current state of machine learning scholarship in Time Series Anomaly Detection (TAD) is plagued by the persistent use of flawed evaluation metrics, inconsistent benchmarking practices, and a lack of proper justification for the choices made in novel deep learning-based model designs.

Anomaly Detection Benchmarking +3

Skeleton-Based Human Action Recognition with Noisy Labels

1 code implementation • 15 Mar 2024 • Yi Xu, Kunyu Peng, Di Wen, Ruiping Liu, Junwei Zheng, Yufan Chen, Jiaming Zhang, Alina Roitberg, Kailun Yang, Rainer Stiefelhagen

In this study, we bridge this gap by implementing a framework that augments well-established skeleton-based human action recognition methods with label-denoising strategies from various research areas to serve as the initial benchmark.
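One generic label-denoising strategy that such a framework could plug in is small-loss sample selection; the sketch below is a hedged illustration with made-up hyper-parameters, not the benchmark's actual configuration.

```python
# "Small-loss" sample selection: keep the samples whose loss is lowest,
# assuming clean labels tend to incur lower loss early in training.
import torch
import torch.nn.functional as F

def small_loss_selection(logits, noisy_labels, keep_ratio=0.7):
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    num_keep = max(1, int(keep_ratio * losses.numel()))
    keep_idx = torch.argsort(losses)[:num_keep]   # indices of likely-clean samples
    return losses[keep_idx].mean(), keep_idx

logits = torch.randn(32, 60)            # e.g. 60 skeleton action classes
labels = torch.randint(0, 60, (32,))    # possibly noisy labels
loss, kept = small_loss_selection(logits, labels)
```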

Action Recognition Denoising +3

EchoTrack: Auditory Referring Multi-Object Tracking for Autonomous Driving

1 code implementation • 28 Feb 2024 • Jiacheng Lin, Jiajun Chen, Kunyu Peng, Xuan He, Zhiyong Li, Rainer Stiefelhagen, Kailun Yang

This paper introduces the task of Auditory Referring Multi-Object Tracking (AR-MOT), which dynamically tracks specific objects in a video sequence based on audio expressions and poses a challenging problem for autonomous driving.

Autonomous Driving Object +1

Fourier Prompt Tuning for Modality-Incomplete Scene Segmentation

1 code implementation • 30 Jan 2024 • Ruiping Liu, Jiaming Zhang, Kunyu Peng, Yufan Chen, Ke Cao, Junwei Zheng, M. Saquib Sarfraz, Kailun Yang, Rainer Stiefelhagen

Integrating information from multiple modalities enhances the robustness of scene perception systems in autonomous vehicles, providing a more comprehensive and reliable sensory framework.

Autonomous Vehicles Scene Segmentation

Navigating Open Set Scenarios for Skeleton-based Action Recognition

1 code implementation • 11 Dec 2023 • Kunyu Peng, Cheng Yin, Junwei Zheng, Ruiping Liu, David Schneider, Jiaming Zhang, Kailun Yang, M. Saquib Sarfraz, Rainer Stiefelhagen, Alina Roitberg

In real-world scenarios, human actions often fall outside the distribution of training data, making it crucial for models to recognize known actions and reject unknown ones.

cross-modal alignment Novelty Detection +4

Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments

1 code implementation • 10 Nov 2023 • Calvin Tanama, Kunyu Peng, Zdravko Marinov, Rainer Stiefelhagen, Alina Roitberg

The framework enhances 3D MobileNet, a neural architecture optimized for speed in video classification, by incorporating knowledge distillation and model quantization to balance model accuracy and computational efficiency.
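A hedged sketch of how knowledge distillation and post-training quantization can be combined is shown below; the tiny stand-in student, temperature, loss weighting, and class count are illustrative assumptions rather than the paper's actual 3D MobileNet setup.

```python
# Distillation loss (softened KL + hard cross-entropy) followed by
# post-training dynamic quantization of the student; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Stand-in student; a real setup would use a 3D video backbone.
student = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 34))
with torch.no_grad():
    teacher_logits = torch.randn(8, 34)          # frozen teacher outputs
loss = distillation_loss(student(torch.randn(8, 512)), teacher_logits,
                         torch.randint(0, 34, (8,)))

# Post-training dynamic quantization of the trained student (int8 linear layers).
quantized_student = torch.quantization.quantize_dynamic(
    student, {nn.Linear}, dtype=torch.qint8)
```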

Activity Recognition Autonomous Driving +4

Elevating Skeleton-Based Action Recognition with Efficient Multi-Modality Self-Supervision

1 code implementation • 21 Sep 2023 • Yiping Wei, Kunyu Peng, Alina Roitberg, Jiaming Zhang, Junwei Zheng, Ruiping Liu, Yufan Chen, Kailun Yang, Rainer Stiefelhagen

These works overlooked the performance differences among modalities, which led to the propagation of erroneous knowledge between modalities, and used only three fundamental modalities, i.e., joints, bones, and motions, leaving additional modalities unexplored.

Action Recognition Knowledge Distillation +3

Towards Privacy-Supporting Fall Detection via Deep Unsupervised RGB2Depth Adaptation

1 code implementation • 23 Aug 2023 • Hejun Xiao, Kunyu Peng, Xiangsheng Huang, Alina Roitberg, Hao Li, Zhaohui Wang, Rainer Stiefelhagen

In this paper, we introduce a privacy-supporting solution that makes the RGB-trained model applicable in the depth domain and utilizes depth data at test time for fall detection.

Domain Adaptation Triplet

OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation

2 code implementations • 28 Jul 2023 • Fei Teng, Jiaming Zhang, Kunyu Peng, Yaonan Wang, Rainer Stiefelhagen, Kailun Yang

To simultaneously streamline the redundant information from the light field cameras and avoid feature loss during network propagation, we present a simple yet very effective Sub-Aperture Fusion Module (SAFM).
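Without claiming to reproduce the SAFM design, the hedged sketch below illustrates the general pattern of fusing features from multiple sub-aperture views with learned per-view weights; the module name, channel count, and number of views are assumptions for illustration.

```python
# Weighted fusion of sub-aperture view features; illustrative only,
# not the actual Sub-Aperture Fusion Module of OAFuser.
import torch
import torch.nn as nn

class SubApertureFusion(nn.Module):
    def __init__(self, channels=64, num_views=5):
        super().__init__()
        # One learnable weight per sub-aperture view.
        self.view_logits = nn.Parameter(torch.zeros(num_views))
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, view_feats):
        # view_feats: (B, V, C, H, W) features from V sub-aperture images
        w = torch.softmax(self.view_logits, dim=0).view(1, -1, 1, 1, 1)
        fused = (w * view_feats).sum(dim=1)   # weighted sum over views
        return self.proj(fused)               # (B, C, H, W)

fused = SubApertureFusion()(torch.randn(2, 5, 64, 32, 32))
```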

Autonomous Driving Scene Understanding +1

Open Scene Understanding: Grounded Situation Recognition Meets Segment Anything for Helping People with Visual Impairments

1 code implementation • 15 Jul 2023 • Ruiping Liu, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ke Cao, Yufan Chen, Kailun Yang, Rainer Stiefelhagen

Grounded Situation Recognition (GSR) is capable of recognizing and interpreting visual scenes in a contextually intuitive way, yielding salient activities (verbs) and the involved entities (roles) depicted in images.

Decoder Grounded Situation Recognition +2

Tightly-Coupled LiDAR-Visual SLAM Based on Geometric Features for Mobile Agents

no code implementations • 15 Jul 2023 • Ke Cao, Ruiping Liu, Ze Wang, Kunyu Peng, Jiaming Zhang, Junwei Zheng, Zhifeng Teng, Kailun Yang, Rainer Stiefelhagen

On the other hand, the entire line segments detected by the visual subsystem overcome the limitation of the LiDAR subsystem, which can only compute geometric features locally.

Autonomous Navigation Pose Estimation +2

Exploring Few-Shot Adaptation for Activity Recognition on Diverse Domains

2 code implementations • 15 May 2023 • Kunyu Peng, Di Wen, David Schneider, Jiaming Zhang, Kailun Yang, M. Saquib Sarfraz, Rainer Stiefelhagen, Alina Roitberg

In this work, we focus on Few-Shot Domain Adaptation for Activity Recognition (FSDA-AR), which leverages a very small amount of labeled target videos to achieve effective adaptation.

Action Recognition Unsupervised Domain Adaptation

FishDreamer: Towards Fisheye Semantic Completion via Unified Image Outpainting and Segmentation

1 code implementation • 24 Mar 2023 • Hao Shi, Yu Li, Kailun Yang, Jiaming Zhang, Kunyu Peng, Alina Roitberg, Yaozu Ye, Huajian Ni, Kaiwei Wang, Rainer Stiefelhagen

This paper raises the new task of Fisheye Semantic Completion (FSC), where dense texture, structure, and semantics of a fisheye image are inferred even beyond the sensor field-of-view (FoV).

Image Outpainting Semantic Segmentation

Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation

1 code implementation • 25 Jul 2022 • Jiaming Zhang, Kailun Yang, Hao Shi, Simon Reiß, Kunyu Peng, Chaoxiang Ma, Haodong Fu, Philip H. S. Torr, Kaiwei Wang, Rainer Stiefelhagen

In this paper, we address panoramic semantic segmentation, which is under-explored due to two critical challenges: (1) image distortions and object deformations on panoramas; (2) lack of semantic annotations in the 360° imagery.

Pseudo Label Segmentation +2

Trans4Map: Revisiting Holistic Bird's-Eye-View Mapping from Egocentric Images to Allocentric Semantics with Vision Transformers

1 code implementation • 13 Jul 2022 • Chang Chen, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer Stiefelhagen

Humans have an innate ability to sense their surroundings, as they can extract the spatial representation from the egocentric perception and form an allocentric semantic map via spatial transformation and memory updating.

Decoder Semantic Segmentation

Multi-modal Depression Estimation based on Sub-attentional Fusion

1 code implementation • 13 Jul 2022 • Ping-Cheng Wei, Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

Failure to timely diagnose and effectively treat depression leads to over 280 million people suffering from this psychological disorder worldwide.

Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates

no code implementations • 10 Apr 2022 • Alina Roitberg, Kunyu Peng, David Schneider, Kailun Yang, Marios Koulakis, Manuel Martinez, Rainer Stiefelhagen

In this work, we examine for the first time how well the confidence values of modern driver observation models indeed match the probability of the correct outcome, and show that raw neural network-based approaches tend to significantly overestimate their prediction quality.
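A common way to quantify such over-confidence is the Expected Calibration Error; the sketch below is a generic, hedged implementation of that diagnostic and is not the input-guided calibration network proposed in the paper.

```python
# Expected Calibration Error (ECE): bin predictions by confidence and
# compare average confidence with accuracy in each bin.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap          # weight by bin occupancy
    return ece

conf = np.random.uniform(0.5, 1.0, 1000)      # predicted confidences
hits = np.random.rand(1000) < 0.7             # whether each prediction was correct
print(expected_calibration_error(conf, hits.astype(float)))
```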

Action Recognition Image Classification

A Comparative Analysis of Decision-Level Fusion for Multimodal Driver Behaviour Understanding

no code implementations • 10 Apr 2022 • Alina Roitberg, Kunyu Peng, Zdravko Marinov, Constantin Seibold, David Schneider, Rainer Stiefelhagen

Visual recognition inside the vehicle cabin leads to safer driving and more intuitive human-vehicle interaction, but such systems face substantial obstacles, as they need to capture different granularities of driver behaviour while dealing with highly limited body visibility and changing illumination.

MatchFormer: Interleaving Attention in Transformers for Feature Matching

1 code implementation • 17 Mar 2022 • Qing Wang, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer Stiefelhagen

While detector-based methods coupled with feature descriptors struggle in low-texture scenes, CNN-based methods with a sequential extract-to-match pipeline fail to make use of the matching capacity of the encoder and tend to overburden the decoder for matching.

Decoder Homography Estimation +2

TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration

1 code implementation • 2 Mar 2022 • Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

This module operates in the latent feature space, enriching and diversifying the training set at the feature level in order to improve generalization to novel data appearances (e.g., sensor changes) and overall feature quality.
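As a hedged illustration of feature-level enrichment, the sketch below perturbs latent features with class-conditional noise; the scaling and procedure are assumptions for illustration, not TransDARC's exact calibration module.

```python
# Feature-level augmentation in latent space: perturb each sample's
# feature with noise scaled by its class's per-dimension variance.
import torch

def augment_latent_features(feats, labels, num_classes, scale=0.5):
    # feats: (N, D) backbone features, labels: (N,)
    augmented = feats.clone()
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() > 1:
            std = feats[mask].std(dim=0, keepdim=True)
            augmented[mask] = feats[mask] + scale * std * torch.randn_like(feats[mask])
    return augmented

feats = torch.randn(64, 512)
labels = torch.randint(0, 10, (64,))
aug = augment_latent_features(feats, labels, num_classes=10)
```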

Human Activity Recognition

Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation

1 code implementation • CVPR 2022 • Jiaming Zhang, Kailun Yang, Chaoxiang Ma, Simon Reiß, Kunyu Peng, Rainer Stiefelhagen

To get around this domain difference and bring together semantic annotations from pinhole- and 360-degree surround-visuals, we propose to learn object deformations and panoramic image distortions in the Deformable Patch Embedding (DPE) and Deformable MLP (DMLP) components which blend into our Transformer for PAnoramic Semantic Segmentation (Trans4PASS) model.
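The hedged sketch below conveys the flavour of a patch embedding whose sampling grid can shift with learned offsets; it is a strong simplification with assumed shapes and scaling, not the actual DPE/DMLP modules of Trans4PASS.

```python
# Patch embedding with learned sampling offsets: predict a per-patch 2D
# offset, warp the input accordingly, then embed; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetPatchEmbed(nn.Module):
    def __init__(self, in_ch=3, dim=96, patch=16):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2, kernel_size=patch, stride=patch)
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        b, _, h, w = x.shape
        # Predict a 2D offset per patch, upsample it to pixel resolution,
        # and resample the image so distorted regions can shift their grid.
        offsets = torch.tanh(self.offset(x))                      # (B, 2, H/p, W/p)
        offsets = F.interpolate(offsets, size=(h, w), mode="bilinear",
                                align_corners=False)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        grid = grid + 0.1 * offsets.permute(0, 2, 3, 1)            # small learned shift
        warped = F.grid_sample(x, grid, align_corners=False)
        return self.embed(warped)                                  # (B, dim, H/p, W/p)

tokens = OffsetPatchEmbed()(torch.randn(1, 3, 224, 224))
```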

Scene Understanding Semantic Segmentation +1

TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation

2 code implementations • 27 Feb 2022 • Ruiping Liu, Kailun Yang, Alina Roitberg, Jiaming Zhang, Kunyu Peng, Huayao Liu, Yaonan Wang, Rainer Stiefelhagen

Furthermore, we introduce two optimization modules to enhance the patch embedding distillation from different perspectives: (1) Global-Local Context Mixer (GL-Mixer) extracts both global and local information of a representative embedding; (2) Embedding Assistant (EA) acts as an embedding method to seamlessly bridge teacher and student models with the teacher's number of channels.
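A minimal, hedged sketch of the underlying idea of patch-embedding distillation is given below: a linear bridge projects the student's patch embeddings to the teacher's width and an MSE loss aligns them; names and dimensions are illustrative, and the GL-Mixer and EA modules themselves are not reproduced.

```python
# Patch-embedding distillation: project student tokens to the teacher's
# channel width and match them with an MSE loss; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbedDistiller(nn.Module):
    def __init__(self, student_dim=64, teacher_dim=128):
        super().__init__()
        # Bridge the channel gap between student and teacher embeddings.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, student_tokens, teacher_tokens):
        # tokens: (B, N, D) patch embeddings from each model's first stage
        return F.mse_loss(self.proj(student_tokens), teacher_tokens)

loss = PatchEmbedDistiller()(torch.randn(2, 196, 64), torch.randn(2, 196, 128))
```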

Autonomous Driving Knowledge Distillation +3

Delving Deep into One-Shot Skeleton-based Action Recognition with Diverse Occlusions

2 code implementations • 23 Feb 2022 • Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

Yet, research on data-scarce recognition from skeleton sequences, such as one-shot action recognition, does not explicitly consider occlusions despite their everyday pervasiveness.

Action Classification Action Recognition +2

Should I take a walk? Estimating Energy Expenditure from Video Data

1 code implementation • 1 Feb 2022 • Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

To study this under-researched task, we introduce Vid2Burn -- an omni-source benchmark for estimating caloric expenditure from video data featuring both high- and low-intensity activities, for which we derive energy expenditure annotations based on models established in the medical literature.
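One standard way caloric annotations can be derived from activity intensity is the MET-based formula kcal/min = MET × 3.5 × body weight (kg) / 200; whether Vid2Burn uses exactly this conversion is an assumption here, and the sketch below only demonstrates the arithmetic.

```python
# MET-based calorie estimate; the conversion factor 3.5/200 is the standard
# resting-oxygen-uptake constant, the example values are illustrative.
def calories_burned(met, weight_kg, minutes):
    return met * 3.5 * weight_kg / 200.0 * minutes

# Example: brisk walking (~4.3 MET) for 30 minutes at 70 kg body weight.
print(round(calories_burned(4.3, 70.0, 30.0), 1))  # ~158 kcal
```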

Video Recognition

Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning

1 code implementation • 30 Nov 2021 • Kunyu Peng, Alina Roitberg, David Schneider, Marios Koulakis, Kailun Yang, Rainer Stiefelhagen

Human affect recognition is a well-established research area with numerous applications, e.g., in psychological care, but existing methods assume that all emotions-of-interest are given a priori as annotated training examples.

Emotion Recognition Metric Learning +2

Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation

1 code implementation • 21 Oct 2021 • Jiaming Zhang, Chaoxiang Ma, Kailun Yang, Alina Roitberg, Kunyu Peng, Rainer Stiefelhagen

We look at this problem from the perspective of domain adaptation and bring panoramic semantic segmentation to a setting where labelled training data originates from a different distribution of conventional pinhole camera images.

Ranked #7 on Semantic Segmentation on DensePASS (using extra training data)

Autonomous Vehicles Segmentation +2

Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance

1 code implementation • 20 Aug 2021 • Jiaming Zhang, Kailun Yang, Angela Constantinescu, Kunyu Peng, Karin Müller, Rainer Stiefelhagen

In this paper, we build a wearable system with a novel dual-head Transformer for Transparency (Trans4Trans) perception model, which can segment both general and transparent objects.

Ranked #2 on Semantic Segmentation on DADA-seg (using extra training data)

Decoder Navigate +2

Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World

1 code implementation • 7 Jul 2021 • Jiaming Zhang, Kailun Yang, Angela Constantinescu, Kunyu Peng, Karin Müller, Rainer Stiefelhagen

Common fully glazed facades and transparent objects present architectural barriers and impede the mobility of people with low vision or blindness; for instance, a path detected behind a glass door is inaccessible unless it is correctly perceived and reacted to.

Decoder Navigate +2

MASS: Multi-Attentional Semantic Segmentation of LiDAR Data for Dense Top-View Understanding

1 code implementation • 1 Jul 2021 • Kunyu Peng, Juncong Fei, Kailun Yang, Alina Roitberg, Jiaming Zhang, Frank Bieder, Philipp Heidenreich, Christoph Stiller, Rainer Stiefelhagen

At the heart of all automated driving systems is the ability to sense the surroundings, e.g., through semantic segmentation of LiDAR sequences, which has experienced remarkable progress due to the release of large datasets such as SemanticKITTI and nuScenes-LidarSeg.

3D Object Detection Graph Attention +4

PillarSegNet: Pillar-based Semantic Grid Map Estimation using Sparse LiDAR Data

no code implementations • 10 May 2021 • Juncong Fei, Kunyu Peng, Philipp Heidenreich, Frank Bieder, Christoph Stiller

The recent publication of the SemanticKITTI dataset stimulates the research on semantic segmentation of LiDAR point clouds in urban scenarios.

2D Semantic Segmentation Segmentation +1
