no code implementations • 21 Mar 2024 • Yufan Chen, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ruiping Liu, Philip Torr, Rainer Stiefelhagen
To address this, we are the first to introduce a robustness benchmark for DLA models, comprising 450K document images across three datasets.
1 code implementation • 15 Mar 2024 • Yi Xu, Kunyu Peng, Di Wen, Ruiping Liu, Junwei Zheng, Yufan Chen, Jiaming Zhang, Alina Roitberg, Kailun Yang, Rainer Stiefelhagen
In this study, we bridge this gap by implementing a framework that augments well-established skeleton-based human action recognition methods with label-denoising strategies from various research areas to serve as the initial benchmark.
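One widely used label-denoising strategy of the kind such a benchmark would cover is the "small-loss trick": noisy labels tend to incur larger losses early in training, so the lowest-loss samples are kept as likely-clean. A minimal illustrative sketch (not the paper's actual framework; the function name and toy values are hypothetical):

```python
import numpy as np

def small_loss_filter(losses, noise_rate):
    """Small-loss trick: keep the (1 - noise_rate) fraction of samples
    with the lowest per-sample loss, assuming noisy labels tend to
    produce larger losses early in training."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    keep_idx = np.argsort(losses)[:n_keep]  # indices of the cleanest samples
    return np.sort(keep_idx)

# Toy per-sample cross-entropy losses; the two large values mimic noisy labels.
losses = np.array([0.2, 0.1, 2.5, 0.3, 3.1, 0.15])
print(small_loss_filter(losses, noise_rate=1/3).tolist())  # -> [0, 1, 3, 5]
```

The selected subset would then be used to retrain or reweight the recognition model.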
1 code implementation • 30 Jan 2024 • Ruiping Liu, Jiaming Zhang, Kunyu Peng, Yufan Chen, Ke Cao, Junwei Zheng, M. Saquib Sarfraz, Kailun Yang, Rainer Stiefelhagen
Integrating information from multiple modalities enhances the robustness of scene perception systems in autonomous vehicles, providing a more comprehensive and reliable sensory framework.
1 code implementation • 11 Dec 2023 • Kunyu Peng, Cheng Yin, Junwei Zheng, Ruiping Liu, David Schneider, Jiaming Zhang, Kailun Yang, M. Saquib Sarfraz, Rainer Stiefelhagen, Alina Roitberg
In real-world scenarios, human actions often fall outside the distribution of training data, making it crucial for models to recognize known actions and reject unknown ones.
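A simple baseline for this known/unknown split is confidence thresholding: accept the argmax class only when the softmax confidence is high enough, and otherwise reject the sample as an unknown action. A minimal sketch under that assumption (threshold value and function name are illustrative, not the paper's method):

```python
import numpy as np

def predict_with_rejection(logits, threshold=0.7):
    """Open-set baseline: return the argmax class when softmax confidence
    exceeds the threshold, otherwise -1 for 'unknown action'."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1  # reject low-confidence (likely unknown) inputs
    return preds

logits = np.array([[4.0, 0.5, 0.2],    # confident -> accepted as class 0
                   [1.0, 1.1, 0.9]])   # near-uniform -> rejected
print(predict_with_rejection(logits).tolist())  # -> [0, -1]
```

More sophisticated open-set methods replace the raw softmax score with calibrated or distance-based measures, but the accept/reject structure is the same.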
1 code implementation • 21 Sep 2023 • Yifei Chen, Kunyu Peng, Alina Roitberg, David Schneider, Jiaming Zhang, Junwei Zheng, Ruiping Liu, Yufan Chen, Kailun Yang, Rainer Stiefelhagen
To integrate action recognition methods into autonomous robotic systems, it is crucial to consider adverse situations involving target occlusions.
1 code implementation • 21 Sep 2023 • Yiping Wei, Kunyu Peng, Alina Roitberg, Jiaming Zhang, Junwei Zheng, Ruiping Liu, Yufan Chen, Kailun Yang, Rainer Stiefelhagen
These works overlooked the differences in performance among modalities, which led to the propagation of erroneous knowledge between modalities. Moreover, only three fundamental modalities, i.e., joints, bones, and motions, are used, and no additional modalities are explored.
no code implementations • 2 Sep 2023 • Xuan He, Kailun Yang, Junwei Zheng, Jin Yuan, Luis M. Bergasa, Hui Zhang, Zhiyong Li
These methods typically use visual and depth representations to generate query points on objects, whose quality plays a decisive role in detection accuracy.
1 code implementation • 15 Jul 2023 • Ruiping Liu, Jiaming Zhang, Kunyu Peng, Junwei Zheng, Ke Cao, Yufan Chen, Kailun Yang, Rainer Stiefelhagen
Grounded Situation Recognition (GSR) is capable of recognizing and interpreting visual scenes in a contextually intuitive way, yielding salient activities (verbs) and the involved entities (roles) depicted in images.
no code implementations • 15 Jul 2023 • Ke Cao, Ruiping Liu, Ze Wang, Kunyu Peng, Jiaming Zhang, Junwei Zheng, Zhifeng Teng, Kailun Yang, Rainer Stiefelhagen
Conversely, the complete line segments detected by the visual subsystem overcome the limitation of the LiDAR subsystem, which can only compute geometric features locally.
no code implementations • 17 Apr 2023 • Chengzhi Wu, Junwei Zheng, Julius Pfrommer, Jürgen Beyerer
Modeling a 3D volumetric shape as an assembly of decomposed shape parts is much more challenging, but semantically more valuable than direct reconstruction from a full shape representation.
1 code implementation • CVPR 2023 • Chengzhi Wu, Junwei Zheng, Julius Pfrommer, Jürgen Beyerer
Point cloud sampling is a less explored research topic for this data representation.
Ranked #31 on 3D Point Cloud Classification on ModelNet40
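The standard baseline that learned samplers are compared against is farthest point sampling (FPS), which greedily picks the point farthest from the already-selected set to maximize spatial coverage. A minimal sketch of that baseline (not the paper's learned sampler; the deterministic seed point is an arbitrary choice):

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Classic FPS: iteratively select the point farthest from the
    already-selected set, yielding good spatial coverage of the cloud."""
    n = points.shape[0]
    selected = [0]               # start deterministically from the first point
    dist = np.full(n, np.inf)    # distance to the nearest selected point
    for _ in range(n_samples - 1):
        d = np.linalg.norm(points - points[selected[-1]], axis=1)
        dist = np.minimum(dist, d)          # refresh nearest-selected distances
        selected.append(int(dist.argmax())) # pick the farthest remaining point
    return points[selected]

pts = np.random.rand(1024, 3)              # a toy point cloud
sub = farthest_point_sampling(pts, 64)
print(sub.shape)                           # -> (64, 3)
```

FPS is O(n * n_samples) and task-agnostic; learned, task-aware samplers aim to beat it by selecting points that matter for the downstream classifier.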
1 code implementation • 28 Feb 2023 • Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer Stiefelhagen
People with Visual Impairments (PVI) typically recognize objects through haptic perception.