Search Results for author: Yihong Xu

Found 11 papers, 7 papers with code

Annealed Winner-Takes-All for Motion Forecasting

1 code implementation • 17 Sep 2024 • Yihong Xu, Victor Letzelter, Mickaël Chen, Éloi Zablocki, Matthieu Cord

Additionally, to compensate for limited performance, some approaches rely on training with a large set of hypotheses, requiring a post-selection step during inference to significantly reduce the number of predictions.

Tasks: Motion Forecasting, Motion Prediction (+2)
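
The annealed winner-takes-all idea lends itself to a short illustration. Below is a minimal PyTorch sketch of a softmax-relaxed WTA loss with a decaying temperature; the squared-distance function, the geometric annealing schedule, and the update rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def annealed_wta_loss(hyps, target, temperature):
    """Softmax-relaxed winner-takes-all: every hypothesis receives a
    gradient, weighted by how close it is to the target. Annealing the
    temperature toward 0 recovers hard WTA, where only the closest
    hypothesis is trained."""
    dists = ((hyps - target.unsqueeze(0)) ** 2).sum(dim=-1)   # (K,)
    # Detach the assignment weights so only the regression term is trained.
    weights = torch.softmax(-dists.detach() / temperature, dim=0)
    return (weights * dists).sum()

# Toy usage: K = 6 hypotheses in 2D, geometric temperature decay.
hyps = torch.randn(6, 2, requires_grad=True)
target = torch.tensor([1.0, -0.5])
for step in range(100):
    temperature = max(10.0 * 0.95 ** step, 1e-2)  # assumed schedule
    loss = annealed_wta_loss(hyps, target, temperature)
    loss.backward()
    with torch.no_grad():
        hyps -= 0.1 * hyps.grad
        hyps.grad.zero_()
```

At high temperature all hypotheses share the gradient; as it cools, training concentrates on the best hypothesis, which is what lets the model avoid the oversized hypothesis sets and post-selection step mentioned above.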

Valeo4Cast: A Modular Approach to End-to-End Forecasting

1 code implementation • 12 Jun 2024 • Yihong Xu, Éloi Zablocki, Alexandre Boulch, Gilles Puy, Mickaël Chen, Florent Bartoccioni, Nermin Samet, Oriane Siméoni, Spyros Gidaris, Tuan-Hung Vu, Andrei Bursuc, Eduardo Valle, Renaud Marlet, Matthieu Cord

In end-to-end forecasting, the model must jointly detect and track, from sensor data (cameras or LiDARs), the past trajectories of the different scene elements and predict their future locations.

Tasks: Motion Forecasting
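
As a rough illustration of what such a pipeline chains together, here is a toy detect → track → forecast skeleton in Python. Every function body is a placeholder (index-based linking, constant-velocity extrapolation) standing in for the learned modules; none of it reflects Valeo4Cast's actual components.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    past_xy: list                                   # observed (x, y), most recent last
    future_xy: list = field(default_factory=list)   # filled in by the forecaster

def detect(frame_idx):
    # Placeholder detector: two objects, one of them moving along x.
    return [(float(frame_idx), 0.0), (5.0, 2.0)]

def track(detections_per_frame):
    # Placeholder tracker: naively links detections by index across frames.
    n = len(detections_per_frame[0])
    return [Track(i, [frame[i] for frame in detections_per_frame]) for i in range(n)]

def forecast(tracks, horizon=3):
    # Placeholder forecaster: constant-velocity extrapolation of each track.
    for tr in tracks:
        (x0, y0), (x1, y1) = tr.past_xy[-2], tr.past_xy[-1]
        vx, vy = x1 - x0, y1 - y0
        tr.future_xy = [(x1 + vx * t, y1 + vy * t) for t in range(1, horizon + 1)]
    return tracks

frames = [detect(t) for t in range(2)]               # two "sensor" frames
print(forecast(track(frames))[0].future_xy)          # [(2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
```

The point of the modular view is that each stage can be trained and swapped independently, with the forecaster consuming real (imperfect) tracking output rather than ground-truth trajectories.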

Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression

no code implementations • 12 Jul 2023 • Jinglei Shi, Yihong Xu, Christine Guillemot

A light field is a type of image data that captures 3D scene information by recording light rays emitted from a scene at various orientations.

Tasks: Descriptive, Quantization (+1)
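
For readers unfamiliar with the data type, a small NumPy example of the usual 4D light-field indexing may help: angular axes (u, v) select the viewpoint, spatial axes (s, t) the pixel. The array shapes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# A light field is commonly stored as a 4D (plus color) array L[u, v, s, t].
U, V, S, T, C = 9, 9, 64, 64, 3
lf = np.random.rand(U, V, S, T, C).astype(np.float32)

# A sub-aperture image is the full spatial image seen from one viewpoint.
center_view = lf[U // 2, V // 2]          # shape (S, T, C)

# An epipolar-plane image (EPI) fixes one angular and one spatial axis,
# exposing the line structures that compression methods exploit.
epi = lf[:, V // 2, S // 2]               # shape (U, T, C)
print(center_view.shape, epi.shape)
```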

Towards Motion Forecasting with Real-World Perception Inputs: Are End-to-End Approaches Competitive?

1 code implementation • 15 Jun 2023 • Yihong Xu, Loïck Chambon, Éloi Zablocki, Mickaël Chen, Alexandre Alahi, Matthieu Cord, Patrick Pérez

In fact, conventional forecasting methods are usually neither trained nor tested in real-world pipelines (e.g., with upstream detection, tracking, and mapping modules).

Tasks: Benchmarking, Motion Forecasting

Learning-based Spatial and Angular Information Separation for Light Field Compression

no code implementations • 13 Apr 2023 • Jinglei Shi, Yihong Xu, Christine Guillemot

Light fields are a type of image data that capture both spatial and angular scene information by recording light rays emitted by a scene from different orientations.

Tasks: Tensor Decomposition

DNN Training Acceleration via Exploring GPGPU Friendly Sparsity

no code implementations • 11 Mar 2022 • Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang

The training phase of deep neural networks (DNNs) consumes enormous processing time and energy.
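
To make "GPGPU-friendly sparsity" concrete, here is a hedged PyTorch sketch of block-structured magnitude pruning: whole contiguous blocks are zeroed so the surviving non-zeros stay coalesced in memory, which GPUs handle far better than scattered zeros. The block size and scoring rule are assumptions, not the paper's scheme.

```python
import torch

def block_sparse_mask(x, block=32, keep_ratio=0.5):
    """Keep only the highest-magnitude contiguous blocks of x, zeroing
    the rest. Illustrative stand-in for hardware-friendly sparsity."""
    flat = x.reshape(-1)
    pad = (-flat.numel()) % block
    flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.reshape(-1, block)
    scores = blocks.abs().sum(dim=1)                 # one score per block
    k = max(1, int(keep_ratio * scores.numel()))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    blocks = blocks * mask.unsqueeze(1)              # zero whole blocks
    return blocks.reshape(-1)[: x.numel()].reshape(x.shape)

grad = torch.randn(4, 100)
sparse_grad = block_sparse_mask(grad, block=10, keep_ratio=0.3)
print((sparse_grad != 0).float().mean())             # ≈ keep_ratio
```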

CP-ViT: Cascade Vision Transformer Pruning via Progressive Sparsity Prediction

1 code implementation • 9 Mar 2022 • Zhuoran Song, Yihong Xu, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang

We explore sparsity in ViTs and observe that informative patches and heads suffice for accurate image recognition.
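
A simplified sketch of attention-guided patch pruning in the spirit of this observation: score patch tokens by the attention they receive from the [CLS] token and keep only the top fraction between layers. The scoring rule is an assumption; CP-ViT's actual progressive sparsity prediction is more involved.

```python
import torch

def keep_informative_patches(tokens, attn, keep_ratio=0.5):
    """Drop low-scoring patch tokens, keeping [CLS] plus the patches that
    the [CLS] token attends to most (averaged over heads).

    tokens: (B, 1 + N, D) with the [CLS] token first
    attn:   (B, H, 1 + N, 1 + N) attention weights of the previous layer
    """
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)          # (B, N): CLS -> patches
    n_keep = max(1, int(keep_ratio * cls_attn.shape[1]))
    idx = cls_attn.topk(n_keep, dim=1).indices        # (B, n_keep)
    patches = tokens[:, 1:]                           # (B, N, D)
    gathered = torch.gather(
        patches, 1, idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1]))
    return torch.cat([tokens[:, :1], gathered], dim=1)

B, H, N, D = 2, 4, 16, 32
tokens = torch.randn(B, 1 + N, D)
attn = torch.softmax(torch.randn(B, H, 1 + N, 1 + N), dim=-1)
print(keep_informative_patches(tokens, attn).shape)   # (2, 1 + 8, 32)
```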

TransCenter: Transformers with Dense Representations for Multiple-Object Tracking

2 code implementations • 28 Mar 2021 • Yihong Xu, Yutong Ban, Guillaume Delorme, Chuang Gan, Daniela Rus, Xavier Alameda-Pineda

Methodologically, we propose the use of image-related dense detection queries and efficient sparse tracking queries produced by our carefully designed query learning networks (QLN).

Ranked #17 on Multi-Object Tracking on MOT20 (MOTA metric, using extra training data)

Tasks: Decoder, Image Classification (+5)
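
Purely as an illustration of the two query types named above, the snippet below contrasts dense detection queries (one per feature-map location, enabling center-heatmap-style outputs) with a small set of sparse tracking queries (one per tracked object). Shapes and construction are assumptions, not the paper's QLN.

```python
import torch

B, C, Hf, Wf = 1, 256, 32, 32
feat = torch.randn(B, C, Hf, Wf)                 # backbone feature map

# Dense detection queries: one query per spatial location, so the decoder
# is not limited to a fixed budget of learned object queries.
dense_queries = feat.flatten(2).transpose(1, 2)  # (B, Hf*Wf, C)

# Sparse tracking queries: one query per currently tracked object, e.g.
# derived from the previous frame's object embeddings (hypothetical here).
num_tracks = 7
track_queries = torch.randn(B, num_tracks, C)

print(dense_queries.shape, track_queries.shape)
```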

How To Train Your Deep Multi-Object Tracker

2 code implementations • CVPR 2020 • Yihong Xu, Aljosa Osep, Yutong Ban, Radu Horaud, Laura Leal-Taixe, Xavier Alameda-Pineda

In this paper, we bridge this gap by proposing a differentiable proxy of MOTA and MOTP, which we combine in a loss function suitable for end-to-end training of deep multi-object trackers.

Tasks: Multi-Object Tracking, Multiple Object Tracking (+1)
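
A heavily simplified sketch of what a differentiable MOTA/MOTP proxy can look like: a softmax over negative distances plays the role that a learned soft-assignment module plays in the paper, identity switches are omitted, and the threshold and temperature are assumptions.

```python
import torch

def soft_mota_motp(pred, gt, tau=1.0, match_thresh=1.0):
    """Differentiable stand-in for MOTA/MOTP on a single frame.
    pred: (P, 2) predicted positions, gt: (G, 2) ground-truth positions."""
    dist = torch.cdist(pred, gt)                  # (P, G) pairwise distances
    assign = torch.softmax(-dist / tau, dim=1)    # soft match per prediction
    match_d = (assign * dist).sum(dim=1)          # expected distance per pred
    matched = torch.sigmoid((match_thresh - match_d) / tau)  # soft "is matched"
    soft_tp = matched.sum()
    soft_fp = (1 - matched).sum()                 # unmatched predictions
    soft_fn = torch.clamp(gt.shape[0] - soft_tp, min=0.0)
    d_motp = (matched * match_d).sum() / (soft_tp + 1e-6)
    d_mota = 1 - (soft_fp + soft_fn) / gt.shape[0]   # ID switches omitted
    return d_mota, d_motp

pred = torch.randn(5, 2, requires_grad=True)
gt = torch.randn(4, 2)
mota, motp = soft_mota_motp(pred, gt)
(motp - mota).backward()   # both terms are differentiable w.r.t. pred
```

Because both soft metrics are differentiable, they can be combined into a loss and back-propagated through the tracker, which is the gap the paper closes.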

CANU-ReID: A Conditional Adversarial Network for Unsupervised person Re-IDentification

no code implementations • 2 Apr 2019 • Guillaume Delorme, Yihong Xu, Stephane Lathuilière, Radu Horaud, Xavier Alameda-Pineda

Unsupervised person re-ID is the task of identifying people in a target dataset for which ID labels are unavailable during training.

Tasks: Clustering, Domain Adaptation (+1)
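
The adversarial ingredient can be sketched generically with a gradient-reversal camera classifier, which pushes re-ID features to become camera-invariant. This is a generic stand-in, not the paper's exact conditional formulation; all names and dimensions are assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feat_dim, num_cams = 128, 6
camera_head = nn.Linear(feat_dim, num_cams)   # hypothetical discriminator

def adversarial_camera_loss(features, cam_labels, lam=0.5):
    # Training the head to predict the camera while reversing the gradient
    # into the backbone discourages camera-specific features.
    rev = GradReverse.apply(features, lam)
    return nn.functional.cross_entropy(camera_head(rev), cam_labels)

features = torch.randn(8, feat_dim, requires_grad=True)
cams = torch.randint(0, num_cams, (8,))
adversarial_camera_loss(features, cams).backward()
```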
