Search Results for author: Vadim Tschernezki

Found 5 papers, 3 papers with code

EPIC Fields: Marrying 3D Geometry and Video Understanding

1 code implementation • NeurIPS 2023 • Vadim Tschernezki, Ahmad Darkhalil, Zhifan Zhu, David Fouhey, Iro Laina, Diane Larlus, Dima Damen, Andrea Vedaldi

Compared to other neural rendering datasets, EPIC Fields is better tailored to video understanding because it is paired with labelled action segments and the recent VISOR segment annotations.

Neural Rendering • Video Understanding

Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations

no code implementations • 7 Sep 2022 • Vadim Tschernezki, Iro Laina, Diane Larlus, Andrea Vedaldi

We present Neural Feature Fusion Fields (N3F), a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images reconstructible as a 3D scene.

Neural Rendering • Retrieval
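For readers skimming this entry, a minimal PyTorch sketch of the general idea behind N3F is given below: a 3D feature field is volume-rendered along camera rays and trained so that the rendered features match those of a frozen 2D feature extractor (e.g. DINO). This is not the authors' code; the network sizes, tensor shapes, and the use of precomputed rendering weights are illustrative assumptions.

```python
# Sketch only: distilling frozen 2D image features into a 3D feature field,
# in the spirit of N3F. Shapes and the reuse of NeRF rendering weights are assumed.
import torch
import torch.nn as nn

class FeatureField(nn.Module):
    """Small MLP mapping 3D points to D-dimensional feature vectors."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, xyz):              # xyz: (rays, samples, 3)
        return self.mlp(xyz)             # (rays, samples, feat_dim)

def render_features(field, xyz, weights):
    """Volume-render per-point features with the same weights used for colour."""
    feats = field(xyz)                                # (R, S, D)
    return (weights.unsqueeze(-1) * feats).sum(1)     # (R, D)

field = FeatureField()
optim = torch.optim.Adam(field.parameters(), lr=1e-3)

xyz = torch.randn(1024, 32, 3)                          # sampled points along 1024 rays (toy data)
weights = torch.softmax(torch.randn(1024, 32), dim=1)   # rendering weights from a fitted scene model
teacher = torch.randn(1024, 64)                         # frozen 2D features at the rays' pixels

pred = render_features(field, xyz, weights)
loss = torch.nn.functional.mse_loss(pred, teacher)      # distillation loss
loss.backward()
optim.step()
```

Because the feature field lives in 3D, features queried from it are consistent across views, which is what enables the retrieval and segmentation use cases listed for this paper.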

NeuralDiff: Segmenting 3D objects that move in egocentric videos

no code implementations • 19 Oct 2021 • Vadim Tschernezki, Diane Larlus, Andrea Vedaldi

Given a raw video sequence taken from a freely-moving camera, we study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground containing the objects that move in the video sequence.

Neural Rendering • Semantic Segmentation
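A toy PyTorch sketch of this static/dynamic decomposition is shown below: two small radiance-field branches, one time-independent and one time-conditioned, whose densities and colours are composited per sample. The branch design, shapes, and compositing rule are simplifying assumptions; the full NeuralDiff model also includes an actor-specific stream and is trained with a photometric reconstruction loss.

```python
# Sketch only (assumptions throughout): static vs. dynamic scene decomposition,
# loosely in the spirit of NeuralDiff.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Tiny radiance-field branch: input -> (density, rgb)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, x):
        out = self.net(x)
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])  # sigma, rgb

static = Branch(in_dim=3)        # depends on position only
dynamic = Branch(in_dim=3 + 1)   # depends on position and time

xyz = torch.randn(1024, 32, 3)   # sampled points along rays (toy data)
t = torch.rand(1024, 32, 1)      # frame time, one value per sample

sigma_s, rgb_s = static(xyz)
sigma_d, rgb_d = dynamic(torch.cat([xyz, t], dim=-1))

# Composite the two branches: densities add, colours mix in proportion to density.
sigma = sigma_s + sigma_d
rgb = (sigma_s * rgb_s + sigma_d * rgb_d) / (sigma + 1e-8)
# Pixels whose rendered colour is dominated by the dynamic branch can be
# labelled foreground, yielding a segmentation of the moving objects.
```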

Improving Deep Metric Learning by Divide and Conquer

1 code implementation • 9 Sep 2021 • Artsiom Sanakoyeu, Pingchuan Ma, Vadim Tschernezki, Björn Ommer

We propose to build a more expressive representation by jointly splitting the embedding space and the data hierarchically into smaller sub-parts.

Image Retrieval • Metric Learning +1
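A rough sketch of the divide-and-conquer idea, under assumed names and sizes: the data is clustered into K groups in embedding space, the embedding is split into K slices ("learners"), and each learner is trained only on the data of its own cluster. The backbone, clustering step, and toy triplet sampling below are illustrative stand-ins, not the paper's implementation (which mines triplets from class labels and periodically re-clusters).

```python
# Sketch only: hierarchical splitting of the embedding space and the data.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

K, D = 4, 128                     # number of learners, full embedding size
d = D // K                        # size of each learner's sub-embedding

backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, D))

features = torch.randn(1000, 512)        # precomputed image features (toy data)
with torch.no_grad():
    emb = backbone(features)
clusters = KMeans(n_clusters=K, n_init=10).fit_predict(emb.numpy())

optim = torch.optim.Adam(backbone.parameters(), lr=1e-4)
triplet = nn.TripletMarginLoss(margin=0.2)

loss = 0.0
for k in range(K):
    idx = torch.nonzero(torch.as_tensor(clusters) == k).squeeze(1)
    if len(idx) < 3:
        continue
    # Learner k only sees its own cluster and only its slice of the embedding.
    sub = backbone(features[idx])[:, k * d:(k + 1) * d]
    # Toy triplets drawn from within the cluster (real code uses class labels).
    a, p, neg = sub[0::3], sub[1::3], sub[2::3]
    m = min(len(a), len(p), len(neg))
    loss = loss + triplet(a[:m], p[:m], neg[:m])

loss.backward()
optim.step()
```

At test time the K sub-embeddings are concatenated back into a single D-dimensional representation, so retrieval works exactly as with an ordinary embedding.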

Divide and Conquer the Embedding Space for Metric Learning

1 code implementation • CVPR 2019 • Artsiom Sanakoyeu, Vadim Tschernezki, Uta Büchler, Björn Ommer

Approaches for learning a single distance metric often struggle to encode all different types of relationships and do not generalize well.

Clustering • Metric Learning +1
