1 code implementation • 21 Mar 2023 • Diana Wofk, René Ranftl, Matthias Müller, Vladlen Koltun
We evaluate on the TartanAir and VOID datasets, observing up to a 30% reduction in inverse RMSE with dense scale alignment relative to global alignment alone.
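The global-alignment baseline referenced above can be written down compactly: a single scale and shift is fit by least squares between the predicted (scale-ambiguous) inverse depth and the sparse metric measurements. The NumPy sketch below illustrates only that baseline with made-up data; the paper's dense scale alignment refines scale locally on top of it and is not reproduced here.

```python
import numpy as np

def global_align(pred_inv_depth, sparse_inv_depth, valid_mask):
    """Least-squares scale/shift mapping a scale-ambiguous inverse-depth
    prediction onto sparse metric inverse-depth measurements.
    Sketch of the global-alignment baseline only, not the paper's
    dense scale alignment."""
    p = pred_inv_depth[valid_mask]
    g = sparse_inv_depth[valid_mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    scale, shift = np.linalg.lstsq(A, g, rcond=None)[0]
    return scale * pred_inv_depth + shift

# Toy usage: align a random prediction to 20 sparse "metric" points.
rng = np.random.default_rng(0)
pred = rng.uniform(0.1, 1.0, size=(48, 64))
gt = 2.0 * pred + 0.1                          # pretend metric inverse depth
mask = np.zeros_like(pred, dtype=bool)
mask[rng.integers(0, 48, 20), rng.integers(0, 64, 20)] = True
aligned = global_align(pred, gt, mask)
print(np.sqrt(np.mean((aligned - gt) ** 2)))   # inverse RMSE after alignment
```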
no code implementations • 18 Apr 2022 • Feihu Zhang, Vladlen Koltun, Philip Torr, René Ranftl, Stephan R. Richter
Semantic segmentation models struggle to generalize in the presence of domain shift.
1 code implementation • ICLR 2022 • Boyi Li, Kilian Q. Weinberger, Serge Belongie, Vladlen Koltun, René Ranftl
We present LSeg, a novel model for language-driven semantic image segmentation.
Ranked #1 on Few-Shot Semantic Segmentation on FSS-1000
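At a high level, language-driven segmentation of this kind scores every pixel against the text embedding of each candidate label and assigns the most similar label. The NumPy sketch below uses random vectors in place of the real image and text encoders, purely to show the shape of the computation; it is not the LSeg implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 8, 8, 512
labels = ["dog", "cat", "grass", "other"]

pixel_emb = rng.normal(size=(H, W, D))         # stand-in for dense image features
text_emb = rng.normal(size=(len(labels), D))   # stand-in for text-encoder embeddings

# Cosine similarity between each pixel embedding and each label embedding.
pixel_emb /= np.linalg.norm(pixel_emb, axis=-1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=-1, keepdims=True)
logits = np.einsum("hwd,cd->hwc", pixel_emb, text_emb)

segmentation = logits.argmax(axis=-1)          # per-pixel label index
print(segmentation.shape, [labels[i] for i in np.unique(segmentation)])
```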
1 code implementation • 11 Oct 2021 • Antonio Loquercio, Elia Kaufmann, René Ranftl, Matthias Müller, Vladlen Koltun, Davide Scaramuzza
The subtasks are executed sequentially, which increases processing latency and compounds errors through the pipeline.
no code implementations • 4 Oct 2021 • Kaicheng Yu, René Ranftl, Mathieu Salzmann
Weight sharing promises to make neural architecture search (NAS) tractable even on commodity hardware.
15 code implementations • ICCV 2021 • René Ranftl, Alexey Bochkovskiy, Vladlen Koltun
We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks.
Ranked #12 on Semantic Segmentation on PASCAL Context
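A quick way to try a dense prediction transformer for monocular depth is through the publicly released MiDaS torch.hub entry points. The snippet below follows that documented usage pattern; the model name "DPT_Large", the transform attribute, and the input filename should be checked against the current release of the repository.

```python
import cv2
import torch

# Load a DPT depth model and its matching preprocessing transform via torch.hub
# (assumes the intel-isl/MiDaS hub interface; names may change between releases).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
batch = transforms.dpt_transform(img).to(device)

with torch.no_grad():
    prediction = model(batch)                  # (1, H', W') relative inverse depth
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

print(depth.shape)  # same spatial size as the input image
```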
1 code implementation • 10 Jun 2020 • Elia Kaufmann, Antonio Loquercio, René Ranftl, Matthias Müller, Vladlen Koltun, Davide Scaramuzza
In this paper, we propose to learn a sensorimotor policy that enables an autonomous quadrotor to fly extreme acrobatic maneuvers with only onboard sensing and computation.
Robotics
14 code implementations • 2 Jul 2019 • René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, Vladlen Koltun
In particular, we propose a robust training objective that is invariant to changes in depth range and scale, advocate the use of principled multi-objective learning to combine data from different sources, and highlight the importance of pretraining encoders on auxiliary tasks.
Ranked #2 on Depth Estimation on eBDtheque
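The scale- and shift-invariant objective mentioned above can be sketched in a few lines: align the prediction to the target with a closed-form least-squares scale and shift, then penalize the remaining error. This is a simplified illustration only; the paper combines trimmed variants of such losses with a multi-scale gradient-matching term.

```python
import numpy as np

def ssi_loss(pred, target, mask):
    """Scale- and shift-invariant loss on (inverse) depth: fit a closed-form
    least-squares scale and shift, then measure the remaining absolute error.
    Minimal sketch of the idea, not the paper's full objective."""
    p, t = pred[mask], target[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    return np.mean(np.abs(s * p + b - t))

# Toy usage: the loss is zero for any affinely related prediction.
rng = np.random.default_rng(1)
gt = rng.uniform(0.0, 1.0, size=(32, 32))
pred = 5.0 * gt - 0.3
print(ssi_loss(pred, gt, np.ones_like(gt, dtype=bool)))  # ~0
```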
1 code implementation • 15 Jun 2019 • Henri Rebecq, René Ranftl, Vladlen Koltun, Davide Scaramuzza
In this work, we propose to learn the reconstruction of intensity images from event streams directly from data, rather than relying on hand-crafted priors.
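Learned event-to-video reconstruction typically feeds the network a dense tensor built from the raw event stream. The sketch below builds one common such representation, a spatio-temporal voxel grid with bilinear temporal weighting; the exact representation and weighting used in the paper may differ.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (t, x, y, polarity) into a spatio-temporal voxel
    grid with bilinear weighting along time. Illustrative only; not the
    paper's exact input representation."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    np.add.at(voxel, (left, y, x), p * (1.0 - w_right))
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel

# Toy usage: 5 random events on a 4x6 sensor, 3 temporal bins.
rng = np.random.default_rng(0)
ev = np.stack([np.sort(rng.uniform(0, 1, 5)),       # timestamps
               rng.integers(0, 6, 5),               # x
               rng.integers(0, 4, 5),               # y
               rng.choice([-1.0, 1.0], 5)], axis=1) # polarity
print(events_to_voxel_grid(ev, num_bins=3, height=4, width=6).shape)
```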
no code implementations • CVPR 2019 • Maxim Tatarchenko, Stephan R. Richter, René Ranftl, Zhuwen Li, Vladlen Koltun, Thomas Brox
Convolutional networks for single-view object reconstruction have shown impressive performance and have become a popular subject of research.
Ranked #1 on 3D Reconstruction on 300W
no code implementations • CVPR 2019 • Henri Rebecq, René Ranftl, Vladlen Koltun, Davide Scaramuzza
Since the output of event cameras is fundamentally different from conventional cameras, it is commonly accepted that they require the development of specialized algorithms to accommodate the particular nature of events.
no code implementations • CVPR 2017 • Jia Xu, René Ranftl, Vladlen Koltun
We present an optical flow estimation approach that operates on the full four-dimensional cost volume.
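The full four-dimensional cost volume stores a matching cost for every pixel and every candidate 2D displacement. The NumPy sketch below constructs such a volume from two feature maps by brute force; it only illustrates the data structure, not the paper's efficient processing of it.

```python
import numpy as np

def full_cost_volume(f1, f2, max_disp):
    """cost[y, x, dy, dx] = similarity between pixel (y, x) in the first
    feature map and pixel (y + dy, x + dx) in the second.
    Brute-force sketch of the 4D cost volume."""
    H, W, D = f1.shape
    R = 2 * max_disp + 1
    cost = np.zeros((H, W, R, R), dtype=np.float32)
    f2_pad = np.pad(f2, ((max_disp, max_disp), (max_disp, max_disp), (0, 0)))
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = f2_pad[max_disp + dy:max_disp + dy + H,
                             max_disp + dx:max_disp + dx + W]
            cost[:, :, dy + max_disp, dx + max_disp] = np.sum(f1 * shifted, axis=-1)
    return cost

# Toy usage: 16x16 feature maps, displacements up to +/-3 pixels.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(16, 16, 8)), rng.normal(size=(16, 16, 8))
print(full_cost_volume(f1, f2, max_disp=3).shape)  # (16, 16, 7, 7)
```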
no code implementations • 21 Apr 2014 • Yunjin Chen, Wensen Feng, René Ranftl, Hong Qiao, Thomas Pock
The Fields of Experts (FoE) image prior, a filter-based higher-order Markov random field (MRF) model, has been shown to be effective for many image restoration problems.
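In its commonly used form, the FoE prior assigns an image an energy given by non-quadratic penalties of learned linear filter responses (the filters and penalty functions below are generic symbols, not the paper's specific parameterization):

```latex
E_{\mathrm{FoE}}(u) = \sum_{i=1}^{N_f} \sum_{p} \rho_i\big((k_i * u)_p\big),
\qquad p(u) \propto \exp\big(-E_{\mathrm{FoE}}(u)\big)
```

Restoration then amounts to minimizing this prior energy together with a data term tied to the observed image.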
no code implementations • 16 Jan 2014 • Yunjin Chen, René Ranftl, Thomas Pock
Inpainting-based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression.
no code implementations • 16 Jan 2014 • Yunjin Chen, Thomas Pock, René Ranftl, Horst Bischof
It is now well known that Markov random fields (MRFs) are particularly effective for modeling image priors in low-level vision.
no code implementations • 13 Jan 2014 • Yunjin Chen, René Ranftl, Thomas Pock
Numerical experiments show that our trained models clearly outperform existing analysis operator learning approaches and are on par with state-of-the-art image denoising algorithms.