1 code implementation • 5 Mar 2025 • Xuanchi Ren, Tianchang Shen, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas Müller, Alexander Keller, Sanja Fidler, Jun Gao
Our results demonstrate more precise camera control than prior work, as well as state-of-the-art results in sparse-view novel view synthesis, even in challenging settings such as driving scenes and monocular dynamic video.
no code implementations • 27 Jan 2025 • Ziyi Zhang, Nicolas Roussel, Thomas Müller, Tizian Zeltner, Merlin Nimier-David, Fabrice Rousselle, Wenzel Jakob
Building on existing emissive volume reconstruction algorithms, we introduce a subtle yet impactful modification of the loss function requiring changes to only a few lines of code: instead of integrating the radiance field along rays and supervising the resulting images, we project the training images into the scene to directly supervise the spatio-directional radiance field.
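A minimal sketch of the contrast between the two losses, assuming a hypothetical `radiance_field(samples)` callable that returns per-sample density and RGB; this illustrates the idea rather than reproducing the paper's code:

```python
import torch

def render_loss(radiance_field, ray_samples, deltas, pixel_rgb):
    # Conventional supervision: alpha-composite radiance along each ray
    # and compare the rendered pixel to the training pixel.
    sigma, rgb = radiance_field(ray_samples)              # (R, S), (R, S, 3)
    alpha = 1.0 - torch.exp(-sigma * deltas)              # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], -1)
    rendered = ((alpha * trans)[..., None] * rgb).sum(dim=-2)   # (R, 3)
    return ((rendered - pixel_rgb) ** 2).mean()

def projection_loss(radiance_field, ray_samples, pixel_rgb):
    # Modified supervision: project each training pixel into the scene and
    # supervise the spatio-directional radiance at every sample directly.
    _, rgb = radiance_field(ray_samples)                  # (R, S, 3)
    return ((rgb - pixel_rgb[:, None, :]) ** 2).mean()
```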
1 code implementation • 19 Jun 2024 • Matéo Mahaut, Laura Aina, Paula Czarnowska, Momchil Hardalov, Thomas Müller, Lluís Màrquez
Our experiments across a series of LLMs indicate that trained hidden-state probes provide the most reliable confidence estimates, albeit at the expense of requiring access to weights and training data.
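As a hedged illustration of such a probe (synthetic data and names, not the paper's setup): a logistic-regression head trained on frozen hidden states to predict answer correctness, whose predicted probability serves as the confidence estimate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 768))   # (examples, hidden_dim), frozen LLM states
labels = rng.integers(0, 2, size=1000)         # 1 = the model answered correctly

probe = LogisticRegression(max_iter=1000).fit(hidden_states, labels)
confidence = probe.predict_proba(hidden_states[:5])[:, 1]   # P(correct)
print(confidence)
```

Note the trade-off stated above: this requires white-box access to both the hidden states and labeled training data.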
no code implementations • CONLL 2015 • Ryan Cotterell, Thomas Müller, Alexander Fraser, Hinrich Schütze
We present labeled morphological segmentation, an alternative view of morphological processing that unifies several tasks.
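A toy example of the representation (the word and label set are ours, purely illustrative): because each morph carries a label, plain segmentation, stemming, and morphotactic tagging all become projections of one analysis.

```python
# A labeled segmentation of "unachievable".
analysis = [("un", "PREFIX"), ("achiev", "STEM"), ("able", "SUFFIX")]

segmentation = [morph for morph, _ in analysis]          # ['un', 'achiev', 'able']
stem = next(m for m, tag in analysis if tag == "STEM")   # 'achiev'
tags = [tag for _, tag in analysis]                      # morphotactic tags
```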
no code implementations • 28 Dec 2023 • Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid.
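A minimal 2D sketch of the pattern (shapes and sizes are illustrative): trainable features live in a grid, are interpolated at the query point, and a small MLP decodes them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridField(nn.Module):
    def __init__(self, res=64, feat_dim=8, hidden=32, out_dim=3):
        super().__init__()
        # Trainable feature grid; gradients flow into the grid entries.
        self.grid = nn.Parameter(0.01 * torch.randn(1, feat_dim, res, res))
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, xy):                     # query points in [-1, 1]^2, (N, 2)
        g = xy.view(1, -1, 1, 2)               # layout expected by grid_sample
        feats = F.grid_sample(self.grid, g, align_corners=True)  # bilinear lookup
        return self.mlp(feats.squeeze(0).squeeze(-1).t())        # (N, out_dim)

field = GridField()
print(field(torch.rand(5, 2) * 2 - 1).shape)   # torch.Size([5, 3])
```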
no code implementations • 16 Nov 2023 • Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, Zan Gojcic
We then extract an explicit mesh of a narrow band around the surface, with width determined by the kernel size, and fine-tune the radiance field within this band.
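A hedged sketch of the narrow-band step using scikit-image's marching cubes on a toy SDF grid (the paper operates on an optimized radiance field; names and sizes here are illustrative):

```python
import numpy as np
from skimage import measure

# Toy signed distance field of a sphere on a 64^3 grid.
g = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract an explicit mesh of the zero level set.
verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0)

# Restrict subsequent fine-tuning to a band around the surface whose width
# would be set by the reconstruction kernel size.
band_width = 0.05
in_band = np.abs(sdf) < band_width
print(verts.shape, faces.shape, in_band.mean())
```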
1 code implementation • CVPR 2023 • Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H. Taylor, Mathias Unberath, Ming-Yu Liu, Chen-Hsuan Lin
Neural surface reconstruction has been shown to be powerful for recovering dense 3D surfaces via image-based neural rendering.
no code implementations • 31 Jan 2023 • Fulvio Flamini, Marius Krumm, Lukas J. Fiderer, Thomas Müller, Hans J. Briegel
Variational quantum algorithms represent a promising approach to quantum machine learning where classical neural networks are replaced by parametrized quantum circuits.
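As a toy illustration of the idea (a one-parameter, one-qubit "circuit", not the paper's setup): the expectation value of a rotated qubit plays the role of the network output, and the parameter-shift rule supplies exact gradients.

```python
import numpy as np

def expectation(theta):
    # <0| RY(theta)^dag Z RY(theta) |0> = cos(theta) for a single qubit.
    return np.cos(theta)

def parameter_shift_grad(theta):
    # Parameter-shift rule: the exact gradient from two circuit evaluations.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta, target = 0.1, -1.0          # train <Z> toward the target value
for _ in range(100):
    grad = 2.0 * (expectation(theta) - target) * parameter_shift_grad(theta)
    theta -= 0.5 * grad
print(theta, expectation(theta))   # theta -> pi, <Z> -> -1
```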
1 code implementation • 18 Oct 2022 • Yunzhi Lin, Thomas Müller, Jonathan Tremblay, Bowen Wen, Stephen Tyree, Alex Evans, Patricio A. Vela, Stan Birchfield
We present a parallelized optimization method based on fast Neural Radiance Fields (NeRF) for estimating 6-DoF pose of a camera with respect to an object or scene.
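A hedged sketch of the parallelized scheme, with `render` standing in for a fast NeRF renderer that takes a batch of 6-DoF poses: several pose hypotheses are optimized independently against the observed image, and the best one survives.

```python
import torch

def estimate_pose(render, target, n_hypotheses=8, steps=200, lr=1e-2):
    # Each row is one pose hypothesis: axis-angle rotation + translation.
    poses = torch.randn(n_hypotheses, 6, requires_grad=True)
    opt = torch.optim.Adam([poses], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Photometric loss per hypothesis; render(poses) -> (n, H, W, 3).
        loss = ((render(poses) - target) ** 2).flatten(1).mean(dim=1)
        loss.sum().backward()       # hypotheses do not interact
        opt.step()
    return poses[loss.argmin()].detach()
```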
1 code implementation • 15 Jun 2022 • Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson, Sanja Fidler
Neural approximations of scalar and vector fields, such as signed distance functions and radiance fields, have emerged as accurate, high-quality representations.
no code implementations • 14 May 2022 • Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki Nagano, Towaki Takikawa, Stan Birchfield
We present a large-scale synthetic dataset for novel view synthesis consisting of ~300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 × 1600 pixels).
Ranked #1 on Novel View Synthesis on RTMV
no code implementations • 22 Apr 2022 • Mara Chinea-Rios, Thomas Müller, Gretel Liz De la Peña Sarracén, Francisco Rangel, Marc Franco-Salvador
We find that entailment-based models outperform supervised text classifiers based on XLM-RoBERTa and that we can reach 80% of the accuracy of previous approaches using less than 50% of the training data on average.
1 code implementation • 20 Apr 2022 • Thomas Müller, Guillermo Pérez-Torró, Angelo Basile, Marc Franco-Salvador
Recent advances in natural language processing (NLP) have led to strong text classification models for many tasks.
1 code implementation • ACL 2022 • Thomas Müller, Guillermo Pérez-Torró, Marc Franco-Salvador
We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification.
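As an off-the-shelf illustration of the entailment-based approach studied here (the Hugging Face zero-shot pipeline with a public NLI model, not the paper's own system): each candidate label is scored as a hypothesis entailed by the input text.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier("I loved the new update, everything is faster now.",
                    candidate_labels=["praise", "complaint", "question"])
print(result["labels"][0], result["scores"][0])   # top label and its score
```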
17 code implementations • 16 Jan 2022 • Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
1 code implementation • 25 Nov 2021 • Ana Dodik, Marios Papas, Cengiz Öztireli, Thomas Müller
In particular, we approximate incident radiance as an online-trained 5D mixture that is accelerated by a kD-tree.
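A rough sketch of the acceleration idea (component counts, kernels, and parameterization are illustrative): a k-d tree over component positions lets each radiance query touch only nearby mixture components instead of the full mixture.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
means_pos = rng.random((256, 3))          # component positions (3D)
means_dir = rng.random((256, 2))          # component directions (2D angles)
weights = np.full(256, 1.0 / 256)
tree = cKDTree(means_pos)                 # spatial index over components

def incident_radiance(pos, direction, k=8, sigma=0.1):
    # Evaluate only the k nearest components.
    dists, idx = tree.query(pos, k=k)
    d_dir = np.linalg.norm(means_dir[idx] - direction, axis=-1)
    kernel = np.exp(-(dists**2 + d_dir**2) / (2 * sigma**2))
    return float((weights[idx] * kernel).sum())

print(incident_radiance(np.array([0.5, 0.5, 0.5]), np.array([1.0, 2.0])))
```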
2 code implementations • CVPR 2022 • Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
Ranked #2 on Depth Prediction on Stanford-ORB
1 code implementation • EMNLP 2021 • Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen
However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.
Ranked #2 on Question Answering on HybridQA
2 code implementations • 23 Jun 2021 • Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering.
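A minimal sketch of generalization-via-adaptation, with `trace_paths` as a hypothetical stand-in for the renderer's source of radiance estimates: the cache network takes one gradient step per frame on samples produced while rendering, with no pretraining.

```python
import torch
import torch.nn as nn

# A small cache: 5D input (3D position + 2D direction) -> RGB radiance.
cache = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def render_frame(trace_paths):
    inputs, radiance = trace_paths()   # (N, 5) queries, (N, 3) path-traced RGB
    opt.zero_grad()
    loss = ((cache(inputs) - radiance) ** 2).mean()
    loss.backward()                    # one online training step per frame
    opt.step()
```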
1 code implementation • Findings (ACL) 2021 • Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos
To improve efficiency while maintaining high accuracy, we propose a new architecture, DoT, a double transformer model that decomposes the problem into two sub-tasks: a shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes those K tokens as input.
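A hedged structural sketch of the two-stage design (dimensions, depths, and the token-scoring head are illustrative):

```python
import torch
import torch.nn as nn

class DoTSketch(nn.Module):
    def __init__(self, d=256, k=128, vocab=30000):
        super().__init__()
        enc = lambda n: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), n)
        self.embed = nn.Embedding(vocab, d)
        self.pruner = enc(2)              # shallow pruning transformer
        self.task = enc(6)                # deep task-specific transformer
        self.score = nn.Linear(d, 1)
        self.k = k

    def forward(self, tokens):                            # (B, L) token ids
        x = self.embed(tokens)
        scores = self.score(self.pruner(x)).squeeze(-1)   # (B, L)
        top = scores.topk(self.k, dim=1).indices.sort(dim=1).values
        x = torch.gather(x, 1, top[..., None].expand(-1, -1, x.shape[-1]))
        return self.task(x)               # only the top-K tokens go deep

out = DoTSketch()(torch.randint(0, 30000, (2, 512)))      # (2, 128, 256)
```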
no code implementations • SEMEVAL 2021 • Thomas Müller, Julian Martin Eisenschlos, Syrine Krichene
We adapt the binary TAPAS model of Eisenschlos et al. (2020) to this task.
1 code implementation • NAACL 2021 • Jonathan Herzig, Thomas Müller, Syrine Krichene, Julian Martin Eisenschlos
Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages.
no code implementations • 14 Oct 2020 • Andrea López-Incera, Morgane Nouvian, Katja Ried, Thomas Müller, Hans J. Briegel
Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defense.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller
To be able to use long examples as input to BERT models, we evaluate table pruning techniques as a pre-processing step that drastically improves training and prediction efficiency at a moderate drop in accuracy (a sketch follows below).
Ranked #11 on Table-based Fact Verification on TabFact
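A simple heuristic in the spirit of that pruning step (ours, not the paper's exact technique): keep whole columns, highest question overlap first, until the token budget is spent.

```python
def prune_table(question, rows, budget=512):
    q = set(question.lower().split())
    columns = list(zip(*rows))                      # transpose to columns
    columns.sort(key=lambda col: sum(w in q for cell in col
                                     for w in cell.lower().split()),
                 reverse=True)
    kept, used = [], 0
    for col in columns:
        cost = sum(len(cell.split()) for cell in col)   # crude token count
        if used + cost <= budget:
            kept.append(col)
            used += cost
    return [list(row) for row in zip(*kept)]        # back to row layout

rows = [["Name", "Year", "Country"],
        ["Alice", "1990", "France"],
        ["Bob", "1985", "Spain"]]
print(prune_table("Which year was Alice born?", rows, budget=8))
```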
no code implementations • 2 Jun 2020 • Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration.
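A numeric illustration of the control-variate identity the method builds on: with uniform samples, F = G + E[f(X) − g(X)], which stays unbiased for any g whose integral G is known, while variance shrinks as g approaches f. The functions below are toy stand-ins, not a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) ** 2      # integrand on [0, 1]; true F = 0.5
g = lambda x: 3.0 * x * (1.0 - x)         # crude approximation of f
G = 0.5                                   # known integral of g over [0, 1]

x = rng.random(10_000)                    # uniform samples, p(x) = 1
plain = f(x).mean()                       # plain Monte Carlo estimate
ncv = G + (f(x) - g(x)).mean()            # control-variate estimate
print(plain, ncv)                         # both near 0.5 ...
print(f(x).std(), (f(x) - g(x)).std())    # ... but the residual varies less
```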
8 code implementations • ACL 2020 • Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, Julian Martin Eisenschlos
In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms.
Ranked #1 on Semantic Parsing on SQA (Accuracy metric)
no code implementations • 1 Apr 2020 • Andrea López-Incera, Katja Ried, Thomas Müller, Hans J. Briegel
Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics.
no code implementations • 15 Oct 2019 • Katja Ried, Benjamin Eva, Thomas Müller, Hans J. Briegel
According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is both a necessary and a sufficient condition for the presence of genuine thought.
no code implementations • IJCNLP 2019 • Thomas Müller, Francesco Piccinno, Massimo Nicosia, Peter Shaw, Yasemin Altun
We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation.
2 code implementations • 11 Aug 2018 • Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, Jan Novák
We propose to use deep neural networks for generating samples in Monte Carlo integration.
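A 1D toy sketch of the idea (the paper uses normalizing flows in higher dimensions; everything here is illustrative): a trainable piecewise-constant density is fit to the integrand's shape and then used for importance sampling.

```python
import torch

f = lambda x: torch.exp(-((x - 0.7) ** 2) / 0.005)   # peaked integrand on [0, 1]

logits = torch.zeros(64, requires_grad=True)          # 64 histogram bins
opt = torch.optim.Adam([logits], lr=1e-1)
for _ in range(500):
    x = torch.rand(1024)                              # uniform proposals
    bins = (x * 64).long().clamp(max=63)
    pdf = torch.softmax(logits, 0)[bins] * 64         # piecewise-constant PDF
    loss = -(f(x) * torch.log(pdf)).mean()            # cross-entropy pulls q toward f
    opt.zero_grad()
    loss.backward()
    opt.step()

# Importance-sample: pick a bin by its mass, then uniformly within the bin.
with torch.no_grad():
    probs = torch.softmax(logits, 0)
    b = torch.multinomial(probs, 4096, replacement=True)
    x = (b + torch.rand(4096)) / 64
    estimate = (f(x) / (probs[b] * 64)).mean()        # unbiased integral estimate
print(float(estimate))
```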
no code implementations • 4 Dec 2017 • Katja Ried, Thomas Müller, Hans J. Briegel
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals.
no code implementations • 15 Sep 2017 • Simon Kallweit, Thomas Müller, Brian McWilliams, Markus Gross, Jan Novák
To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source.
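A rough illustration of such a descriptor (the sampling scheme and scales are ours): average the cloud's density over shells of doubling radius around the shading point, biased toward the light, giving a coarse-to-fine summary of the surrounding geometry.

```python
import numpy as np

def hierarchical_descriptor(density, point, light_dir, levels=4, samples=32):
    rng = np.random.default_rng(0)
    feats = []
    for level in range(levels):
        radius = 0.05 * (2 ** level)        # doubling spatial extent per level
        offsets = rng.normal(size=(samples, 3))
        offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)
        # Bias the sample positions toward the light source.
        pts = point + radius * (0.5 * offsets + 0.5 * light_dir)
        feats.append(density(pts).mean())
    return np.array(feats)

density = lambda p: np.exp(-np.linalg.norm(p, axis=-1))   # toy cloud density
print(hierarchical_descriptor(density, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```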