Search Results for author: Thomas Müller

Found 22 papers, 12 papers with code

Zero and Few-shot Learning for Author Profiling

no code implementations · 22 Apr 2022 · Mara Chinea-Rios, Thomas Müller, Gretel Liz De la Peña Sarracén, Francisco Rangel, Marc Franco-Salvador

We find that entailment-based models outperform supervised text classifiers based on XLM-RoBERTa and that we can reach 80% of the accuracy of previous approaches using less than 50% of the training data on average.

Few-Shot Learning
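
The entailment-based setup the abstract refers to can be pictured as zero-shot classification with an NLI model. A minimal sketch using the Hugging Face pipeline; the checkpoint and the profiling labels are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal sketch of entailment-based zero-shot classification.
# The model checkpoint and candidate labels are illustrative,
# not the exact setup used in the paper.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # any NLI model works here
)

text = "I love posting about my travels and my dog."
labels = ["bot", "human"]  # hypothetical author-profiling labels

result = classifier(text, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])
```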

Active Few-Shot Learning with FASL

1 code implementation · 20 Apr 2022 · Thomas Müller, Guillermo Pérez-Torró, Angelo Basile, Marc Franco-Salvador

Recent advances in natural language processing (NLP) have led to strong text classification models for many tasks.

Active Learning · Few-Shot Learning · +1

Few-Shot Learning with Siamese Networks and Label Tuning

1 code implementation · ACL 2022 · Thomas Müller, Guillermo Pérez-Torró, Marc Franco-Salvador

We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification.

Few-Shot Learning · Few-Shot Text Classification · +2
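
The Siamese (bi-encoder) idea in the title scores an input against textual label descriptions embedded in the same space. A minimal sketch with sentence-transformers, assuming an off-the-shelf checkpoint and toy labels; the paper's label-tuning step is omitted:

```python
# Sketch of bi-encoder (Siamese) zero-shot classification:
# embed the input and textual label descriptions, then pick the
# most similar label. Model and labels are illustrative; the
# paper's label-tuning refinement is omitted.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

text = "The screen cracked after one week."
label_descriptions = ["positive review", "negative review"]

text_emb = model.encode(text, convert_to_tensor=True)
label_embs = model.encode(label_descriptions, convert_to_tensor=True)

scores = util.cos_sim(text_emb, label_embs)[0]
print(label_descriptions[int(scores.argmax())])
```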

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

6 code implementations · 16 Jan 2022 · Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.

3D Reconstruction · 3D Shape Reconstruction · +2
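
The multiresolution hash encoding named in the title maps a coordinate to features looked up from hashed grid vertices at several resolutions and concatenated. A heavily simplified 2D NumPy sketch, with made-up table sizes and nearest-vertex lookup instead of the paper's d-linear interpolation:

```python
# Greatly simplified sketch of a multiresolution hash encoding
# lookup (2D, nearest vertex only). Sizes and level counts are made up.
import numpy as np

NUM_LEVELS = 4           # L: number of resolution levels
TABLE_SIZE = 2**14       # T: entries per hash table
FEATURES = 2             # F: feature dims per entry
PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # spatial-hash primes

rng = np.random.default_rng(0)
# One table per level; trainable in the real method, random here.
tables = rng.normal(scale=1e-4, size=(NUM_LEVELS, TABLE_SIZE, FEATURES))

def hash_cell(cell):
    """Hash integer grid coordinates into a table slot."""
    h = np.bitwise_xor.reduce(cell.astype(np.uint64) * PRIMES)
    return int(h % TABLE_SIZE)

def encode(x):
    """x in [0, 1]^2 -> concatenated per-level features."""
    feats = []
    for level in range(NUM_LEVELS):
        res = 16 * 2**level              # finer grid at each level
        cell = np.floor(x * res)         # nearest vertex (simplified)
        feats.append(tables[level][hash_cell(cell)])
    return np.concatenate(feats)

print(encode(np.array([0.3, 0.7])).shape)  # (NUM_LEVELS * FEATURES,)
```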

Path Guiding Using Spatio-Directional Mixture Models

1 code implementation · 25 Nov 2021 · Ana Dodik, Marios Papas, Cengiz Öztireli, Thomas Müller

In particular, we approximate incident radiance as an online-trained 5D mixture that is accelerated by a kD-tree.
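
To picture the 5D mixture (3D position plus a 2D direction parameterization), here is a toy sketch of evaluating such a mixture density; the components are random stand-ins, and the paper's online training and kD-tree acceleration are omitted:

```python
# Toy sketch of a mixture density over 5D points (position + direction).
# Components are random stand-ins; online training and the kD-tree
# acceleration structure are omitted.
import numpy as np

rng = np.random.default_rng(1)
K, D = 8, 5                       # mixture components, dimensionality
weights = rng.dirichlet(np.ones(K))
means = rng.uniform(size=(K, D))
sigmas = np.full(K, 0.2)          # isotropic components for simplicity

def mixture_pdf(x):
    """Density of an isotropic Gaussian mixture at a 5D point x."""
    sq_dist = ((means - x) ** 2).sum(axis=1)
    comp = np.exp(-0.5 * sq_dist / sigmas**2)
    comp /= (2 * np.pi * sigmas**2) ** (D / 2)
    return float(weights @ comp)

print(mixture_pdf(rng.uniform(size=D)))
```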

Extracting Triangular 3D Models, Materials, and Lighting From Images

2 code implementations · 24 Nov 2021 · Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler

We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.
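
The joint optimization can be pictured as gradient descent on scene parameters through a differentiable renderer. A schematic PyTorch loop; `render` here is a trivial single-pixel stand-in, not the paper's pipeline:

```python
# Schematic inverse-rendering loop: treat materials/lighting as
# tensors and fit them to reference images by gradient descent
# through a differentiable renderer. `render` is a trivial stand-in.
import torch

params = {
    "albedo": torch.rand(3, requires_grad=True),
    "light":  torch.rand(3, requires_grad=True),
}

def render(p):
    # Stand-in "renderer": a single Lambertian pixel.
    return p["albedo"] * p["light"]

target = torch.tensor([0.2, 0.5, 0.1])   # made-up reference image
opt = torch.optim.Adam(list(params.values()), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = ((render(params) - target) ** 2).mean()  # image-space loss
    loss.backward()
    opt.step()

print(loss.item())
```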

MATE: Multi-view Attention for Table Transformer Efficiency

1 code implementation · EMNLP 2021 · Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen

However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.

Question Answering
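
MATE's response to the 512-token limit is to give attention heads row-wise or column-wise views of the table, so no head pays the full quadratic cost. A toy sketch of building such sparse attention masks; the token-to-cell assignment is made up:

```python
# Toy sketch of "multi-view" attention masks for table tokens:
# some heads attend only within a token's row, others only within
# its column. The token-to-cell assignment here is made up.
import numpy as np

# For each token: which table row / column it belongs to.
rows = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
cols = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])

row_mask = rows[:, None] == rows[None, :]   # row heads: same-row tokens
col_mask = cols[:, None] == cols[None, :]   # column heads: same-column tokens

# Full attention would cost O(n^2) in the table length; these
# structured masks are what lets the model scale to large tables.
print(row_mask.sum(), col_mask.sum())
```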

Real-time Neural Radiance Caching for Path Tracing

1 code implementation · 23 Jun 2021 · Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller

Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering.

Neural Radiance Caching
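
A schematic of the training-while-rendering loop the abstract describes; the cache network, its inputs, and the training targets below are stand-ins:

```python
# Schematic of online adaptation: instead of pretraining, take a
# small gradient step on the radiance cache every frame, using
# samples generated while rendering. Network, inputs, and targets
# are stand-ins.
import torch

cache = torch.nn.Sequential(             # tiny stand-in radiance cache
    torch.nn.Linear(5, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
)
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

for frame in range(100):                  # render loop
    # Pretend these come from the path tracer this frame:
    queries = torch.rand(256, 5)          # e.g., position + direction
    targets = torch.rand(256, 3)          # longer-path radiance estimates
    loss = ((cache(queries) - targets) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```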

DoT: An efficient Double Transformer for NLP tasks with tables

1 code implementation · Findings (ACL) 2021 · Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos

To improve efficiency while maintaining a high accuracy, we propose a new architecture, DoT, a double transformer model, that decomposes the problem into two sub-tasks: A shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes as input those K tokens.

Question Answering
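
The two-stage decomposition can be sketched directly; both modules below are trivial stand-ins for the shallow pruning transformer and the deep task transformer:

```python
# Sketch of the DoT decomposition: a small "pruning" scorer keeps the
# top-K tokens, and only those are fed to the expensive task model.
# Both modules are trivial stand-ins for real transformers.
import torch

n_tokens, d, K = 1024, 64, 256
x = torch.randn(n_tokens, d)             # embedded (question, table) tokens

pruner = torch.nn.Linear(d, 1)           # stand-in shallow transformer
task_model = torch.nn.Linear(d, 2)       # stand-in deep transformer

scores = pruner(x).squeeze(-1)           # one relevance score per token
topk = scores.topk(K).indices            # keep the K most relevant tokens
logits = task_model(x[topk])             # deep model sees only K tokens
print(logits.shape)                      # (K, 2)
```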

Open Domain Question Answering over Tables via Dense Retrieval

1 code implementation · NAACL 2021 · Jonathan Herzig, Thomas Müller, Syrine Krichene, Julian Martin Eisenschlos

Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages.

Open-Domain Question Answering
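
A minimal sketch of dense retrieval over tables: embed the question and each linearized table with a shared encoder and rank by inner product. The encoder below is a pseudo-random stand-in, so the ranking is arbitrary; a trained dual encoder (plus an ANN index) makes it meaningful:

```python
# Minimal dense-retrieval sketch over linearized tables.
# The encoder is a stand-in that maps each string to a fixed
# pseudo-random unit vector, not a trained dual encoder.
import numpy as np

def encode(text):
    """Stand-in encoder: pseudo-random unit vector per string."""
    r = np.random.default_rng(abs(hash(text)) % 2**32)
    v = r.normal(size=128)
    return v / np.linalg.norm(v)

tables = ["city | population | ...", "film | year | ...", "team | wins | ..."]
index = np.stack([encode(t) for t in tables])    # precomputed table index

query = encode("which city has the largest population?")
scores = index @ query                           # inner-product relevance
print(tables[int(scores.argmax())])
```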

Collective defense of honeybee colonies: experimental results and theoretical modeling

no code implementations · 14 Oct 2020 · Andrea López-Incera, Morgane Nouvian, Katja Ried, Thomas Müller, Hans J. Briegel

Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defense.

Understanding tables with intermediate pre-training

1 code implementation · Findings (EMNLP) 2020 · Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller

To be able to use long examples as input of BERT models, we evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency at a moderate drop in accuracy.

Data Augmentation · Natural Language Inference · +1
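
A toy version of table pruning as pre-processing: score columns by token overlap with the question and keep the most relevant ones so the flattened table fits a BERT-style length budget. The heuristic and data are illustrative, not the paper's exact techniques:

```python
# Toy table pruning: keep the columns whose header and cells share
# the most tokens with the question, so the flattened table fits a
# fixed length budget. Heuristic and data are illustrative.
question = "how many points did real madrid score"
table = {
    "team":   ["real madrid", "barcelona"],
    "points": ["78", "74"],
    "coach":  ["ancelotti", "xavi"],
}

q_tokens = set(question.split())

def overlap(col, values):
    cell_tokens = set(col.split()) | {t for v in values for t in v.split()}
    return len(q_tokens & cell_tokens)

kept = sorted(table, key=lambda c: overlap(c, table[c]), reverse=True)[:2]
print(kept)  # prune to the 2 most question-relevant columns
```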

Neural Control Variates

no code implementations · 2 Jun 2020 · Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller

We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration.
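
The control-variate identity behind NCV is E[f(X)] = G + E[f(X) - g(X)], where G = E[g(X)] is known in closed form; if g approximates f, the residual estimator has low variance. A numeric sketch with a hand-picked g (NCV instead learns g with a neural network):

```python
# Numeric sketch of the control-variate identity:
#   E[f(X)] = G + E[f(X) - g(X)],  with G = E[g(X)] known.
# Here g is fixed by hand; NCV learns it with a neural network.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)

f = lambda t: np.exp(t)          # integrand on [0, 1]
g = lambda t: 1.0 + t            # crude approximation of f
G = 1.5                          # known integral of g over [0, 1]

plain = f(x)                     # standard MC samples
cv = G + (f(x) - g(x))           # control-variate samples (same mean)

print(plain.mean(), plain.var())
print(cv.mean(), cv.var())       # same estimate, much lower variance
```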

Development of swarm behavior in artificial learning agents that adapt to different foraging environments

no code implementations · 1 Apr 2020 · Andrea López-Incera, Katja Ried, Thomas Müller, Hans J. Briegel

Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics.

How a minimal learning agent can infer the existence of unobserved variables in a complex environment

no code implementations · 15 Oct 2019 · Katja Ried, Benjamin Eva, Thomas Müller, Hans J. Briegel

According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is both a necessary and a sufficient condition for the presence of genuine thought.

Explainable artificial intelligence

Answering Conversational Questions on Structured Data without Logical Forms

no code implementations · IJCNLP 2019 · Thomas Müller, Francesco Piccinno, Massimo Nicosia, Peter Shaw, Yasemin Altun

We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation.

Question Answering

Neural Importance Sampling

2 code implementations · 11 Aug 2018 · Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, Jan Novák

We propose to use deep neural networks for generating samples in Monte Carlo integration.
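
The underlying estimator is importance sampling: E_p[f(X)/p(X)] equals the integral of f for any suitable density p, and the variance vanishes as p approaches f up to normalization. The paper parameterizes p with neural networks; the sketch below uses a fixed Beta proposal instead:

```python
# Numeric sketch of importance sampling, the estimator the paper
# learns a proposal for. Here the proposal is a fixed Beta(3, 1)
# density chosen to match the integrand's shape.
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: t**2                      # integrand on [0, 1]; integral = 1/3

# Baseline: uniform sampling.
u = rng.uniform(size=100_000)
print(f(u).mean(), f(u).var())

# Importance sampling from Beta(3, 1), whose density is 3t^2.
x = rng.beta(3.0, 1.0, size=100_000)
w = f(x) / (3.0 * x**2)                 # f/p is exactly 1/3 here
print(w.mean(), w.var())                # same estimate, zero variance
```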

Modelling collective motion based on the principle of agency

no code implementations · 4 Dec 2017 · Katja Ried, Thomas Müller, Hans J. Briegel

Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals.

Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks

no code implementations · 15 Sep 2017 · Simon Kallweit, Thomas Müller, Brian McWilliams, Markus Gross, Jan Novák

To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source.
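
The descriptor extraction can be pictured as sampling a density field on progressively coarser grids centered at the shading point and oriented toward the light. A toy NumPy sketch; the density field, grid sizes, and level count are made up, and the paper's actual descriptor is more elaborate:

```python
# Toy hierarchical 3D descriptor: sample a (stand-in) density field
# on nested grids around the shading point, with the local z-axis
# pointing toward the light. Sizes and level count are made up.
import numpy as np

def density(p):
    """Stand-in cloud density field."""
    return np.exp(-np.linalg.norm(p, axis=-1))

def descriptor(shade_pt, light_dir, levels=3, res=4):
    z = light_dir / np.linalg.norm(light_dir)      # orient toward light
    # Build an orthonormal frame around z (simplified).
    a = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(z, a); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    frame = np.stack([x, y, z], axis=1)

    feats = []
    for lvl in range(levels):
        extent = 2.0**lvl                          # coarser box per level
        lin = np.linspace(-extent, extent, res)
        grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
        pts = shade_pt + grid.reshape(-1, 3) @ frame.T
        feats.append(density(pts))
    return np.concatenate(feats)                   # network input

d = descriptor(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(d.shape)  # (levels * res**3,)
```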
