no code implementations • 30 Jan 2025 • Ruofan Liang, Zan Gojcic, Huan Ling, Jacob Munkberg, Jon Hasselgren, Zhi-Hao Lin, Jun Gao, Alexander Keller, Nandita Vijaykumar, Sanja Fidler, Zian Wang
Classic physically-based rendering (PBR) accurately simulates light transport, but relies on precise scene representations (explicit 3D geometry, high-quality material properties, and lighting conditions) that are often impractical to obtain in real-world scenarios.
no code implementations • 28 Dec 2023 • Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller
Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid.
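A minimal sketch of the idea, assuming a 2D domain, a single-resolution feature grid, and a linear decoder (real systems use 3D multi-resolution grids and a small MLP): trainable features stored at the grid vertices are interpolated at the query position and then decoded.

```python
import numpy as np

# Trainable features on a regular grid, bilinearly interpolated at the query
# point, then mapped to the output by a tiny linear "decoder".
rng = np.random.default_rng(0)
G = 16          # grid resolution
F = 4           # features per grid vertex
grid = rng.normal(scale=0.1, size=(G, G, F))   # trainable feature grid
W = rng.normal(scale=0.1, size=(F, 3))         # tiny decoder (e.g., to RGB)

def query(x, y):
    """Evaluate the primitive at (x, y) in [0, 1)^2."""
    u, v = x * (G - 1), y * (G - 1)
    i, j = int(u), int(v)
    du, dv = u - i, v - j
    # Bilinear interpolation of the four surrounding feature vectors.
    f = ((1 - du) * (1 - dv) * grid[i, j] + du * (1 - dv) * grid[i + 1, j]
         + (1 - du) * dv * grid[i, j + 1] + du * dv * grid[i + 1, j + 1])
    return f @ W

print(query(0.3, 0.7))   # 3-vector predicted at the query point
```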
no code implementations • 30 Nov 2023 • Jakob Hoydis, Fayçal Aït Aoudia, Sebastian Cammerer, Florian Euchner, Merlin Nimier-David, Stephan ten Brink, Alexander Keller
Ray tracing (RT) is instrumental in 6G research for generating spatially consistent and environment-specific channel impulse responses (CIRs).
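In its simplest form, such a ray-traced CIR sums the contributions of the traced propagation paths, where the complex gain of path i accounts for free-space loss, material interactions, and antenna patterns, and its delay follows from the path length; the exact gain model depends on the ray tracer.

```latex
h(\tau) = \sum_{i=1}^{N_{\text{paths}}} a_i \, e^{\,j\varphi_i}\, \delta(\tau - \tau_i)
```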
no code implementations • 16 Nov 2023 • Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, Zan Gojcic
We then extract an explicit mesh of a narrow band around the surface, with width determined by the kernel size, and fine-tune the radiance field within this band.
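A hedged sketch of the band-selection step, with a unit-sphere SDF standing in for the surface extracted from the radiance field and an assumed band width playing the role of the kernel size; only samples inside the band would be used for fine-tuning.

```python
import numpy as np

# Stand-in geometry: a unit sphere replaces the surface extracted from the
# radiance field; `band_width` plays the role of the kernel size.
rng = np.random.default_rng(1)
band_width = 0.05                         # assumed value, set by the kernel size
pts = rng.uniform(-1.5, 1.5, size=(100_000, 3))
sdf = np.linalg.norm(pts, axis=1) - 1.0   # signed distance to the stand-in surface
in_band = np.abs(sdf) < band_width        # samples kept for fine-tuning
print(f"{in_band.mean():.1%} of samples fall inside the narrow band")
```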
1 code implementation • 20 Mar 2023 • Jakob Hoydis, Fayçal Aït Aoudia, Sebastian Cammerer, Merlin Nimier-David, Nikolaus Binder, Guillermo Marcus, Alexander Keller
Sionna is a GPU-accelerated open-source library for link-level simulations based on TensorFlow.
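For illustration only (this is not Sionna's API), the snippet below shows the kind of Monte Carlo link-level experiment such a library accelerates on GPUs: uncoded BPSK over an AWGN channel with a bit error rate measurement.

```python
import numpy as np

# Minimal link-level simulation sketch: uncoded BPSK over AWGN at a given Eb/N0.
rng = np.random.default_rng(0)
ebno_db, n_bits = 4.0, 1_000_000
no = 1.0 / (10.0 ** (ebno_db / 10.0))            # noise power N0 for unit-energy bits
bits = rng.integers(0, 2, n_bits)
x = 1.0 - 2.0 * bits                             # BPSK mapping: 0 -> +1, 1 -> -1
y = x + rng.normal(scale=np.sqrt(no / 2.0), size=n_bits)
ber = np.mean((y < 0).astype(int) != bits)       # hard-decision detection
print(f"BER at {ebno_db} dB Eb/N0: {ber:.2e}")
```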
2 code implementations • 29 Jul 2022 • Sebastian Cammerer, Jakob Hoydis, Fayçal Aït Aoudia, Alexander Keller
In this work, we propose a fully differentiable graph neural network (GNN)-based architecture for channel decoding and showcase a competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
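For context, here is a hedged sketch of classical min-sum belief propagation on the Tanner graph of a (7,4) Hamming code; the proposed GNN decoder keeps the same graph structure but replaces the fixed check/variable update rules with trainable message functions.

```python
import numpy as np

# Classical min-sum decoding over a Tanner graph (the fixed baseline that a
# trainable GNN decoder generalizes).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])            # (7,4) Hamming parity-check matrix

def min_sum_decode(llr, iters=10):
    m, n = H.shape
    msg_cv = np.zeros((m, n))                    # check -> variable messages
    for _ in range(iters):
        # variable -> check: channel LLR plus all incoming check messages but one
        msg_vc = (llr + msg_cv.sum(axis=0)) - msg_cv
        msg_vc *= H                              # keep messages on graph edges only
        for c in range(m):
            idx = np.flatnonzero(H[c])
            for v in idx:
                others = msg_vc[c, idx[idx != v]]
                msg_cv[c, v] = np.prod(np.sign(others)) * np.min(np.abs(others))
        bits = ((llr + msg_cv.sum(axis=0)) < 0).astype(int)
        if not (H @ bits % 2).any():             # stop once all checks are satisfied
            break
    return bits

# All-zero codeword sent over a noisy channel; one position is unreliable.
llr = np.array([4.0, 3.5, -0.5, 5.0, 2.0, 3.0, 4.5])
print(min_sum_decode(llr))                       # decodes back to the all-zero codeword
```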
no code implementations • 13 Jun 2022 • Daniel Schäufele, Guillermo Marcus, Nikolaus Binder, Matthias Mehlhose, Alexander Keller, Sławomir Stańczak
Non-orthogonal multiple access (NOMA) is a promising technology for providing the massive connectivity required in future 5G and 6G networks.
1 code implementation • 22 May 2022 • Fayçal Aït Aoudia, Jakob Hoydis, Sebastian Cammerer, Matthijs Van Keirsbilck, Alexander Keller
We propose a neural network (NN)-based algorithm for device detection and time of arrival (ToA) and carrier frequency offset (CFO) estimation for the narrowband physical random-access channel (NPRACH) of the narrowband Internet of Things (NB-IoT).
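A hedged sketch of such a multi-task estimator; the layer sizes, the number of input samples, and the losses below are assumptions for illustration, not the architecture of the paper.

```python
import tensorflow as tf

# Toy multi-task network: received NPRACH samples in, detection/ToA/CFO out.
n_samples = 256                                  # assumed number of complex samples
inp = tf.keras.Input(shape=(2 * n_samples,))     # real and imaginary parts, stacked
h = tf.keras.layers.Dense(256, activation="relu")(inp)
h = tf.keras.layers.Dense(128, activation="relu")(h)
detect = tf.keras.layers.Dense(1, activation="sigmoid", name="detection")(h)
toa = tf.keras.layers.Dense(1, name="toa")(h)
cfo = tf.keras.layers.Dense(1, name="cfo")(h)
model = tf.keras.Model(inp, [detect, toa, cfo])
model.compile(optimizer="adam",
              loss={"detection": "binary_crossentropy", "toa": "mse", "cfo": "mse"})
model.summary()
```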
no code implementations • 14 May 2022 • Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki Nagano, Towaki Takikawa, Stan Birchfield
We present a large-scale synthetic dataset for novel view synthesis consisting of ~300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 × 1600 pixels).
Ranked #1 on Novel View Synthesis on RTMV
2 code implementations • 22 Mar 2022 • Jakob Hoydis, Sebastian Cammerer, Fayçal Ait Aoudia, Avinash Vem, Nikolaus Binder, Guillermo Marcus, Alexander Keller
Sionna is a GPU-accelerated open-source library for link-level simulations based on TensorFlow.
17 code implementations • 16 Jan 2022 • Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
1 code implementation • 13 Jan 2022 • Matthias Mehlhose, Guillermo Marcus, Daniel Schäufele, Daniyal Amir Awan, Nikolaus Binder, Martin Kasparick, Renato L. G. Cavalcante, Sławomir Stańczak, Alexander Keller
In this feasibility study, we have implemented a recently proposed partially linear multiuser detection algorithm in reproducing kernel Hilbert spaces (RKHSs) on a GPU-accelerated platform.
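For orientation, a partially linear estimate in an RKHS typically takes the shape below (a linear term plus a kernel expansion, here with a Gaussian kernel); the specific kernel, dictionary, and online update used in the study may differ.

```latex
\hat{f}(\mathbf{x}) \;=\; \mathbf{w}^{\mathsf T}\mathbf{x}
\;+\; \sum_{i=1}^{N} \alpha_i\, k(\mathbf{x}_i, \mathbf{x}),
\qquad
k(\mathbf{x}_i, \mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x}_i - \mathbf{x} \rVert^2}{2\sigma^2}\right)
```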
2 code implementations • 23 Jun 2021 • Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering.
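A minimal sketch of this adaptation loop, with a linear model standing in for the cache network and a fake path tracer providing the training targets; both are placeholders, not the paper's implementation.

```python
import numpy as np

# Online adaptation: one gradient step on the "cache" per rendered frame.
rng = np.random.default_rng(0)
weights = np.zeros(6)                            # tiny stand-in for the cache network
lr = 1e-2

def trace_radiance(x):
    # Placeholder for the path tracer that supplies noisy radiance targets.
    return np.sin(x @ np.arange(1, 7)) + 0.1 * rng.normal()

for frame in range(100):
    x = rng.uniform(size=6)                      # query features (position, direction, ...)
    target = trace_radiance(x)
    pred = weights @ x
    weights -= lr * 2.0 * (pred - target) * x    # SGD step on the squared error
```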
no code implementations • 31 Mar 2021 • Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller
We demonstrate that 1x1-convolutions in 1D time-channel separable convolutions may be replaced by constant, sparse random ternary matrices with weights in $\{-1, 0, +1\}$.
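A minimal sketch with an assumed channel count and sparsity level: a fixed, sparse random ternary matrix applied across channels performs the same operation a 1x1 convolution would at every time step.

```python
import numpy as np

# Fixed, sparse random ternary matrix in place of a trainable 1x1 convolution.
rng = np.random.default_rng(0)
c_in, c_out, sparsity = 64, 64, 0.9              # assumed sizes and sparsity
W = rng.choice([-1, 0, 1], size=(c_out, c_in),
               p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2]).astype(np.float32)

x = rng.normal(size=(c_in, 128)).astype(np.float32)   # (channels, time) feature map
y = W @ x                                        # what a 1x1 conv computes per time step
print(y.shape, f"nonzero weights: {np.count_nonzero(W) / W.size:.0%}")
```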
no code implementations • 5 Mar 2021 • Alexander Keller, Matthijs Van Keirsbilck
Artificial neural networks can be represented by paths.
no code implementations • 2 Jun 2020 • Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration.
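The underlying estimator is the classical control variate identity: it stays unbiased for any $g$ whose integral $G$ is known, and its variance shrinks as $g$ approaches $f$; in NCV, $g$ is parameterized by neural networks constructed so that $G$ remains tractable.

```latex
F = \int f(x)\,\mathrm{d}x
  = \underbrace{\int g(x)\,\mathrm{d}x}_{=\,G\ \text{(known)}} + \int \big(f(x) - g(x)\big)\,\mathrm{d}x
  \;\approx\; G + \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i) - g(x_i)}{p(x_i)},
\qquad x_i \sim p
```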
no code implementations • 29 May 2019 • Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller
Low bit-width integer weights and activations are important for efficient inference, in particular because they reduce power consumption.
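As a point of reference, symmetric uniform quantization is one common way to obtain such low bit-width integer weights; the sketch below is generic, not the scheme proposed in the paper.

```python
import numpy as np

# Symmetric uniform quantization of weights to signed 8-bit integers.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(128, 128)).astype(np.float32)

scale = np.abs(w).max() / 127.0                  # map the largest magnitude to 127
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_int8.astype(np.float32) * scale        # dequantized values for error analysis
print("max quantization error:", np.abs(w - w_deq).max())
```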
no code implementations • 29 May 2019 • Matthijs Van Keirsbilck, Alexander Keller, Xiaodong Yang
We study structurally sparse RNNs, showing that they are well suited for acceleration on parallel hardware, with a greatly reduced cost of the recurrent operations as well as orders of magnitude fewer recurrent weights.
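One example of such structured sparsity, as a hedged sketch: a block-diagonal recurrent matrix splits the hidden state into independent groups, so the recurrent matrix-vector product touches hidden^2 / blocks weights instead of hidden^2 (the specific structure studied in the paper may differ).

```python
import numpy as np

# Vanilla RNN step with a block-diagonal recurrent matrix.
rng = np.random.default_rng(0)
hidden, blocks, inputs = 256, 8, 80              # assumed sizes
block = hidden // blocks
W_hh = [rng.normal(scale=0.1, size=(block, block)) for _ in range(blocks)]
W_xh = rng.normal(scale=0.1, size=(hidden, inputs))

def rnn_step(h, x):
    rec = np.concatenate([W_hh[b] @ h[b * block:(b + 1) * block] for b in range(blocks)])
    return np.tanh(rec + W_xh @ x)

h = rnn_step(np.zeros(hidden), rng.normal(size=inputs))
print(h.shape)                                   # recurrent weights: hidden**2 // blocks
```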
1 code implementation • 2 Jan 2019 • Nikolaus Binder, Alexander Keller
We compare different methods for sampling from discrete probability distributions and introduce a new algorithm which is especially efficient on massively parallel processors, such as GPUs.
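For contrast, one baseline such comparisons include is inverse-CDF sampling via binary search on the prefix sums; the new algorithm of the paper is a different, GPU-oriented construction.

```python
import numpy as np

# Baseline: inverse-CDF sampling of a discrete distribution by binary search.
rng = np.random.default_rng(0)
p = rng.random(1000)
p /= p.sum()                                     # discrete probability distribution
cdf = np.cumsum(p)

u = rng.random(1_000_000)                        # uniform samples
samples = np.searchsorted(cdf, u)                # smallest index i with cdf[i] >= u
print(np.bincount(samples, minlength=p.size)[:5] / u.size)   # ~ p[:5]
```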
Distributed, Parallel, and Cluster Computing; Graphics
no code implementations • 17 Dec 2017 • Alexander Keller, Ken Dahm
As both light transport simulation and reinforcement learning are governed by the same Fredholm integral equation of the second kind, reinforcement learning techniques can be used for photorealistic image synthesis: efficiency may be improved dramatically by guiding light transport paths with an approximate solution of the integral equation that is learned during rendering.
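Written schematically, the correspondence pairs the rendering equation (a Fredholm integral equation of the second kind) with the expected Q-learning update: emitted radiance plays the role of the reward, incident radiance that of Q, and the BRDF-cosine product that of the policy, so a Q estimate learned during rendering can guide path sampling (the notation below is illustrative, not the paper's exact formulation).

```latex
L(x, \omega) = L_e(x, \omega)
 + \int_{\Omega^+} f_s(\omega_i, x, \omega)\, \cos\theta_i\;
   L\big(h(x, \omega_i), -\omega_i\big)\, \mathrm{d}\omega_i
\\[4pt]
Q(s, a) \leftarrow (1 - \alpha)\, Q(s, a)
 + \alpha \Big( r(s, a) + \gamma \int_{\mathcal{A}} \pi(a' \mid s')\, Q(s', a')\, \mathrm{d}a' \Big)
```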
2 code implementations • 25 Jan 2017 • Ken Dahm, Alexander Keller
We show that the equations of reinforcement learning and light transport simulation are related integral equations.