We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions.
Ranked #1 on Novel View Synthesis on LLFF
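The k-planes idea can be sketched as follows: a d-dimensional field is represented by one feature plane per pair of axes (d choose 2 planes), and a point's feature is the elementwise product of bilinear lookups on each plane. The grid resolution, feature size, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from itertools import combinations

def make_planes(d, res=8, feat=4, seed=0):
    # One randomly initialized feature plane per pair of axes.
    rng = np.random.default_rng(seed)
    return {pair: rng.random((res, res, feat))
            for pair in combinations(range(d), 2)}

def bilinear(plane, u, v):
    # u, v in [0, 1]; bilinearly interpolate the plane's feature grid.
    res = plane.shape[0]
    x, y = u * (res - 1), v * (res - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, res - 1), min(y0 + 1, res - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[x0, y0] +
            wx * (1 - wy) * plane[x1, y0] +
            (1 - wx) * wy * plane[x0, y1] +
            wx * wy * plane[x1, y1])

def kplanes_feature(planes, point):
    # Hadamard (elementwise) product of the lookups over all axis-pair planes.
    f = 1.0
    for (i, j), plane in planes.items():
        f = f * bilinear(plane, point[i], point[j])
    return f

planes = make_planes(d=4)  # e.g. x, y, z, t -> 6 planes
feat = kplanes_feature(planes, np.array([0.2, 0.5, 0.7, 0.1]))
```

Because each plane is a plain feature grid and the combination rule is an explicit product, every component of the representation can be inspected directly, which is what makes the model "white-box."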
Most work on reward learning has used simulated environments. However, complex information about values is often expressed in natural language, and we believe reward learning for language is key to making RL practical and safe for real-world tasks.
Deep learning shows high potential for many medical image analysis tasks.
Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, have been shown to significantly improve language modeling while also providing a natural source attribution mechanism.
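The RALM pattern described above can be sketched minimally: retrieve documents relevant to the query, then condition the LM by prepending them to the prompt. The toy keyword-overlap retriever, corpus, and function names here are illustrative assumptions, not a real library API.

```python
# Minimal sketch of retrieval-augmented prompting, assuming a toy
# word-overlap retriever; a real system would use a dense or BM25 retriever.

CORPUS = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
    "The Great Wall is in China.",
]

def retrieve(query, corpus, k=1):
    # Score each document by word overlap with the query; return the top-k.
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def ralm_prompt(query, corpus):
    # Condition the LM on retrieved documents by prepending them to the
    # prompt; the retrieved text doubles as a source attribution.
    docs = retrieve(query, corpus)
    context = "\n".join(docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = ralm_prompt("Who created Python?", CORPUS)
```

The grounding document appears verbatim in the prompt, so an answer can be traced back to the retrieved source.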
In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation.
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models.
Ranked #14 on Text-to-Image Generation on COCO
Attention-based models trained on protein sequences have demonstrated remarkable success at classification and generation tasks relevant for artificial intelligence-driven protein design.
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task.
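The mechanism can be sketched in a toy form: at each sampling step, the pretrained denoiser's update is shifted by the gradient of a guidance objective, with no retraining of the network. The stand-in denoiser, Gaussian "classifier" gradient, and step sizes below are all assumptions for illustration.

```python
import numpy as np

def eps_model(x, t):
    # Stand-in for a pretrained noise predictor (here it simply pushes x
    # toward 0 when subtracted); a real model would be a trained network.
    return x

def guidance_grad(x, target):
    # Gradient of log p(target | x) for a toy Gaussian "classifier":
    # log p is proportional to -||x - target||^2 / 2, so the gradient
    # is (target - x).
    return target - x

def guided_sample(x, target, steps=50, scale=0.5, step_size=0.1):
    # At each step, add the guidance gradient to the denoising update;
    # the pretrained eps_model is used as-is, without retraining.
    for t in range(steps, 0, -1):
        eps = eps_model(x, t)
        x = x - step_size * eps + step_size * scale * guidance_grad(x, target)
    return x

sample = guided_sample(np.array([5.0, -3.0]), target=np.array([1.0, 1.0]))
```

With these toy dynamics the sample settles where the denoiser's pull toward 0 and the guidance pull toward the target balance, illustrating how the guidance scale trades off sample quality against conditioning strength.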
This is the first use of sparse convolution for 2D masked modeling.
Ranked #1 on Instance Segmentation on COCO 2017 val
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet.
Ranked #1 on Speech Recognition on CHiME6