In retrieval-augmented diffusion models (RDMs), a set of nearest neighbors is retrieved from an external database for each training instance, and the diffusion model is conditioned on these informative samples.
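As a rough sketch of that retrieval step (everything here, including the database arrays, the embedding dimension, and the downstream conditioning, is an illustrative stand-in rather than the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a d-dim embedding space and an external database of samples.
# In the real system these would come from a pretrained encoder and a large
# image database; everything here is illustrative, not the paper's API.
d, N = 64, 10_000
db_embeddings = rng.normal(size=(N, d))      # precomputed database embeddings
db_samples = rng.normal(size=(N, 3, 8, 8))   # the raw neighbor samples

def retrieve_neighbors(query_embedding, k=4):
    """k nearest database samples for one training instance, by L2 distance."""
    dists = np.linalg.norm(db_embeddings - query_embedding, axis=1)
    idx = np.argpartition(dists, k)[:k]
    return db_samples[idx]                   # (k, 3, 8, 8)

# During training, each instance is paired with its retrieved neighbors and
# the diffusion model is conditioned on them (conditioning pathway omitted).
query = rng.normal(size=d)
neighbors = retrieve_neighbors(query)
print(neighbors.shape)                       # (4, 3, 8, 8)
```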
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices.
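As one concrete point in that design space, the proposed noise schedule and a deterministic Euler sampler over the probability-flow ODE can be sketched as follows; `toy_denoise` is a placeholder for the trained denoiser D(x; sigma), and the constants follow commonly cited EDM defaults:

```python
import numpy as np

def sigma_schedule(n=18, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise levels sigma_0 > ... > sigma_{n-1} (EDM-style schedule)."""
    i = np.arange(n)
    return (sigma_max ** (1 / rho)
            + i / (n - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

def euler_sample(denoise, shape, rng, n=18):
    """Deterministic Euler sampling of the probability-flow ODE."""
    sigmas = np.append(sigma_schedule(n), 0.0)
    x = rng.normal(size=shape) * sigmas[0]   # start from pure noise
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, s_cur)) / s_cur  # ODE derivative dx/dsigma
        x = x + (s_next - s_cur) * d         # Euler step toward s_next
    return x

# Toy denoiser (optimal for a unit-Gaussian prior) to make the sketch runnable.
toy_denoise = lambda x, sigma: x / (1.0 + sigma ** 2)
sample = euler_sample(toy_denoise, shape=(3, 8, 8), rng=np.random.default_rng(0))
```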
This paper presents Deepchecks, a Python library for comprehensively validating machine learning models and data.
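A minimal sketch of what running such a validation suite looks like with Deepchecks' documented tabular API; the toy dataset and model below are placeholders for your own:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

# Toy data so the sketch runs end to end; swap in your own frames and model.
X, y = load_iris(return_X_y=True, as_frame=True)
df = X.assign(target=y)
train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

# Wrap the splits so Deepchecks knows which column is the label.
train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")
model = RandomForestClassifier().fit(train_df.drop(columns="target"),
                                     train_df["target"])

# Run the built-in end-to-end suite over data and model, export as HTML.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("validation_report.html")
```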
This article addresses the problem of distilling knowledge from a large teacher model to a slim student network for LiDAR semantic segmentation.
Ranked #14 on LiDAR Semantic Segmentation on nuScenes.
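The paper's specific point-cloud distillation scheme aside, the core teacher-student objective in such setups is typically a supervised loss plus a temperature-softened KL term on per-point logits; the sketch below is that generic recipe, not this paper's method:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic KD loss: supervised CE plus temperature-softened KL to the teacher.

    student_logits, teacher_logits: (num_points, num_classes) per-point scores.
    labels: (num_points,) ground-truth semantic classes.
    """
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                  # rescale gradients, as in Hinton et al.
    return alpha * ce + (1 - alpha) * kd

# Toy example: 1024 LiDAR points, 16 semantic classes.
s = torch.randn(1024, 16, requires_grad=True)
t = torch.randn(1024, 16)
y = torch.randint(0, 16, (1024,))
distillation_loss(s, t, y).backward()
```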
Training modern speech processing systems often requires large amounts of simulated room impulse response (RIR) data so that the systems generalize well in real-world, reverberant environments.
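A toy illustration of how such data is used: a reverberant training example is the clean signal convolved with an RIR. The exponentially decaying noise below is a crude stand-in for the output of a proper image-source or ray-tracing simulator:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
sr = 16_000

# Toy RIR: exponentially decaying white noise with roughly 60 dB of decay
# over rt60 seconds (exp(-6.9) is about 1e-3 in amplitude).
rt60 = 0.4
t = np.arange(int(rt60 * sr)) / sr
rir = rng.normal(size=t.shape) * np.exp(-6.9 * t / rt60)
rir /= np.max(np.abs(rir))

# "Clean" speech stand-in; in practice this is a real recording.
speech = rng.normal(size=sr * 2)

# Reverberant training example = clean speech convolved with the RIR.
reverberant = fftconvolve(speech, rir)[: len(speech)]
```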
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license.
Ranked #7 on Multi-task Language Understanding on MMLU.
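The released checkpoint is available on the Hugging Face Hub as EleutherAI/gpt-neox-20b; a minimal generation sketch (note that the 20B weights occupy roughly 40 GB even in half precision):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype="auto" keeps the checkpoint's native (half) precision.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("GPT-NeoX-20B is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```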
We observe that masked image modeling (MIM) essentially teaches the model to learn better middle-level interactions among patches and to extract more generalized features.
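For reference, the MIM objective under discussion reduces to masking a random subset of patches and reconstructing them; the tiny encoder below is purely illustrative, standing in for a ViT-style backbone:

```python
import torch
import torch.nn as nn

patch_dim, n_patches, mask_ratio = 48, 196, 0.75

# Illustrative encoder only; real MIM uses a transformer over patch tokens.
encoder = nn.Sequential(nn.Linear(patch_dim, 128), nn.GELU(),
                        nn.Linear(128, patch_dim))
mask_token = nn.Parameter(torch.zeros(patch_dim))

patches = torch.randn(8, n_patches, patch_dim)   # batch of patchified images
mask = torch.rand(8, n_patches) < mask_ratio     # random 75% patch mask

# Replace masked patches with a learned mask token, then reconstruct.
corrupted = torch.where(mask.unsqueeze(-1), mask_token, patches)
pred = encoder(corrupted)

# Reconstruction loss is computed only on the masked patches.
loss = ((pred - patches) ** 2)[mask].mean()
loss.backward()
```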
Many interpretability tools allow practitioners and researchers to explain Natural Language Processing systems.
We study model patching, where the goal is to improve accuracy on specific tasks without degrading accuracy on tasks where performance is already adequate.
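One widely used patching recipe is elementwise interpolation between the original and a task-fine-tuned checkpoint; whether that matches this paper's exact method is an assumption, so treat the sketch below as a generic illustration, with alpha tuned on held-out data:

```python
import torch

def interpolate_weights(base_state, finetuned_state, alpha=0.5):
    """Patch = elementwise mix of original and fine-tuned weights.

    alpha=0 keeps the original model; alpha=1 uses the fine-tuned one.
    Intermediate values trade patched-task gains against retained accuracy.
    """
    return {
        name: (1 - alpha) * base_state[name] + alpha * finetuned_state[name]
        for name in base_state
    }

# Usage sketch (models are hypothetical placeholders):
# model.load_state_dict(interpolate_weights(base.state_dict(),
#                                           ft.state_dict(), alpha=0.5))
```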
Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
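Towhee's published examples build pipelines as chained dataflow stages; the sketch below follows that pattern, but the operator names (image_decode, image_embedding.timm) are taken from its hub examples and should be treated as assumptions rather than a verified API surface:

```python
from towhee import pipe, ops

# A toy image-embedding pipeline: decode an image path, embed it, emit the
# vector. Operator names are assumptions drawn from Towhee's hub examples.
img_pipe = (
    pipe.input('path')
        .map('path', 'img', ops.image_decode())
        .map('img', 'vec', ops.image_embedding.timm(model_name='resnet50'))
        .output('vec')
)

embedding = img_pipe('example.jpg').get()   # run the pipeline on one input
```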