Inductive Bias
515 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
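The core computation is compact: each class is represented by the mean of its embedded support examples, and queries are classified by distance to these prototypes. A minimal PyTorch sketch, assuming an embedding network has already mapped images to d-dimensional vectors:

```python
import torch

def prototypical_logits(support, support_labels, query, n_classes):
    """Classify queries by distance to class prototypes (a minimal sketch).

    support:        [n_support, d] embedded support examples
    support_labels: [n_support]    integer class ids in [0, n_classes)
    query:          [n_query, d]   embedded query examples
    """
    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                             # [n_classes, d]
    # Negative squared Euclidean distance doubles as the logit.
    return -torch.cdist(query, prototypes).pow(2)  # [n_query, n_classes]
```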
Relational inductive biases, deep learning, and graph networks
As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
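As a rough illustration of the relational inductive bias the paper formalizes (a sketch of the idea, not the released library's API), one message-passing step over a graph might look like this; hidden sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GNBlock(nn.Module):
    """One message-passing step: update edges from their endpoints,
    then update nodes from their aggregated incoming messages."""
    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        # Edge update: sender node, receiver node, and current edge feature.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, edge_dim))
        # Node update: node feature plus aggregated incoming messages.
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, node_dim))

    def forward(self, nodes, edges, senders, receivers):
        # nodes: [N, node_dim]; edges: [E, edge_dim];
        # senders/receivers: [E] node indices for each directed edge.
        edges = self.edge_mlp(
            torch.cat([nodes[senders], nodes[receivers], edges], dim=-1))
        # Sum messages at each receiver: the relational inductive bias.
        agg = torch.zeros(nodes.size(0), edges.size(-1), device=nodes.device)
        agg.index_add_(0, receivers, edges)
        nodes = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return nodes, edges
```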
Deep Image Prior
In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning.
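The fitting procedure is a plain reconstruction loop over a single image. A minimal PyTorch sketch, assuming `net` is some untrained image-to-image CNN with a compatible input (the paper's specific hourglass architecture and early-stopping schedule are omitted):

```python
import torch

def deep_image_prior(net, corrupted, in_ch=32, n_steps=2000, lr=0.01):
    """Fit an *untrained* generator to a single corrupted image (a sketch).

    net:       any CNN taking `in_ch` input channels and producing an
               output shaped like `corrupted` (an assumption)
    corrupted: [1, C, H, W] observed image (e.g. noisy)
    Stopping the loop early is what lets the architecture act as the prior.
    """
    z = torch.randn(1, in_ch, corrupted.shape[-2], corrupted.shape[-1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = (net(z) - corrupted).pow(2).mean()  # plain reconstruction loss
        loss.backward()
        opt.step()
    return net(z).detach()  # the restored image
```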
CoAtNet: Marrying Convolution and Attention for All Data Sizes
Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks.
Video Swin Transformer
The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks.
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation.
Taming Transformers for High-Resolution Image Synthesis
We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.
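A minimal sketch of the two-stage idea, with illustrative module sizes rather than the paper's VQGAN configuration: a CNN compresses the image into a short grid of discrete codes, and a transformer models the code sequence autoregressively.

```python
import torch
import torch.nn as nn

class TwoStageSketch(nn.Module):
    """CNN inductive bias for compression, transformer expressivity for
    modeling the resulting code sequence (a simplified sketch)."""
    def __init__(self, n_codes=512, dim=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=8, stride=8)  # image -> feature grid
        self.codebook = nn.Embedding(n_codes, dim)                 # discrete latents
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4)
        self.to_logits = nn.Linear(dim, n_codes)

    def encode_to_codes(self, img):                  # img: [B, 3, H, W]
        flat = self.encoder(img).flatten(2).transpose(1, 2)  # [B, N, dim]
        # Vector quantization: nearest codebook entry per grid position.
        d = torch.cdist(flat.reshape(-1, flat.size(-1)), self.codebook.weight)
        return d.argmin(-1).view(img.size(0), -1)    # [B, N] code indices

    def next_code_logits(self, codes):               # codes: [B, N]
        n = codes.size(1)                            # causal mask: attend to past only
        mask = torch.full((n, n), float('-inf'), device=codes.device).triu(1)
        h = self.transformer(self.codebook(codes), mask=mask)
        return self.to_logits(h)                     # [B, N, n_codes]
```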
ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
We initialise the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information.
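A single-head PyTorch sketch of the gating idea; the shapes and the positional parameterisation are simplifications of the paper's GPSA (its relative positional encoding is omitted):

```python
import torch
import torch.nn as nn

class GPSAHead(nn.Module):
    """Gated positional self-attention, single head: a learned gate blends
    content attention with a positional attention term."""
    def __init__(self, dim, n_tokens):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Positional attention logits; initialising these to a local pattern
        # is what mimics a convolution at the start of training.
        self.pos_logits = nn.Parameter(torch.zeros(n_tokens, n_tokens))
        self.gate = nn.Parameter(torch.tensor(1.0))  # sigmoid(1) ~ 0.73: mostly positional at init

    def forward(self, x):                            # x: [B, n_tokens, dim]
        scale = x.size(-1) ** -0.5
        content = torch.softmax(
            self.q(x) @ self.k(x).transpose(-2, -1) * scale, dim=-1)
        position = torch.softmax(self.pos_logits, dim=-1)  # [n_tokens, n_tokens]
        g = torch.sigmoid(self.gate)                 # the head's escape hatch from locality
        attn = (1 - g) * content + g * position      # gated blend of the two
        return attn @ self.v(x)
```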
Universal Transformers
Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.
Inductive Relation Prediction by Subgraph Reasoning
The dominant paradigm for relation prediction in knowledge graphs involves learning and operating on latent representations (i.e., embeddings) of entities and relations.