Inductive Bias

329 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Prototypical Networks for Few-shot Learning

oscarknagg/few-shot NeurIPS 2017

We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class.
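The core idea is simple: represent each class by the mean ("prototype") of its support-set embeddings, then classify queries by nearest prototype. A minimal NumPy sketch of that idea (function names and the Euclidean metric here are illustrative choices, not the paper's reference implementation, which also learns the embedding network itself):

```python
import numpy as np

def prototypes(embeddings, labels):
    # Class prototype = the mean of that class's support embeddings.
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, classes, protos):
    # Assign each query embedding to its nearest prototype (Euclidean distance).
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

New classes can be handled at test time simply by computing prototypes from a handful of support examples, with no further training of the classifier head.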

Relational inductive biases, deep learning, and graph networks

deepmind/graph_nets 4 Jun 2018

As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
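The relational inductive bias the paper describes is that computation is constrained by the graph's structure: only connected nodes exchange information. A bare-bones sketch of one message-passing round (this is a toy illustration, not the graph_nets library API, whose blocks also handle edge and global attributes):

```python
import numpy as np

def message_passing_step(node_feats, edges, w_msg, w_upd):
    # One round of message passing. The edge list encodes the relational
    # inductive bias: messages flow only along the graph's directed edges.
    msgs = np.zeros_like(node_feats)
    for src, dst in edges:
        msgs[dst] += node_feats[src] @ w_msg   # aggregate incoming messages
    return np.tanh(node_feats @ w_upd + msgs)  # update each node's state
```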

Deep Image Prior

DmitryUlyanov/deep-image-prior CVPR 2018

In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning.

Video Swin Transformer

SwinTransformer/Video-Swin-Transformer CVPR 2022

The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks.

CoAtNet: Marrying Convolution and Attention for All Data Sizes

xmu-xiaoma666/External-Attention-pytorch NeurIPS 2021

Transformers have attracted increasing interest in computer vision, but they still fall behind state-of-the-art convolutional networks.

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers

rwightman/pytorch-image-models 18 Jun 2021

Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation.

Universal Transformers

tensorflow/tensor2tensor ICLR 2019

Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times.

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

yikangshen/Ordered-Neurons ICLR 2019

When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed.
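The nesting property quoted above is the structural bias the paper builds into its recurrent units. It behaves like a stack of open constituents, which can be sketched as follows (a toy analogy, not the paper's gated-activation mechanism):

```python
def close_constituent(open_depths, depth):
    # Closing a constituent at `depth` forces every deeper (nested)
    # constituent still open on the stack to close first.
    closed = []
    while open_depths and open_depths[-1] >= depth:
        closed.append(open_depths.pop())
    return closed
```

In ON-LSTM this ordering is imposed softly, by ranking neurons so that high-order units (large constituents) update less often than low-order ones.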

Inductive Relation Prediction by Subgraph Reasoning

kkteru/grail ICML 2020

The dominant paradigm for relation prediction in knowledge graphs involves learning and operating on latent representations (i.e., embeddings) of entities and relations.

Taming Transformers for High-Resolution Image Synthesis

CompVis/taming-transformers CVPR 2021

We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.