# Regression

2272 papers with code • 1 benchmark • 18 datasets

## Libraries

Use these libraries to find regression models and implementations.

## Datasets

## Most implemented papers

# Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
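The idea can be illustrated on a toy scalar problem (this is a sketch of the MAML principle, not the paper's implementation): each hypothetical "task" is to match a target `a` by minimizing `L_a(w) = (w - a)^2`, and the meta-update differentiates through one inner gradient step.

```python
# Toy sketch of the MAML idea: find an initialization w that performs
# well after one inner gradient step on each task.  Tasks, learning
# rates, and step counts here are illustrative, not from the paper.
inner_lr, meta_lr = 0.1, 0.05
tasks = [-2.0, 0.0, 4.0]           # hypothetical per-task targets

def inner_step(w, a):
    # One gradient step on task a's loss L_a(w) = (w - a)^2; dL/dw = 2(w - a)
    return w - inner_lr * 2.0 * (w - a)

w = 10.0                           # meta-parameter (shared initialization)
for _ in range(200):
    meta_grad = 0.0
    for a in tasks:
        w_adapted = inner_step(w, a)
        # Meta-gradient of L_a(w_adapted) w.r.t. w, differentiating
        # through the inner step: 2 * (w_adapted - a) * (1 - 2 * inner_lr)
        meta_grad += 2.0 * (w_adapted - a) * (1.0 - 2.0 * inner_lr)
    w -= meta_lr * meta_grad
```

For quadratic losses like this the meta-optimum coincides with the mean of the task targets; the point of the sketch is the structure of the inner/outer loop, which carries over unchanged to neural-network parameters.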

# Weight Uncertainty in Neural Networks

We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop.
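Two ingredients of Bayes by Backprop can be sketched for a single weight (an illustration, not the paper's code): a Gaussian variational posterior sampled with the reparameterization trick, and a KL term to the prior. The standard-normal prior below is a common simplification; the paper itself uses a scale-mixture prior.

```python
import math, random

# Variational posterior q(w) = N(mu, sigma^2) over one weight, with
# sigma = softplus(rho) > 0 as in the paper, sampled via w = mu + sigma * eps
# so gradients can flow back to (mu, rho).
def sample_weight(mu, rho, eps=None):
    sigma = math.log1p(math.exp(rho))
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

def kl_to_standard_normal(mu, rho):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ) -- a simplified prior
    # choice for illustration (the paper uses a Gaussian scale mixture)
    sigma = math.log1p(math.exp(rho))
    return 0.5 * (mu * mu + sigma * sigma - 1.0) - math.log(sigma)
```

Training minimizes the expected negative log-likelihood of the data under sampled weights plus this KL term, using ordinary backpropagation through the reparameterized sample.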

# Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost.

# Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks.

# Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression

By incorporating DIoU and CIoU losses into state-of-the-art object detection algorithms, e.g., YOLO v3, SSD and Faster R-CNN, we achieve notable performance gains in terms of not only the IoU metric but also the GIoU metric.
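The DIoU loss itself is short enough to sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form: `L_DIoU = 1 - IoU + rho^2 / c^2`, where `rho` is the distance between box centers and `c` is the diagonal of the smallest enclosing box.

```python
# Sketch of the Distance-IoU loss for two axis-aligned boxes.
def diou_loss(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area and union area for the IoU term
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared distance between the two box centers
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    # Squared diagonal of the smallest box enclosing both
    cx = max(ax2, bx2) - min(ax1, bx1)
    cy = max(ay2, by2) - min(ay1, by1)
    c2 = cx * cx + cy * cy
    return 1.0 - iou + rho2 / c2
```

Unlike plain `1 - IoU`, the penalty term stays informative for non-overlapping boxes, which is what speeds up bounding-box regression.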

# SQuAD: 100,000+ Questions for Machine Comprehension of Text

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.

# Implicit Quantile Networks for Distributional Reinforcement Learning

In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN.

# Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives.
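The weighting scheme reduces to a compact formula: each task `i` gets a learned log-variance `s_i`, and the combined objective is `sum_i exp(-s_i) * L_i + s_i`, so tasks the model treats as noisy are automatically down-weighted while the `+ s_i` term prevents the variances from growing without bound. A minimal sketch of that combination (the task losses below are hypothetical values):

```python
import math

# Uncertainty-based multi-task loss weighting: combine per-task losses
# L_i with learned log-variances s_i as sum_i exp(-s_i) * L_i + s_i.
def weighted_loss(task_losses, log_vars):
    return sum(math.exp(-s) * l + s for l, s in zip(task_losses, log_vars))
```

In practice the `s_i` are trainable parameters updated jointly with the network weights.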

# Distributional Reinforcement Learning with Quantile Regression

In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean.
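The workhorse of this approach is the quantile (pinball) loss: for a quantile level `tau`, under-predictions are penalized by `tau` and over-predictions by `1 - tau`, so its minimizer is the `tau`-quantile of the target distribution. A sketch, with a brute-force grid minimizer purely for illustration:

```python
# Pinball loss for quantile level tau: its expected value is minimized
# at the tau-quantile of the target distribution.
def pinball_loss(pred, target, tau):
    u = target - pred
    return tau * u if u >= 0 else (tau - 1.0) * u

def quantile_estimate(samples, tau, grid):
    # Brute-force minimizer over candidate values (illustration only;
    # QR-DQN instead regresses quantile values with this loss)
    return min(grid, key=lambda q: sum(pinball_loss(q, x, tau) for x in samples))
```

With `tau = 0.5` this recovers the median; a set of such losses at different `tau` values lets a network represent the whole return distribution.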

# Bayesian regression and Bitcoin

In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency.
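The conjugate core of Bayesian regression is easy to show in one dimension (a generic sketch, not the paper's trading method): with data `y = w * x + noise`, Gaussian noise of variance `sigma2`, and a Gaussian prior `w ~ N(0, tau2)`, the posterior over `w` is Gaussian in closed form.

```python
# Closed-form posterior for 1-D conjugate Bayesian linear regression.
def posterior(xs, ys, sigma2=1.0, tau2=1.0):
    # Posterior precision combines the prior precision with the data term
    precision = 1.0 / tau2 + sum(x * x for x in xs) / sigma2
    mean = (sum(x * y for x, y in zip(xs, ys)) / sigma2) / precision
    return mean, 1.0 / precision   # posterior mean and variance
```

As `tau2` grows the prior washes out and the posterior mean approaches the least-squares estimate; the posterior variance quantifies remaining uncertainty in the prediction.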