Philosophy
140 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Gradient Harmonized Single-stage Detector
Despite the great success of two-stage detectors, single-stage detectors remain a more elegant and efficient approach, yet suffer from two well-known disharmonies during training, i.e., the huge imbalance between positive and negative examples, as well as between easy and hard examples.
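The core idea of gradient harmonizing is to down-weight examples whose gradient norms are over-represented (the flood of easy negatives) and up-weight rare, hard ones. The sketch below is a minimal NumPy illustration of that density-based reweighting for binary classification; the function name, bin count, and mean-normalization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ghm_weights(probs, targets, n_bins=10):
    """Illustrative sketch: weight each example inversely to the
    density of its gradient norm, in the spirit of GHM."""
    # For binary cross-entropy, the gradient norm is g = |p - y|.
    g = np.abs(probs - targets)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    edges[-1] += 1e-6  # include g == 1.0 in the last bin

    n = len(g)
    weights = np.zeros(n)
    for i in range(n_bins):
        in_bin = (g >= edges[i]) & (g < edges[i + 1])
        count = in_bin.sum()
        if count > 0:
            # Examples in crowded bins (many similar gradients)
            # get small weights; rare gradients get large ones.
            weights[in_bin] = n / count
    return weights / weights.mean()  # normalize to mean weight 1

# Three easy negatives (g = 0.1) and one hard one (g = 0.9):
probs = np.array([0.1, 0.1, 0.1, 0.9])
targets = np.zeros(4)
w = ghm_weights(probs, targets)  # hard example gets the largest weight
```

Multiplying per-example losses by these weights flattens the gradient-norm distribution, so neither the mass of easy examples nor a handful of outliers dominates training.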
Pylearn2: a machine learning research library
Pylearn2 is a machine learning research library.
Analyzing and Improving the Training Dynamics of Diffusion Models
Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets.
PyPOTS: A Python Toolbox for Data Mining on Partially-Observed Time Series
PyPOTS is an open-source Python library dedicated to data mining and analysis on multivariate partially-observed time series, i.e., incomplete time series with missing values, a.k.a. irregularly-sampled time series.
Neural Network Distiller: A Python Package For DNN Compression Research
This paper presents the philosophy, design and feature-set of Neural Network Distiller, an open-source Python package for DNN compression research.
VanillaNet: the Power of Minimalism in Deep Learning
In this study, we introduce VanillaNet, a neural network architecture that embraces elegance in design.
MIOpen: An Open Source Library For Deep Learning Primitives
Deep Learning has established itself to be a common occurrence in the business lexicon.
ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense
There has been extensive research on developing defense techniques against adversarial attacks; however, these have mainly been designed for specific model families or application domains and therefore cannot be easily extended.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation
In this paper, we comprehensively study three architecture design choices on ViT -- spatial reduction, doubled channels, and multiscale features -- and demonstrate that a vanilla ViT architecture can handle object localization and instance segmentation without handcrafting multiscale features, maintaining the original ViT design philosophy.