Philosophy

116 papers with code • 1 benchmark • 1 dataset


Most implemented papers

Gradient Harmonized Single-stage Detector

libuyu/GHM_Detection 13 Nov 2018

Despite the great success of two-stage detectors, the single-stage detector remains a more elegant and efficient approach, yet it suffers from two well-known disharmonies during training: the huge imbalance in quantity between positive and negative examples, and between easy and hard examples.
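The gradient harmonizing mechanism tackles both disharmonies at once by down-weighting examples whose gradient norms fall into overcrowded bins. Below is a minimal PyTorch sketch of the GHM-C classification loss, assuming sigmoid-based binary targets; the official libuyu/GHM_Detection implementation differs in details such as momentum-averaged bin counts and the handling of ignored anchors.

```python
import torch
import torch.nn.functional as F

def ghm_c_loss(logits, targets, bins=10):
    """Sketch of a gradient-harmonized classification loss (GHM-C).

    targets: float tensor of 0/1 labels, same shape as logits.
    Each example is re-weighted by the inverse density of its gradient
    norm g = |sigmoid(logit) - target|, so the huge crowd of easy
    negatives (and extreme outliers) contributes less to the loss.
    """
    probs = torch.sigmoid(logits).detach()
    g = (probs - targets).abs()                       # gradient norm per example
    edges = torch.linspace(0, 1, bins + 1, device=logits.device)
    n = targets.numel()
    weights = torch.zeros_like(g)
    for i in range(bins):
        lo, hi = edges[i], edges[i + 1]
        in_bin = (g >= lo) & (g <= hi) if i == bins - 1 else (g >= lo) & (g < hi)
        count = int(in_bin.sum())
        if count > 0:
            # gradient density ~ count / bin_width; example weight = N / density
            weights[in_bin] = n / (count * bins)
    loss = F.binary_cross_entropy_with_logits(
        logits, targets, weight=weights, reduction="sum")
    return loss / n
```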

Pylearn2: a machine learning research library

lisa-lab/pylearn2 20 Aug 2013

Pylearn2 is a machine learning research library.

PyPOTS: A Python Toolbox for Data Mining on Partially-Observed Time Series

WenjieDu/PyPOTS 30 May 2023

PyPOTS is an open-source Python library dedicated to data mining and analysis on multivariate partially-observed time series, i.e., incomplete time series with missing values.
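For readers unfamiliar with the data setting, the sketch below (plain NumPy, not the PyPOTS API) shows what a multivariate partially-observed time series looks like: an observation array with missing entries plus an indicator of which values are present, which is the kind of input such toolkits operate on.

```python
import numpy as np

rng = np.random.default_rng(0)

# A multivariate time series: 48 time steps, 5 features.
X = rng.normal(size=(48, 5))

# Simulate partial observation: drop ~30% of the values at random.
missing = rng.random(X.shape) < 0.3
X_observed = X.copy()
X_observed[missing] = np.nan              # missing entries encoded as NaN

# Libraries for partially-observed series typically consume the observed
# array together with (or deriving) an indicator of which entries exist.
observed_indicator = ~np.isnan(X_observed)
print(f"observed fraction: {observed_indicator.mean():.2f}")
```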

Neural Network Distiller: A Python Package For DNN Compression Research

NervanaSystems/distiller 27 Oct 2019

This paper presents the philosophy, design, and feature set of Neural Network Distiller, an open-source Python package for DNN compression research.

VanillaNet: the Power of Minimalism in Deep Learning

huawei-noah/vanillanet NeurIPS 2023

In this study, we introduce VanillaNet, a neural network architecture that embraces elegance in design.
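As an illustration of that design philosophy only (not the official huawei-noah/vanillanet model), the sketch below builds a short, plain stack of convolution and activation stages with no shortcuts, no attention, and no branching.

```python
import torch
import torch.nn as nn

class PlainNet(nn.Module):
    """Minimal network in the spirit of a minimalist design philosophy:
    a plain sequence of conv + activation stages, nothing else."""

    def __init__(self, num_classes=1000, widths=(64, 128, 256, 512)):
        super().__init__()
        layers = [nn.Conv2d(3, widths[0], kernel_size=4, stride=4),
                  nn.ReLU(inplace=True)]
        for c_in, c_out in zip(widths, widths[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(widths[-1], num_classes))

    def forward(self, x):
        return self.head(self.features(x))

# PlainNet()(torch.randn(1, 3, 224, 224)).shape -> torch.Size([1, 1000])
```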

Analyzing and Improving the Training Dynamics of Diffusion Models

nvlabs/edm2 5 Dec 2023

Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets.

MIOpen: An Open Source Library For Deep Learning Primitives

asleepzzz/nhwc_shuffle 30 Sep 2019

Deep learning has established itself as a common occurrence in the business lexicon.

ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense

softsys4ai/athena 2 Jan 2020

There has been extensive research on developing defenses against adversarial attacks; however, most existing techniques are designed for specific model families or application domains and therefore cannot be easily extended.
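ATHENA's core idea is to combine many diverse weak defenses into one ensemble. The sketch below is a hypothetical PyTorch illustration of that idea, not the softsys4ai/athena API: each weak defense is modeled as a (transform, model) pair, and their softmax outputs are averaged into a single prediction.

```python
import torch

def weak_defense_ensemble(weak_defenses, x):
    """Sketch of an ensemble of diverse weak defenses (names hypothetical).

    weak_defenses: list of (transform, model) pairs; each transform maps the
    input to the representation its model was trained on. The defenses'
    softmax outputs are averaged into one robust prediction.
    """
    probs = []
    with torch.no_grad():
        for transform, model in weak_defenses:
            model.eval()
            probs.append(torch.softmax(model(transform(x)), dim=-1))
    return torch.stack(probs).mean(dim=0)

# Example transformations that could serve as diverse defenses:
# identity, horizontal flip, 90-degree rotation, mild Gaussian noise.
example_transforms = [
    lambda x: x,
    lambda x: torch.flip(x, dims=[-1]),
    lambda x: torch.rot90(x, k=1, dims=[-2, -1]),
    lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1),
]
```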

A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

tensorflow/models 17 Dec 2021

In this paper, we comprehensively study three architecture design choices in ViT -- spatial reduction, doubled channels, and multiscale features -- and demonstrate that a vanilla ViT architecture can fulfill this goal (object localization and instance segmentation) without handcrafted multiscale features, maintaining the original ViT design philosophy.
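A minimal sketch of the single-scale idea, assuming a generic ViT backbone with 16x16 patches (names and shapes are illustrative, not the tensorflow/models implementation): the final-layer patch tokens are reshaped into one 2D feature map and handed to the localization head, with no handcrafted feature pyramid.

```python
import torch

def tokens_to_feature_map(tokens, image_size=640, patch_size=16):
    """Reshape single-scale ViT patch tokens into a 2D feature map.

    tokens: (batch, num_patches, dim), class token already removed.
    Returns a (batch, dim, H/patch, W/patch) map that a detection or
    instance-segmentation head can consume directly.
    """
    b, n, d = tokens.shape
    h = w = image_size // patch_size
    assert n == h * w, "token count must match the patch grid"
    return tokens.transpose(1, 2).reshape(b, d, h, w)
```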