McKernel

McKernel introduces a framework for using kernel approximations in the mini-batch setting with Stochastic Gradient Descent (SGD) as an alternative to Deep Learning.
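
As a rough illustration of this idea (not McKernel's actual API), the sketch below approximates a Gaussian kernel with random Fourier features (Rahimi and Recht 2007) and trains a linear classifier on top with mini-batch SGD; all names, shapes, and hyperparameters here are hypothetical.

```python
import numpy as np

# Sketch only: random Fourier features approximating a Gaussian kernel,
# with a linear model trained by mini-batch SGD on top. Everything here
# (names, shapes, hyperparameters) is illustrative, not McKernel's API.

rng = np.random.default_rng(0)
d, D, sigma = 16, 256, 1.0          # input dim, feature dim, kernel bandwidth

# z(x) = sqrt(2/D) * cos(W x + b) so that z(x)·z(y) ≈ exp(-||x-y||²/(2σ²))
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Toy data: label by distance from the origin (nonlinear in the raw inputs).
X = rng.normal(size=(1000, d))
y = np.where(np.linalg.norm(X, axis=1) > np.sqrt(d), 1.0, -1.0)

w = np.zeros(D)                      # linear weights on top of the features
lr, batch = 0.1, 32
for epoch in range(20):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        B = idx[start:start + batch]
        Z = features(X[B])
        margin = y[B] * (Z @ w)
        grad = -(Z.T @ (y[B] * (margin < 1))) / len(B)  # hinge-loss subgradient
        w -= lr * grad

acc = np.mean(np.sign(features(X) @ w) == y)
print(f"train accuracy: {acc:.3f}")
```

The point of the construction is that a linear model in the random feature space behaves like a kernel machine, while each SGD step only touches a mini-batch.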

The core library was developed in 2014 as an integral part of a Master of Science thesis [1,2] at Carnegie Mellon and City University of Hong Kong. The original intent was to speed up Random Kitchen Sinks (Rahimi and Recht 2007) by writing a very efficient Hadamard transform, which was the main bottleneck of the construction. The code was later expanded at ETH Zürich (in McKernel by Curtó et al. 2017) into a framework that could explain both Kernel Methods and Neural Networks. This manuscript and the corresponding theses constitute one of the first uses (if not the first) of Fourier features together with Deep Learning in the literature, a topic that later gained considerable research traction and interest in the community.
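
The Hadamard transform is what brings the feature expansion down to log-linear time (as in the library's title): a length-n transform costs O(n log n) instead of the O(n²) of an explicit matrix multiply. The minimal, unoptimized sketch below shows the standard iterative fast Walsh-Hadamard transform; McKernel's actual implementation is a heavily optimized variant, which this does not attempt to reproduce.

```python
import numpy as np

# Textbook in-place fast Walsh-Hadamard transform for length-2^k vectors,
# running in O(n log n). This is a plain sketch, not McKernel's optimized code.

def fwht(a):
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y  # butterfly step
        h *= 2
    return a

v = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
print(fwht(v))  # equals H_8 @ v for the unnormalized Hadamard matrix H_8
```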

More information can be found in the presentation the first author gave at ICLR 2020: iclr2020_DeCurto.

[1] https://www.curto.hk/c/decurto.pdf

[2] https://www.zarza.hk/z/dezarza.pdf

Source: McKernel: A Library for Approximate Kernel Expansions in Log-linear Time
