no code implementations • 20 Aug 2022 • Shingo Yashima
Knowledge distillation is an effective approach for training compact recognizers required in autonomous driving.
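Knowledge distillation typically trains the compact student to match a larger teacher's temperature-softened output distribution. A minimal sketch of the standard temperature-scaled KL objective (generic illustration, not this paper's specific method; all names are hypothetical):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures
    (the common convention from Hinton et al.'s distillation setup).
    """
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

The loss is zero when student and teacher logits agree and positive otherwise; in practice it is usually mixed with the ordinary cross-entropy on hard labels.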
1 code implementation • 2 Jun 2022 • Shingo Yashima, Teppei Suzuki, Kohta Ishikawa, Ikuro Sato, Rei Kawakami
Ensembles of deep neural networks demonstrate improved performance over single models.
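The simplest way such ensembles are combined at inference time is to average the members' predictive distributions. A minimal sketch (an illustration of the generic technique, not this paper's contribution):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average the class-probability outputs of ensemble members.

    member_probs: list of arrays of shape (batch, classes), one per model.
    Returns the averaged predictive distribution, shape (batch, classes).
    """
    return np.mean(np.stack(member_probs), axis=0)
```

Averaging independently trained members tends to reduce the variance of the prediction, which is one common explanation for the accuracy gain over a single model.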
no code implementations • 13 Nov 2019 • Shingo Yashima, Atsushi Nitanda, Taiji Suzuki
Sketching and stochastic gradient methods are the most commonly used techniques for deriving efficient large-scale learning algorithms.
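A toy illustration of combining the two techniques (a sketch under generic assumptions, not this paper's algorithm): random Fourier features act as a sketch of an RBF kernel, and plain SGD on the squared loss fits a linear model over the sketched features. All function names and hyperparameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D=100, gamma=1.0, rng=rng):
    """Sketch the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)
    with D random Fourier features (Rahimi-Recht construction)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def sgd_fit(Z, y, lr=0.1, epochs=50, rng=rng):
    """Plain SGD on the squared loss over the sketched features."""
    w = np.zeros(Z.shape[1])
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (Z[i] @ w - y[i]) * Z[i]  # per-sample gradient
            w -= lr * grad
    return w
```

The sketch replaces an n-by-n kernel matrix with an n-by-D feature matrix, so each SGD step costs O(D) rather than O(n), which is the source of the large-scale efficiency.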