no code implementations • 6 Oct 2022 • Ryo Karakida, Tomoumi Takase, Tomohiro Hayase, Kazuki Osawa
In this study, we first reveal that a specific finite-difference computation, composed of both gradient ascent and descent steps, reduces the computational cost of gradient regularization (GR).
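The snippet below is a hypothetical sketch of the idea, not the authors' exact scheme: gradient regularization penalizes the squared gradient norm, L_GR(θ) = L(θ) + (γ/2)‖∇L(θ)‖², so its gradient needs a Hessian-gradient product ∇²L(θ)∇L(θ). A forward finite difference built from one gradient-ascent evaluation approximates that product without forming the Hessian. The names `gr_gradient_fd`, `gamma`, and `eps` are illustrative assumptions.

```python
import numpy as np

def gr_gradient_fd(grad_fn, theta, gamma=0.1, eps=1e-4):
    """Finite-difference gradient of a gradient-regularized objective.

    Approximates grad L + gamma * H g, where H g is the
    Hessian-gradient product, via
        H g ~= (grad L(theta + eps * g) - grad L(theta)) / eps.
    (Illustrative sketch; not the paper's exact ascent/descent scheme.)
    """
    g = grad_fn(theta)                    # plain gradient at theta
    g_ascent = grad_fn(theta + eps * g)   # gradient after a small ascent step
    hvp = (g_ascent - g) / eps            # finite-difference Hessian-gradient product
    return g + gamma * hvp

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so grad L = A theta
# and the exact Hessian-gradient product is A (A theta).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
grad_fn = lambda th: A @ th
theta = np.array([1.0, -1.0])

approx = gr_gradient_fd(grad_fn, theta, gamma=0.1)
exact = grad_fn(theta) + 0.1 * (A @ (A @ theta))
```

For a quadratic loss the finite difference is exact up to floating-point error, so `approx` matches the closed-form regularized gradient; the cost is two gradient evaluations instead of an explicit second-order computation.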
no code implementations • 29 Oct 2020 • Tomoumi Takase, Ryo Karakida, Hideki Asoh
A typical approach applies data augmentation to all training samples regardless of each sample's suitability for augmentation, which can degrade classifier performance.