Search Results for author: Tomoumi Takase

Found 2 papers, 0 papers with code

Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias

no code implementations • 6 Oct 2022 • Ryo Karakida, Tomoumi Takase, Tomohiro Hayase, Kazuki Osawa

In this study, we first reveal that a specific finite-difference computation, composed of both gradient ascent and descent steps, reduces the computational cost of gradient regularization (GR).
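The "gradient ascent and descent" remark refers to approximating the gradient-norm penalty of GR with a finite difference: the Hessian-gradient product in the penalty's gradient can be estimated from two gradient evaluations, one at the current point and one after a small ascent step. Below is a minimal NumPy sketch of that idea on a toy quadratic loss; the penalty weight `lam`, step size `eps`, and learning rate `lr` are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy loss: L(theta) = 0.5 * theta^T A theta, so grad L = A @ theta
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T / 5.0                  # symmetric PSD Hessian
theta = rng.standard_normal(5)

def grad_loss(th):
    return A @ th

lam, eps, lr = 0.1, 1e-3, 0.05     # illustrative hyperparameters
for _ in range(100):
    g = grad_loss(theta)                   # gradient at current point
    g_ascent = grad_loss(theta + eps * g)  # gradient after a small ascent step
    # (g_ascent - g) / eps approximates H @ g, the gradient of the
    # GR penalty 0.5 * ||grad L||^2, without forming the Hessian.
    gr_grad = g + lam * (g_ascent - g) / eps
    theta -= lr * gr_grad                  # descent step on the GR objective
```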

Self-paced Data Augmentation for Training Neural Networks

no code implementations • 29 Oct 2020 • Tomoumi Takase, Ryo Karakida, Hideki Asoh

A typical approach applies data augmentation to all training samples uniformly, disregarding each sample's suitability for augmentation, which may reduce classifier performance.
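The selective-augmentation idea can be illustrated with a per-sample decision rule. The sketch below is a toy under stated assumptions: the Gaussian-noise augmentation, the loss threshold of 0.8, and the "augment low-loss samples" rule are hypothetical stand-ins, not the paper's exact self-paced criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Illustrative augmentation: add small Gaussian noise
    return x + 0.1 * rng.standard_normal(x.shape)

X = rng.standard_normal((8, 4))         # mini-batch of samples
losses = rng.uniform(0.0, 2.0, size=8)  # stand-in per-sample training losses

# Hypothetical self-paced rule: augment only samples the model already
# fits well (low loss), leaving hard samples unperturbed for now.
threshold = 0.8
suitable = losses < threshold
X_train = np.where(suitable[:, None], augment(X), X)
```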

Tasks: Data Augmentation, Single Particle Analysis
