no code implementations • 1 Apr 2022 • Mitsumasa Nakajima, Katsuma Inoue, Kenji Tanaka, Yasuo Kuniyoshi, Toshikazu Hashimoto, Kohei Nakajima
In addition, this training computation can be emulated and accelerated on a simple, scalable physical system.
no code implementations • 6 Jun 2021 • Katsuma Inoue, Soh Ohara, Yasuo Kuniyoshi, Kohei Nakajima
A Lite BERT (ALBERT) is, as the name suggests, a lightweight version of BERT in which the parameter count is reduced by repeatedly applying the same neural network, the Transformer encoder layer, across the model's depth.
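The parameter-sharing idea above can be sketched minimally: one set of layer parameters is reused at every depth, so the stored parameter count stays constant while the effective depth grows. This is a simplified illustration, not ALBERT's actual implementation; the encoder layer is reduced to a single linear map plus a nonlinearity, and all names (`shared_encoder_layer`, `W_shared`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8       # hidden size (illustrative)
num_layers = 12   # depth: the same layer is reused 12 times

# One set of parameters shared across all layers (ALBERT-style sharing).
W_shared = rng.standard_normal((d_model, d_model)) * 0.1
b_shared = np.zeros(d_model)

def shared_encoder_layer(x):
    """Simplified stand-in for a Transformer encoder layer
    (attention and layer norm omitted)."""
    return np.tanh(x @ W_shared + b_shared)

x = rng.standard_normal((4, d_model))  # 4 token embeddings
for _ in range(num_layers):
    x = shared_encoder_layer(x)        # same weights at every depth

# Parameters stored: one layer's worth, regardless of depth.
params_shared = W_shared.size + b_shared.size
params_unshared = num_layers * params_shared  # what BERT-style stacking would store
print(params_shared, params_unshared)  # 72 vs. 864
```

In the unshared (BERT-style) case each of the 12 layers would carry its own weights, a 12-fold increase here; ALBERT exploits exactly this gap, trading some expressivity for a much smaller model.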