no code implementations • 20 Feb 2024 • Hikari Otsuka, Daiki Chijiwa, Ángel López García-Arias, Yasuyuki Okoshi, Kazushi Kawamura, Thiem Van Chu, Daichi Fujiki, Susumu Takeuchi, Masato Motomura
In addition to reducing the search space, the random freezing pattern can also be exploited to reduce model size at inference.
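One way the size reduction can work, sketched under the assumption that frozen weights keep their random initial values: those values are fully determined by the PRNG seed, so only the trained (unfrozen) entries need to be stored. All names and shapes below are illustrative, not the paper's actual scheme.

```python
import numpy as np

def compress(weights, frozen_mask, seed):
    # frozen entries are recoverable from the seed, so store only the rest
    return weights[~frozen_mask].copy(), seed

def decompress(trained_values, frozen_mask, seed, shape):
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal(shape)    # regenerate the frozen random init
    weights[~frozen_mask] = trained_values  # restore the trained entries
    return weights

# hypothetical 4x4 layer with roughly half its weights frozen at init
seed, shape = 0, (4, 4)
init = np.random.default_rng(seed).standard_normal(shape)
frozen = np.random.default_rng(1).random(shape) < 0.5  # illustrative freezing pattern

trained = init.copy()
trained[~frozen] += 0.1  # stand-in for training only the unfrozen weights

stored, s = compress(trained, frozen, seed)
restored = decompress(stored, frozen, s, shape)
assert np.allclose(restored, trained)
```

The storage cost is the seed plus one value per unfrozen weight, so a denser freezing pattern directly translates into a smaller stored model.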
1 code implementation • 6 Dec 2023 • Jiale Yan, Hiroaki Ito, Ángel López García-Arias, Yasuyuki Okoshi, Hikari Otsuka, Kazushi Kawamura, Thiem Van Chu, Masato Motomura
The Strong Lottery Ticket Hypothesis (SLTH) posits the existence of high-performing subnetworks within a randomly initialized model, discoverable by pruning a convolutional neural network (CNN) without any weight training.
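A minimal sketch of how such subnetworks are typically found in SLTH work, assuming an edge-popup-style approach: weights stay frozen at their random initialization, and only a score per weight is learned; the subnetwork keeps the top-k scoring weights. The sizes and names here are illustrative, not this paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))       # frozen random weights, never trained
scores = rng.standard_normal((8, 8))  # per-weight scores (updated by SGD in practice)

def topk_mask(scores, k):
    """Keep the k highest-scoring weights; prune the rest."""
    thresh = np.sort(scores.ravel())[-k]
    return (scores >= thresh).astype(W.dtype)

mask = topk_mask(scores, k=16)
subnetwork = W * mask  # the weights themselves are untouched; only the mask is learned
assert int(mask.sum()) == 16
```

In practice the scores are optimized end to end (with the top-k selection treated as a straight-through operation), but the final subnetwork is exactly this masked view of the untrained random weights.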
1 code implementation • 24 Nov 2021 • Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, Jaehoon Yu
Deep neural networks (DNNs) are so over-parameterized that recent research has shown they already contain subnetworks with high accuracy at their randomly initialized state.
no code implementations • 6 Mar 2020 • Yafei Ou, Prasoon Ambalathankandy, Masayuki Ikebe, Shinya Takamaeda, Masato Motomura, Tetsuya Asai
In this state-of-the-art report, we present a comprehensive survey of 50+ tone mapping algorithms that have been implemented on hardware for acceleration and real-time performance.