no code implementations • 25 Feb 2024 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, which lets us describe exactly how the parameters are distributed.
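For reference, a standard form of the ridgelet transform and its reconstruction formula (a sketch of the usual conventions; exact normalizations and function classes vary across the papers listed here):

```latex
% Ridgelet transform of f : \mathbb{R}^m \to \mathbb{C} with ridgelet \psi,
% and its dual (reconstruction) with activation \sigma:
\[
  \mathrm{R}f(a,b) = \int_{\mathbb{R}^m} f(x)\,\overline{\psi(\langle a,x\rangle - b)}\,\mathrm{d}x,
  \qquad
  f(x) = \int_{\mathbb{R}^m \times \mathbb{R}} \mathrm{R}f(a,b)\,\sigma(\langle a,x\rangle - b)\,\mathrm{d}a\,\mathrm{d}b,
\]
% valid up to an admissibility constant depending on the pair (\psi, \sigma).
```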
1 code implementation • 31 Jan 2024 • Toshinori Kitamura, Tadashi Kozuno, Masahiro Kato, Yuki Ichihara, Soichiro Nishimori, Akiyoshi Sannai, Sho Sonoda, Wataru Kumagai, Yutaka Matsuo
We study a primal-dual reinforcement learning (RL) algorithm for the online constrained Markov decision process (CMDP) problem, in which the agent seeks an optimal policy that maximizes return while satisfying constraints.
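A minimal sketch of the primal-dual idea on a toy two-action problem (the rewards, costs, and softmax parameterization below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def primal_dual_cmdp(n_iters=1000, lr_pi=0.1, lr_lam=0.05, budget=0.1):
    """Toy primal-dual loop: maximize E_pi[r] subject to E_pi[c] <= budget.

    The policy is a softmax over logits; the dual variable `lam` penalizes
    constraint violation via the Lagrangian E_pi[r] - lam * (E_pi[c] - budget).
    """
    r = np.array([1.0, 0.5])   # rewards per action (made-up numbers)
    c = np.array([0.3, 0.05])  # constraint costs per action (made-up)
    logits = np.zeros(2)
    lam = 0.0
    for _ in range(n_iters):
        pi = np.exp(logits - logits.max())
        pi /= pi.sum()
        f = r - lam * c
        adv = f - pi @ f                      # centered Lagrangian advantage
        logits += lr_pi * pi * adv            # policy-gradient ascent (primal)
        lam = max(0.0, lam + lr_lam * (pi @ c - budget))  # dual ascent
    return pi, lam
```

Here action 0 is best unconstrained but violates the budget, so the dual variable grows until the policy mixes toward the cheaper action.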
no code implementations • 5 Oct 2023 • Sho Sonoda, Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda
We identify hidden layers inside a deep neural network (DNN) with group actions on the data domain, and formulate a formal deep network as a dual voice transform with respect to the Koopman operator, a linear representation of the group action.
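For orientation, the voice transform referred to here is the standard group-theoretic one (the paper's construction builds its dual version from Koopman operators):

```latex
% Voice transform of f with respect to a unitary representation \pi of a
% group G on a Hilbert space \mathcal{H}, with window \psi \in \mathcal{H}:
\[
  V_\psi f(g) = \langle f,\ \pi(g)\,\psi \rangle_{\mathcal{H}}, \qquad g \in G.
\]
```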
no code implementations • 5 Oct 2023 • Sho Sonoda, Hideyuki Ishi, Isao Ishikawa, Masahiro Ikeda
The symmetry and geometry of input data are thought to be encoded in the internal data representation of a neural network, but the specific encoding rule has received little investigation.
no code implementations • 21 Sep 2023 • Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, Wataru Kumagai
In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL).
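A hypothetical sketch of such a loop, in which model-proposed code is executed and its output fed back into the next prompt (`llm` is an assumed black-box model call; the paper's framework may differ in its prompting and tool protocol):

```python
import subprocess
import textwrap

def solve_with_repl(question: str, llm, max_turns: int = 3) -> str:
    """Hypothetical CoT + Python-REPL loop: alternate model reasoning
    with execution of any code block the model emits."""
    prompt = (f"Question: {question}\n"
              "Think step by step; emit Python in ``` fences when helpful.")
    reply = ""
    for _ in range(max_turns):
        reply = llm(prompt)
        if "```" not in reply:
            return reply  # final natural-language answer, no code proposed
        code = reply.split("```")[1].removeprefix("python\n")
        result = subprocess.run(
            ["python", "-c", textwrap.dedent(code)],
            capture_output=True, text=True, timeout=30,
        )
        prompt += f"\n{reply}\nREPL output:\n{result.stdout or result.stderr}"
    return reply
```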
no code implementations • 12 Feb 2023 • Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki
Our bound is tighter than existing norm-based bounds when the condition numbers of weight matrices are small.
no code implementations • 27 Jan 2023 • Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda
To address this problem, we develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation.
no code implementations • 30 May 2022 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on the ridgelet theory.
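The basic building block here is the group convolution, in its standard form (the networks in question stack such layers with a nonlinearity):

```latex
% Group convolution of a signal f with a filter \psi over a group G,
% with respect to a left Haar measure \mu:
\[
  (f \ast \psi)(g) = \int_G f(h)\,\psi(h^{-1}g)\,\mathrm{d}\mu(h), \qquad g \in G.
\]
```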
no code implementations • 3 Mar 2022 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
Based on the well-established framework of the Helgason-Fourier transform on noncompact symmetric spaces, we present a fully-connected network and its associated ridgelet transform on such spaces, covering the hyperbolic neural network (HNN) and the SPDNet as special cases.
1 code implementation • 10 Feb 2022 • Kaito Watanabe, Kotaro Sakamoto, Ryo Karakida, Sho Sonoda, Shun-ichi Amari
In this paper, we study such neural fields in a multilayer architecture and investigate supervised learning of the fields.
no code implementations • 16 Jun 2021 • Hayata Yamasaki, Sho Sonoda
We prove that our algorithm achieves exponential error convergence under the low-noise condition even with optimized RFs; at the same time, it exploits the significant reduction in the number of features without computational hardness, owing to QML.
no code implementations • 9 Jun 2021 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
In this paper, we present a structure theorem of the null space for a general class of neural networks.
no code implementations • NeurIPS 2021 • Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We detail a novel class of implicit neural models.
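As a point of reference, an implicit layer defines its output as the solution of a fixed-point equation rather than an explicit forward pass. A deep-equilibrium-style toy sketch (illustrative only; the paper's models and solvers differ):

```python
import numpy as np

def implicit_layer(x, W, U, b, tol=1e-6, max_iter=100):
    """Find z* with z* = tanh(W z* + U x + b) by naive fixed-point
    iteration (converges when the map is a contraction, e.g. small ||W||)."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z
```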
no code implementations • 19 Aug 2020 • Ming Li, Sho Sonoda, Feilong Cao, Yu Guang Wang, Jiye Liang
Despite the well-known fact that a neural network is a universal approximator, we show mathematically in this study that when the hidden parameters are distributed in a bounded domain, the network may not achieve zero approximation error.
no code implementations • 7 Jul 2020 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
We develop a new theory of the ridgelet transform, a wavelet-like integral transform that provides a powerful and general framework for the theoretical study of neural networks with not only ReLU but general activation functions.
no code implementations • NeurIPS 2020 • Hayata Yamasaki, Sathyawageeswar Subramanian, Sho Sonoda, Masato Koashi
Here, we develop a quantum algorithm for sampling from this optimized distribution over features, in runtime $O(D)$ that is linear in the dimension $D$ of the input data.
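For context, a classical baseline drawing features i.i.d. from the kernel's spectral density (the quantum algorithm's point is precisely to sample from the data-optimized distribution instead, which the plain sketch below does not do):

```python
import numpy as np

def random_fourier_features(X, n_features, gamma=1.0, rng=None):
    """Random Fourier features for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2); returns an (n, n_features) map
    whose inner products approximate the kernel."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```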
no code implementations • 2 Feb 2019 • Sho Sonoda
Kernel quadrature is a kernel-based numerical integration scheme developed for fast approximation of expectations $\int f(x) d p(x)$.
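A minimal sketch of the standard weight computation, assuming the kernel mean embedding $z(x) = \mathbb{E}_{y\sim p}[k(x,y)]$ is available in closed form (as for a Gaussian kernel with Gaussian $p$):

```python
import numpy as np

def kernel_quadrature_weights(nodes, kernel, mean_embedding):
    """Weights w_i so that sum_i w_i f(x_i) approximates E_p[f] for f in
    the RKHS of `kernel`: solve K w = z, the standard optimal-weight
    system, with a small jitter for numerical stability."""
    K = np.array([[kernel(a, b) for b in nodes] for a in nodes])
    z = np.array([mean_embedding(a) for a in nodes])
    return np.linalg.solve(K + 1e-10 * np.eye(len(nodes)), z)
```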
no code implementations • 19 May 2018 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata
We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform.
no code implementations • 12 Dec 2017 • Sho Sonoda, Noboru Murata
The feature map obtained from the denoising autoencoder (DAE), a cornerstone of deep learning, is investigated by determining the transport dynamics of the DAE.
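The key object is the transport map of the DAE. For Gaussian corruption with variance $t$, the optimal reconstruction takes a score-driven form (a standard consequence of Tweedie's formula; stated here as a sketch of the result):

```latex
\[
  g_t(\tilde{x}) = \tilde{x} + t\,\nabla \log p_t(\tilde{x}),
\]
% where p_t is the data density smoothed by the noise; the DAE thus
% transports each point along the score of the smoothed density.
```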
no code implementations • 10 May 2016 • Sho Sonoda, Noboru Murata
Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the double continuum limit, i.e., the integral representation of the flow representation.
no code implementations • 14 May 2015 • Sho Sonoda, Noboru Murata
This paper investigates the approximation capability of neural networks with unbounded activation functions, such as the rectified linear unit (ReLU), the de facto standard activation of deep learning.
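A numerical toy illustration of approximation with ReLU ridge functions (random features plus least squares; this is a sanity check under made-up parameter ranges, not the paper's construction):

```python
import numpy as np

def relu_random_feature_fit(f, n_hidden=200, n_train=500, seed=0):
    """Fit f on [-1, 1] by least squares over random ridge units
    relu(a*x - b); returns coefficients and the sup-norm training error."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, n_train)
    a = 3.0 * rng.normal(size=n_hidden)
    b = rng.uniform(-3.0, 3.0, size=n_hidden)
    Phi = np.maximum(a * x[:, None] - b, 0.0)   # (n_train, n_hidden)
    c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    return c, np.max(np.abs(Phi @ c - f(x)))

# Example: the sup error for a smooth target is typically very small.
coef, sup_err = relu_random_feature_fit(lambda t: np.sin(np.pi * t))
```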
no code implementations • 23 Dec 2013 • Sho Sonoda, Noboru Murata
A new initialization method for hidden parameters in a neural network is proposed.