no code implementations • 16 Apr 2024 • Isao Ishikawa
One advantage of our theory is that it enables us to apply linear algebraic operations to the finite-dimensional approximation of the push-forward.
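For illustration, a minimal sketch of such a finite-dimensional approximation: the push-forward is estimated by least squares over a monomial dictionary (an EDMD-style baseline, not the paper's construction; the dynamics and dictionary here are invented for the example), after which taking the spectrum is an ordinary linear algebraic operation.

```python
import numpy as np

# Toy nonlinear dynamics: x_{t+1} = F(x_t)  (illustrative choice).
def F(x):
    return 0.9 * x - 0.1 * x**3

# Dictionary of observables (monomials) -- also an illustrative choice.
def phi(x):
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)  # (T, 4)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=500)
y = F(x)

# Least-squares estimate of the finite-dimensional push-forward matrix K
# satisfying phi(F(x)) ~ phi(x) @ K.
K, *_ = np.linalg.lstsq(phi(x), phi(y), rcond=None)

# Linear algebra on the approximation: the spectrum of the operator.
eigvals = np.linalg.eigvals(K)
print("estimated spectrum:", np.sort(np.abs(eigvals))[::-1])
```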
1 code implementation • 4 Mar 2024 • Isao Ishikawa, Yuka Hashimoto, Masahiro Ikeda, Yoshinobu Kawahara
This paper presents a novel approach for estimating the Koopman operator defined on a reproducing kernel Hilbert space (RKHS) and its spectra.
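As a rough point of reference, standard kernel EDMD estimates a matrix representation of the Koopman operator in the data-spanned subspace of an RKHS through Gram matrices; the sketch below uses a Gaussian kernel with Tikhonov regularization. This is a common baseline, not the estimator proposed in the paper, and the dynamics and kernel bandwidth are illustrative.

```python
import numpy as np

def gauss_gram(A, B, sigma=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 1))          # snapshots x_t
Y = 0.9 * X - 0.1 * X**3                  # snapshots x_{t+1} = F(x_t)

G_xx = gauss_gram(X, X)                   # Gram matrix on inputs
G_xy = gauss_gram(X, Y)                   # cross Gram matrix
reg = 1e-6 * np.eye(len(X))               # Tikhonov regularization

# Kernel EDMD: matrix representation of the Koopman operator restricted
# to the subspace of the RKHS spanned by the data.
K_hat = np.linalg.solve(G_xx + reg, G_xy)

eigvals = np.linalg.eigvals(K_hat)
print("leading |Koopman eigenvalues|:", np.sort(np.abs(eigvals))[::-1][:5])
```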
no code implementations • 25 Feb 2024 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, which lets us describe how the parameters are distributed.
no code implementations • 5 Oct 2023 • Sho Sonoda, Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda
We identify hidden layers inside a deep neural network (DNN) with group actions on the data domain, and formulate a formal deep network as a dual voice transform with respect to the Koopman operator, a linear representation of the group action.
no code implementations • 5 Oct 2023 • Sho Sonoda, Hideyuki Ishi, Isao Ishikawa, Masahiro Ikeda
The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has received little investigation.
no code implementations • 12 Feb 2023 • Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki
Our bound is tighter than existing norm-based bounds when the condition numbers of weight matrices are small.
no code implementations • 2 Jun 2022 • Motoya Ohnishi, Isao Ishikawa, Yuko Kuroki, Masahiro Ikeda
This work presents a novel method for the structure estimation of an underlying dynamical system.
no code implementations • 30 May 2022 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on the ridgelet theory.
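A minimal depth-2 GCNN for the cyclic group Z_n acting on R^n by shifts, with a numerical check of the equivariance (and, after pooling, invariance) that the universality result concerns; the sizes and weights are arbitrary illustrations, not the paper's construction.

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
w = rng.normal(size=n)            # filter defined on the group Z_n

def gconv(x, w):
    # Group convolution on Z_n: (w * x)[g] = sum_h w[h] x[(g - h) mod n].
    return np.array([sum(w[h] * x[(g - h) % n] for h in range(n))
                     for g in range(n)])

def depth2_gcnn(x):
    h = np.tanh(gconv(x, w))      # layer 1: equivariant group conv + nonlinearity
    return h.mean()               # layer 2: invariant pooling readout

x, s = rng.normal(size=n), 3
# Equivariance of the hidden layer and invariance of the output under shifts.
print("equivariant:", np.allclose(gconv(np.roll(x, s), w),
                                  np.roll(gconv(x, w), s)))
print("invariant:", np.isclose(depth2_gcnn(np.roll(x, s)), depth2_gcnn(x)))
```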
no code implementations • 15 Apr 2022 • Isao Ishikawa, Takeshi Teshima, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
no code implementations • 3 Mar 2022 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
Based on the well-established framework of the Helgason-Fourier transform on the noncompact symmetric space, we present a fully-connected network and its associated ridgelet transform on the noncompact symmetric space, covering the hyperbolic neural network (HNN) and the SPDNet as special cases.
1 code implementation • 30 Jun 2021 • Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham Kakade, Yoshinobu Kawahara
In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics.
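As a toy instance of such a cost: estimate a Koopman matrix of the closed-loop dynamics by least squares over a small dictionary and score the feedback gain by the spectral radius. The dynamics, dictionary, and cost below are illustrative stand-ins for the paper's formulation.

```python
import numpy as np

def spectrum_cost(gain, T=400, seed=3):
    # Closed-loop trajectory of a toy scalar system with feedback u = -gain * x.
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = 0.5
    for t in range(T - 1):
        x[t + 1] = np.tanh(2.0 * x[t]) - gain * x[t] + 0.05 * rng.normal()
    # DMD-style Koopman estimate over a small monomial dictionary.
    Phi = np.stack([x, x**2, x**3], axis=1)
    K = np.linalg.lstsq(Phi[:-1], Phi[1:], rcond=None)[0]
    # Spectral radius of the estimated Koopman matrix as the cost.
    return np.max(np.abs(np.linalg.eigvals(K)))

for gain in [0.0, 0.5, 1.0]:
    print(f"gain={gain:.1f}  spectrum cost={spectrum_cost(gain):.3f}")
```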
no code implementations • 9 Jun 2021 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
In this paper, we present a structure theorem of the null space for a general class of neural networks.
no code implementations • 27 Jan 2021 • Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, Yoshinobu Kawahara
Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using the properties of a reproducing kernel Hilbert space (RKHS).
no code implementations • 4 Dec 2020 • Takeshi Teshima, Koichi Tojo, Masahiro Ikeda, Isao Ishikawa, Kenta Oono
Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator.
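The tractable estimator in question is typically a Hutchinson-style stochastic trace estimator for the Jacobian of the vector field, which gives the instantaneous log-density change of the flow. A minimal NumPy sketch with an analytic toy field; real implementations use automatic differentiation rather than finite differences, and the field f here is invented for the example.

```python
import numpy as np

def f(x):
    # Toy vector field f: R^d -> R^d.
    return np.tanh(x) + 0.1 * x

def jvp(x, v, eps=1e-6):
    # Jacobian-vector product by finite differences (autodiff in practice).
    return (f(x + eps * v) - f(x)) / eps

def hutchinson_trace(x, n_samples=1000, seed=5):
    # tr(J) = E_v[v^T J v] for Rademacher v (Hutchinson's estimator).
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=x.shape)
        total += v @ jvp(x, v)
    return total / n_samples

x = np.array([0.3, -1.2, 0.7])
exact = np.sum(1 - np.tanh(x)**2 + 0.1)   # analytic trace of the Jacobian
print("exact:", exact, "estimate:", hutchinson_trace(x))
```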
no code implementations • 20 Nov 2020 • Naoya Hatano, Masahiro Ikeda, Isao Ishikawa, Yoshihiro Sawano
In the present study, we investigate the universality of neural networks, which concerns the density of the set of two-layer neural networks in function spaces.
no code implementations • 29 Jul 2020 • Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Yoshinobu Kawahara
Kernel mean embedding (KME) is a powerful tool to analyze probability measures for data, where the measures are conventionally embedded into a reproducing kernel Hilbert space (RKHS).
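Concretely, the empirical KME of a sample is the average of kernel features, and the RKHS distance between two embeddings is the maximum mean discrepancy (MMD). A short sketch with a Gaussian kernel; this is the standard scalar-valued baseline, not the generalization developed in the paper.

```python
import numpy as np

def gauss_gram(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared RKHS distance between the empirical kernel mean embeddings.
    return (gauss_gram(X, X, sigma).mean()
            - 2 * gauss_gram(X, Y, sigma).mean()
            + gauss_gram(Y, Y, sigma).mean())

rng = np.random.default_rng(6)
X = rng.normal(0.0, 1.0, (200, 2))
Y = rng.normal(0.5, 1.0, (200, 2))
print("MMD^2 (same law):   ", mmd2(X, rng.normal(0.0, 1.0, (200, 2))))
print("MMD^2 (shifted law):", mmd2(X, Y))
```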
no code implementations • 7 Jul 2020 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda
We develop a new theory of the ridgelet transform, a wavelet-like integral transform that provides a powerful and general framework for the theoretical study of neural networks with not only the ReLU but also general activation functions.
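In one dimension, the ridgelet transform of f against an analyzing function ψ takes the form R f(a, b) = ∫ f(x) ψ(a x − b) dx (up to normalization conventions that vary between treatments); a direct discretization shows how each neuron parameter pair (a, b) receives a coefficient. The target f, the choice ψ = tanh, and the grids below are illustrative.

```python
import numpy as np

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2)                 # target function (illustrative)

def psi(z):
    return np.tanh(z)             # activation-like analyzing function

# Ridgelet coefficient for one neuron (a, b): R f(a, b) = ∫ f(x) ψ(a x − b) dx.
def ridgelet(a, b):
    return np.sum(f * psi(a * x - b)) * dx

A = np.linspace(0.5, 3.0, 6)      # grid of neuron scales a
B = np.linspace(-3.0, 3.0, 7)     # grid of neuron biases b
coeffs = np.array([[ridgelet(a, b) for b in B] for a in A])
print("coefficient table shape:", coeffs.shape)
print(np.round(coeffs, 3))
```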
no code implementations • NeurIPS 2020 • Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases.
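An affine coupling layer, one of the two ingredients in the criterion, transforms half of the coordinates by an elementwise affine map whose scale and shift are arbitrary functions of the other half, so it inverts in closed form and has a tractable Jacobian. A minimal sketch with toy conditioner maps (the conditioners and sizes are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4  # even input dimension, split into two halves
W_s = rng.normal(size=(d // 2, d // 2))
W_t = rng.normal(size=(d // 2, d // 2))

def s(x1):  # scale conditioner (any function of x1 keeps invertibility)
    return np.tanh(x1 @ W_s)

def t(x1):  # shift conditioner
    return x1 @ W_t

def forward(x):
    x1, x2 = x[: d // 2], x[d // 2 :]
    y2 = x2 * np.exp(s(x1)) + t(x1)   # elementwise affine map in x2
    return np.concatenate([x1, y2])

def inverse(y):
    y1, y2 = y[: d // 2], y[d // 2 :]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

x = rng.normal(size=d)
print("round trip ok:", np.allclose(inverse(forward(x)), x))
# log|det J| is just the sum of the scales -- tractable by construction.
print("log|det J| =", np.sum(s(x[: d // 2])))
```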
no code implementations • 27 Nov 2019 • Masahiro Ikeda, Isao Ishikawa, Yoshihiro Sawano
In this paper, we specify what functions induce the bounded composition operators on a reproducing kernel Hilbert space (RKHS) associated with an analytic positive definite function defined on $\mathbf{R}^d$.
no code implementations • 9 Sep 2019 • Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Yoichi Matsuo, Yoshinobu Kawahara
In this paper, we consider a lifted representation of nonlinear dynamical systems with random noise based on transfer operators, and develop a novel Krylov subspace method for estimating the operators from finite data, taking their unboundedness into account.
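To convey the flavor, here is a Hankel/Krylov-type baseline: delay vectors of an observable span a Krylov-like subspace generated by time shifts, on which a matrix approximation of the operator is fit by least squares and its spectrum read off. This is standard Hankel DMD, not the paper's estimator, and it ignores the unboundedness issues the paper treats; the dynamics are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(8)
T, d = 600, 10                      # series length, delay-embedding depth
y = np.empty(T)
y[0] = 0.2
for t in range(T - 1):              # noisy nonlinear scalar dynamics
    y[t + 1] = 0.9 * np.sin(2.5 * y[t]) + 0.02 * rng.normal()

# Hankel (delay) matrix: columns z_j = (y_j, ..., y_{j+d-1}) span a
# Krylov-like subspace generated by time shifts of the observable.
H = np.column_stack([y[j : j + d] for j in range(T - d)])
X, Y = H[:, :-1], H[:, 1:]

# Least-squares matrix approximation of the transfer operator on that
# subspace, followed by its spectrum.
K = Y @ np.linalg.pinv(X)
eig = np.linalg.eigvals(K)
print("leading |eigenvalues|:", np.sort(np.abs(eig))[::-1][:4])
```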
no code implementations • 17 Jun 2019 • Isao Ishikawa, Akinori Tanaka, Masahiro Ikeda, Yoshinobu Kawahara
We empirically illustrate our metric with synthetic data, and evaluate it in the context of an independence test for random processes.
2 code implementations • NeurIPS 2018 • Isao Ishikawa, Keisuke Fujii, Masahiro Ikeda, Yuka Hashimoto, Yoshinobu Kawahara
The development of a metric for structural data is a long-standing problem in pattern recognition and machine learning.
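One concrete instance in the spirit of such metrics compares the subspaces spanned by delay embeddings of two time series through principal angles; the sketch below is an illustrative simplification, not the paper's Perron-Frobenius construction, and all signals and sizes are invented.

```python
import numpy as np

def delay_subspace(y, d=8, r=4):
    # Orthonormal basis for the leading r-dimensional subspace spanned
    # by delay-embedding vectors of the series y.
    H = np.column_stack([y[j : j + d] for j in range(len(y) - d)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :r]

def subspace_similarity(y1, y2, d=8, r=4):
    # Product of cosines of the principal angles between the two subspaces
    # (singular values of U1^T U2) -- a crude dynamics-aware similarity.
    U1, U2 = delay_subspace(y1, d, r), delay_subspace(y2, d, r)
    return np.prod(np.linalg.svd(U1.T @ U2, compute_uv=False))

rng = np.random.default_rng(9)
t = np.arange(400)
a = np.sin(0.2 * t) + 0.05 * rng.normal(size=t.size)
b = np.sin(0.2 * t + 1.0) + 0.05 * rng.normal(size=t.size)  # same dynamics
c = np.sin(0.7 * t) + 0.05 * rng.normal(size=t.size)        # different
print("same dynamics:     ", subspace_similarity(a, b))
print("different dynamics:", subspace_similarity(a, c))
```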
no code implementations • 19 May 2018 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata
We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform.