Search Results for author: Sho Sonoda

Found 22 papers, 2 papers with code

A unified Fourier slice method to derive ridgelet transform for a variety of depth-2 neural networks

no code implementations · 25 Feb 2024 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform is known in closed form, which lets us describe how the network parameters are distributed.
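For orientation, the depth-2 integral representation and its ridgelet transform take the following classical form (notation follows the standard ridgelet literature and may differ from the paper's):

```latex
% Depth-2 network written as an integral representation with activation \sigma:
f(x) = \int_{\mathbb{R}^m \times \mathbb{R}} \gamma(a, b)\, \sigma(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b
% Ridgelet transform of f with respect to a profile function \psi:
R[f](a, b) = \int_{\mathbb{R}^m} f(x)\, \overline{\psi(a \cdot x - b)}\, \mathrm{d}x
% Under an admissibility condition on (\sigma, \psi), taking \gamma = R[f]
% reconstructs f, i.e. the parameter distribution R[f] realizes the target.
```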

A Policy Gradient Primal-Dual Algorithm for Constrained MDPs with Uniform PAC Guarantees

1 code implementation · 31 Jan 2024 · Toshinori Kitamura, Tadashi Kozuno, Masahiro Kato, Yuki Ichihara, Soichiro Nishimori, Akiyoshi Sannai, Sho Sonoda, Wataru Kumagai, Yutaka Matsuo

We study a primal-dual reinforcement learning (RL) algorithm for the online constrained Markov decision processes (CMDP) problem, wherein the agent explores an optimal policy that maximizes return while satisfying constraints.

Reinforcement Learning (RL)
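As a rough illustration of the primal-dual idea only (a generic Lagrangian scheme on a hypothetical toy tabular CMDP, not the paper's Uniform-PAC algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, budget = 4, 2, 0.9, 1.0
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))                 # reward r(s, a)
C = rng.uniform(size=(S, A))                 # cost c(s, a)

def returns(pi):
    """Exact discounted reward and cost returns from state 0 under policy pi."""
    Ppi = np.einsum('sa,sat->st', pi, P)
    inv = np.linalg.inv(np.eye(S) - gamma * Ppi)
    return (inv @ (pi * R).sum(1))[0], (inv @ (pi * C).sum(1))[0]

theta, lam = np.zeros((S, A)), 0.0
for _ in range(2000):
    pi = np.exp(theta); pi /= pi.sum(1, keepdims=True)
    L = R - lam * C                                   # Lagrangian reward r - lam * c
    Ppi = np.einsum('sa,sat->st', pi, P)
    v = np.linalg.inv(np.eye(S) - gamma * Ppi) @ (pi * L).sum(1)
    q = L + gamma * P @ v
    # Primal ascent: simplified softmax policy gradient (state-occupancy weights omitted).
    theta += 0.5 * pi * (q - (pi * q).sum(1, keepdims=True))
    # Dual ascent: raise the multiplier when the cost return exceeds the budget.
    lam = max(0.0, lam + 0.05 * (returns(pi)[1] - budget))

Jr, Jc = returns(pi)
print(f"reward return {Jr:.2f}, cost return {Jc:.2f} (budget {budget})")
```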

Deep Ridgelet Transform: Voice with Koopman Operator Proves Universality of Formal Deep Networks

no code implementations · 5 Oct 2023 · Sho Sonoda, Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda

We identify hidden layers inside a deep neural network (DNN) with group actions on the data domain, and formulate a formal deep network as a dual voice transform with respect to the Koopman operator, a linear representation of the group action.

LEMMA
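Two standard definitions make the abstract concrete (stated here in common textbook form; the paper works in greater generality). The Koopman operator turns a possibly nonlinear group action into a linear operator on functions, and the voice transform generalizes wavelet-type transforms to a unitary representation $\pi$ of a group $G$:

```latex
% Koopman (composition) operator of a group element g acting on the data domain:
(K_g f)(x) = f(g^{-1} \cdot x) \qquad \text{(linear in } f \text{ even if the action is nonlinear)}
% Voice transform of f with respect to a window \psi:
(V_\psi f)(g) = \langle f,\ \pi(g)\,\psi \rangle, \qquad g \in G
```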

Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks

no code implementations · 5 Oct 2023 · Sho Sonoda, Hideyuki Ishi, Isao Ishikawa, Masahiro Ikeda

The symmetry and geometry of input data are considered to be encoded in the internal data representation inside the neural network, but the specific encoding rule has received little attention.

LEMMA

LPML: LLM-Prompting Markup Language for Mathematical Reasoning

no code implementations · 21 Sep 2023 · Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, Wataru Kumagai

In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL).

Mathematical Reasoning
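A minimal sketch of the control loop such a framework implies, with an LLM proposing markup-delimited code that an external REPL executes. The tag names and the `llm` callable are illustrative stand-ins, not the paper's actual schema:

```python
import re, io, contextlib

def run_python(code: str) -> str:
    """Execute a snippet and capture stdout (sandboxing omitted for brevity)."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception as e:
        return f"Error: {e}"
    return buf.getvalue()

def solve(question: str, llm, max_turns: int = 5) -> str:
    """Alternate LLM reasoning and REPL execution until an answer tag appears."""
    transcript = question
    for _ in range(max_turns):
        reply = llm(transcript)          # llm: prompt -> continuation (stand-in)
        transcript += reply
        done = re.search(r"<ANSWER>(.*?)</ANSWER>", reply, re.S)
        if done:
            return done.group(1).strip()
        # Feed each code block's output back into the transcript for the next turn.
        for code in re.findall(r"<PYTHON>(.*?)</PYTHON>", reply, re.S):
            transcript += f"\n<OUTPUT>{run_python(code)}</OUTPUT>\n"
    return transcript                    # no answer within the turn budget
```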

Koopman-based generalization bound: New aspect for full-rank weights

no code implementations · 12 Feb 2023 · Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki

Our bound is tighter than existing norm-based bounds when the condition numbers of weight matrices are small.

Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation

no code implementations · 27 Jan 2023 · Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda

To address this problem, we develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation.

Quantum Machine Learning

Universality of Group Convolutional Neural Networks Based on Ridgelet Analysis on Groups

no code implementations · 30 May 2022 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

We show the universality of depth-2 group convolutional neural networks (GCNNs) in a unified and constructive manner based on the ridgelet theory.
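For reference, the layer operation behind a GCNN, written for a compact group $G$ with Haar measure (the paper treats a broader setting): group convolution, which commutes with the translation (Koopman) operator $K_u$ and is the map the universality result concerns:

```latex
(\psi \ast f)(g) = \int_G \psi(h^{-1} g)\, f(h)\, \mathrm{d}h,
\qquad
\psi \ast (K_u f) = K_u (\psi \ast f) \quad \text{for all } u \in G
```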

Fully-Connected Network on Noncompact Symmetric Space and Ridgelet Transform based on Helgason-Fourier Analysis

no code implementations · 3 Mar 2022 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

Based on the well-established framework of the Helgason-Fourier transform on the noncompact symmetric space, we present a fully-connected network and its associated ridgelet transform on the noncompact symmetric space, covering the hyperbolic neural network (HNN) and the SPDNet as special cases.

Deep Learning in Random Neural Fields: Numerical Experiments via Neural Tangent Kernel

1 code implementation · 10 Feb 2022 · Kaito Watanabe, Kotaro Sakamoto, Ryo Karakida, Sho Sonoda, Shun-ichi Amari

In this paper, we study such neural fields in a multilayer architecture to investigate supervised learning of the fields.

Learning Theory
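A minimal empirical NTK computation for a depth-2 tanh network, as a generic illustration of the kernel these experiments track (the paper's random neural fields are a richer, infinite-dimensional model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, width = 3, 512
W = rng.normal(size=(width, d)) / np.sqrt(d)      # input-to-hidden weights
a = rng.normal(size=width) / np.sqrt(width)       # hidden-to-output weights

def grad_params(x):
    """Gradient of f(x) = a . tanh(Wx) w.r.t. all parameters, flattened."""
    h = np.tanh(W @ x)
    dW = np.outer(a * (1 - h**2), x)              # df/dW via the chain rule
    return np.concatenate([dW.ravel(), h])        # [df/dW, df/da]

def ntk(x1, x2):
    """Empirical NTK: inner product of parameter gradients at two inputs."""
    return grad_params(x1) @ grad_params(x2)

x, y = rng.normal(size=d), rng.normal(size=d)
print(ntk(x, y))
```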

Exponential Error Convergence in Data Classification with Optimized Random Features: Acceleration by Quantum Machine Learning

no code implementations · 16 Jun 2021 · Hayata Yamasaki, Sho Sonoda

We prove that our algorithm achieves exponential error convergence under the low-noise condition even with optimized RFs; at the same time, owing to QML, it exploits the significant reduction in the number of features without the associated computational hardness.

BIG-bench Machine Learning · Classification +1
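For contrast with the optimized features studied here, plain random Fourier features for the Gaussian kernel look as follows; the paper's point is that sampling features from an optimized, data-dependent distribution (made tractable by QML) needs far fewer of them:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, n_features=200, sigma=1.0):
    """Random Fourier feature map approximating the Gaussian kernel."""
    d = X.shape[1]
    Omega = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)                    # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

# Toy classification: ridge-regress labels on the feature map.
X = rng.normal(size=(200, 2)); y = np.sign(X[:, 0] * X[:, 1])
Z = rff(X)
w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(Z.shape[1]), Z.T @ y)
print("train acc:", np.mean(np.sign(Z @ w) == y))
```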

Ghosts in Neural Networks: Existence, Structure and Role of Infinite-Dimensional Null Space

no code implementations · 9 Jun 2021 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

In this paper, we present a structure theorem of the null space for a general class of neural networks.

How Powerful are Shallow Neural Networks with Bandlimited Random Weights?

no code implementations · 19 Aug 2020 · Ming Li, Sho Sonoda, Feilong Cao, Yu Guang Wang, Jiye Liang

Despite the well-known fact that a neural network is a universal approximator, in this study, we mathematically show that when hidden parameters are distributed in a bounded domain, the network may not achieve zero approximation error.

Learning Theory

Ridge Regression with Over-Parametrized Two-Layer Networks Converge to Ridgelet Spectrum

no code implementations · 7 Jul 2020 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda

We develop a new theory of ridgelet transform, a wavelet-like integral transform that provides a powerful and general framework for the theoretical study of neural networks involving not only the ReLU but general activation functions.

Inductive Bias · Numerical Integration +1

Fast Approximation and Estimation Bounds of Kernel Quadrature for Infinitely Wide Models

no code implementations · 2 Feb 2019 · Sho Sonoda

Kernel quadrature is a kernel-based numerical integration scheme developed for fast approximation of expectations $\int f(x)\, \mathrm{d}p(x)$.

Model Selection · Numerical Integration
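A minimal kernel-quadrature sketch in the Bayesian-quadrature style (one standard construction, not necessarily the paper's): fix nodes, estimate the kernel mean embedding of $p$, and solve for weights that integrate the kernel sections exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2)   # Gaussian kernel

nodes = rng.normal(size=10)                  # quadrature nodes x_1..x_n
mc = rng.normal(size=100_000)                # Monte Carlo sample from p = N(0, 1)
z = k(nodes, mc).mean(axis=1)                # z_i ~ E_p[k(x_i, X)] (kernel mean embedding)
w = np.linalg.solve(k(nodes, nodes) + 1e-8 * np.eye(10), z)    # quadrature weights

f = lambda x: np.sin(x) + x**2               # test integrand
print("kernel quadrature:", w @ f(nodes))    # estimate of the expectation of f under p
print("Monte Carlo ref:  ", f(mc).mean())
```

Both printed estimates should approach $\mathbb{E}_{N(0,1)}[\sin X + X^2] = 1$.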

The global optimum of shallow neural network is attained by ridgelet transform

no code implementations · 19 May 2018 · Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata

We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform.

Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks

no code implementations · 12 Dec 2017 · Sho Sonoda, Noboru Murata

The feature map obtained from the denoising autoencoder (DAE), a cornerstone of deep learning, is investigated by determining the transportation dynamics of the DAE.

Denoising
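The transportation dynamics rest on a classical fact about DAEs (stated in its standard small-noise form): under Gaussian corruption of variance $\sigma^2$, the mean-square-optimal reconstruction is the posterior mean, which expands as a shift along the score of the data density:

```latex
g^\ast(\tilde{x}) = \mathbb{E}[x \mid \tilde{x}],
\qquad
g^\ast(x) = x + \sigma^2\, \nabla \log p(x) + o(\sigma^2) \quad (\sigma \to 0)
```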

Transport Analysis of Infinitely Deep Neural Network

no code implementations · 10 May 2016 · Sho Sonoda, Noboru Murata

Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the development of the double continuum limit or the integral representation of the flow representation.

Denoising
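Heuristically, composing $L$ such small-noise DAE steps with total time $t = L\sigma^2$ turns the expansion above into a score-driven flow, which is the shape of the continuum limit the abstract refers to (a sketch consistent with the DAE expansion, not a verbatim statement of the paper's result):

```latex
x_{t + \sigma^2} = x_t + \sigma^2\, \nabla \log p_t(x_t) + o(\sigma^2)
\;\xrightarrow[\sigma \to 0]{}\;
\dot{x}_t = \nabla \log p_t(x_t)
```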

Neural Network with Unbounded Activation Functions is Universal Approximator

no code implementations · 14 May 2015 · Sho Sonoda, Noboru Murata

This paper presents an investigation of the approximation property of neural networks with unbounded activation functions, such as the rectified linear unit (ReLU), which is now the de facto standard in deep learning.

Nonparametric Weight Initialization of Neural Networks via Integral Representation

no code implementations · 23 Dec 2013 · Sho Sonoda, Noboru Murata

A new initialization method for hidden parameters in a neural network is proposed.

Regression
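A hedged sketch of what integral-representation-based initialization can look like (an illustrative recipe, not the paper's exact method): score candidate hidden parameters $(a, b)$ by a Monte Carlo estimate of the ridgelet transform of the target from data, sample hidden units proportionally, then fit the output layer by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1)); y = np.sin(3 * X[:, 0])   # toy regression data

psi = lambda z: z * np.exp(-z**2 / 2)               # illustrative ridgelet profile
A = rng.normal(scale=3.0, size=(2000, 1))           # candidate directions a
B = rng.uniform(-3, 3, size=2000)                   # candidate biases b
# |R[f](a, b)| estimated from data: |mean_i y_i psi(a . x_i - b)|
Rhat = np.abs(psi(A @ X.T - B[:, None]) @ y) / len(y)

idx = rng.choice(len(B), size=50, p=Rhat / Rhat.sum())   # sample units prop. to |R|
a, b = A[idx], B[idx]
H = np.maximum(X @ a.T - b, 0)                      # ReLU hidden layer at the init
c = np.linalg.lstsq(H, y, rcond=None)[0]            # output weights by least squares
print("init MSE:", np.mean((H @ c - y)**2))
```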
