Search Results for author: Yoshinobu Kawahara

Found 41 papers, 10 papers with code

Koopman operators with intrinsic observables in rigged reproducing kernel Hilbert spaces

1 code implementation · 4 Mar 2024 · Isao Ishikawa, Yuka Hashimoto, Masahiro Ikeda, Yoshinobu Kawahara

This paper presents a novel approach for estimating the Koopman operator defined on a reproducing kernel Hilbert space (RKHS) and its spectra.
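The rigged-RKHS construction is beyond a short snippet, but the basic step it builds on, estimating a Koopman operator from snapshot pairs and reading off its spectrum, can be sketched with plain dynamic mode decomposition (DMD). Everything below (the toy linear system, trajectory length, seed) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Toy linear system: restricted to linear observables, the Koopman
# operator of this dynamics is the matrix A_true itself.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2],
                   [0.1,  0.8]])
x = rng.standard_normal(2)
traj = [x]
for _ in range(30):
    x = A_true @ x
    traj.append(x)
traj = np.array(traj).T              # snapshot matrix, shape (2, 31)

X, Y = traj[:, :-1], traj[:, 1:]     # pairs (x_t, x_{t+1})

# DMD estimate: least-squares solution of A X ≈ Y,
# whose eigenvalues approximate the Koopman spectrum.
A_dmd = Y @ np.linalg.pinv(X)
koopman_eigs = np.linalg.eigvals(A_dmd)
print(np.sort(koopman_eigs))
```

Since the toy data come from an exactly linear map, the estimated spectrum coincides with the eigenvalues of `A_true`; the paper's contribution concerns the much harder nonlinear, kernelized setting.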

Glocal Hypergradient Estimation with Koopman Operator

no code implementations · 5 Feb 2024 · Ryuichiro Hataya, Yoshinobu Kawahara

Through numerical experiments on hyperparameter optimization, including the optimization of optimizers, we demonstrate the effectiveness of glocal hypergradient estimation.

Hyperparameter Optimization

Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations

no code implementations · 22 May 2023 · Keisuke Fujii, Kazushi Tsutsui, Atom Scott, Hiroshi Nakahara, Naoya Takeishi, Yoshinobu Kawahara

In experiments using chase-and-escape and football tasks, where the dynamics differ between the unknown source and target environments, we show that our approach achieves a balance between reproducibility and generalization ability compared with the baselines.

Dynamic Time Warping · reinforcement-learning · +1

Data-driven End-to-end Learning of Pole Placement Control for Nonlinear Dynamics via Koopman Invariant Subspaces

no code implementations · 16 Aug 2022 · Tomoharu Iwata, Yoshinobu Kawahara

With the proposed method, a policy network is trained such that the eigenvalues of a Koopman operator of controlled dynamics are close to the target eigenvalues.

reinforcement-learning · Reinforcement Learning (RL)

Stable Invariant Models via Koopman Spectra

1 code implementation · 15 Jul 2022 · Takuya Konishi, Yoshinobu Kawahara

Weight-tied models have attracted attention in the modern development of neural networks.

Estimating counterfactual treatment outcomes over time in complex multiagent scenarios

no code implementations · 4 Jun 2022 · Keisuke Fujii, Koh Takeuchi, Atsushi Kuribayashi, Naoya Takeishi, Yoshinobu Kawahara, Kazuya Takeda

Evaluating intervention in a multiagent system, e.g., when humans should intervene in autonomous driving systems or when a player should pass to teammates for a good shot, is challenging in various engineering and scientific fields.

Autonomous Driving · counterfactual

Koopman Spectrum Nonlinear Regulator and Provably Efficient Online Learning

1 code implementation · 30 Jun 2021 · Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham Kakade, Yoshinobu Kawahara

In this work, we present a novel paradigm of controlling nonlinear systems via the minimization of the Koopman spectrum cost: a cost over the Koopman operator of the controlled dynamics.

reinforcement-learning · Reinforcement Learning (RL)

A Quadratic Actor Network for Model-Free Reinforcement Learning

1 code implementation · 11 Mar 2021 · Matthias Weissenbacher, Yoshinobu Kawahara

In this work we discuss the incorporation of quadratic neurons into policy networks in the context of model-free actor-critic reinforcement learning.

Continuous Control · reinforcement-learning · +1

Discriminant Dynamic Mode Decomposition for Labeled Spatio-Temporal Data Collections

no code implementations · 19 Feb 2021 · Naoya Takeishi, Keisuke Fujii, Koh Takeuchi, Yoshinobu Kawahara

Extracting coherent patterns is one of the standard approaches towards understanding spatio-temporal data.

Meta-Learning for Koopman Spectral Analysis with Short Time-series

no code implementations · 9 Feb 2021 · Tomoharu Iwata, Yoshinobu Kawahara

With the proposed method, a representation of a given short time-series is obtained by a bidirectional LSTM for extracting its properties.

Future prediction · Meta-Learning · +2

Reproducing kernel Hilbert C*-module and kernel mean embeddings

no code implementations · 27 Jan 2021 · Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, Yoshinobu Kawahara

Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using the property of reproducing kernel Hilbert space (RKHS).

Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics

no code implementations · 11 Dec 2020 · Tomoharu Iwata, Yoshinobu Kawahara

With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition, enabling end-to-end learning of Koopman spectral analysis.

Time Series · Time Series Analysis

Kernel Mean Embeddings of Von Neumann-Algebra-Valued Measures

no code implementations · 29 Jul 2020 · Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Yoshinobu Kawahara

Kernel mean embedding (KME) is a powerful tool to analyze probability measures for data, where the measures are conventionally embedded into a reproducing kernel Hilbert space (RKHS).
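As a concrete illustration of the conventional RKHS embedding this abstract refers to: the empirical kernel mean embedding is just the average of kernel feature maps, and the RKHS distance between two embeddings is the maximum mean discrepancy (MMD), computable entirely through kernel evaluations. The Gaussian kernel, bandwidth, and sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all sample pairs."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Squared MMD = ||mu_X - mu_Y||^2 in the RKHS, via the kernel trick."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
same = rng.standard_normal((500, 1))
shifted = rng.standard_normal((500, 1)) + 1.0

# Embeddings of samples from the same distribution are close;
# a mean shift pushes the embeddings apart.
print(mmd2(same, rng.standard_normal((500, 1))))  # near 0
print(mmd2(same, shifted))                        # clearly positive
```

This is the (biased) V-statistic estimator; the paper generalizes the target space from an RKHS to a reproducing kernel Hilbert C*-module.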

Learning Dynamics Models with Stable Invariant Sets

no code implementations · 16 Jun 2020 · Naoya Takeishi, Yoshinobu Kawahara

Invariance and stability are essential notions in the study of dynamical systems, and it is therefore of great interest to learn a dynamics model with a stable invariant set.

Analysis via Orthonormal Systems in Reproducing Kernel Hilbert $C^*$-Modules and Applications

no code implementations · 2 Mar 2020 · Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, Yoshinobu Kawahara

Kernel methods have been among the most popular techniques in machine learning, where learning tasks are solved using the property of reproducing kernel Hilbert space (RKHS).

Universal Modal Embedding of Dynamics in Videos and its Applications

no code implementations · 25 Sep 2019 · Israr Ul Haq, Yoshinobu Kawahara

Extracting underlying dynamics of objects in image sequences is one of the challenging problems in computer vision.

Time Series · Time Series Analysis · +1

Krylov Subspace Method for Nonlinear Dynamical Systems with Random Noise

no code implementations · 9 Sep 2019 · Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Yoichi Matsuo, Yoshinobu Kawahara

In this paper, we address a lifted representation of nonlinear dynamical systems with random noise based on transfer operators, and develop a novel Krylov subspace method for estimating the operators using finite data, with consideration of the unboundedness of operators.

Anomaly Detection

Metric on random dynamical systems with vector-valued reproducing kernel Hilbert spaces

no code implementations · 17 Jun 2019 · Isao Ishikawa, Akinori Tanaka, Masahiro Ikeda, Yoshinobu Kawahara

We empirically illustrate our metric with synthetic data, and evaluate it in the context of the independence test for random processes.

Physically-interpretable classification of biological network dynamics for complex collective motions

1 code implementation · 13 May 2019 · Keisuke Fujii, Naoya Takeishi, Motokazu Hojo, Yuki Inaba, Yoshinobu Kawahara

A fundamental question addressed here pertains to the classification of collective motion networks based on physically interpretable dynamical properties.

Classification · General Classification

An efficient branch-and-cut algorithm for approximately submodular function maximization

no code implementations · 26 Apr 2019 · Naoya Uematsu, Shunji Umetani, Yoshinobu Kawahara

For the problem of maximizing an approximately submodular function (the ASFM problem), a greedy algorithm quickly finds good feasible solutions for many instances while guaranteeing a $(1-e^{-\gamma})$-approximation ratio for a given submodular ratio $\gamma$.
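The greedy baseline mentioned here is simple to state: under a cardinality constraint, repeatedly add the element with the largest marginal gain. Below is a generic sketch on a toy coverage function (a standard example of a monotone submodular function); the toy sets are invented for illustration, and the paper's branch-and-cut machinery is not reproduced.

```python
def greedy_max(ground_set, f, k):
    """Greedily pick k elements, each maximizing the marginal gain of f."""
    chosen = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in chosen),
                   key=lambda e: f(chosen | {e}) - f(chosen))
        chosen.add(best)
    return chosen

# Toy coverage function: f(S) = number of items covered by the sets in S.
coverage = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0

picked = greedy_max(coverage.keys(), f, 2)
print(picked, f(picked))   # {'a', 'c'} covers all 6 items
```

For exactly submodular monotone f, this greedy rule achieves the classic (1 - 1/e) guarantee; the (1 - e^{-γ}) bound in the abstract relaxes this to approximately submodular functions.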

Knowledge-Based Regularization in Generative Modeling

no code implementations · 6 Feb 2019 · Naoya Takeishi, Yoshinobu Kawahara

Prior domain knowledge can greatly help to learn generative models.

An efficient branch-and-bound algorithm for submodular function maximization

no code implementations · 10 Nov 2018 · Naoya Uematsu, Shunji Umetani, Yoshinobu Kawahara

Nemhauser and Wolsey developed an exact algorithm, called the constraint generation algorithm, that starts from a reduced binary integer programming (BIP) problem with a small subset of the constraints and repeatedly solves the reduced problem while adding a new constraint at each iteration.

Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables

1 code implementation · 30 Aug 2018 · Keisuke Fujii, Yoshinobu Kawahara

In this paper, we formulate Koopman spectral analysis for NLDSs with structures among observables and propose an estimation algorithm for this problem.

Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition

no code implementations · NeurIPS 2017 · Naoya Takeishi, Yoshinobu Kawahara, Takehisa Yairi

Spectral decomposition of the Koopman operator is attracting attention as a tool for the analysis of nonlinear dynamical systems.

regression

Dynamic Mode Decomposition with Reproducing Kernels for Koopman Spectral Analysis

no code implementations · NeurIPS 2016 · Yoshinobu Kawahara

In this paper, we consider a spectral analysis of the Koopman operator in a reproducing kernel Hilbert space (RKHS).

Parametric Maxflows for Structured Sparse Learning with Convex Relaxations of Submodular Functions

no code implementations · 14 Sep 2015 · Yoshinobu Kawahara, Yutaro Yamaguchi

The proximal problem for structured penalties obtained via convex relaxations of submodular functions is known to be equivalent to minimizing separable convex functions over the corresponding submodular polyhedra.

Sparse Learning

A direct method for estimating a causal ordering in a linear non-Gaussian acyclic model

no code implementations · 9 Aug 2014 · Shohei Shimizu, Aapo Hyvarinen, Yoshinobu Kawahara

Structural equation models and Bayesian networks have been widely used to analyze causal relations between continuous variables.
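The idea behind direct estimation in the linear non-Gaussian setting can be illustrated in the two-variable case: regressing in the causal direction leaves a residual independent of the regressor, while the reverse regression does not. The sketch below uses distance correlation as a stand-in independence measure and synthetic data with uniform noise; the actual procedure in the paper (DirectLiNGAM) differs in its details.

```python
import numpy as np

def dist_corr(a, b):
    """Sample distance correlation: near 0 when a and b are independent."""
    def centered(v):
        d = np.abs(v[:, None] - v[None, :])
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered(a), centered(b)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

def residual(effect, cause):
    """Residual of a least-squares linear regression of effect on cause."""
    c = cause - cause.mean()
    slope = (c @ (effect - effect.mean())) / (c @ c)
    return effect - slope * cause

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)              # non-Gaussian cause
y = 2.0 * x + rng.uniform(-1, 1, n)    # linear effect, non-Gaussian noise

forward = dist_corr(x, residual(y, x))   # correct direction x -> y
backward = dist_corr(y, residual(x, y))  # wrong direction y -> x
print(forward, backward)  # forward should be clearly smaller
```

With Gaussian noise the two directions would be indistinguishable; non-Gaussianity is what makes the causal ordering identifiable.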

Causal Discovery in a Binary Exclusive-or Skew Acyclic Model: BExSAM

no code implementations · 22 Jan 2014 · Takanori Inazumi, Takashi Washio, Shohei Shimizu, Joe Suzuki, Akihiro Yamamoto, Yoshinobu Kawahara

Discovering causal relations among observed variables in a given data set is a major objective in studies of statistics and artificial intelligence.

Causal Discovery

Structured Convex Optimization under Submodular Constraints

no code implementations · 26 Sep 2013 · Kiyohito Nagano, Yoshinobu Kawahara

A number of discrete and continuous optimization problems in machine learning are related to convex minimization problems under submodular constraints.

BIG-bench Machine Learning

Weighted Likelihood Policy Search with Model Selection

no code implementations · NeurIPS 2012 · Tsuyoshi Ueno, Kohei Hayashi, Takashi Washio, Yoshinobu Kawahara

Reinforcement learning (RL) methods based on direct policy search (DPS) have been actively studied as an efficient approach to complicated Markov decision processes (MDPs).

Model Selection · reinforcement-learning · +1

Efficient network-guided multi-locus association mapping with graph cuts

no code implementations · 10 Nov 2012 · Chloé-Agathe Azencott, Dominik Grimm, Mahito Sugiyama, Yoshinobu Kawahara, Karsten M. Borgwardt

We present SConES, a new efficient method to discover sets of genetic loci that are maximally associated with a phenotype, while being connected in an underlying network.

Minimum Average Cost Clustering

no code implementations · NeurIPS 2010 · Kiyohito Nagano, Yoshinobu Kawahara, Satoru Iwata

In this paper, we introduce the minimum average cost criterion, and show that the theory of intersecting submodular functions can be used for clustering with submodular objective functions.

Clustering

Submodularity Cuts and Applications

no code implementations · NeurIPS 2009 · Yoshinobu Kawahara, Kiyohito Nagano, Koji Tsuda, Jeff A. Bilmes

Several key problems in machine learning, such as feature selection and active learning, can be formulated as submodular set function maximization.

Active Learning · feature selection
