Search Results for author: Wataru Kumagai

Found 16 papers, 4 papers with code

A Dataset for Evaluating LLM-based Evaluation Functions for Research Question Extraction Task

no code implementations10 Sep 2024 Yuya Fujisaki, Shiro Takagi, Hideki Asoh, Wataru Kumagai

We expect our dataset to provide a foundation for further research on developing better evaluation functions tailored to the RQ extraction task, and to help enhance performance on the task.

Text Summarization

Near-Optimal Policy Identification in Robust Constrained Markov Decision Processes via Epigraph Form

1 code implementation29 Aug 2024 Toshinori Kitamura, Tadashi Kozuno, Wataru Kumagai, Kenta Hoshino, Yohei Hosoe, Kazumi Kasaura, Masashi Hamaya, Paavo Parmas, Yutaka Matsuo

We first prove that the conventional Lagrangian max-min formulation with policy gradient methods can become trapped in suboptimal solutions by encountering a sum of conflicting gradients from the objective and constraint functions during its inner minimization problem.
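The conflicting-gradients failure mode described above can be illustrated with a tiny toy example (hypothetical numbers, not taken from the paper): when the objective gradient and the constraint gradient point in opposite directions, their Lagrangian-weighted sum can vanish at a point that is optimal for neither term.

```python
# Toy gradients at some hypothetical policy parameter (illustrative only).
f_grad = [1.0, 0.0]    # objective gradient: improvement lies along +x
g_grad = [-1.0, 0.0]   # constraint gradient: feasibility pushes along -x
lam = 1.0              # Lagrange multiplier

# Lagrangian gradient: grad f + lambda * grad g.
lagrangian_grad = [fi + lam * gi for fi, gi in zip(f_grad, g_grad)]
print(lagrangian_grad)  # [0.0, 0.0]: the conflicting terms cancel exactly,
                        # so a gradient method makes no progress here.
```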

Policy Gradient Methods

Towards Autonomous Hypothesis Verification via Language Models with Minimal Guidance

no code implementations16 Nov 2023 Shiro Takagi, Ryutaro Yamauchi, Wataru Kumagai

Research automation efforts usually employ AI as a tool to automate specific tasks within the research process.

LPML: LLM-Prompting Markup Language for Mathematical Reasoning

no code implementations21 Sep 2023 Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, Wataru Kumagai

In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL).
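The CoT-plus-REPL loop can be sketched roughly as below; the tag name and control flow are illustrative assumptions, not the paper's exact LPML specification, and `exec` on model output is unsafe outside a sandbox.

```python
import contextlib
import io
import re

def run_python_tag(llm_output):
    # Pull out a <PYTHON>...</PYTHON> block (an assumed LPML-style tag),
    # execute it in a fresh namespace, and capture stdout so the result
    # can be fed back to the language model as tool output.
    match = re.search(r"<PYTHON>(.*?)</PYTHON>", llm_output, re.S)
    if match is None:
        return None
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(match.group(1), {})
    return buf.getvalue()

message = "<THOUGHT>Compute 12 * 34.</THOUGHT>\n<PYTHON>print(12 * 34)</PYTHON>"
print(run_python_tag(message).strip())  # → 408
```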

Mathematical Reasoning

Langevin Autoencoders for Learning Deep Latent Variable Models

1 code implementation15 Sep 2022 Shohei Taniguchi, Yusuke Iwasawa, Wataru Kumagai, Yutaka Matsuo

Based on the ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE).
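As a rough illustration of the Langevin dynamics underlying such models, here is a generic unadjusted Langevin sampler on a 1-D standard Gaussian target; this is a textbook sketch, not the paper's ALD algorithm.

```python
import math
import random

def grad_log_p(x):
    # Score function of the target N(0, 1).
    return -x

def langevin_samples(n_steps=20000, eta=0.1, seed=0):
    # Unadjusted Langevin dynamics:
    #   x <- x + (eta / 2) * grad_log_p(x) + sqrt(eta) * noise
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_steps):
        x = x + 0.5 * eta * grad_log_p(x) + math.sqrt(eta) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = langevin_samples()
burn = xs[5000:]  # discard burn-in
mean = sum(burn) / len(burn)
var = sum((v - mean) ** 2 for v in burn) / len(burn)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```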

Image Generation +1

Equivariant and Invariant Reynolds Networks

no code implementations15 Oct 2021 Akiyoshi Sannai, Makoto Kawano, Wataru Kumagai

We construct learning models based on the reductive Reynolds operator, called equivariant and invariant Reynolds networks (ReyNets), and prove that they have the universal approximation property.

Reynolds Equivariant and Invariant Networks

no code implementations29 Sep 2021 Akiyoshi Sannai, Makoto Kawano, Wataru Kumagai

To overcome this difficulty, we consider representing the Reynolds operator as a sum over a subset instead of a sum over the whole group.
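The idea of replacing a full group average with a sum over a subset can be seen in a small example. Here randomly sampled permutations stand in for the paper's carefully chosen subset, so the subset version is only an approximation of the exactly invariant full average.

```python
import itertools
import random

def f(x):
    # An arbitrary non-symmetric base function on a tuple.
    return sum((i + 1) * v for i, v in enumerate(x))

def reynolds_full(x):
    # Exact Reynolds operator: average f over the whole symmetric group S_n.
    perms = list(itertools.permutations(x))
    return sum(f(p) for p in perms) / len(perms)

def reynolds_subset(x, k, seed=0):
    # Cheaper stand-in: average over k sampled permutations instead of
    # all n! elements (the paper uses a structured subset, not sampling).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(k):
        p = list(x)
        rng.shuffle(p)
        total += f(tuple(p))
    return total / k

x = (1.0, 2.0, 3.0, 4.0)
print(reynolds_full(x))                     # exactly invariant
print(reynolds_full((4.0, 3.0, 2.0, 1.0)))  # same value for any ordering
print(round(reynolds_subset(x, 8), 2))      # approximation from 8 samples
```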

Group Equivariant Conditional Neural Processes

no code implementations ICLR 2021 Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo

We present the group equivariant conditional neural process (EquivCNP), a meta-learning method that, like conventional conditional neural processes (CNPs), is permutation-invariant over a data set, and that additionally has transformation equivariance in data space.

Meta-Learning Translation +1

Bayesian Neural Networks with Variance Propagation for Uncertainty Evaluation

no code implementations1 Jan 2021 Yuki Mae, Wataru Kumagai, Takafumi Kanamori

We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling with RNNs and on out-of-distribution detection with DNNs.
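Variance propagation through a single linear layer can be sketched with a standard moment-propagation identity under an input-independence assumption; the paper's full method covers entire networks, including nonlinearities, which this fragment does not attempt.

```python
def linear_moment_propagation(W, b, mean_x, var_x):
    # Propagate mean and (diagonal) variance through y = W x + b,
    # assuming independent inputs:
    #   E[y_i] = sum_j W_ij * E[x_j] + b_i
    #   Var[y_i] = sum_j W_ij^2 * Var[x_j]
    mean_y = [sum(wij * mj for wij, mj in zip(row, mean_x)) + bi
              for row, bi in zip(W, b)]
    var_y = [sum(wij ** 2 * vj for wij, vj in zip(row, var_x))
             for row in W]
    return mean_y, var_y

W = [[1.0, 2.0], [0.5, -1.0]]
b = [0.0, 1.0]
m, v = linear_moment_propagation(W, b, [1.0, 1.0], [0.1, 0.2])
print(m, v)  # means [3.0, 0.5], variances [0.9, 0.225]
```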

Bayesian Inference Computational Efficiency +2

Universal Approximation Theorem for Equivariant Maps by Group CNNs

no code implementations27 Dec 2020 Wataru Kumagai, Akiyoshi Sannai

However, universal approximation theorems for CNNs have so far been derived separately, with techniques specific to each group and setting.

Regret Analysis for Continuous Dueling Bandit

no code implementations NeurIPS 2017 Wataru Kumagai

The dueling bandit is a learning framework wherein the feedback information in the learning process is restricted to a noisy comparison between a pair of actions.
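The noisy-comparison feedback model can be sketched with a logistic comparison oracle; the logistic link and the crude perturbation learner below are illustrative modeling choices, not necessarily the paper's assumptions or algorithm.

```python
import math
import random

def duel(a, b, utility, rng):
    # Noisy comparison oracle: the learner only observes which of two
    # actions wins; here the win probability follows a logistic link on
    # the utility gap (an assumed model, for illustration).
    p = 1.0 / (1.0 + math.exp(-(utility(a) - utility(b))))
    return rng.random() < p

# Toy continuous action space [0, 1]; the hidden utility peaks at x = 0.7.
utility = lambda x: -8.0 * (x - 0.7) ** 2
rng = random.Random(0)

# Crude learner: propose a shrinking perturbation, keep it if it wins the duel.
x, step = 0.2, 0.1
for t in range(2000):
    delta = step / math.sqrt(t + 1)
    cand = min(1.0, max(0.0, x + rng.choice([-1, 1]) * delta))
    if duel(cand, x, utility, rng):
        x = cand
print(round(x, 2))  # current estimate of the best action
```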

Learning Bound for Parameter Transfer Learning

no code implementations NeurIPS 2016 Wataru Kumagai

We consider a transfer-learning problem using the parameter transfer approach, in which a suitable feature-mapping parameter is learned on one task and applied to another target task.

Transfer Learning

Parallel Distributed Block Coordinate Descent Methods based on Pairwise Comparison Oracle

no code implementations13 Sep 2014 Kota Matsui, Wataru Kumagai, Takafumi Kanamori

Our algorithm consists of two steps: a direction-estimation step and a search step.
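The two steps can be sketched as a simple derivative-free coordinate descent driven only by a pairwise comparison oracle; this is an illustrative reconstruction on a noiseless quadratic, not the paper's parallel distributed algorithm, which also handles noisy comparisons.

```python
def make_comparison_oracle(f):
    # Pairwise comparison oracle: reports which of two points is better,
    # without ever revealing function values.
    def oracle(x, y):
        return f(x) < f(y)
    return oracle

def coordinate_descent_with_oracle(oracle, x0, n_rounds=50, step=1.0):
    x = list(x0)
    for _ in range(n_rounds):
        for i in range(len(x)):
            # Direction-estimation step: probe both signed directions along e_i.
            for direction in (step, -step):
                move = direction
                # Search step: shrink the move until the comparison oracle
                # says the candidate beats the current point, then accept it.
                while abs(move) > 1e-6:
                    cand = list(x)
                    cand[i] += move
                    if oracle(cand, x):
                        x = cand
                        break
                    move /= 2
    return x

f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2  # minimum at (3, -1)
oracle = make_comparison_oracle(f)
x_opt = coordinate_descent_with_oracle(oracle, [0.0, 0.0])
print([round(v, 2) for v in x_opt])  # → [3.0, -1.0]
```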
