no code implementations • ICLR 2019 • Masataro Asai
We propose a permutation-invariant loss function designed for neural networks that reconstruct a set of elements without regard to the order within its vector representation.
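As a generic illustration of the idea (not the paper's actual formulation; the function name and the Chamfer-style formulation are assumptions), a permutation-invariant reconstruction loss can compare two collections of vectors as sets, so that reordering the rows leaves the loss unchanged:

```python
import numpy as np

def chamfer_set_loss(pred, target):
    """Permutation-invariant reconstruction loss between two sets of
    vectors (one vector per row). Each element is matched to its
    nearest counterpart in the other set, so row order is irrelevant."""
    # pairwise squared Euclidean distances, shape (n_pred, n_target)
    d = ((pred[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = a[::-1]  # the same set of elements, in reversed order
```

An elementwise MSE between `a` and `b` would be nonzero because the rows are permuted, while the set loss above is zero.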
no code implementations • 23 Aug 2023 • Carlos Núñez-Molina, Masataro Asai, Juan Fernández-Olivares, Pablo Mesejo
This results in a loss function different from the MSE commonly employed in the literature, which implicitly models the learned heuristic as a Gaussian distribution.
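The "MSE implicitly models a Gaussian" claim can be made concrete: the negative log-likelihood of a Gaussian with fixed variance equals the MSE up to a scale and an additive constant, so minimizing MSE is maximum-likelihood estimation under a Gaussian residual model. A small sketch (this illustrates the implicit assumption, not the paper's alternative loss):

```python
import numpy as np

def gaussian_nll(pred, target, sigma=1.0):
    """Negative log-likelihood of target under N(pred, sigma^2).
    With sigma held fixed, this is MSE / (2 sigma^2) plus a constant,
    so minimizing MSE implicitly assumes Gaussian residuals."""
    return (0.5 * ((target - pred) / sigma) ** 2
            + np.log(sigma) + 0.5 * np.log(2 * np.pi)).mean()

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.5, 2.0, 2.5])
mse = ((target - pred) ** 2).mean()
# with sigma = 1: gaussian_nll == 0.5 * mse + 0.5 * log(2 * pi)
```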
no code implementations • 16 May 2023 • Stephen Wissow, Masataro Asai
However, while the problem has been extensively analyzed within the Multi-Armed Bandit (MAB) literature, the planning community has had limited success when attempting to apply those results.
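For background on the MAB connection, the classic bandit rule most often imported into trial-based planning is UCB1 (Auer et al., 2002), the selection rule behind UCT-style algorithms. The sketch below is general background, not the paper's proposal:

```python
import math

def ucb1(mean_reward, n_parent, n_child, c=math.sqrt(2)):
    """UCB1 score (Auer et al., 2002): the empirical mean reward of an
    arm plus an exploration bonus that shrinks as the arm is pulled
    more often relative to its parent node's visit count."""
    return mean_reward + c * math.sqrt(math.log(n_parent) / n_child)
```

At each decision point, the arm (child node) with the highest UCB1 score is selected, trading off exploitation of high observed rewards against exploration of rarely tried arms.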
no code implementations • 21 Mar 2023 • Masataro Asai
Moreover, the existing literature focuses on estimating the scale σ and the shape ξ, with little discussion of how to estimate the location θ, which is the lower support (the minimum possible value) of a GP.
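To see why θ is the lower support, one can sample from a Generalized Pareto (GP) distribution by inverse-CDF sampling: for shape ξ ≥ 0 every draw satisfies x ≥ θ, and the sample minimum is a simple plug-in estimate of the location. This is a hypothetical illustration, not the paper's estimator:

```python
import numpy as np

def gp_sample(theta, sigma, xi, size, rng):
    """Inverse-CDF sampling from a Generalized Pareto distribution
    with location theta, scale sigma, and shape xi (xi != 0).
    For xi >= 0 the support is [theta, inf)."""
    u = rng.random(size)
    return theta + sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

rng = np.random.default_rng(0)
sample = gp_sample(theta=5.0, sigma=2.0, xi=0.3, size=10000, rng=rng)

# theta bounds the sample from below; a naive (biased) plug-in
# estimate of the location is the sample minimum, which approaches
# theta as the sample grows
theta_hat = sample.min()
```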
no code implementations • 8 Sep 2022 • Masataro Asai
It provides a step-by-step protocol for designing a machine learning system that satisfies a minimum theoretical guarantee necessary for being taken seriously by the symbolic AI community, i.e., it discusses "in what condition we can stop worrying and accept statistical machine learning."
1 code implementation • 30 Sep 2021 • Benjamin Ayton, Masataro Asai
Width-based planning has shown promising results on Atari 2600 games using pixel input, while using substantially fewer environment interactions than reinforcement learning.
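The core pruning rule of width-based planning can be sketched with the simplest case, an IW(1) novelty test: a state is kept only if it makes some atomic feature true for the first time in the search. This is generic background on width-based search, not the paper's implementation:

```python
def is_novel(state_features, seen):
    """IW(1) novelty test: a state is novel iff at least one of its
    atomic features has never been observed before in the search.
    Non-novel states are pruned; `seen` accumulates observed features."""
    new = set(state_features) - seen
    seen |= set(state_features)
    return bool(new)
```

With pixel input, the atomic features are typically derived from the screen (e.g., discretized pixel values), and the same novelty test drives the search.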
no code implementations • 30 Sep 2021 • Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz
In this paper, we propose to leverage domain-independent heuristic functions commonly used in the classical planning literature to improve the sample efficiency of RL.
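One standard way to inject a planning heuristic into RL without changing the optimal policy is potential-based reward shaping (Ng et al., 1999) with the potential φ(s) = -h(s). The sketch below shows that generic recipe; it is not necessarily the paper's exact mechanism:

```python
def shaped_reward(r, s, s_next, h, gamma=0.99, done=False):
    """Potential-based reward shaping with potential phi(s) = -h(s),
    where h is a domain-independent planning heuristic. Transitions
    that decrease the estimated distance-to-goal receive a bonus,
    while the set of optimal policies is provably unchanged."""
    phi = -h(s)
    phi_next = 0.0 if done else -h(s_next)
    return r + gamma * phi_next - phi

# toy heuristic (assumption for illustration): remaining distance
# to a goal located at 10 on a line
h = lambda s: 10 - s
```

With `gamma=1.0`, a step from state 3 to state 4 (closer to the goal) yields a shaping bonus of +1, while the reverse step is penalized by -1.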
1 code implementation • 30 Jun 2021 • Masataro Asai, Hiroshi Kajino, Alex Fukunaga, Christian Muise
Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck.
no code implementations • 1 Jan 2021 • Masataro Asai
We propose FOSAE++, an unsupervised end-to-end neural system that generates a compact discrete state transition model (dynamics / action model) from raw visual observations.
no code implementations • 26 Aug 2020 • Masataro Asai, Zilu Tang
We propose an unsupervised neural model for learning a discrete embedding of words.
no code implementations • 27 Apr 2020 • Masataro Asai, Christian Muise
We achieved a new milestone in the difficult task of enabling agents to learn about their environment autonomously.
2 code implementations • 11 Dec 2019 • Masataro Asai
Recent work on Neural-Symbolic systems that learn a discrete planning model from images has opened a promising direction for expanding the scope of Automated Planning and Scheduling to raw, noisy data.
no code implementations • 27 Mar 2019 • Masataro Asai, Hiroshi Kajino
We analyze the problem in Latplan both formally and empirically, and propose "Zero-Suppressed SAE", an enhancement that stabilizes the propositions using the idea of closed-world assumption as a prior for NN optimization.
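The closed-world assumption ("a fact is false unless stated otherwise") can be expressed as a prior by penalizing propositions that are true in the latent layer. The following is a hypothetical sketch of such a sparsity regularizer, not Latplan's actual code:

```python
import numpy as np

def closed_world_penalty(z, weight=1e-3):
    """Hypothetical sparsity prior: z is a batch of (approximately)
    binary propositional vectors in [0, 1]^n. Summing z penalizes
    true propositions, so the optimizer keeps each proposition
    false (0) unless reconstruction genuinely requires it -- the
    closed-world assumption expressed as a regularizer."""
    return weight * z.sum(axis=-1).mean()
```

Added to the reconstruction loss, such a term biases unused propositions toward a stable "false" value instead of letting them fluctuate arbitrarily.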
1 code implementation • 21 Feb 2019 • Masataro Asai
In experiments using the 8-Puzzle and a photo-realistic Blocksworld environment, we show that (1) the resulting predicates capture interpretable relations (e.g., spatial ones), (2) they help obtain a compact, abstract model of the environment, and finally, (3) the resulting model is compatible with symbolic classical planning.
1 code implementation • 5 Dec 2018 • Masataro Asai
In this report, we introduce an artificial dataset generator for the Photo-realistic Blocksworld domain.
no code implementations • 4 Dec 2018 • Masataro Asai
We propose a permutation-invariant loss function designed for neural networks that reconstruct a set of elements without regard to the order within its vector representation.
1 code implementation • 29 Apr 2017 • Masataro Asai, Alex Fukunaga
Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners.