Search Results for author: Makoto Takamoto

Found 7 papers, 2 papers with code

Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing

no code implementations23 May 2024 Viktor Zaverkin, Francesco Alesiani, Takashi Maruyama, Federico Errica, Henrik Christiansen, Makoto Takamoto, Nicolas Weber, Mathias Niepert

This work introduces higher-rank irreducible Cartesian tensors as an alternative to spherical tensors, addressing the above limitations.
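For intuition on what "irreducible Cartesian tensors" means, here is a minimal numpy sketch (illustrative only, not the paper's implementation) of the classic rank-2 case: under rotations, a 3x3 Cartesian tensor splits into an isotropic trace part, an antisymmetric part, and a symmetric traceless part.

```python
import numpy as np

def irreducible_parts(T):
    """Split a 3x3 Cartesian tensor into its irreducible components.

    Returns the isotropic (l=0), antisymmetric (l=1), and symmetric
    traceless (l=2) parts, which sum back to T.
    """
    iso = np.trace(T) / 3.0 * np.eye(3)        # l = 0: trace part
    antisym = 0.5 * (T - T.T)                  # l = 1: antisymmetric part
    sym_traceless = 0.5 * (T + T.T) - iso      # l = 2: symmetric traceless part
    return iso, antisym, sym_traceless

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))
iso, antisym, sym_tl = irreducible_parts(T)
assert np.allclose(iso + antisym + sym_tl, T)  # parts reconstruct T
assert abs(np.trace(sym_tl)) < 1e-12           # l=2 part is traceless
```

The paper extends this kind of decomposition to higher ranks for use in equivariant message passing.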

Uncertainty-biased molecular dynamics for learning uniformly accurate interatomic potentials

no code implementations3 Dec 2023 Viktor Zaverkin, David Holzmüller, Henrik Christiansen, Federico Errica, Francesco Alesiani, Makoto Takamoto, Mathias Niepert, Johannes Kästner

Existing biased and unbiased MD simulations, however, are prone to miss either rare events or extrapolative regions -- areas of the configurational space where unreliable predictions are made.

Active Learning
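The uncertainty-biasing idea can be sketched in a few lines. The code below is a hypothetical toy (not the authors' method or code): an ensemble of potentials supplies an uncertainty estimate via its standard deviation, and subtracting it, scaled by a hypothetical strength `tau`, from the mean energy lowers the energy of uncertain regions so the dynamics is driven toward them.

```python
import numpy as np

def biased_energy(x, ensemble, tau=0.1):
    """Toy biased potential: ensemble mean minus tau * ensemble std."""
    preds = np.array([model(x) for model in ensemble])
    mean, sigma = preds.mean(), preds.std()
    return mean - tau * sigma  # lower energy where the ensemble disagrees

# Toy "ensemble": quadratic wells with slightly different curvatures.
ensemble = [lambda x, k=k: 0.5 * k * x**2 for k in (0.9, 1.0, 1.1)]
print(biased_energy(2.0, ensemble))
```

Because the bias grows with disagreement, a trajectory under this potential preferentially visits configurations where the model is least reliable, which is exactly where new training data is most valuable.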

Learning Neural PDE Solvers with Parameter-Guided Channel Attention

2 code implementations27 Apr 2023 Makoto Takamoto, Francesco Alesiani, Mathias Niepert

The experiments also show several advantages of CAPE, such as its increased ability to generalize to unseen PDE parameters without large increases in inference time and parameter count.

PDE Surrogate Modeling · Weather Forecasting
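The gating mechanism behind parameter-guided channel attention can be sketched roughly as follows. All shapes and names here are illustrative assumptions, not the paper's exact CAPE architecture: a PDE parameter vector is projected to one weight per feature channel, and those weights rescale the channels of a neural solver's feature map.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(features, pde_params, W):
    """features: (C, H, W) map; pde_params: (P,); W: (C, P) learned projection."""
    gates = sigmoid(W @ pde_params)          # one gate per channel, in (0, 1)
    return features * gates[:, None, None]   # rescale each channel

rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8, 8))
out = channel_attention(feats, np.array([0.01]), rng.normal(size=(4, 1)))
assert out.shape == feats.shape
```

The point of such conditioning is that the same solver network can adapt its feature channels to different PDE coefficients without retraining per parameter value.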

PDEBENCH: An Extensive Benchmark for Scientific Machine Learning

2 code implementations13 Oct 2022 Makoto Takamoto, Timothy Praditia, Raphael Leiteritz, Dan MacKinlay, Francesco Alesiani, Dirk Pflüger, Mathias Niepert

With those metrics we identify tasks which are challenging for recent ML methods and propose these tasks as future challenges for the community.
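One representative error metric for benchmarking PDE surrogates is a normalized RMSE. The sketch below is illustrative and not necessarily the exact definition used in PDEBench:

```python
import numpy as np

def nrmse(pred, target):
    """RMSE of the prediction normalized by the RMS of the target field."""
    return np.sqrt(np.mean((pred - target) ** 2)) / np.sqrt(np.mean(target ** 2))

target = np.array([1.0, 2.0, 3.0])
assert nrmse(target, target) == 0.0  # identical fields give zero error
```

Normalizing by the target magnitude makes scores comparable across PDEs whose solution fields live on very different scales.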

MILIE: Modular & Iterative Multilingual Open Information Extraction

no code implementations ACL 2022 Bhushan Kotnis, Kiril Gashteovski, Daniel Oñoro Rubio, Vanesa Rodriguez-Tembras, Ammar Shaker, Makoto Takamoto, Mathias Niepert, Carolin Lawrence

In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract the easy slots, then extract the difficult ones by conditioning on the easy slots, and thereby achieve better overall extraction.

Open Information Extraction
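The iterative, conditioned slot-filling scheme described above can be sketched as a toy pipeline. The interface and the rule-based extractors below are hypothetical, not the MILIE system:

```python
def extract_triple(sentence, extractors, order=("predicate", "subject", "object")):
    """Fill triple slots one at a time; extractors: slot -> fn(sentence, partial)."""
    triple = {}
    for slot in order:                       # easy slots first
        triple[slot] = extractors[slot](sentence, dict(triple))
    return triple

# Trivial rule-based extractors for a fixed example sentence: subject and
# object extraction both condition on the already-extracted predicate.
extractors = {
    "predicate": lambda s, t: "founded",
    "subject":   lambda s, t: s.split(" " + t["predicate"])[0],
    "object":    lambda s, t: s.split(t["predicate"] + " ")[1],
}
print(extract_triple("Marie Curie founded the Radium Institute", extractors))
```

Each later extractor sees the partially filled triple, which is the "conditioning on the easy slots" step; the interesting modeling question is choosing the extraction order per language and instance.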

An Empirical Study of the Effects of Sample-Mixing Methods for Efficient Training of Generative Adversarial Networks

no code implementations8 Apr 2021 Makoto Takamoto, Yusuke Morishita

In this paper, we investigate the effect of sample-mixing methods, namely Mixup, CutMix, and the newly proposed Smoothed Regional Mix (SRMix), on alleviating this problem.
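Of the methods listed, Mixup is the standard baseline and is easy to sketch; SRMix is introduced in the paper and is not reproduced here. Mixup blends two samples and their labels with a Beta-distributed coefficient:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Return a convex combination of two (input, label) pairs."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)             # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x, y = mixup(np.ones(4), 1.0, np.zeros(4), 0.0, rng=np.random.default_rng(0))
assert np.allclose(x, y)  # here the mixed input equals the mixed label by construction
```

CutMix works analogously but replaces a rectangular region of one image with a patch from the other instead of blending pixel-wise.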
