Search Results for author: Stéphane Bressan

Found 10 papers, 5 papers with code

Physics-informed Discovery of State Variables in Second-Order and Hamiltonian Systems

no code implementations • 21 Aug 2024 • Félix Chavelli, Zi-Yu Khoo, Dawen Wu, Jonathan Sze Choong Low, Stéphane Bressan

The modeling of dynamical systems is a pervasive concern for not only describing but also predicting and controlling natural phenomena and engineered systems.

Expected Shapley-Like Scores of Boolean Functions: Complexity and Applications to Probabilistic Databases

1 code implementation • 12 Jan 2024 • Pratik Karmakar, Mikaël Monet, Pierre Senellart, Stéphane Bressan

Shapley values, originating in game theory and increasingly prominent in explainable AI, have been proposed to assess the contribution of facts in query answering over databases, along with other similar power indices such as Banzhaf values.
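
The paper studies expected Shapley-like scores over probabilistic databases; as background only, the sketch below computes plain (non-probabilistic) Shapley values of a toy Boolean "query" by direct enumeration of coalitions. The value function v and the fact names a, b, c are illustrative assumptions, not taken from the paper, and the exponential enumeration is meant only for small examples.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v.

    v maps a frozenset of players to a number. Exponential-time
    enumeration, so only suitable for small examples.
    """
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(S | {i}) - v(S))
        values[i] = phi
    return values

# Toy Boolean "query": true iff fact a is present and at least one of b, c is.
def v(S):
    return float('a' in S and ('b' in S or 'c' in S))

print(shapley_values(['a', 'b', 'c'], v))
# Fact a carries most of the contribution (2/3); b and c share the rest (1/6 each).
```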

A Comparative Evaluation of Additive Separability Tests for Physics-Informed Machine Learning

no code implementations • 15 Dec 2023 • Zi-Yu Khoo, Jonathan Sze Choong Low, Stéphane Bressan

We present eight methods for computing the mixed partial derivative of a surrogate function and evaluate them comparatively and empirically.

Physics-informed machine learning
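
The additive-separability tests above hinge on the mixed partial derivative of a surrogate f(x, y), which vanishes identically when f(x, y) = g(x) + k(y). The sketch below shows just one generic way to estimate it, central finite differences, and is not one of the paper's eight methods or their comparison; the step size h and the test functions are assumptions.

```python
def mixed_partial(f, x, y, h=1e-4):
    """Central finite-difference estimate of d^2 f / (dx dy) at (x, y).

    If f is additively separable, f(x, y) = g(x) + k(y), this mixed
    partial derivative is identically zero.
    """
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h * h)

# Separable surrogate: mixed partial should be ~0.
print(mixed_partial(lambda x, y: x**2 + y**3, 1.0, 2.0))
# Non-separable surrogate x*y: mixed partial should be ~1.
print(mixed_partial(lambda x, y: x * y, 1.0, 2.0))
```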

What's Next? Predicting Hamiltonian Dynamics from Discrete Observations of a Vector Field

no code implementations • 14 Dec 2023 • Zi-Yu Khoo, Delong Zhang, Stéphane Bressan

We present several methods for predicting the dynamics of Hamiltonian systems from discrete observations of their vector field.
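
As a rough illustration of the setting rather than of the paper's methods: the sketch below fits a nearest-neighbour surrogate to discrete (state, vector-field) samples of a 1-D harmonic oscillator and rolls a trajectory forward with a Runge-Kutta 4 integrator. The sampling range, sample count, and step size are arbitrary assumptions.

```python
import numpy as np

# Discrete observations of the vector field of a 1-D harmonic oscillator:
# state s = (q, p), ds/dt = (p, -q). In practice these samples would come
# from data rather than the known Hamiltonian.
rng = np.random.default_rng(0)
states = rng.uniform(-2.0, 2.0, size=(2000, 2))
fields = np.stack([states[:, 1], -states[:, 0]], axis=1)

def predicted_field(s):
    """Nearest-neighbour surrogate for the vector field at state s."""
    i = np.argmin(np.sum((states - s) ** 2, axis=1))
    return fields[i]

def rk4_step(s, dt):
    """One Runge-Kutta 4 step using the surrogate field."""
    k1 = predicted_field(s)
    k2 = predicted_field(s + 0.5 * dt * k1)
    k3 = predicted_field(s + 0.5 * dt * k2)
    k4 = predicted_field(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 0.0])
for _ in range(100):
    s = rk4_step(s, 0.05)
print(s)  # roughly stays on the circle q^2 + p^2 = 1
```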

Separable Hamiltonian Neural Networks

1 code implementation • 3 Sep 2023 • Zi-Yu Khoo, Dawen Wu, Jonathan Sze Choong Low, Stéphane Bressan

Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system under the learning bias of Hamilton's equations.

Physics-informed machine learning • regression
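
A minimal PyTorch sketch of the separable ansatz H(q, p) = T(p) + V(q), trained to regress a toy vector field under Hamilton's equations dq/dt = dH/dp and dp/dt = -dH/dq. The network sizes, optimiser, and harmonic-oscillator training data are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SeparableHNN(nn.Module):
    """Separable Hamiltonian ansatz H(q, p) = T(p) + V(q) for a 1-D system."""
    def __init__(self, hidden=64):
        super().__init__()
        self.T = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.V = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def vector_field(self, q, p):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = (self.T(p) + self.V(q)).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq

# Toy data: harmonic oscillator with H = (p^2 + q^2) / 2, so dq/dt = p, dp/dt = -q.
q = torch.rand(256, 1) * 4 - 2
p = torch.rand(256, 1) * 4 - 2
dq_true, dp_true = p.clone(), -q.clone()

model = SeparableHNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    dq_pred, dp_pred = model.vector_field(q, p)
    loss = ((dq_pred - dq_true) ** 2 + (dp_pred - dp_true) ** 2).mean()
    loss.backward()
    opt.step()
```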

Syntax-informed Question Answering with Heterogeneous Graph Transformer

no code implementations • 1 Apr 2022 • Fangyi Zhu, Lok You Tan, See-Kiong Ng, Stéphane Bressan

Large neural language models are steadily contributing state-of-the-art performance to question answering and other natural language and information processing tasks.

Language Modelling +1

BelMan: Bayesian Bandits on the Belief--Reward Manifold

1 code implementation • 4 May 2018 • Debabrota Basu, Pierre Senellart, Stéphane Bressan

BelMan alternates information projection and reverse information projection, i.e., projection of the pseudobelief-reward onto beliefs-rewards to choose the arm to play, and projection of the resulting beliefs-rewards onto the pseudobelief-reward.
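
BelMan's alternating information projections on the belief-reward manifold are not reproduced here. As a simpler point of reference for Bayesian bandits, the sketch below runs standard Beta-Bernoulli Thompson sampling, a different algorithm from BelMan, with arbitrary arm means and horizon.

```python
import random

def thompson_sampling(true_means, horizon=10000, seed=0):
    """Beta-Bernoulli Thompson sampling: a standard Bayesian bandit baseline
    (not BelMan's information-projection scheme)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    alpha = [1] * n_arms  # Beta(1, 1) priors on each arm's success probability
    beta = [1] * n_arms
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean from each arm's posterior and play the best sample.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling([0.3, 0.5, 0.7]))  # approaches 0.7 * horizon as the best arm dominates
```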
