Search Results for author: Sarathkrishna Swaminathan

Found 5 papers, 1 paper with code

A Neuro-Symbolic Approach to Multi-Agent RL for Interpretability and Probabilistic Decision Making

no code implementations • 21 Feb 2024 • Chitra Subramanian, Miao Liu, Naweed Khan, Jonathan Lenchner, Aporva Amarnath, Sarathkrishna Swaminathan, Ryan Riegel, Alexander Gray

To enable decision-making under uncertainty and partial observability, we developed a novel probabilistic neuro-symbolic framework, Probabilistic Logical Neural Networks (PLNN), which combines the capabilities of logical reasoning with probabilistic graphical models.
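The abstract does not spell out PLNN's semantics, so the following is only a generic, hedged illustration of the underlying idea of giving logical connectives probabilistic truth values. Both conjunction rules below are standard (independence-based product and the Łukasiewicz t-norm from real-valued logics); neither is claimed to be PLNN's actual formulation.

```python
# Generic illustration only: PLNN's actual formulation is not given in
# the abstract above. Two standard ways to score a conjunction A AND B
# when the inputs are probabilities / truth degrees in [0, 1]:

def and_independent(p_a: float, p_b: float) -> float:
    """Probabilistic conjunction assuming independence: P(A, B) = P(A) * P(B)."""
    return p_a * p_b

def and_lukasiewicz(p_a: float, p_b: float) -> float:
    """Lukasiewicz t-norm used in real-valued logics: max(0, a + b - 1)."""
    return max(0.0, p_a + p_b - 1.0)

# The two semantics disagree in general, which is exactly the kind of
# modeling choice a probabilistic neuro-symbolic framework must fix.
print(and_independent(0.9, 0.8))
print(and_lukasiewicz(0.9, 0.8))
```

A framework combining logic with probabilistic graphical models would additionally track dependencies between A and B rather than assume independence, which is where the graphical-model machinery comes in.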

Decision Making • Decision Making Under Uncertainty • +2

Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning

1 code implementation • 5 Jul 2023 • Subhajit Chaudhury, Sarathkrishna Swaminathan, Daiki Kimura, Prithviraj Sen, Keerthiram Murugesan, Rosario Uceda-Sosa, Michiaki Tatsubori, Achille Fokoue, Pavan Kapanipathi, Asim Munawar, Alexander Gray

Text-based reinforcement learning agents have predominantly been neural network-based models with embedding-based representations, learning uninterpretable policies that often generalize poorly to unseen games.

Reinforcement Learning • Representation Learning

A Unified View of Localized Kernel Learning

no code implementations • 4 Mar 2016 • John Moeller, Sarathkrishna Swaminathan, Suresh Venkatasubramanian

Multiple Kernel Learning (MKL) extends kernelized SVMs by learning not only a classifier/regressor but also the best kernel for the training task, usually as a combination of existing kernel functions.
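The combined-kernel idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: MKL would learn the mixing weights `mu`, whereas here they are fixed by hand, and the base kernels (RBF and linear) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))  # toy data, 20 points in 3 dimensions

def rbf(X, gamma=0.5):
    """RBF Gram matrix: exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear(X):
    """Linear Gram matrix: x_i . x_j."""
    return X @ X.T

# Hypothetical fixed weights; MKL would optimize these jointly with the
# classifier. A convex combination of valid (PSD) kernels is again a
# valid kernel, which is what makes the combined K usable in an SVM.
mu = np.array([0.7, 0.3])
K = mu[0] * rbf(X) + mu[1] * linear(X)

print(np.allclose(K, K.T))                     # symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-8)    # numerically PSD
```

The combined Gram matrix `K` could then be passed to any kernel method that accepts a precomputed kernel; "localized" kernel learning, the subject of the paper, instead lets the weights vary across the input space rather than being a single global `mu`.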
