Search Results for author: Marko Vasic

Found 6 papers, 2 papers with code

Neural Program Repair by Jointly Learning to Localize and Repair

2 code implementations ICLR 2019 Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh

We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.

Variable Misuse
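The joint approach above can be sketched as picking the best (bug location, repair variable) pair under a single score, rather than localizing first and repairing second. This is a minimal illustrative sketch with a stand-in scorer, not the paper's pointer-network model.

```python
# Hypothetical sketch of joint localize-and-repair for variable-misuse bugs.
# The real model learns the scorer; here a toy `score` function stands in.

def joint_localize_repair(tokens, candidates, score):
    """Pick the (bug_position, repair_variable) pair with the highest
    joint score over all positions and candidate variables."""
    best = None
    for i, tok in enumerate(tokens):
        for var in candidates:
            if var == tok:
                continue  # replacing a token with itself is a no-op
            s = score(i, var)
            if best is None or s > best[0]:
                best = (s, i, var)
    _, pos, var = best
    return pos, var

# Toy example: `total = total + countt`, where `countt` should be `count`.
tokens = ["total", "=", "total", "+", "countt"]
candidates = ["total", "count", "i"]
# Stand-in scorer that strongly prefers repairing position 4 to `count`.
score = lambda i, var: 1.0 if (i == 4 and var == "count") else 0.0
print(joint_localize_repair(tokens, candidates, score))  # (4, 'count')
```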

MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning

2 code implementations 16 Jun 2019 Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid

By training MoËT models using an imitation learning procedure on deep RL agents we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models.

Game of Go · Imitation Learning · +4
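The MoËT idea above — a gating function that partitions the state space, with one decision-tree expert per partition, trained by imitating a deep RL agent — can be sketched in pure Python. The gate, the depth-1 "tree" experts, and the stand-in teacher below are all illustrative assumptions, not the authors' implementation.

```python
# Minimal Mixture-of-Expert-Trees-style sketch: hard gating routes each
# state to one small decision tree; each tree imitates a teacher policy
# on its own partition of the state space.

def fit_stump(states, actions):
    """Fit a depth-1 decision tree (stump) on feature 0 by brute force:
    try each observed threshold and keep the one that best imitates
    the teacher's actions."""
    best = None
    for thr in sorted(s[0] for s in states):
        for lo, hi in ((0, 1), (1, 0)):
            pred = [hi if s[0] > thr else lo for s in states]
            acc = sum(p == a for p, a in zip(pred, actions)) / len(actions)
            if best is None or acc > best[0]:
                best = (acc, thr, lo, hi)
    _, thr, lo, hi = best
    return lambda s: hi if s[0] > thr else lo

def moet_predict(gate, experts, state):
    """Route the state through the gate to one tree expert (hard gating)."""
    return experts[gate(state)](state)

# Imitation-learning-style training against a stand-in "deep agent".
states = [(x / 10.0, y / 10.0) for x in range(-9, 10, 2) for y in range(-9, 10, 2)]
teacher = lambda s: 1 if s[0] > 0.15 else 0     # stand-in deep RL policy
gate = lambda s: 1 if s[1] > 0 else 0           # partitions on feature 1
experts = []
for k in (0, 1):
    part = [s for s in states if gate(s) == k]
    experts.append(fit_stump(part, [teacher(s) for s in part]))
print(all(moet_predict(gate, experts, s) == teacher(s) for s in states))  # True
```

Because each expert is a small tree and the gate is a simple partition, the resulting policy stays amenable to the kind of verification the paper targets.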

A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

no code implementations 25 Dec 2019 Muhammad Usman, Wenxi Wang, Kaiyuan Wang, Marko Vasic, Haris Vikalo, Sarfraz Khurshid

However, MCML metrics based on model counting show that the performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties, and the usefulness of model counting in quantifying the true performance.

BIG-bench Machine Learning
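The MCML idea above — scoring a learned classifier against the *entire* bounded input space rather than a held-out sample, by counting how many inputs it gets right — can be sketched with exhaustive enumeration. The relational property and the "learned" classifier below are illustrative stand-ins, not the paper's benchmarks.

```python
# Hedged sketch of model-counting-based evaluation: enumerate every
# structure in a bounded input space and count exact agreements.
from itertools import product

# Stand-in relational property over 4-bit structures: "at least two bits set".
prop = lambda bits: sum(bits) >= 2

# Stand-in learned classifier that only inspects the first two bits,
# so it looks fine on easy samples but errs elsewhere in the space.
learned = lambda bits: bits[0] + bits[1] >= 1

universe = list(product((0, 1), repeat=4))      # entire bounded input space
agree = sum(prop(b) == learned(b) for b in universe)
print(f"exact accuracy over {len(universe)} inputs: {agree / len(universe)}")
# exact accuracy over 16 inputs: 0.8125
```

For realistic bounds the space is far too large to enumerate, which is where model counters come in: they compute these agreement counts symbolically instead of one input at a time.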

Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks

no code implementations ICML 2020 Marko Vasic, Cameron Chalk, Sarfraz Khurshid, David Soloveichik

Embedding computation in molecular contexts incompatible with traditional electronics is expected to have wide ranging impact in synthetic biology, medicine, nanofabrication and other fields.

Transfer Learning · Translation
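The networks the paper compiles to molecular contexts restrict weights to {-1, +1}, which a standard ReLU layer makes easy to state. This is a minimal numeric sketch of such a layer, with illustrative values; it is not the chemical implementation itself.

```python
# Sketch of one binary-weight ReLU layer: y_j = ReLU(sum_i w_ji * x_i)
# with every weight in {-1, +1}, so each output is a ReLU of a signed sum,
# the kind of primitive that maps onto chemical reaction networks.

def binary_relu_layer(weights, x):
    assert all(w in (-1, 1) for row in weights for w in row)
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

W = [[1, -1, 1],    # illustrative binary weight matrix (2 outputs, 3 inputs)
     [-1, 1, 1]]
print(binary_relu_layer(W, [2.0, 0.5, 1.0]))  # [2.5, 0.0]
```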

Programming and Training Rate-Independent Chemical Reaction Networks

no code implementations 20 Sep 2021 Marko Vasic, Cameron Chalk, Austin Luchsinger, Sarfraz Khurshid, David Soloveichik

Embedding computation in biochemical environments incompatible with traditional electronics is expected to have wide-ranging impact in synthetic biology, medicine, nanofabrication and other fields.

Translation

MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees

no code implementations 25 Sep 2019 Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid

We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.

Game of Go · Imitation Learning · +3
