Search Results for author: Buser Say

Found 6 papers, 0 papers with code

Training Experimentally Robust and Interpretable Binarized Regression Models Using Mixed-Integer Programming

no code implementations • 1 Dec 2021 • Sanjana Tule, Nhi Ha Lan Le, Buser Say

In this paper, we explore a model-based approach to training robust and interpretable binarized regression models for multiclass classification tasks using Mixed-Integer Programming (MIP).

Classification • Regression
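
For illustration, here is a minimal sketch of the kind of MIP formulation involved, written with PuLP on made-up toy data: weights are restricted to {-1, +1} via binary variables, and an L1 training loss is linearized with auxiliary variables. This is not the paper's formulation (which targets multiclass classification with robustness and interpretability considerations), only a toy binarized regression.

```python
import pulp

# Toy data (hypothetical): 4 samples, 3 features, real-valued targets.
X = [[1.0, -2.0, 0.5],
     [0.5, 1.0, -1.0],
     [-1.0, 0.0, 2.0],
     [2.0, 1.5, -0.5]]
y = [1.0, -0.5, 2.0, 0.5]
n, d = len(X), len(X[0])

model = pulp.LpProblem("binarized_regression", pulp.LpMinimize)

# Binarized weights: b_j in {0, 1} encodes w_j = 2*b_j - 1 in {-1, +1}.
b = [pulp.LpVariable(f"b_{j}", cat=pulp.LpBinary) for j in range(d)]
# Auxiliary variables linearizing the absolute (L1) training error.
e = [pulp.LpVariable(f"e_{i}", lowBound=0) for i in range(n)]

model += pulp.lpSum(e)  # objective: total absolute error

for i in range(n):
    pred = pulp.lpSum((2 * b[j] - 1) * X[i][j] for j in range(d))
    model += e[i] >= y[i] - pred
    model += e[i] >= pred - y[i]

model.solve(pulp.PULP_CBC_CMD(msg=False))
weights = [2 * pulp.value(b[j]) - 1 for j in range(d)]
print("learned binarized weights:", weights, "loss:", pulp.value(model.objective))
```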

Planning with Learned Binarized Neural Networks Benchmarks for MaxSAT Evaluation 2021

no code implementations • 2 Aug 2021 • Buser Say, Scott Sanner, Jo Devriendt, Jakob Nordström, Peter J. Stuckey

This document provides a brief introduction to the learned automated planning problem, where the state transition function is in the form of a binarized neural network (BNN), presents a general MaxSAT encoding for this problem, and describes the four domains, namely Navigation, Inventory Control, System Administrator and Cellda, that are submitted as benchmarks for MaxSAT Evaluation 2021.
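
As a rough illustration of the MaxSAT style of encoding (not the paper's BNN encoding), the sketch below poses a tiny horizon-2 planning problem whose transition is the fixed boolean function s_{t+1} = s_t XOR a_t, with hard clauses for the initial state and dynamics and a soft clause for the goal, solved with PySAT's RC2; the variable numbering and dynamics are invented for the example.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Variable numbering (hypothetical toy instance):
# 1=s0, 2=a0, 3=s1, 4=a1, 5=s2  (horizon 2, one state bit and one action bit per step)
wcnf = WCNF()

def add_xor(a, b, c):
    """Hard clauses for c <-> (a XOR b), standing in for a learned transition."""
    wcnf.append([-a, -b, -c])
    wcnf.append([a, b, -c])
    wcnf.append([a, -b, c])
    wcnf.append([-a, b, c])

wcnf.append([-1])            # hard: initial state s0 = False
add_xor(1, 2, 3)             # hard: s1 = s0 XOR a0
add_xor(3, 4, 5)             # hard: s2 = s1 XOR a1
wcnf.append([5], weight=1)   # soft: goal s2 = True

with RC2(wcnf) as solver:
    plan = solver.compute()
    print("assignment:", plan, "cost:", solver.cost)
```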

Reward Potentials for Planning with Learned Neural Network Transition Models

no code implementations • 19 Apr 2019 • Buser Say, Scott Sanner, Sylvie Thiébaux

We then strengthen the linear relaxation of the underlying MILP model by introducing constraints to bound the reward function based on the precomputed reward potentials.
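
As a sketch of the general idea, with invented numbers rather than the paper's computed potentials, one can add precomputed per-step upper bounds on the reward variables of a planning MILP; such constraints are redundant for the integer model but tighten its linear relaxation.

```python
import pulp

# Hypothetical precomputed reward potentials: valid upper bounds on the reward
# achievable at each time step (in the paper these come from a separate optimization).
potentials = [10.0, 8.5, 7.0]
T = len(potentials)

model = pulp.LpProblem("planning_milp", pulp.LpMaximize)
reward = [pulp.LpVariable(f"reward_{t}") for t in range(T)]
model += pulp.lpSum(reward)  # objective: total reward over the horizon

# ... state/action variables and learned-transition constraints would go here ...

# Strengthening constraints: bound each reward variable by its potential.
for t in range(T):
    model += reward[t] <= potentials[t]
```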

Scalable Planning with Deep Neural Network Learned Transition Models

no code implementations • 5 Apr 2019 • Ga Wu, Buser Say, Scott Sanner

But there remains one major problem for the task of control -- how can we plan with deep network learned transition models without resorting to Monte Carlo Tree Search and other black-box transition model techniques that ignore model structure and do not easily extend to mixed discrete and continuous domains?

Compact and Efficient Encodings for Planning in Factored State and Action Spaces with Learned Binarized Neural Network Transition Models

no code implementations • 26 Nov 2018 • Buser Say, Scott Sanner

In this paper, we leverage the efficiency of Binarized Neural Networks (BNNs) to learn complex state transition models of planning domains with discretized factored state and action spaces.

Computational Efficiency
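
For context, a BNN transition model in this setting maps binarized state and action bits to predicted next-state bits using {-1, +1} weights and sign activations; a minimal NumPy sketch with hypothetical layer sizes:

```python
import numpy as np

def sign(x):
    # Binarized activation: maps to {-1, +1} (0 treated as +1).
    return np.where(x >= 0, 1, -1)

def bnn_transition(state_bits, action_bits, W1, W2):
    """Predict next-state bits from a factored state/action, all in {-1, +1}."""
    x = np.concatenate([state_bits, action_bits])
    h = sign(W1 @ x)       # binarized hidden layer
    return sign(W2 @ h)    # predicted next-state bits

# Hypothetical sizes: 4 state bits, 2 action bits, 8 hidden units.
rng = np.random.default_rng(0)
W1 = rng.choice([-1, 1], size=(8, 6))
W2 = rng.choice([-1, 1], size=(4, 8))
state, action = np.array([1, -1, 1, 1]), np.array([-1, 1])
print(bnn_transition(state, action, W1, W2))
```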

Scalable Planning with Tensorflow for Hybrid Nonlinear Domains

no code implementations • NeurIPS 2017 • Ga Wu, Buser Say, Scott Sanner

Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent optimization on GPUs, we ask in this paper whether symbolic gradient optimization tools such as Tensorflow can be effective for planning in hybrid (mixed discrete and continuous) nonlinear domains with high-dimensional state and action spaces.
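
To give a flavor of planning by symbolic gradient optimization, here is a toy sketch with an invented one-dimensional dynamics function standing in for a learned model (not the paper's networks or benchmark domains): the action sequence is a trainable variable, rolled forward through the differentiable transition, and optimized by gradient descent on the planning loss.

```python
import tensorflow as tf

def transition(state, action):
    # Invented smooth dynamics standing in for a learned deep network.
    return state + 0.1 * action - 0.01 * tf.square(state)

horizon = 20
state0 = tf.constant([0.0])
goal = tf.constant([5.0])
actions = tf.Variable(tf.zeros([horizon, 1]))  # the plan being optimized

optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
for step in range(300):
    with tf.GradientTape() as tape:
        state = state0
        for t in range(horizon):
            state = transition(state, actions[t])
        loss = tf.reduce_sum(tf.square(state - goal))  # distance to goal
    grads = tape.gradient(loss, [actions])
    optimizer.apply_gradients(zip(grads, [actions]))

print("final state:", state.numpy(), "planning loss:", loss.numpy())
```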
