Search Results for author: Gil Lederman

Found 7 papers, 4 papers with code

Rotation Invariant Quantization for Model Compression

1 code implementation · 3 Mar 2023 · Joseph Kampeas, Yury Nahshan, Hanoch Kremer, Gil Lederman, Shira Zaloshinski, Zheng Li, Emir Haleva

Post-training Neural Network (NN) model compression is an attractive approach for deploying large, memory-consuming models on devices with limited memory resources.

Model Compression · Quantization

Demonstration Informed Specification Search

1 code implementation · 20 Dec 2021 · Marcell Vazquez-Chanlatte, Ameesh Shah, Gil Lederman, Sanjit A. Seshia

This paper considers the problem of learning temporal task specifications, e.g., automata and temporal logic, from expert demonstrations.

Learning Branching Heuristics for Propositional Model Counting

no code implementations · 7 Jul 2020 · Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger Grosse, Sanjit A. Seshia, Fahiem Bacchus

In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.

Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning

no code implementations · ICLR 2020 · Gil Lederman, Markus Rabe, Sanjit Seshia, Edward A. Lee

We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.

Reinforcement Learning (RL)

Learning Heuristics for Quantified Boolean Formulas through Deep Reinforcement Learning

1 code implementation · 20 Jul 2018 · Gil Lederman, Markus N. Rabe, Edward A. Lee, Sanjit A. Seshia

We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning.

Reinforcement Learning (RL)
