no code implementations • 18 Oct 2023 • Amar Shah, Federico Mora, Sanjit A. Seshia
Specifically, our solver reduces algebraic data type (ADT) queries to the simpler theory of uninterpreted functions (UF), and then uses an existing solver on the reduced query.
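The core idea of such a reduction is to treat constructors and selectors as uninterpreted symbols and to instantiate only the selector axioms the query needs. The toy sketch below (an illustrative assumption, not the paper's actual reduction or term representation) rewrites a selector-over-constructor pattern so that a UF solver would see only an equality between plain terms:

```python
# Hedged sketch: terms are tuples ("symbol", arg, ...); the reduction applies
# the selector axiom head(cons(h, t)) = h, which is the kind of instance an
# ADT-to-UF translation would assert for the UF solver.

def cons(h, t):
    return ("cons", h, t)

def head(t):
    return ("head", t)

def reduce_to_uf(term):
    """Recursively rewrite selector-over-constructor patterns away."""
    if isinstance(term, tuple):
        term = tuple(reduce_to_uf(a) for a in term)
        if term[0] == "head" and isinstance(term[1], tuple) and term[1][0] == "cons":
            return term[1][1]  # head(cons(h, t)) -> h
    return term

# head(cons(x, xs)) reduces to x, turning the ADT query into a trivial UF one.
assert reduce_to_uf(head(cons("x", "xs"))) == "x"
```

After the rewrite, no datatype reasoning remains; congruence closure over uninterpreted symbols suffices.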
no code implementations • 11 Jun 2021 • Robert I. Citron, Peter Jenniskens, Christopher Watkins, Sravanthi Sinha, Amar Shah, Chedy Raissi, Hadrien Devillepoix, Jim Albers
Those images can be analyzed using a machine learning classifier to distinguish meteorites from the many other features in the field.
no code implementations • 30 Nov 2019 • Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall
As our main contribution, we present an end-to-end conditional imitation learning approach, combining both lateral and longitudinal control on a real vehicle for following urban routes with simple traffic.
7 code implementations • 1 Jul 2018 • Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah
We demonstrate the first application of deep reinforcement learning to autonomous driving.
no code implementations • NeurIPS 2015 • Amar Shah, Zoubin Ghahramani
We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions.
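PPES selects a batch of points to evaluate in parallel by maximizing information about the location of the optimum. As a much simpler stand-in (an illustrative assumption, not the paper's acquisition function), the sketch below uses a tiny NumPy Gaussian-process surrogate with a UCB acquisition and "hallucinated" posterior-mean observations to pick a parallel batch:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP posterior mean and (diagonal) variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

def select_batch(X, y, grid, q=3, kappa=2.0):
    # Greedy batch selection: after each pick, pretend we observed the
    # posterior mean there ("hallucination") so later picks spread out.
    Xb, yb, batch = X.copy(), y.copy(), []
    for _ in range(q):
        mu, var = gp_posterior(Xb, yb, grid)
        i = np.argmax(mu + kappa * np.sqrt(var))  # UCB acquisition
        batch.append(grid[i])
        Xb = np.append(Xb, grid[i])
        yb = np.append(yb, mu[i])
    return np.array(batch)

f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x   # toy black-box objective
X = np.array([0.0, 0.5, 1.0])
batch = select_batch(X, f(X), np.linspace(-1, 2, 200))
print(batch)  # q candidate points to evaluate in parallel
```

The batch can then be evaluated concurrently, which is the setting PPES targets with a principled information-theoretic criterion instead of the heuristic used here.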
2 code implementations • 20 Nov 2015 • Martin Arjovsky, Amar Shah, Yoshua Bengio
When the eigenvalues of the hidden-to-hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well-studied problem of vanishing and exploding gradients, especially when trying to learn long-term dependencies.
Ranked #26 on Sequential Image Classification on Sequential MNIST
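The eigenvalue condition is easy to see numerically: backpropagating through a linear recurrence h_t = W h_{t-1} multiplies the gradient by W^T at every step, so its norm scales like the largest eigenvalue magnitude raised to the sequence length. The sketch below (a toy demonstration, not the paper's unitary RNN) uses a scaled orthogonal matrix so every eigenvalue has the same absolute value:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 50, 4
grad0 = rng.standard_normal(n)

def grad_norm_after(W, steps, g):
    # Backprop through `steps` applications of h_t = W h_{t-1}.
    for _ in range(steps):
        g = W.T @ g
    return np.linalg.norm(g)

for radius in (0.9, 1.0, 1.1):
    # Orthogonal Q preserves norms, so radius * Q has all |eigenvalues| = radius.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    print(radius, grad_norm_after(radius * Q, T, grad0))
```

With radius 0.9 the gradient norm decays by a factor of 0.9^50 ≈ 0.005 (vanishing), with 1.1 it grows by 1.1^50 ≈ 117 (exploding), and at exactly 1 it is preserved, which is the motivation for constraining the recurrent matrix to be unitary.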
no code implementations • 17 Nov 2015 • Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, Ryan P. Adams
The results show that PESMO produces better recommendations with a smaller number of evaluations of the objectives, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.
no code implementations • 26 Jun 2015 • Amar Shah, David A. Knowles, Zoubin Ghahramani
Stochastic variational inference (SVI) is emerging as the most promising candidate for scaling inference in Bayesian probabilistic models to large datasets.
no code implementations • 18 Feb 2014 • Amar Shah, Andrew Gordon Wilson, Zoubin Ghahramani
We investigate the Student-t process as an alternative to the Gaussian process as a nonparametric prior over functions.
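On any finite set of inputs, a draw from a Student-t process is a multivariate t distribution with the kernel matrix as its scale matrix. The sketch below (illustrative, not the paper's code) generates such a draw via the standard Gaussian/chi-squared scale-mixture construction, which makes the heavier tails explicit:

```python
import numpy as np

def tp_sample(X, nu=5.0, ls=0.5, rng=None):
    # Draw f ~ t_nu(0, K) at inputs X using an RBF kernel K.
    rng = rng or np.random.default_rng(0)
    d = (X[:, None] - X[None, :]) / ls
    K = np.exp(-0.5 * d ** 2) + 1e-8 * np.eye(len(X))  # jitter for stability
    z = np.linalg.cholesky(K) @ rng.standard_normal(len(X))  # GP draw
    g = rng.chisquare(nu) / nu  # shared scale variable adds heavy tails
    return z / np.sqrt(g)

X = np.linspace(0, 1, 50)
f = tp_sample(X)
print(f.shape)
```

As nu grows the shared scale concentrates at 1 and the draw converges to the Gaussian-process draw, which is why the Student-t process can be viewed as a GP generalization with an extra tail-heaviness parameter.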
no code implementations • 26 Sep 2013 • Amar Shah, Zoubin Ghahramani
Semi-supervised clustering is the task of grouping data points into clusters when only a fraction of the points are labelled.
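One common baseline for this setting is seeded k-means (named here as a generic stand-in, not necessarily the approach of this paper): the labelled fraction initialises the centroids and stays pinned to its labels while the unlabelled points refine the clusters.

```python
import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, k, iters=20):
    # Centroids start from the labelled points of each class.
    centroids = np.array([X[seed_idx[seed_labels == c]].mean(axis=0)
                          for c in range(k)])
    for _ in range(iters):
        # Assign every point to its nearest centroid...
        assign = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # ...but labelled points always keep their given labels.
        assign[seed_idx] = seed_labels
        centroids = np.array([X[assign == c].mean(axis=0) for c in range(k)])
    return assign

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)),    # cluster around (0, 0)
               rng.normal(2, 0.3, (40, 2))])   # cluster around (2, 2)
seed_idx = np.array([0, 1, 40, 41])            # only 4 of 80 points labelled
assign = seeded_kmeans(X, seed_idx, np.array([0, 0, 1, 1]), k=2)
print(assign[:5], assign[-5:])
```

Even with 5% of points labelled, the seeds resolve the label-permutation ambiguity of plain k-means and anchor each cluster to the intended class.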