1 code implementation • 15 Apr 2023 • Tyler H. Chang, Jakob R. Elias, Stefan M. Wild, Santanu Chaudhuri, Joseph A. Libera
To deploy machine learning in a real-world self-driving laboratory, where data acquisition is costly and there are multiple competing design criteria, systems need to sample intelligently while balancing performance trade-offs and constraints.
no code implementations • 29 Nov 2022 • Lucas Slattery, Ruslan Shaydulin, Shouvanik Chakrabarti, Marco Pistoia, Sami Khairy, Stefan M. Wild
We show that the general-purpose hyperparameter tuning techniques proposed to improve the generalization of quantum kernels lead to the kernel becoming well-approximated by a classical kernel, removing the possibility of quantum advantage.
1 code implementation • 1 Nov 2022 • Aleksandra Ćiprijanović, Ashia Lewis, Kevin Pedro, Sandeep Madireddy, Brian Nord, Gabriel N. Perdue, Stefan M. Wild
For the first time, we demonstrate the successful use of domain adaptation on two very different observational datasets (from SDSS and DECaLS).
no code implementations • 14 Jun 2022 • Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild, Ruslan Shaydulin
Quantum computers are known to provide speedups over classical state-of-the-art machine learning methods in some specialized settings.
no code implementations • 28 Dec 2021 • Aleksandra Ćiprijanović, Diana Kafkes, Gregory Snyder, F. Javier Sánchez, Gabriel Nathan Perdue, Kevin Pedro, Brian Nord, Sandeep Madireddy, Stefan M. Wild
On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by 23% on data with higher observational noise.
1 code implementation • 9 Nov 2021 • Ruslan Shaydulin, Stefan M. Wild
Quantum kernel methods are considered a promising avenue for applying quantum computers to machine learning problems.
no code implementations • 24 Sep 2021 • Raghu Bollapragada, Stefan M. Wild
We consider unconstrained stochastic optimization problems with no available gradient information.
no code implementations • 19 Apr 2021 • Aydin Buluc, Tamara G. Kolda, Stefan M. Wild, Mihai Anitescu, Anthony DeGennaro, John Jakeman, Chandrika Kamath, Ramakrishnan Kannan, Miles E. Lopes, Per-Gunnar Martinsson, Kary Myers, Jelani Nelson, Juan M. Restrepo, C. Seshadhri, Draguna Vrabie, Brendt Wohlberg, Stephen J. Wright, Chao Yang, Peter Zwart
Randomized algorithms have propelled advances in artificial intelligence and represent a foundational research area in advancing AI for Science.
no code implementations • 30 Mar 2021 • Arindam Fadikar, Stefan M. Wild, Jonas Chaves-Montero
Handling big data has long been a major bottleneck for traditional statistical models.
1 code implementation • 25 Jan 2021 • Ruslan Shaydulin, Stefan M. Wild
We show how by considering only the terms that are not connected by symmetry, we can significantly reduce the cost of evaluating the QAOA energy.
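The symmetry-based cost reduction can be illustrated with a small statevector simulation: on a 6-node ring, the cyclic symmetry makes every edge term of the MaxCut cost equivalent, so the full QAOA energy equals the number of edges times a single representative term. The sketch below is a generic NumPy illustration rather than the paper's implementation; the graph, angles, and depth (p=1) are chosen only for demonstration.

```python
import numpy as np

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
gamma, beta = 0.4, 0.7

dim = 2 ** n
z = np.arange(dim)
bits = (z[:, None] >> np.arange(n)) & 1            # bit q of each basis state
spins = 1 - 2 * bits                                # Z eigenvalues +/-1

# Per-edge MaxCut terms (1 - Z_i Z_j)/2 evaluated on every computational basis state
edge_cost = {e: (1 - spins[:, e[0]] * spins[:, e[1]]) / 2 for e in edges}
total_cost = sum(edge_cost.values())

# p=1 QAOA state: |+>^n, diagonal cost layer, then an RX(2*beta) mixer on every qubit
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
state = state * np.exp(-1j * gamma * total_cost)
c, s = np.cos(beta), -1j * np.sin(beta)
for q in range(n):
    state = c * state + s * state[z ^ (1 << q)]     # mix amplitudes of bit-flipped partners

probs = np.abs(state) ** 2

# Full energy (sum over all edge terms) vs. symmetry-reduced evaluation:
# the ring's cyclic symmetry makes every edge term identical, so one representative suffices.
full_energy = probs @ total_cost
reduced_energy = len(edges) * (probs @ edge_cost[(0, 1)])
print(full_energy, reduced_energy)                  # the two values agree
```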
no code implementations • 29 Oct 2019 • Raghu Bollapragada, Stefan M. Wild
We consider stochastic zero-order optimization problems, which arise in settings from simulation optimization to reinforcement learning.
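As a rough illustration of the zero-order setting (not the authors' method), the sketch below estimates gradients purely from noisy function values via central differences and takes plain descent steps; the toy objective, noise level, sample counts, and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noisy zero-order oracle: only (noisy) function values are available
def noisy_f(x, n_samples=8):
    true_value = np.sum((x - 1.0) ** 2)              # smooth objective, unknown to the solver
    return true_value + rng.normal(0.0, 0.1, size=n_samples).mean()

def zo_gradient(x, h=0.1, n_samples=8):
    """Central-difference gradient estimate built only from noisy function values."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (noisy_f(x + e, n_samples) - noisy_f(x - e, n_samples)) / (2 * h)
    return g

x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(x)                       # simple gradient-free descent step
print(np.round(x, 2))                                # roughly recovers the minimizer at all-ones
```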
1 code implementation • 26 Jul 2019 • Nathan Wycoff, Mickael Binois, Stefan M. Wild
In such cases, a surrogate model is often employed, and finite differencing is performed on the surrogate instead.
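A minimal sketch of this idea, assuming a scikit-learn Gaussian process as the surrogate and a hypothetical expensive_f stand-in for the true simulation: gradients are estimated by finite differences on the cheap surrogate mean rather than on the expensive function, and can then feed downstream quantities such as an active-subspace-style matrix.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical expensive black-box function (stand-in for a real simulation)
def expensive_f(x):
    return np.sin(3 * x[0]) + 0.1 * x[1] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))          # a small budget of expensive evaluations
y = np.array([expensive_f(x) for x in X])

# Fit a cheap-to-evaluate GP surrogate of the expensive function
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)

def surrogate(x):
    return gp.predict(x.reshape(1, -1))[0]

def fd_gradient(f, x, h=1e-4):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Gradients come from the surrogate, not from extra expensive evaluations
grads = np.array([fd_gradient(surrogate, x) for x in X])
C = grads.T @ grads / len(X)                  # e.g., an active-subspace-style matrix
print(np.linalg.eigvalsh(C))
```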
no code implementations • NeurIPS 2016 • Victor Picheny, Robert B. Gramacy, Stefan M. Wild, Sebastien Le Digabel
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers.
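A minimal sketch of one common AL loop (not the slack-variable/Bayesian variant studied in the paper): each outer iteration minimizes an unconstrained AL merit function with a local solver, then updates the multiplier and penalty parameter. The toy problem, solver choice, and update rules are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem: min x1 + x2  s.t.  x1^2 + x2^2 - 1 <= 0
f = lambda x: x[0] + x[1]
c = lambda x: x[0] ** 2 + x[1] ** 2 - 1           # single inequality constraint

lam, rho = 0.0, 1.0                               # multiplier and penalty parameter
x = np.array([0.5, 0.5])                          # starting point

for _ in range(20):
    # Unconstrained subproblem: AL merit function, solved with a local solver
    AL = lambda x: f(x) + lam * c(x) + (1.0 / (2 * rho)) * max(0.0, c(x)) ** 2
    x = minimize(AL, x, method="Nelder-Mead").x
    lam = max(0.0, lam + c(x) / rho)              # multiplier update
    if c(x) > 1e-3:                               # still infeasible: tighten the penalty
        rho *= 0.5

print(x, f(x))   # approaches (-0.707, -0.707) with f* = -sqrt(2)
```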