no code implementations • AABI Symposium 2019 • Nimar S. Arora, Nazanin Khosravani Tehrani, Kinjal Divesh Shah, Michael Tingley, Yucen Lily Li, Narjes Torabi, David Noursi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
NMC is similar to the Newton-Raphson update in optimization, where the second-order gradient is used to automatically scale the step size in each dimension.
2 code implementations • 15 Jan 2020 • Nimar S. Arora, Nazanin Khosravani Tehrani, Kinjal Divesh Shah, Michael Tingley, Yucen Lily Li, Narjes Torabi, David Noursi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
NMC is similar to the Newton-Raphson update in optimization, where the second-order gradient is used to automatically scale the step size in each dimension.
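The Newton-Raphson analogy can be made concrete in one dimension: dividing the gradient by the second derivative rescales the step automatically, so a quadratic objective is minimized in a single update. A minimal generic sketch (this is an illustration of the classical update, not the paper's NMC implementation; the objective and function names are made up):

```python
def newton_step(x, grad, hess):
    # Newton-Raphson update: the second derivative rescales the
    # gradient step (per dimension, in the multivariate case).
    return x - grad(x) / hess(x)

# Illustrative quadratic objective f(x) = (x - 3)^2 with known derivatives.
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0

x = 10.0
x = newton_step(x, grad, hess)  # a quadratic is solved in one step
```

Because the curvature sets the scale, no hand-tuned step size is needed; NMC exploits the same idea to adapt MCMC proposals per dimension.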
1 code implementation • 17 Oct 2020 • Sourabh Kulkarni, Kinjal Divesh Shah, Nimar Arora, Xiaoyan Wang, Yucen Lily Li, Nazanin Khosravani Tehrani, Michael Tingley, David Noursi, Narjes Torabi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
The benchmark includes data-generation and evaluation code for a number of models, as well as implementations in several common probabilistic programming languages (PPLs).
2 code implementations • 31 May 2023 • Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson
Bayesian optimization is a highly efficient approach to optimizing objective functions that are expensive to query.
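The core loop can be sketched with a toy Gaussian-process surrogate and an expected-improvement acquisition function (a minimal self-contained sketch; the kernel, lengthscale, objective, and candidate grid are all illustrative assumptions, not the paper's setup):

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process posterior mean and variance at candidate points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, var

def expected_improvement(mu, var, best):
    # Expected improvement for minimization under a Gaussian posterior.
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    Phi = np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2))) for zi in z])
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * Phi + sigma * phi

f = lambda x: (x - 0.6) ** 2       # stand-in for an expensive black box
Xs = np.linspace(0.0, 1.0, 101)    # candidate grid
X = np.array([0.0, 0.5, 1.0])      # initial design
y = f(X)
for _ in range(10):
    mu, var = gp_posterior(X, y, Xs)
    x_next = Xs[np.argmax(expected_improvement(mu, var, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best_x = X[np.argmin(y)]
```

Each iteration spends one expensive query where the surrogate predicts the best trade-off between a low mean and high uncertainty, which is why the method is sample-efficient.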
1 code implementation • NeurIPS 2023 • Ravid Shwartz-Ziv, Micah Goldblum, Yucen Lily Li, C. Bayan Bruss, Andrew Gordon Wilson
Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models.
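As a generic illustration of the imbalance problem (one common mitigation, not the method studied in the paper), the training loss can be reweighted inversely to class frequency so rare classes are not drowned out:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    # Weight each class c by n / (k * count_c), where n is the number of
    # examples and k the number of classes; balanced data gives weight 1.0.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# 90/10 binary imbalance: the minority class gets a 5x loss weight.
weights = inverse_frequency_weights([0] * 90 + [1] * 10)
```

These weights would typically be passed to a loss function's per-class weighting argument during training.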