no code implementations • 10 Aug 2023 • Adam Breuer, Nazanin Khosravani, Michael Tingley, Bradford Cottel
We show that even before a new account has made friends or shared content, these initial friend-request behaviors are well captured by a natural multi-class extension of the canonical Preferential Attachment model of social network growth.
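As context for the canonical model the abstract references, here is a minimal sketch of Preferential Attachment growth (not the paper's multi-class extension): each arriving node links to existing nodes with probability proportional to their current degree. The function name and parameters are illustrative, not from the paper.

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a graph where each new node attaches to an existing node
    with probability proportional to that node's current degree."""
    rng = random.Random(seed)
    # Start with a single edge between nodes 0 and 1.
    targets = [0, 1]          # node ids repeated once per unit of degree
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # Uniform sampling from `targets` is degree-proportional sampling.
        t = rng.choice(targets)
        edges.append((new, t))
        targets.extend([new, t])
    return edges

edges = preferential_attachment(100)  # 99 edges: 1 seed edge + 98 arrivals
```

High-degree nodes appear more often in `targets`, so they attract new links at a rate proportional to their degree, producing the heavy-tailed degree distributions characteristic of this model.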
1 code implementation • 23 Oct 2020 • Feynman Liang, Nimar Arora, Nazanin Tehrani, Yucen Li, Michael Tingley, Erik Meijer
In order to construct accurate proposers for Metropolis-Hastings Markov Chain Monte Carlo, we integrate ideas from probabilistic graphical models and neural networks in an open-source framework we call Lightweight Inference Compilation (LIC).
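To illustrate why proposal quality matters in Metropolis-Hastings (the problem LIC addresses by learning proposers), here is a generic random-walk MH sampler with a hand-tuned Gaussian proposal; this is a textbook sketch, not LIC's method or API.

```python
import math
import random

def metropolis_hastings(log_density, init, n_steps=5000, scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings. Mixing speed depends heavily on
    the proposal; LIC replaces this fixed Gaussian with a learned one."""
    rng = random.Random(seed)
    x, lp = init, log_density(init)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, scale)        # symmetric proposal
        lp_prop = log_density(prop)
        # Accept with probability min(1, p(prop) / p(x)).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Target: standard normal, log density up to an additive constant.
samples = metropolis_hastings(lambda z: -0.5 * z * z, init=0.0)
mean = sum(samples) / len(samples)  # should be near 0
```

A poorly scaled proposal (far too small or too large a `scale`) makes the chain mix slowly; a proposer trained to approximate the posterior sidesteps this tuning.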
1 code implementation • 17 Oct 2020 • Sourabh Kulkarni, Kinjal Divesh Shah, Nimar Arora, Xiaoyan Wang, Yucen Lily Li, Nazanin Khosravani Tehrani, Michael Tingley, David Noursi, Narjes Torabi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
The benchmark includes data generation and evaluation code for a number of models as well as implementations in some common PPLs.
no code implementations • 30 Sep 2020 • Narjes Torabi, Nimar S. Arora, Emma Yu, Kinjal Shah, Wenshun Liu, Michael Tingley
This task is exceedingly hard: although we can easily compute ML scores or features for all of the billions of items, practical considerations limit us to ground-truth labels for only a few thousand of them.
2 code implementations • 15 Jan 2020 • Nimar S. Arora, Nazanin Khosravani Tehrani, Kinjal Divesh Shah, Michael Tingley, Yucen Lily Li, Narjes Torabi, David Noursi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
NMC is similar to the Newton-Raphson update in optimization where the second order gradient is used to automatically scale the step size in each dimension.
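The Newton-Raphson analogy the abstract draws can be made concrete with a one-dimensional sketch: the second-order term divides the gradient, which automatically scales the step size. This is the optimization update only, not NMC's MCMC proposal construction.

```python
def newton_step(grad, hess, x):
    """One Newton-Raphson update: dividing the gradient by the
    second derivative rescales the step size automatically."""
    return x - grad(x) / hess(x)

# Minimize f(x) = (x - 3)^2, with gradient 2(x - 3) and Hessian 2.
x = 0.0
x = newton_step(lambda v: 2 * (v - 3), lambda v: 2.0, x)
# A quadratic objective converges in a single Newton step: x == 3.0.
```

NMC uses the same curvature information, but to shape the scale of an MCMC proposal distribution per dimension rather than to take a deterministic optimization step.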
no code implementations • AABI Symposium 2019 • Nimar S. Arora, Nazanin Khosravani Tehrani, Kinjal Divesh Shah, Michael Tingley, Yucen Lily Li, Narjes Torabi, David Noursi, Sepehr Akhavan Masouleh, Eric Lippert, Erik Meijer
NMC is similar to the Newton-Raphson update in optimization where the second order gradient is used to automatically scale the step size in each dimension.