no code implementations • ICML 2020 • Michael Zhu, Chang Liu, Jun Zhu
Particle-based Variational Inference methods (ParVIs), like Stein Variational Gradient Descent, are nonparametric variational inference methods that optimize a set of particles to best approximate a target distribution.
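For intuition, here is a minimal numpy sketch of an SVGD-style particle update (RBF kernel with a fixed bandwidth, toy Gaussian target); the particle count, step size, and bandwidth are illustrative choices, not settings from the paper.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix and its gradient w.r.t. the first argument."""
    diff = X[:, None, :] - X[None, :, :]        # (n, n, d)
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    grad_K = -diff / h ** 2 * K[:, :, None]     # grad_{x_j} k(x_j, x_i)
    return K, grad_K

def svgd_step(X, grad_log_p, step=0.1, h=1.0):
    """One SVGD update: attraction along the score plus kernel repulsion."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    phi = (K.T @ grad_log_p(X) + grad_K.sum(axis=0)) / n
    return X + step * phi

# Toy target: standard normal, whose score function is -x.
rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, scale=0.5, size=(50, 1))
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
print(X.mean(), X.std())   # mean should drift toward 0, spread toward roughly 1
```

The repulsive term `grad_K` keeps the particles from collapsing onto the mode, which is what distinguishes particle-based variational methods from running plain gradient ascent on log p for each particle.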
no code implementations • LREC 2022 • Zhaoxin Luo, Michael Zhu
In this paper, we propose a novel approach called the latent indicator layer to identify and learn implicit hierarchical information (e.g., phrases), and further develop an EM algorithm to handle the latent indicator layer in training.
no code implementations • 21 Aug 2021 • Amir Karami, Spring B. Clark, Anderson Mackenzie, Dorathea Lee, Michael Zhu, Hannah R. Boyajieff, Bailey Goldschmidt
Social media is commonly used by the public during election campaigns to express their opinions regarding different issues.
no code implementations • 29 Jul 2021 • Amir Karami, Michael Zhu, Bailey Goldschmidt, Hannah R. Boyajieff, Mahdi M. Najafabadi
Our findings show that negative sentiment regarding the COVID-19 vaccine declined between November 2020 and February 2021.
no code implementations • 4 Jun 2021 • Zhaoxin Luo, Michael Zhu
Hierarchical structures exist in both linguistics and Natural Language Processing (NLP) tasks.
no code implementations • 20 Jul 2020 • Yang Yang, Ke Deng, Michael Zhu
Hyperparameters play a critical role in the performance of many machine learning methods.
no code implementations • 24 Apr 2020 • Michael Zhu, Kevin Murphy, Rico Jonschkowski
Resampling is a key component of sample-based recursive state estimation in particle filters.
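As a concrete illustration of the resampling step (a standard systematic resampler, not the specific method studied in the paper), here is a short numpy sketch; the example weights are made up.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: duplicate high-weight particles, drop low-weight ones."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one uniform offset, evenly spaced points
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)   # ancestor index for each new particle

# Toy usage: five particles, one of them dominant.
w = np.array([0.05, 0.05, 0.7, 0.1, 0.1])
print(systematic_resample(w, np.random.default_rng(0)))   # index 2 is duplicated several times
```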
1 code implementation • NeurIPS 2019 • Michael Zhu
For MCMC methods like Metropolis-Hastings, tuning the proposal distribution is important in practice for effective sampling from the target distribution π.
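To see why proposal tuning matters, here is a plain random-walk Metropolis-Hastings loop (generic, not the adaptation scheme proposed in the paper); the target and proposal scales are toy choices.

```python
import numpy as np

def random_walk_mh(log_pi, x0, n_steps, scale, rng):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    x = np.asarray(x0, float)
    log_p = log_pi(x)
    samples, accepted = [], 0
    for _ in range(n_steps):
        x_new = x + scale * rng.standard_normal(x.shape)
        log_p_new = log_pi(x_new)
        if np.log(rng.random()) < log_p_new - log_p:   # symmetric proposal: no Hastings correction
            x, log_p, accepted = x_new, log_p_new, accepted + 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_steps

# Standard-normal target: a tiny scale accepts almost everything but mixes slowly,
# while a very large scale rejects most moves.
rng = np.random.default_rng(1)
log_pi = lambda x: -0.5 * np.sum(x ** 2)
for scale in (0.1, 2.5, 25.0):
    _, acc = random_walk_mh(log_pi, np.zeros(1), 5000, scale, rng)
    print(f"proposal scale {scale}: acceptance rate {acc:.2f}")
```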
no code implementations • NeurIPS 2017 • Chao Pan, Michael Zhu
The additive model is one of the most widely used models for high-dimensional nonparametric regression analysis.
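For context, the additive model writes the regression function as a sum of one-dimensional components, y ≈ α + Σ_j f_j(x_j). Below is a small backfitting sketch with a kernel smoother; this is illustrative only (real fitters typically use penalized splines), and all function names and defaults here are my own.

```python
import numpy as np

def backfit_additive(X, y, n_iters=20, bandwidth=0.3):
    """Backfitting for y ≈ alpha + sum_j f_j(x_j) with a Nadaraya-Watson smoother."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((n, p))                                  # component values at the data points
    for _ in range(n_iters):
        for j in range(p):
            resid = y - alpha - f.sum(axis=1) + f[:, j]   # partial residual for component j
            w = np.exp(-0.5 * ((X[:, j][:, None] - X[:, j][None, :]) / bandwidth) ** 2)
            f[:, j] = (w @ resid) / w.sum(axis=1)
            f[:, j] -= f[:, j].mean()                     # center each component for identifiability
    return alpha, f

# Toy check: two additive components plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
alpha, f = backfit_additive(X, y)
print(np.corrcoef(np.sin(np.pi * X[:, 0]), f[:, 0])[0, 1])   # should be close to 1
```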
3 code implementations • ICLR 2018 • Michael Zhu, Suyog Gupta
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model.
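A minimal sketch of magnitude-based pruning, plus a cubic sparsity ramp of the kind used in gradual pruning schedules (function names and defaults are illustrative, not the paper's released code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that `sparsity` fraction are zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def gradual_sparsity(step, begin, end, initial=0.0, final=0.9):
    """Cubic ramp from `initial` to `final` sparsity between `begin` and `end` steps."""
    t = np.clip((step - begin) / max(end - begin, 1), 0.0, 1.0)
    return final + (initial - final) * (1.0 - t) ** 3

# Usage sketch: prune a random weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W_pruned, mask = magnitude_prune(W, 0.5)
print((W_pruned == 0).mean())   # ~0.5
```

In a gradual schedule, the target sparsity from `gradual_sparsity` is raised over many training steps and the mask recomputed at each pruning step, so pruning is interleaved with continued training rather than applied in one shot.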
2 code implementations • 19 Dec 2012 • Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, Michael Zhu
Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora.
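As a quick illustration of that use case (standard scikit-learn variational LDA on a toy corpus, not the provable anchor-word algorithm developed in the paper):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the team won the football match",
    "the senate passed the budget bill",
    "the striker scored a late goal",
    "lawmakers debated the new tax bill",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)            # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)   # fit a 2-topic model
print(lda.transform(X).round(2))   # per-document topic mixtures: a low-dimensional representation
```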