no code implementations • 14 Sep 2023 • Francesco Fabbri, Xianghang Liu, Jack R. McKenzie, Bartlomiej Twardowski, Tri Kurniawan Wijaya
Federated Learning (FL) has emerged as a key approach for distributed machine learning, enhancing online personalization while ensuring user data privacy.
no code implementations • 30 Aug 2022 • Xianghang Liu, Bartłomiej Twardowski, Tri Kurniawan Wijaya
In Federated Learning (FL) of click-through rate (CTR) prediction, users' data is not shared, in order to protect their privacy.
no code implementations • NeurIPS 2014 • Xianghang Liu, Justin Domke
Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful techniques to sample from almost arbitrary distributions.
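As a minimal illustration of the MCMC idea this entry refers to, here is a random-walk Metropolis sampler for a one-dimensional target known only up to a normalising constant. This is a generic textbook sketch, not the algorithm studied in the paper; the function names and step size are illustrative choices.

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=0.5, seed=0):
    """Sample from an unnormalised density exp(log_p) using a
    Gaussian random-walk proposal (illustrative sketch only)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(x_new) / p(x)).
        if math.log(rng.random() + 1e-300) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return samples

# Target: a standard normal, specified only up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
```

The chain's empirical mean should be close to the target's mean of 0, which is what "sampling from almost arbitrary distributions" buys: only pointwise evaluation of the unnormalised density is needed.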
no code implementations • NeurIPS 2013 • Justin Domke, Xianghang Liu
Inference in general Ising models is difficult, because their high treewidth makes tree-based algorithms intractable.
no code implementations • NeurIPS 2012 • Xianghang Liu, James Petterson, Tibério S. Caetano
Instead of relying on convex losses and regularisers, as in SVMs, logistic regression and boosting, or on non-convex but continuous formulations, such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but \emph{discrete} formulation: estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss.
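To illustrate what "finding a MAP configuration in a graphical model" means in the simplest possible terms, the sketch below brute-forces the maximising assignment of a tiny model with unary and pairwise potentials. This is a generic illustration of MAP inference over discrete variables, not the paper's estimation procedure or its surrogate losses; all potentials here are made-up values.

```python
import itertools

def map_configuration(unary, pairwise, edges):
    """Brute-force MAP in a tiny binary graphical model:
    return the assignment maximising the sum of unary and
    pairwise potentials. Illustrative sketch only."""
    n = len(unary)
    best, best_score = None, float("-inf")
    for x in itertools.product([0, 1], repeat=n):
        score = sum(unary[i][x[i]] for i in range(n))
        score += sum(pairwise[e][x[i]][x[j]] for e, (i, j) in enumerate(edges))
        if score > best_score:
            best, best_score = x, score
    return best

# Three binary variables on a chain, with pairwise potentials
# that reward agreement between neighbours.
unary = [[0.0, 1.0], [0.5, 0.0], [0.0, 0.2]]
edges = [(0, 1), (1, 2)]
pairwise = [[[1.0, 0.0], [0.0, 1.0]] for _ in edges]
result = map_configuration(unary, pairwise, edges)
print(result)
```

With these potentials the agreement terms dominate, so the maximiser sets all three variables to the value favoured by the strongest unary term.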