no code implementations • 21 Oct 2022 • Haoyu Jiang, Jason Xu
Stochastic versions of proximal methods have gained much attention in statistics and machine learning.
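As a hedged sketch of the general idea (an illustrative stochastic proximal point iteration, not this paper's algorithm), each step applies the proximal operator of a single randomly sampled loss component; for a least-squares term the prox is available in closed form:

```python
import numpy as np

def prox_least_squares(a, b, x, step):
    """Closed-form prox of f_i(u) = 0.5*(a @ u - b)**2 at the point x.

    Solves argmin_u 0.5*(a @ u - b)**2 + (1/(2*step))*||u - x||^2;
    the minimizer lies on the line x + span(a), giving a scalar solve.
    """
    r = a @ x - b
    return x - (step * r / (1.0 + step * (a @ a))) * a

def stochastic_proximal_point(A, b, steps=1000, step=0.5, seed=0):
    """Minimize 0.5*||A x - b||^2 one randomly sampled row at a time."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        i = rng.integers(A.shape[0])
        x = prox_least_squares(A[i], b[i], x, step)
    return x
```

Unlike a stochastic gradient step, the prox step is stable for a wide range of step sizes, which is part of why proximal variants have drawn attention.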
1 code implementation • 22 Jun 2022 • Adithya Vellal, Saptarshi Chakraborty, Jason Xu
Recent progress in center-based clustering algorithms combats poor local minima by implicit annealing, using a family of generalized means.
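One member of this family, power $k$-means, replaces the minimum over centers with a power mean of the squared distances and anneals the exponent $s$ toward $-\infty$ (a sketch of the objective following the generalized-means idea, not necessarily this paper's exact formulation):
\[
\min_{\theta_1,\dots,\theta_k} \sum_{i=1}^n M_s\bigl(\|x_i-\theta_1\|^2,\dots,\|x_i-\theta_k\|^2\bigr),
\qquad
M_s(y_1,\dots,y_k) = \Bigl(\frac{1}{k}\sum_{j=1}^k y_j^{\,s}\Bigr)^{1/s}.
\]
Since $M_s(y) \to \min_j y_j$ as $s \to -\infty$, the annealed objective approaches the usual $k$-means criterion while smoothing away many poor local minima at moderate $s$.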
no code implementations • 31 Mar 2022 • Brosnan Yuen, Yifeng Bie, Duncan Cairns, Geoffrey Harper, Jason Xu, Charles Chang, Xiaodai Dong, Tao Lu
Previous contact tracing systems required users to perform many manual actions, such as installing smartphone applications, joining wireless networks, or carrying custom user devices.
no code implementations • 16 Nov 2021 • Timothy C Stutz, Janet S. Sinsheimer, Mary Sehl, Jason Xu
In our model, a single-hit mutation carries a slight proliferative advantage over wild-type stem cells.
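One common way to encode such an advantage (a hedged sketch; the paper's exact rates and state space may differ) is a two-type birth-death process with a small selection coefficient $s > 0$:
\[
\text{wild-type: divide at rate } \beta,\ \text{die at rate } \delta;
\qquad
\text{mutant: divide at rate } \beta(1+s),\ \text{die at rate } \delta,
\quad 0 < s \ll 1.
\]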
1 code implementation • NeurIPS 2021 • Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
Recent advances in center-based clustering continue to improve upon the drawbacks of Lloyd's celebrated $k$-means algorithm over $60$ years after its introduction.
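For reference, Lloyd's algorithm alternates nearest-center assignment with re-averaging; a minimal NumPy sketch of this classical baseline that such works improve upon:

```python
import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    """Classical Lloyd's algorithm: assign each point to its nearest
    center, then recompute each center as the mean of its points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # squared distances from every point to every center, shape (n, k)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):  # converged to a fixed point
            break
        centers = new
    return centers, labels
```

Its sensitivity to initialization and to outliers is precisely the drawback that robust and annealed variants target.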
1 code implementation • 24 Feb 2021 • Mark He, Dylan Lu, Jason Xu, Rose Mary Xavier
We call this model for multilayer weighted networks the Stochastic Block (with) Ambient Noise Model (SBANM) and develop an associated community detection algorithm.
no code implementations • 12 Nov 2020 • Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
We show the method implicitly performs annealing in kernel feature space while retaining efficient, closed-form updates, and we rigorously characterize its convergence properties both from computational and statistical points of view.
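The key device behind such kernel methods is that squared distances to cluster means in feature space reduce to Gram-matrix evaluations, so the feature map is never formed explicitly. A minimal sketch of that computation (illustrative, not the paper's full algorithm):

```python
import numpy as np

def kernel_distances(K, labels, k):
    """Squared feature-space distances ||phi(x_i) - c_j||^2 from the
    Gram matrix K, via K_ii - 2*mean_{l in C_j} K_il + mean_{l,m in C_j} K_lm.
    Assumes every cluster is non-empty."""
    n = K.shape[0]
    d2 = np.empty((n, k))
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        cross = K[:, idx].mean(axis=1)       # mean_l K(x_i, x_l) over C_j
        within = K[np.ix_(idx, idx)].mean()  # mean_{l,m} K(x_l, x_m) over C_j
        d2[:, j] = np.diag(K) - 2.0 * cross + within
    return d2
```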
1 code implementation • NeurIPS 2020 • Zhiyue Zhang, Kenneth Lange, Jason Xu
In this paper, we propose a novel framework for sparse k-means clustering that is intuitive, simple to implement, and competitive with state-of-the-art algorithms.
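A sketch in the spirit of feature-ranking approaches to sparse clustering (illustrative, and not necessarily this paper's exact update): score each feature by its between-cluster sum of squares and retain only the top $m$, then re-cluster on the selected coordinates.

```python
import numpy as np

def rank_features(X, labels, m):
    """Score each feature by between-cluster sum of squares; keep top m."""
    overall = X.mean(axis=0)
    score = np.zeros(X.shape[1])
    for j in np.unique(labels):
        Xj = X[labels == j]
        score += len(Xj) * (Xj.mean(axis=0) - overall) ** 2
    return np.argsort(score)[::-1][:m]  # indices of the m most informative features
```

Alternating this ranking step with standard Lloyd updates yields a simple sparse clustering loop.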
1 code implementation • 10 Jan 2020 • Saptarshi Chakraborty, Debolina Paul, Swagatam Das, Jason Xu
Despite its well-known shortcomings, $k$-means remains one of the most widely used approaches to data clustering.
no code implementations • 29 Mar 2018 • Adam Gustafson, Matthew Hirn, Kitty Mohammed, Hariharan Narayanan, Jason Xu
Recently, the following smooth function approximation problem was proposed: given a finite set $E \subset \mathbb{R}^d$ and a function $f: E \rightarrow \mathbb{R}$, interpolate the given information with a function $\widehat{f} \in \dot{C}^{1, 1}(\mathbb{R}^d)$ (the class of first-order differentiable functions with Lipschitz gradients) such that $\widehat{f}(a) = f(a)$ for all $a \in E$, and the value of $\mathrm{Lip}(\nabla \widehat{f})$ is minimal.
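Here $\mathrm{Lip}(\nabla \widehat{f})$ denotes the Lipschitz seminorm of the gradient:
\[
\mathrm{Lip}(\nabla \widehat{f}) \;=\; \sup_{x \neq y} \frac{\|\nabla \widehat{f}(x) - \nabla \widehat{f}(y)\|}{\|x - y\|},
\]
so the problem asks for the interpolant whose gradient varies as slowly as possible.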
no code implementations • 5 Jan 2018 • Jiazhuo Wang, Jason Xu, Xuejun Wang
Deep learning has achieved impressive results on many problems.
no code implementations • 14 Nov 2017 • Alistair Letcher, Jelena Trišović, Collin Cademartori, Xi Chen, Jason Xu
Automatic conflict detection has grown in relevance with the advent of body-worn technology, but existing metrics such as turn-taking and overlap are poor indicators of conflict in police-public interactions.
no code implementations • NeurIPS 2017 • Jason Xu, Eric C. Chi, Kenneth Lange
Estimation in generalized linear models (GLMs) is complicated by the presence of constraints.
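As a hedged illustration of the difficulty (a generic projected gradient scheme, not this paper's method), even a simple nonnegativity constraint on logistic-regression coefficients forces a projection inside every update:

```python
import numpy as np

def nonneg_logistic(X, y, steps=500, lr=0.1):
    """Logistic regression with coefficients constrained to be nonnegative:
    gradient descent on the negative log-likelihood, then projection
    onto {beta : beta >= 0} at every step."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        grad = X.T @ (p - y) / len(y)             # gradient of the NLL
        beta = np.maximum(beta - lr * grad, 0.0)  # project onto beta >= 0
    return beta
```

For more complicated constraint sets the projection itself may be expensive or unavailable, which motivates alternative algorithmic frameworks.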
no code implementations • 16 Dec 2016 • Jason Xu, Eric C. Chi, Meng Yang, Kenneth Lange
Furthermore, we show that the Euclidean norm appearing in the proximity function of the non-linear split feasibility problem can be replaced by arbitrary Bregman divergences.
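Recall that a Bregman divergence generated by a differentiable convex function $\varphi$ takes the form
\[
D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla \varphi(y),\, x - y\rangle,
\]
and the choice $\varphi(x) = \tfrac{1}{2}\|x\|^2$ recovers the squared Euclidean distance $\tfrac{1}{2}\|x - y\|^2$, so the Euclidean case is one instance of the generalization.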
no code implementations • NeurIPS 2014 • Nicholas J. Foti, Jason Xu, Dillon Laird, Emily B. Fox
Variational inference algorithms have proven successful for Bayesian analysis in large data settings, with recent advances using stochastic variational inference (SVI).
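SVI replaces the full-data coordinate update of a global variational parameter $\lambda$ with a noisy natural-gradient step computed from a sampled mini-batch:
\[
\lambda_{t+1} \;=\; (1 - \rho_t)\,\lambda_t + \rho_t\,\hat{\lambda}_t,
\]
where $\hat{\lambda}_t$ is the intermediate estimate implied by the mini-batch and $\rho_t$ is a step size satisfying the usual Robbins-Monro conditions.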