1 code implementation • 6 Feb 2023 • Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, Zachary C. Lipton
Despite the emergence of principled methods for domain adaptation under label shift, their sensitivity to shifts in class-conditional distributions remains precariously underexplored.
no code implementations • 5 Jun 2021 • Qin Ding, Yue Kang, Yi-Wei Liu, Thomas C. M. Lee, Cho-Jui Hsieh, James Sharpnack
To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter, and further generalize it to the Syndicated Bandits framework, which can learn multiple hyperparameters dynamically in a contextual bandit environment.
no code implementations • 5 Jun 2021 • Qin Ding, Cho-Jui Hsieh, James Sharpnack
We provide theoretical guarantees for our proposed algorithm and show experimentally that it improves robustness against various popular attacks.
no code implementations • 7 Jun 2020 • Qin Ding, Cho-Jui Hsieh, James Sharpnack
A natural way to resolve this problem is to apply online stochastic gradient descent (SGD) so that the per-step time and memory complexity can be reduced to constant with respect to $t$, but a contextual bandit policy based on online SGD updates that balances exploration and exploitation has remained elusive.
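One simple way to see why constant per-step cost is attractive is an epsilon-greedy linear bandit with online SGD updates. This is a toy sketch of the general idea only (names and the epsilon-greedy exploration scheme are ours, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_bandit_step(theta, contexts, reward_fn, eps=0.1, lr=0.05):
    """One round of an epsilon-greedy linear bandit with an online SGD
    update: O(d) time and memory per step, independent of the horizon t.
    (Toy illustration; not the exploration scheme studied in the paper.)"""
    if rng.random() < eps:
        arm = int(rng.integers(len(contexts)))   # explore uniformly
    else:
        arm = int(np.argmax(contexts @ theta))   # exploit current estimate
    x = contexts[arm]
    r = reward_fn(arm)
    grad = (x @ theta - r) * x                   # gradient of squared loss
    return theta - lr * grad, arm, r

# Two arms; arm 1 always pays 1, arm 0 pays 0.
theta = np.zeros(2)
contexts = np.array([[1.0, 0.0], [0.0, 1.0]])
for _ in range(500):
    theta, _, _ = sgd_bandit_step(theta, contexts, lambda a: 1.0 if a == 1 else 0.0)
```

After a few hundred rounds the estimate for the rewarding arm approaches its true mean reward of 1.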
no code implementations • 13 Feb 2020 • Qin Ding, Cho-Jui Hsieh, James Sharpnack
Classic contextual bandit algorithms for linear models, such as LinUCB, assume that the reward distribution for an arm is modeled by a stationary linear regression.
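The stationary linear-reward assumption shows up directly in LinUCB's arm score, which adds an exploration bonus from the inverse design matrix to the estimated mean reward. A minimal sketch of that score (variable names ours):

```python
import numpy as np

def linucb_scores(theta_hat, A_inv, contexts, alpha=1.0):
    """Upper-confidence scores for LinUCB.

    theta_hat: (d,) ridge-regression estimate of the reward parameter
    A_inv:     (d, d) inverse of the regularized design matrix
    contexts:  (k, d) one feature vector per arm
    Score for arm k is x_k^T theta_hat + alpha * sqrt(x_k^T A^{-1} x_k).
    """
    means = contexts @ theta_hat
    bonuses = alpha * np.sqrt(
        np.einsum("ki,ij,kj->k", contexts, A_inv, contexts)
    )
    return means + bonuses

# With A = I and theta_hat = 0, the score is purely the exploration
# bonus, i.e. each context's Euclidean norm.
scores = linucb_scores(np.zeros(2), np.eye(2),
                       np.array([[1.0, 0.0], [0.0, 2.0]]), alpha=1.0)
```

When the true reward model drifts over time, `theta_hat` chases a moving target, which is the failure mode this line of work addresses.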
no code implementations • 21 Nov 2019 • Weitang Liu, Lifeng Wei, James Sharpnack, John D. Owens
In this paper, we propose a novel architecture that iteratively discovers and segments out the objects of a scene based on the image reconstruction quality.
2 code implementations • 25 Sep 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with.
Ranked #1 on Recommendation Systems on MovieLens 1M (nDCG@10 metric)
3 code implementations • 15 Aug 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with.
no code implementations • 29 May 2019 • Liwei Wu, Hsiang-Fu Yu, Nikhil Rao, James Sharpnack, Cho-Jui Hsieh
In this paper, we propose using Graph DNA, a novel Deep Neighborhood Aware graph encoding algorithm, for exploiting deeper neighborhood information.
3 code implementations • NeurIPS 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
We find that when used along with widely-used regularization methods such as weight decay and dropout, our proposed SSE can further reduce overfitting, which often leads to more favorable generalization results.
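The simplest SSE variant perturbs embedding lookups stochastically during training. A minimal sketch, assuming a plain NumPy embedding table (names and the uniform-swap rule are our illustration of the simplest case):

```python
import numpy as np

rng = np.random.default_rng(1)

def sse_lookup(embedding_table, indices, p=0.1):
    """Stochastic Shared Embeddings, simplest variant (sketch): with
    probability p, each index is swapped for a uniformly random one
    before the lookup, regularizing the embedding layer during training."""
    indices = np.asarray(indices).copy()
    swap = rng.random(indices.shape) < p
    indices[swap] = rng.integers(embedding_table.shape[0], size=int(swap.sum()))
    return embedding_table[indices]

# With p = 0 the lookup is the ordinary deterministic one.
E = np.arange(12.0).reshape(4, 3)
out = sse_lookup(E, [0, 3, 1], p=0.0)
```

At evaluation time one would set p = 0, analogous to disabling dropout.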
no code implementations • 6 Feb 2019 • James Sharpnack
Biased sampling and missing data complicate statistical problems ranging from causal inference to reinforcement learning.
no code implementations • 25 May 2018 • Shitong Wei, Oscar Hernan Madrid-Padilla, James Sharpnack
We study an extension of total variation denoising, from images to Cartesian power graphs, and its applications to estimating non-parametric network models.
1 code implementation • ICML 2018 • Liwei Wu, Cho-Jui Hsieh, James Sharpnack
In this paper, we propose a listwise approach for constructing user-specific rankings in recommendation systems in a collaborative fashion.
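A listwise approach scores a user's whole slate of items jointly rather than item-by-item. As a generic illustration of the listwise idea (a ListNet-style softmax cross-entropy; not the paper's exact objective), one can compare the distribution induced by predicted scores against the one induced by observed relevance:

```python
import numpy as np

def listwise_loss(scores, relevance):
    """Generic listwise ranking loss (sketch): cross-entropy between the
    softmax of predicted scores and the softmax of observed relevance,
    so the whole ranked list is penalized jointly."""
    def softmax(x):
        z = np.exp(x - x.max())  # shift for numerical stability
        return z / z.sum()
    p_pred = softmax(scores)
    p_true = softmax(relevance)
    return float(-np.sum(p_true * np.log(p_pred)))

# A prediction that orders items correctly incurs a lower loss than
# one that reverses the order.
perfect = listwise_loss(np.array([2.0, 1.0, 0.0]), np.array([2.0, 1.0, 0.0]))
worse = listwise_loss(np.array([0.0, 1.0, 2.0]), np.array([2.0, 1.0, 0.0]))
```

The minimum of this loss is the entropy of the target distribution, attained when predicted and observed orderings induce the same softmax.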
1 code implementation • 23 Feb 2018 • Kirill Paramonov, Dmitry Shemetov, James Sharpnack
Exploratory analysis over network data is often limited by the ability to efficiently calculate graph statistics, which can provide a model-free understanding of the macroscopic properties of a network.
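As a toy example of such a macroscopic, model-free statistic, the triangle count of a graph can be read off its adjacency matrix (our illustration; the paper treats subgraph-pattern statistics far more generally):

```python
import numpy as np

def triangle_count(A):
    """Number of triangles in a simple undirected graph: trace(A^3) / 6,
    since each triangle contributes one closed 3-walk per starting vertex
    and per direction of traversal."""
    return int(np.trace(A @ A @ A)) // 6

# Triangle on vertices {0, 1, 2} plus a pendant edge 2-3: one triangle.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
n_triangles = triangle_count(A)
```

The cubic matrix product already hints at why naive computation of such statistics does not scale, motivating the efficient estimators studied here.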
no code implementations • 28 Oct 2014 • Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani
We introduce a family of adaptive estimators on graphs, based on penalizing the $\ell_1$ norm of discrete graph differences.
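In its simplest (order-zero) form this is the graph fused-lasso objective; writing $D$ for the edge-incidence (difference) operator of the graph, the estimator reads (a sketch of the base case; the family described here also covers higher-order graph differences):

```latex
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^n}
  \tfrac{1}{2}\|y - \beta\|_2^2 + \lambda \|D\beta\|_1,
\qquad (D\beta)_{(i,j)} = \beta_i - \beta_j \ \text{for each edge } (i,j).
```

The $\ell_1$ penalty on edge differences encourages the estimate to be piecewise constant over connected regions of the graph, with $\lambda$ trading fidelity against smoothness.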
no code implementations • NeurIPS 2013 • James Sharpnack, Akshay Krishnamurthy, Aarti Singh
The detection of anomalous activity in graphs is a statistical problem that arises in many applications, such as network surveillance, disease outbreak detection, and activity monitoring in social networks.
no code implementations • 1 May 2013 • Akshay Krishnamurthy, James Sharpnack, Aarti Singh
We study the localization of a cluster of activated vertices in a graph, from adaptively designed compressive measurements.
no code implementations • NeurIPS 2010 • James Sharpnack, Aarti Singh
We consider the problem of identifying an activation pattern in a complex, large-scale network that is embedded in very noisy measurements.