no code implementations • 20 Feb 2024 • Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang
Fairness-aware classification models have gained increasing attention in recent years as concerns grow over discrimination against certain demographic groups.
no code implementations • 1 Feb 2024 • Shangzhe Li, Xinhua Zhang
Batch offline data have been shown to be considerably beneficial for reinforcement learning.
no code implementations • 2 Jun 2022 • Bradley T. Wolfe, Michael J. Falato, Xinhua Zhang, Nga T. T. Nguyen-Fotiadis, J. P. Sauppe, P. M. Kozlowski, P. A. Keiter, R. E. Reinovsky, S. A. Batha, Zhehui Wang
We utilize half a dozen different convolutional neural networks to produce different 3D representations of ICF implosions from the experimental data.
1 code implementation • 12 May 2022 • Hongwei Jin, Zishun Yu, Xinhua Zhang
Comparing structured data from possibly different metric-measure spaces is a fundamental task in machine learning, with applications in, e.g., graph classification.
no code implementations • 13 Mar 2022 • Xinhua Zhang, Lance R. Williams
Because the magnitudes of inner products with its basis functions are invariant to rotation and scale change, the Fourier-Mellin transform has long been used as a component in Euclidean-invariant 2D shape recognition systems.
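To make that invariance concrete, here is a minimal Fourier-Mellin-style sketch (illustrative only, not the paper's method; names such as `fourier_mellin_descriptor` are made up): the FFT magnitude discards translation, a log-polar resampling turns rotation and scaling of the image into shifts of the spectrum, and a second FFT magnitude discards those shifts.

```python
# Minimal sketch of a Fourier-Mellin style rotation/scale-insensitive
# descriptor (illustrative; accuracy depends on sampling and boundaries).
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(mag, n_r=64, n_theta=64):
    """Resample a centered magnitude spectrum onto a log-polar grid."""
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rs = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))      # log radii
    ts = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)  # angles
    R, T = np.meshgrid(rs, ts, indexing="ij")
    ys, xs = cy + R * np.sin(T), cx + R * np.cos(T)
    return map_coordinates(mag, [ys, xs], order=1, mode="constant")

def fourier_mellin_descriptor(img):
    # |FFT| removes translation; in log-polar coordinates rotation and
    # scaling become shifts, which a second |FFT| removes, so the descriptor
    # changes little under rotation/scaling of the input (up to interpolation
    # and boundary effects).
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    return np.abs(np.fft.fft2(log_polar(mag)))

img = np.random.default_rng(0).random((128, 128))
print(fourier_mellin_descriptor(img).shape)   # (64, 64) descriptor
```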
no code implementations • 13 Mar 2022 • Xinhua Zhang, Lance R. Williams
Group convolution generalizes the concept to linear operations on functions of group elements, where the group elements represent more general geometric transformations and the operations commute with those transformations.
no code implementations • NeurIPS 2021 • Mao Li, Kaiqi Jiang, Xinhua Zhang
Probability discrepancy measures are fundamental constructs for numerous machine learning models such as weakly supervised learning and generative modeling.
no code implementations • NeurIPS 2021 • Mohammad Ali Bashiri, Brian Ziebart, Xinhua Zhang
We consider the imitation learning problem of learning a policy in a Markov Decision Process (MDP) setting where the reward function is not given, but demonstrations from experts are available.
1 code implementation • NeurIPS 2020 • Hongwei Jin, Zhan Shi, Venkata Jaya Shankar Ashish Peruri, Xinhua Zhang
Graph convolution networks (GCNs) have become effective models for graph classification.
1 code implementation • NeurIPS 2020 • Mao Li, Yingyi Ma, Xinhua Zhang
Underpinning the success of deep learning are effective regularizations that allow a variety of priors in the data to be modeled.
no code implementations • ICML 2020 • Yingyi Ma, Vignesh Ganapathiraman, Yao-Liang Yu, Xinhua Zhang
Invariance (defined in a general sense) has been one of the most effective priors for representation learning.
no code implementations • 11 Feb 2020 • Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith
The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators.
no code implementations • 25 Sep 2019 • Zac Cranko, Zhan Shi, Xinhua Zhang, Simon Kornblith, Richard Nock
Distributionally robust risk (DRR) minimisation has arisen as a flexible and effective framework for machine learning.
2 code implementations • 18 Dec 2018 • Rizal Fathony, Kaiser Asif, Anqi Liu, Mohammad Ali Bashiri, Wei Xing, Sima Behpour, Xinhua Zhang, Brian D. Ziebart
We propose a robust adversarial prediction framework for general multiclass classification.
no code implementations • NeurIPS 2018 • Rizal Fathony, Ashkan Rezaei, Mohammad Ali Bashiri, Xinhua Zhang, Brian D. Ziebart
Our approach enjoys both the flexibility of incorporating customized loss metrics into its design and the statistical guarantee of Fisher consistency.
no code implementations • ICML 2018 • Rizal Fathony, Sima Behpour, Xinhua Zhang, Brian Ziebart
Many important structured prediction problems, including learning to rank items, correspondence-based natural language processing, and multi-object tracking, can be formulated as weighted bipartite matching optimizations.
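For concreteness, a weighted bipartite matching of the kind mentioned above can be solved exactly with the Hungarian algorithm; a minimal sketch with hypothetical data follows (illustrative only, not the paper's adversarial formulation).

```python
# Minimal sketch: maximum-weight bipartite matching via the Hungarian
# algorithm (scipy's linear_sum_assignment minimizes, so scores are negated).
import numpy as np
from scipy.optimize import linear_sum_assignment

score = np.array([[0.9, 0.1, 0.3],    # hypothetical affinities of
                  [0.2, 0.8, 0.4],    # items (rows) to slots (columns)
                  [0.5, 0.3, 0.7]])

rows, cols = linear_sum_assignment(-score)
print(rows.tolist(), cols.tolist())   # [0, 1, 2] [0, 1, 2]
print(score[rows, cols].sum())        # 2.4, the weight of the matching
```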
no code implementations • ICML 2018 • Vignesh Ganapathiraman, Zhan Shi, Xinhua Zhang, Yao-Liang Yu
Latent prediction models, exemplified by multi-layer networks, employ hidden variables that automate abstract feature discovery.
no code implementations • 20 May 2018 • Parameswaran Kamalaruban, Robert C. Williamson, Xinhua Zhang
In special cases, like the Aggregating Algorithm (Vovk, 1995) with mixable losses and the Weighted Average Algorithm (Kivinen & Warmuth, 1999) with exp-concave losses, it is possible to achieve $O(1)$ regret bounds.
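As a concrete illustration of the exp-concave case, here is a minimal sketch of the Weighted Average Algorithm for prediction with expert advice (my own example, not from the paper): with the squared loss on $[0, 1]$, which is exp-concave for $\eta = 1/2$, the regret against the best expert is $O(\log N)$, i.e. constant in the horizon $T$.

```python
# Minimal sketch: Weighted Average Algorithm (exponentially weighted experts)
# with the squared loss on [0, 1], which is 1/2-exp-concave.
import numpy as np

def weighted_average_forecaster(expert_preds, outcomes, eta=0.5):
    """expert_preds: (T, N) predictions in [0, 1]; outcomes: (T,) in [0, 1]."""
    cum_loss = np.zeros(expert_preds.shape[1])
    learner_loss = 0.0
    for preds_t, y_t in zip(expert_preds, outcomes):
        w = np.exp(-eta * cum_loss)
        w /= w.sum()
        pred = w @ preds_t                       # weighted average prediction
        learner_loss += (pred - y_t) ** 2
        cum_loss += (preds_t - y_t) ** 2
    return learner_loss - cum_loss.min()         # regret vs. the best expert

rng = np.random.default_rng(0)
preds = rng.random((1000, 5))
y = np.clip(preds[:, 2] + 0.05 * rng.standard_normal(1000), 0.0, 1.0)
print(weighted_average_forecaster(preds, y))     # stays O(log N), not O(T)
```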
no code implementations • NeurIPS 2017 • Mohammad Ali Bashiri, Xinhua Zhang
Frank-Wolfe (FW) algorithms with linear convergence rates have recently achieved great efficiency in many applications.
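For readers unfamiliar with the method, a minimal sketch of vanilla Frank-Wolfe over the probability simplex follows (the generic algorithm, not the paper's linearly convergent variant): each iteration solves a linear problem over the feasible set, so iterates stay feasible and sparse.

```python
# Minimal sketch: vanilla Frank-Wolfe (conditional gradient) on the simplex.
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=500):
    x = x0.copy()
    for k in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization over the simplex
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# usage: Euclidean projection of b onto the simplex, f(x) = 0.5 * ||x - b||^2
b = np.array([0.7, -0.1, 0.5])
x = frank_wolfe_simplex(lambda x: x - b, np.ones(3) / 3)
print(x, x.sum())                        # approx [0.6, 0.0, 0.4], sums to 1
```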
no code implementations • NeurIPS 2017 • Zhan Shi, Xinhua Zhang, Yao-Liang Yu
Adversarial machines, where a learner competes against an adversary, have regained much recent interest in machine learning.
no code implementations • NeurIPS 2016 • Vignesh Ganapathiraman, Xinhua Zhang, Yao-Liang Yu, Junfeng Wen
Unsupervised learning of structured predictors has been a long-standing pursuit in machine learning.
1 code implementation • 16 Apr 2016 • Parameswaran Raman, Sriram Srinivasan, Shin Matsushima, Xinhua Zhang, Hyokun Yun, S. V. N. Vishwanathan
Scaling multinomial logistic regression to datasets with a very large number of data points and classes is challenging.
no code implementations • NeurIPS 2014 • Changyou Chen, Jun Zhu, Xinhua Zhang
We present max-margin Bayesian clustering (BMC), a general and robust framework that incorporates the max-margin criterion into Bayesian clustering models, as well as two concrete models of BMC to demonstrate its flexibility and effectiveness in dealing with different clustering tasks.
no code implementations • NeurIPS 2014 • Özlem Aslan, Xinhua Zhang, Dale Schuurmans
Deep learning has been a long-standing pursuit in machine learning, one that until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded layer training.
no code implementations • 17 Oct 2014 • Yao-Liang Yu, Xinhua Zhang, Dale Schuurmans
Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design.
no code implementations • 17 Jun 2014 • Shin Matsushima, Hyokun Yun, Xinhua Zhang, S. V. N. Vishwanathan
Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task.
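As a reference point only (not the distributed method the paper develops), here is a minimal sketch of stochastic gradient descent on one such regularized risk, the L2-regularized logistic loss; the function name `sgd_logistic` and the data are hypothetical.

```python
# Minimal sketch: SGD on the L2-regularized logistic loss
#   (1/n) * sum_i log(1 + exp(-y_i w.x_i)) + (lam/2) * ||w||^2.
import numpy as np

def sgd_logistic(X, y, lam=0.1, lr=0.1, epochs=20, seed=0):
    """X: (n, d) features; y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            # gradient of the i-th loss term plus the regularizer
            grad = -y[i] * X[i] / (1.0 + np.exp(y[i] * (X[i] @ w))) + lam * w
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]))
print(sgd_logistic(X, y))
```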
no code implementations • NeurIPS 2013 • Özlem Aslan, Hao Cheng, Xinhua Zhang, Dale Schuurmans
Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction.
no code implementations • NeurIPS 2013 • Xinhua Zhang, Yao-Liang Yu, Dale Schuurmans
Structured sparse estimation has become an important technique in many areas of data analysis.
no code implementations • NeurIPS 2013 • Xinhua Zhang, Wee Sun Lee, Yee Whye Teh
For the representer theorem to hold, the linear functionals are required to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHS and invariances.
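For context, a textbook form of the generalized representer theorem alluded to here (standard statement in my own notation, not necessarily the paper's exact conditions): if each $L_i$ is a bounded linear functional on the RKHS $\mathcal{H}$ with Riesz representer $z_i$, and $\Omega$ is nondecreasing, then whenever the regularized risk has a minimizer it has one in the span of the representers:

$$
\min_{f \in \mathcal{H}} \; \ell\bigl(L_1 f, \ldots, L_n f\bigr) + \Omega\bigl(\|f\|_{\mathcal{H}}\bigr)
\quad \text{admits a minimizer} \quad
f^\star = \sum_{i=1}^{n} \alpha_i z_i,
\qquad \text{where } \langle z_i, f\rangle_{\mathcal{H}} = L_i f \ \ \forall f \in \mathcal{H}.
$$

With the evaluation functionals $L_i f = f(x_i)$, the representers are the kernel sections $z_i = k(x_i, \cdot)$ and the classical theorem is recovered.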
no code implementations • 26 Sep 2013 • Hao Cheng, Xinhua Zhang, Dale Schuurmans
Although many convex relaxations of clustering have been proposed in the past decade, current formulations remain restricted to spherical Gaussian or discriminative models and are susceptible to imbalanced clusters.
no code implementations • NeurIPS 2012 • Martha White, Xinhua Zhang, Dale Schuurmans, Yao-Liang Yu
Subspace learning seeks a low-dimensional representation of data that enables accurate reconstruction.
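As the classical instance of this idea (PCA, included only for illustration and not the paper's model), a minimal sketch of fitting a k-dimensional subspace by SVD and measuring reconstruction error:

```python
# Minimal sketch: PCA subspace fit by SVD, evaluated by reconstruction error.
import numpy as np

def pca_reconstruct(X, k):
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:k]                       # orthonormal basis of the k-dim subspace
    return mu + (Xc @ B.T) @ B       # project onto the subspace, then map back

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 10))  # rank <= 3
print(np.linalg.norm(X - pca_reconstruct(X, k=3)))  # ~0: data lie in 3 dims
```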
no code implementations • NeurIPS 2012 • Xinhua Zhang, Dale Schuurmans, Yao-Liang Yu
Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as the trace norm.
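To make the composite structure concrete, here is a minimal sketch of proximal gradient descent for a squared loss plus a trace-norm penalty (a generic baseline, not the boosting-style algorithm the paper proposes); the proximal step is soft-thresholding of singular values.

```python
# Minimal sketch: proximal gradient for min_W 0.5*||W - Y||_F^2 + lam*||W||_*.
import numpy as np

def prox_trace_norm(W, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values by tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(grad_f, W0, lam, lr=0.1, n_iters=300):
    W = W0.copy()
    for _ in range(n_iters):
        W = prox_trace_norm(W - lr * grad_f(W), lr * lam)
    return W

# usage: denoise a noisy low-rank matrix
rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))   # rank 2
Y = Y + 0.1 * rng.standard_normal(Y.shape)
W_hat = proximal_gradient(lambda W: W - Y, np.zeros_like(Y), lam=2.0)
print(np.linalg.matrix_rank(W_hat))   # typically 2: noise directions removed
```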
no code implementations • NeurIPS 2010 • Xinhua Zhang, Ankan Saha, S. V. N. Vishwanathan
By exploiting the structure of the objective function, we can devise an algorithm that converges in $O(1/\sqrt{\epsilon})$ iterations.
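The rate quoted here is the accelerated first-order rate for smooth convex objectives; below is a minimal sketch of Nesterov's accelerated gradient method (the generic scheme, not the paper's specific algorithm), whose $O(L/k^2)$ error bound yields an $\epsilon$-accurate solution in $O(1/\sqrt{\epsilon})$ iterations.

```python
# Minimal sketch: Nesterov's accelerated gradient method for a smooth convex
# objective with L-Lipschitz gradient (FISTA-style momentum schedule).
import numpy as np

def nesterov(grad_f, x0, L, n_iters=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        x_next = y - grad_f(y) / L                       # gradient step at y
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + (t - 1.0) / t_next * (x_next - x)   # momentum
        x, t = x_next, t_next
    return x

# usage: minimize f(x) = 0.5 * x'Ax - b'x, i.e. solve Ax = b
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10)); A = A.T @ A + np.eye(10)
b = rng.standard_normal(10)
x = nesterov(lambda x: A @ x - b, np.zeros(10), L=np.linalg.eigvalsh(A).max())
print(np.linalg.norm(A @ x - b))   # small residual
```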
no code implementations • NeurIPS 2008 • Xinhua Zhang, Le Song, Arthur Gretton, Alex J. Smola
Many machine learning algorithms can be formulated in the framework of statistical independence, such as the Hilbert-Schmidt Independence Criterion.
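For readers unfamiliar with the criterion, a minimal sketch of the biased empirical HSIC estimator with Gaussian kernels (the standard estimator, not the paper's non-iid extension):

```python
# Minimal sketch: biased empirical HSIC = trace(K H L H) / (n - 1)^2 with
# Gaussian kernels; values near zero indicate (approximate) independence.
import numpy as np

def gaussian_gram(X, sigma=1.0):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gaussian_gram(X, sigma), gaussian_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1))
print(hsic(X, rng.standard_normal((200, 1))))              # ~0: independent
print(hsic(X, X + 0.1 * rng.standard_normal((200, 1))))    # larger: dependent
```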