Search Results for author: Xinhua Zhang

Found 34 papers, 5 papers with code

Fairness Risks for Group-conditionally Missing Demographics

no code implementations20 Feb 2024 Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang

Fairness-aware classification models have gained increasing attention in recent years as concerns grow over discrimination against certain demographic groups.

Fairness

Augmenting Offline Reinforcement Learning with State-only Interactions

no code implementations1 Feb 2024 Shangzhe Li, Xinhua Zhang

Batch offline data have been shown to be considerably beneficial for reinforcement learning.

D4RL Data Augmentation +3

Orthogonal Gromov-Wasserstein Discrepancy with Efficient Lower Bound

1 code implementation12 May 2022 Hongwei Jin, Zishun Yu, Xinhua Zhang

Comparing structured data from possibly different metric-measure spaces is a fundamental task in machine learning, with applications in, e.g., graph classification.

Graph Classification
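
As a rough illustration of the objective behind Gromov-Wasserstein comparison (not the paper's orthogonal variant or its lower bound), here is a minimal NumPy sketch; the function name and toy data are ours, not the paper's:

```python
import numpy as np

def gw_objective(C1, C2, T):
    """Squared-loss Gromov-Wasserstein objective for a fixed coupling T.

    C1: (n, n) pairwise distances in the first metric-measure space
    C2: (m, m) pairwise distances in the second
    T:  (n, m) coupling; its marginals play the roles of p and q
    """
    p, q = T.sum(axis=1), T.sum(axis=0)
    # Expand (a - b)^2 = a^2 - 2ab + b^2 under the double sum against T[i,j] T[k,l]
    const = (C1**2 @ p) @ p + (C2**2 @ q) @ q
    cross = np.sum(T * (C1 @ T @ C2.T))
    return const - 2.0 * cross

# Toy usage: two small point clouds and the independent coupling p q^T
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(3, 2)), rng.normal(size=(4, 2))
C1 = np.linalg.norm(X[:, None] - X[None], axis=-1)
C2 = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
T = np.outer(np.full(3, 1/3), np.full(4, 1/4))
print(gw_objective(C1, C2, T))
```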

Euclidean Invariant Recognition of 2D Shapes Using Histograms of Magnitudes of Local Fourier-Mellin Descriptors

no code implementations13 Mar 2022 Xinhua Zhang, Lance R. Williams

Because the magnitudes of inner products with its basis functions are invariant to rotation and scale change, the Fourier-Mellin transform has long been used as a component in Euclidean invariant 2D shape recognition systems.
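
For intuition, the classical global Fourier-Mellin magnitude descriptor that this line of work builds on can be sketched in a few lines of NumPy/SciPy; the function name and sampling resolutions are our own choices, and the paper's local descriptors and histogram pooling are not reproduced here:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fourier_mellin_magnitude(img, n_theta=64, n_logr=64):
    """Magnitude of the 2D FFT of a log-polar resampling of `img`.

    Rotation and scaling about the image center become (circular) shifts
    in the (theta, log r) plane, so the FFT magnitude is approximately
    invariant to them.
    """
    cy, cx = (np.asarray(img.shape, dtype=float) - 1) / 2
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    log_r = np.linspace(0, np.log(min(cy, cx)), n_logr)
    r = np.exp(log_r)
    ys = cy + r[:, None] * np.sin(theta[None, :])
    xs = cx + r[:, None] * np.cos(theta[None, :])
    logpolar = map_coordinates(img, [ys, xs], order=1)
    return np.abs(np.fft.fft2(logpolar))
```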

Similarity Equivariant Linear Transformation of Joint Orientation-Scale Space Representations

no code implementations13 Mar 2022 Xinhua Zhang, Lance R. Williams

Group convolution generalizes the concept to linear operations on functions of group elements, where the group elements represent more general geometric transformations and the operations commute with those transformations.
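
For reference, a standard way to write the group convolution the abstract describes (our notation, not taken from the paper):

```latex
% Group convolution of f with a filter \psi over a group G with Haar measure \mu:
(f \star \psi)(g) \;=\; \int_G f(h)\,\psi(h^{-1}g)\,d\mu(h),
\qquad
(L_u f) \star \psi \;=\; L_u (f \star \psi),
\;\;\text{where } (L_u f)(g) = f(u^{-1}g).
% The second identity is the commutation (equivariance) property; with
% G = (\mathbb{R}^2, +) the integral reduces to ordinary planar convolution.
```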

Implicit Task-Driven Probability Discrepancy Measure for Unsupervised Domain Adaptation

no code implementations NeurIPS 2021 Mao Li, Kaiqi Jiang, Xinhua Zhang

Probability discrepancy measures are a fundamental construct for numerous machine learning models, such as those used in weakly supervised learning and generative modeling.

Unsupervised Domain Adaptation Weakly-supervised Learning

Distributionally Robust Imitation Learning

no code implementations NeurIPS 2021 Mohammad Ali Bashiri, Brian Ziebart, Xinhua Zhang

We consider the imitation learning problem of learning a policy in a Markov Decision Process (MDP) setting where the reward function is not given, but demonstrations from experts are available.

Imitation Learning reinforcement-learning +2

Proximal Mapping for Deep Regularization

1 code implementation NeurIPS 2020 Mao Li, Yingyi Ma, Xinhua Zhang

Underpinning the success of deep learning are effective regularizations that allow a variety of priors in data to be modeled.
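
For readers unfamiliar with the term in the title, the standard proximal mapping is the following (our notation; the paper's deep-regularization construction is more specific than this generic definition):

```latex
\operatorname{prox}_f(v) \;=\; \arg\min_x\; f(x) + \tfrac{1}{2}\,\|x - v\|_2^2,
\qquad
\operatorname{prox}_{\lambda\|\cdot\|_1}(v)_i \;=\; \operatorname{sign}(v_i)\,\max\big(|v_i| - \lambda,\, 0\big).
% The second formula is the classic example: the prox of the l1 norm
% is coordinatewise soft-thresholding.
```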

Generalised Lipschitz Regularisation Equals Distributional Robustness

no code implementations11 Feb 2020 Zac Cranko, Zhan Shi, Xinhua Zhang, Richard Nock, Simon Kornblith

The problem of adversarial examples has highlighted the need for a theory of regularisation that is general enough to apply to exotic function classes, such as universal approximators.

Certifying Distributional Robustness using Lipschitz Regularisation

no code implementations25 Sep 2019 Zac Cranko, Zhan Shi, Xinhua Zhang, Simon Kornblith, Richard Nock

Distributionally robust risk (DRR) minimisation has arisen as a flexible and effective framework for machine learning.
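
One well-known instance of the DRR-Lipschitz link, stated here for a 1-Wasserstein ambiguity set (our choice of illustration, not necessarily the paper's exact setting):

```latex
% By Kantorovich duality, for any Lipschitz loss h and radius \rho:
\sup_{Q\,:\,W_1(Q,P)\le\rho} \mathbb{E}_Q[h]
\;\le\; \mathbb{E}_P[h] \;+\; \rho\,\mathrm{Lip}(h),
% so controlling the Lipschitz constant of h certifies the
% distributionally robust risk over the whole Wasserstein ball.
```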

Distributionally Robust Graphical Models

no code implementations NeurIPS 2018 Rizal Fathony, Ashkan Rezaei, Mohammad Ali Bashiri, Xinhua Zhang, Brian D. Ziebart

Our approach enjoys both the flexibility of incorporating customized loss metrics into its design and the statistical guarantee of Fisher consistency.

Structured Prediction

Efficient and Consistent Adversarial Bipartite Matching

no code implementations ICML 2018 Rizal Fathony, Sima Behpour, Xinhua Zhang, Brian Ziebart

Many important structured prediction problems, including learning to rank items, correspondence-based natural language processing, and multi-object tracking, can be formulated as weighted bipartite matching optimizations.

Computational Efficiency Learning-To-Rank +2
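
As a concrete illustration of the underlying optimization (the classical, non-adversarial version, not the paper's method), a weighted bipartite matching can be solved with SciPy's Hungarian-algorithm routine; the weight matrix below is a toy example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Weighted bipartite matching: maximize the total weight of a one-to-one
# assignment between 4 "items" (rows) and 4 "positions" (columns).
weights = np.array([[4.0, 1.0, 3.0, 2.0],
                    [2.0, 0.0, 5.0, 3.0],
                    [3.0, 2.0, 2.0, 1.0],
                    [1.0, 3.0, 4.0, 2.0]])
rows, cols = linear_sum_assignment(weights, maximize=True)
print(list(zip(rows, cols)), weights[rows, cols].sum())
```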

Exp-Concavity of Proper Composite Losses

no code implementations20 May 2018 Parameswaran Kamalaruban, Robert C. Williamson, Xinhua Zhang

In special cases like the Aggregating Algorithm (Vovk, 1995) with mixable losses and the Weighted Average Algorithm (Kivinen and Warmuth, 1999) with exp-concave losses, it is possible to achieve $O(1)$ regret bounds.

Computational Efficiency
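
The definition referenced in the title, with a standard example (our wording, not the paper's):

```latex
% A loss \ell is \eta-exp-concave if the map
z \;\mapsto\; \exp\!\big(-\eta\,\ell(z)\big)
% is concave. Example: log loss \ell(p) = -\log p on predictions p \in (0,1]
% is 1-exp-concave, since \exp(-\ell(p)) = p is linear, hence concave, in p.
```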

Decomposition-Invariant Conditional Gradient for General Polytopes with Line Search

no code implementations NeurIPS 2017 Mohammad Ali Bashiri, Xinhua Zhang

Frank-Wolfe (FW) algorithms with linear convergence rates have recently achieved great efficiency in many applications.
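
For orientation only, here is a minimal sketch of vanilla Frank-Wolfe with exact line search on the probability simplex; the paper's decomposition-invariant variant for general polytopes is not reproduced:

```python
import numpy as np

def fw_simplex(b, n_iters=200):
    """Minimize f(x) = 0.5 * ||x - b||^2 over the probability simplex
    using Frank-Wolfe; line search is exact because f is quadratic."""
    n = len(b)
    x = np.full(n, 1.0 / n)                    # feasible starting point
    for _ in range(n_iters):
        grad = x - b
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0               # LMO: best vertex of the simplex
        d = s - x
        step = -(grad @ d) / (d @ d + 1e-12)   # exact line search for a quadratic
        x = x + np.clip(step, 0.0, 1.0) * d
    return x

print(fw_simplex(np.array([0.2, -0.1, 0.9])))  # ~ projection of b onto the simplex
```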

Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction

no code implementations NeurIPS 2017 Zhan Shi, Xinhua Zhang, Yao-Liang Yu

Adversarial machines, in which a learner competes against an adversary, have attracted much renewed interest in machine learning.
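
For reference, the Bregman divergence in the title is the standard object (our notation):

```latex
D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x - y \rangle.
% \varphi = \tfrac{1}{2}\|\cdot\|_2^2 recovers squared Euclidean distance;
% \varphi(x) = \sum_i x_i \log x_i (negative entropy) recovers the KL divergence.
```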

Robust Bayesian Max-Margin Clustering

no code implementations NeurIPS 2014 Changyou Chen, Jun Zhu, Xinhua Zhang

We present Bayesian max-margin clustering (BMC), a general and robust framework that incorporates the max-margin criterion into Bayesian clustering models, as well as two concrete models of BMC to demonstrate its flexibility and effectiveness in dealing with different clustering tasks.

Clustering

Convex Deep Learning via Normalized Kernels

no code implementations NeurIPS 2014 Özlem Aslan, Xinhua Zhang, Dale Schuurmans

Deep learning has been a long-standing pursuit in machine learning, one that until recently was hampered by unreliable training methods before the discovery of improved heuristics for embedded-layer training.

Deep Learning

Generalized Conditional Gradient for Sparse Estimation

no code implementations17 Oct 2014 Yao-Liang Yu, Xinhua Zhang, Dale Schuurmans

Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design.

Dictionary Learning Matrix Completion +1

Distributed Stochastic Optimization of the Regularized Risk

no code implementations17 Jun 2014 Shin Matsushima, Hyokun Yun, Xinhua Zhang, S. V. N. Vishwanathan

Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task.

Stochastic Optimization
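
The regularized risk in question has the standard form (our notation):

```latex
\min_w \;\; \lambda\,\Omega(w) \;+\; \frac{1}{n}\sum_{i=1}^n \ell\big(w;\, x_i, y_i\big),
% which stochastic optimization attacks with updates based on one
% (or a few) randomly sampled terms of the sum per step.
```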

Convex Two-Layer Modeling

no code implementations NeurIPS 2013 Özlem Aslan, Hao Cheng, Xinhua Zhang, Dale Schuurmans

Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction.


Polar Operators for Structured Sparse Estimation

no code implementations NeurIPS 2013 Xinhua Zhang, Yao-Liang Yu, Dale Schuurmans

Structured sparse estimation has become an important technique in many areas of data analysis.

Learning with Invariance via Linear Functionals on Reproducing Kernel Hilbert Space

no code implementations NeurIPS 2013 Xinhua Zhang, Wee Sun Lee, Yee Whye Teh

For the representer theorem to hold, the linear functionals are required to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHSs and invariances.
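
The boundedness condition the abstract refers to, in standard RKHS notation (ours, not quoted from the paper):

```latex
|Lf| \;\le\; C\,\|f\|_{\mathcal{H}} \;\;\forall f \in \mathcal{H}
\quad\Longrightarrow\quad
\exists\, g_L \in \mathcal{H}:\;\; Lf = \langle f,\, g_L \rangle_{\mathcal{H}},
% by the Riesz representation theorem. Example: point evaluation
% Lf = f(x) is bounded in an RKHS with kernel k, with g_L = k(x, \cdot).
```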

Convex Relaxations of Bregman Divergence Clustering

no code implementations26 Sep 2013 Hao Cheng, Xinhua Zhang, Dale Schuurmans

Although many convex relaxations of clustering have been proposed in the past decade, current formulations remain restricted to spherical Gaussian or discriminative models and are susceptible to imbalanced clusters.

Clustering

Convex Multi-view Subspace Learning

no code implementations NeurIPS 2012 Martha White, Xinhua Zhang, Dale Schuurmans, Yao-Liang Yu

Subspace learning seeks a low-dimensional representation of data that enables accurate reconstruction.
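
As a point of reference, the classical single-view case reduces to PCA; below is a minimal NumPy sketch with toy data and our own variable names, not the paper's convex multi-view formulation:

```python
import numpy as np

# Classical single-view subspace learning: best rank-k reconstruction via SVD.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))           # 100 samples, 20 features
Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T                        # k-dimensional representation
X_hat = Z @ Vt[:k]                       # reconstruction from the subspace
print(np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc))  # relative error
```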

Lower Bounds on Rate of Convergence of Cutting Plane Methods

no code implementations NeurIPS 2010 Xinhua Zhang, Ankan Saha, S. V. N. Vishwanathan

By exploiting the structure of the objective function we can devise an algorithm that converges in $O(1/\sqrt{\epsilon})$ iterations.
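
Reading off the rate (a standard conversion, our restatement): if accuracy after $T$ iterations scales as $\epsilon = O(1/T^2)$, then

```latex
\epsilon(T) = O\big(1/T^2\big)
\;\Longleftrightarrow\;
T(\epsilon) = O\big(1/\sqrt{\epsilon}\big),
% i.e., reaching accuracy \epsilon requires only O(1/\sqrt{\epsilon}) iterations.
```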

Kernel Measures of Independence for non-iid Data

no code implementations NeurIPS 2008 Xinhua Zhang, Le Song, Arthur Gretton, Alex J. Smola

Many machine learning algorithms can be formulated in the framework of statistical independence, such as via the Hilbert-Schmidt Independence Criterion.

Clustering
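
A minimal NumPy sketch of the standard (iid, biased) HSIC estimator with RBF kernels, for orientation; the paper's extension to non-iid data via structured kernels is not shown, and the normalization convention varies across references:

```python
import numpy as np

def hsic_biased(X, Y, sigma=1.0):
    """Biased HSIC estimate, (1/n^2) * tr(K H L H), with RBF kernels."""
    n = len(X)
    def rbf(A):
        sq = np.sum((A[:, None] - A[None]) ** 2, axis=-1)
        return np.exp(-sq / (2 * sigma ** 2))
    K, L = rbf(X), rbf(Y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

# Dependent pair (x, x^2) scores higher than an independent pair.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
print(hsic_biased(x, x ** 2), hsic_biased(x, rng.normal(size=(200, 1))))
```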
