Search Results for author: Clayton Scott

Found 32 papers, 5 papers with code

Unified Binary and Multiclass Margin-Based Classification

no code implementations · 29 Nov 2023 · Yutong Wang, Clayton Scott

The notion of margin loss has been central to the development and analysis of algorithms for binary classification.

Binary Classification, Classification

Mixture Proportion Estimation Beyond Irreducibility

1 code implementation · 2 Jun 2023 · Yilun Zhu, Aaron Fjeldsted, Darren Holland, George Landon, Azaree Lintereur, Clayton Scott

The task of mixture proportion estimation (MPE) is to estimate the weight of a component distribution in a mixture, given observations from both the component and mixture.
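The setup can be made concrete with a toy example. This is an illustrative sketch only (not the paper's estimator, which goes beyond irreducibility): over a finite alphabet, when the background puts no mass on some point the component covers, the proportion is the largest weight of the component that can be subtracted from the mixture.

```python
import numpy as np

# Toy MPE over a finite alphabet: F = kappa*H + (1-kappa)*G. Under an
# irreducibility-style condition, kappa is recoverable as
#   kappa = min over {x : h(x) > 0} of f(x) / h(x).
def estimate_kappa(f, h):
    f, h = np.asarray(f, float), np.asarray(h, float)
    support = h > 0
    return float(np.min(f[support] / h[support]))

h = np.array([0.5, 0.3, 0.2, 0.0])   # component distribution H
g = np.array([0.0, 0.1, 0.3, 0.6])   # background distribution G
kappa = 0.4
f = kappa * h + (1 - kappa) * g      # observed mixture F
print(estimate_kappa(f, h))          # recovers 0.4
```

With sample-based (rather than exact) densities this ratio must be estimated, which is where the statistical difficulty of MPE lies.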

Label Embedding via Low-Coherence Matrices

no code implementations31 May 2023 Jianxin Zhang, Clayton Scott

Label embedding is a framework for multiclass classification problems where each label is represented by a distinct vector of some fixed dimension, and training involves matching model output to the vector representing the correct label.

Classification, Dimensionality Reduction +2
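A minimal sketch of the label-embedding framework (illustrative only, not the paper's low-coherence construction): each class gets a fixed code vector, training would regress model outputs onto codes, and prediction decodes by maximum inner product. Orthonormal codes are used here for simplicity; the low-coherence setting takes dimension d < K with nearly orthogonal codes.

```python
import numpy as np

# Assign each of K classes a fixed d-dimensional code vector.
rng = np.random.default_rng(0)
K, d = 10, 16
Q, _ = np.linalg.qr(rng.standard_normal((d, K)))  # (d, K), orthonormal columns
codes = Q.T                                       # row k = code for class k

def decode(output):
    """Map a model output in R^d to the class whose code is closest
    (by inner product)."""
    return int(np.argmax(codes @ output))

# Sanity check: a noiseless output equal to a class code decodes correctly.
assert all(decode(codes[k]) == k for k in range(K))
```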

Learning from Label Proportions by Learning with Label Noise

1 code implementation · 4 Mar 2022 · Jianxin Zhang, Yutong Wang, Clayton Scott

Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels.

Weakly Supervised Classification
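The LLP data layout can be sketched as follows, together with a proportion-matching objective. This is a common baseline formulation for illustration, not the label-noise reduction studied in the paper: only per-bag proportions are observed, so one natural loss compares each bag's average predicted positive probability to its observed proportion.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((12, 3))
y = rng.integers(0, 2, size=12)                 # instance labels, never observed
bags = [range(0, 4), range(4, 8), range(8, 12)]
props = np.array([y[list(b)].mean() for b in bags])  # the only supervision

def bag_loss(scores, bags, props):
    """Squared gap between predicted and observed bag proportions."""
    p = 1.0 / (1.0 + np.exp(-scores))           # per-instance sigmoid outputs
    return sum((p[list(b)].mean() - q) ** 2 for b, q in zip(bags, props))

# A constant score of 0 predicts proportion 0.5 in every bag.
loss_at_zero = bag_loss(np.zeros(12), bags, props)
```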

Supervised PCA: A Multiobjective Approach

no code implementations · 10 Nov 2020 · Alexander Ritchie, Laura Balzano, Daniel Kessler, Chandra S. Sripada, Clayton Scott

Methods for supervised principal component analysis (SPCA) aim to incorporate label information into principal component analysis (PCA), so that the extracted features are more useful for a prediction task of interest.
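A crude sketch of the multiobjective idea (a hypothetical convex-combination formulation for illustration, not the paper's algorithm): blend the unsupervised PCA objective, variance explained, with a supervised term, covariance of the projection with the label, and take the leading eigenvector of the blend.

```python
import numpy as np

def spca_direction(X, y, alpha):
    """Leading direction of a blend of the PCA objective and a
    label-covariance term; alpha in [0, 1] trades the two off."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    v = Xc.T @ yc                                    # label-covariance direction
    M = (1 - alpha) * (Xc.T @ Xc) + alpha * np.outer(v, v)
    eigvals, eigvecs = np.linalg.eigh(M)             # ascending eigenvalues
    return eigvecs[:, -1]                            # leading eigenvector

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)
w0 = spca_direction(X, y, 0.0)   # alpha = 0 reduces to ordinary PCA
w1 = spca_direction(X, y, 1.0)   # alpha = 1 uses only the supervised term
```

At alpha = 1 the blend is rank one, so the returned direction aligns (up to sign) with the label-covariance vector.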

Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations

1 code implementation · NeurIPS 2020 · Alexander Ritchie, Robert A. Vandermeulen, Clayton Scott

Recent research has established sufficient conditions for finite mixture models to be identifiable from grouped observations.

Learning from Label Proportions: A Mutual Contamination Framework

1 code implementation · NeurIPS 2020 · Clayton Scott, Jianxin Zhang

Learning from label proportions (LLP) is a weakly supervised setting for classification in which unlabeled training instances are grouped into bags, and each bag is annotated with the proportion of each class occurring in that bag.

Calibrated Surrogate Losses for Adversarially Robust Classification

no code implementations · 28 May 2020 · Han Bao, Clayton Scott, Masashi Sugiyama

Adversarially robust classification seeks a classifier that is insensitive to adversarial perturbations of test patterns.

Classification, General Classification +1

Learning from Multiple Corrupted Sources, with Application to Learning from Label Proportions

no code implementations · 10 Oct 2019 · Clayton Scott, Jianxin Zhang

We study binary classification in the setting where the learner is presented with multiple corrupted training samples, with possibly different sample sizes and degrees of corruption, and introduce an approach based on minimizing a weighted combination of corruption-corrected empirical risks.

Binary Classification

PAC Reinforcement Learning without Real-World Feedback

no code implementations · 23 Sep 2019 · Yuren Zhong, Aniket Anand Deshmukh, Clayton Scott

This work studies reinforcement learning in the Sim-to-Real setting, in which an agent is first trained on a number of simulators before being deployed in the real world, with the aim of decreasing the real-world sample complexity requirement.

reinforcement-learning, Reinforcement Learning (RL)

A Generalization Error Bound for Multi-class Domain Generalization

no code implementations · 24 May 2019 · Aniket Anand Deshmukh, Yunwen Lei, Srinagesh Sharma, Urun Dogan, James W. Cutler, Clayton Scott

Domain generalization is the problem of assigning labels to an unlabeled data set, given several similar data sets for which labels have been provided.

Classification, Domain Generalization +2

Simple Regret Minimization for Contextual Bandits

no code implementations · 17 Oct 2018 · Aniket Anand Deshmukh, Srinagesh Sharma, James W. Cutler, Mark Moldwin, Clayton Scott

Contextual bandits are a sub-class of MABs where, at every time step, the learner has access to side information that is predictive of the best arm.

Multi-Armed Bandits
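The interaction protocol can be sketched with a generic baseline (per-arm ridge regression with epsilon-greedy exploration; this is illustrative, not the paper's simple-regret algorithm). Each round: observe a context, pull one arm, see only that arm's noisy reward, update that arm's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, d, T, eps = 3, 2, 400, 0.2
theta = rng.standard_normal((n_arms, d))      # unknown reward parameters

A = [np.eye(d) for _ in range(n_arms)]        # per-arm ridge Gram matrices
b = [np.zeros(d) for _ in range(n_arms)]
pulls = np.zeros(n_arms, dtype=int)

for t in range(T):
    x = rng.standard_normal(d)                # side information (context)
    estimates = [np.linalg.solve(A[a], b[a]) @ x for a in range(n_arms)]
    if rng.random() < eps:
        a = int(rng.integers(n_arms))         # explore
    else:
        a = int(np.argmax(estimates))         # exploit
    r = theta[a] @ x + 0.1 * rng.standard_normal()  # only arm a's reward seen
    A[a] += np.outer(x, x)
    b[a] += r * x
    pulls[a] += 1
```

Simple-regret minimization, the paper's setting, would instead optimize the quality of a final recommended arm rather than the cumulative reward this loop implicitly targets.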

A Generalized Neyman-Pearson Criterion for Optimal Domain Adaptation

no code implementations · 3 Oct 2018 · Clayton Scott

In the problem of domain adaptation for binary classification, the learner is presented with labeled examples from a source domain, and must correctly classify unlabeled examples from a target domain, which may differ from the source.

Binary Classification, Domain Adaptation +1

Domain Generalization by Marginal Transfer Learning

2 code implementations · 21 Nov 2017 · Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, Clayton Scott

In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner.

Domain Generalization, General Classification +1

Dictionary-Free MRI PERK: Parameter Estimation via Regression with Kernels

no code implementations · 6 Oct 2017 · Gopal Nataraj, Jon-Fredrik Nielsen, Clayton Scott, Jeffrey A. Fessler

This paper introduces a fast, general method for dictionary-free parameter estimation in quantitative magnetic resonance imaging (QMRI) via regression with kernels (PERK).

regression

Nonparametric Preference Completion

no code implementations · 24 May 2017 · Julian Katz-Samuels, Clayton Scott

We consider the task of collaborative preference completion: given a pool of items, a pool of users and a partially observed item-user rating matrix, the goal is to recover the \emph{personalized ranking} of each user over all of the items.

Multi-Task Learning for Contextual Bandits

no code implementations · NeurIPS 2017 · Aniket Anand Deshmukh, Urun Dogan, Clayton Scott

Contextual bandits are a form of multi-armed bandit in which the agent has access to predictive side information (known as the context) for each arm at each time step, and have been used to model personalized news recommendation, ad placement, and other applications.

Multi-Armed Bandits, Multi-Task Learning +1

Consistent Kernel Density Estimation with Non-Vanishing Bandwidth

no code implementations · 24 May 2017 · Efrén Cruz Cortés, Clayton Scott

Consistency of the kernel density estimator requires that the kernel bandwidth tends to zero as the sample size grows.

Density Estimation
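The phenomenon the paper studies can be illustrated with a standard Gaussian KDE. With a fixed (non-vanishing) bandwidth h, the estimator converges not to the true density but to the true density convolved with the kernel; for N(0, 1) data and a Gaussian kernel, that limit is the N(0, 1 + h²) density. This sketch shows the fixed-h behavior, not the paper's consistency construction.

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate (1/n) * sum_i K_h(x - X_i)."""
    z = (x - data) / h
    return float(np.mean(np.exp(-0.5 * z**2)) / (h * np.sqrt(2 * np.pi)))

rng = np.random.default_rng(0)
data = rng.standard_normal(5000)     # samples from N(0, 1)
h = 0.5                              # deliberately non-vanishing bandwidth
# Limit of the fixed-h KDE at 0: the N(0, 1 + h^2) density at 0.
smoothed_at_0 = 1.0 / np.sqrt(2 * np.pi * (1 + h**2))
```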

Adaptive Questionnaires for Direct Identification of Optimal Product Design

no code implementations · 5 Jan 2017 · Max Yi Ren, Clayton Scott

In this work, we (1) demonstrate that accurate preference estimation is neither necessary nor sufficient for identifying the optimal design, (2) introduce a novel adaptive questionnaire that leverages knowledge about engineering feasibility and manufacturing costs to directly determine the optimal design, and (3) interpret product design in terms of a nonlinear segmentation of part-worth space, and use this interpretation to illuminate the intrinsic difficulty of optimal design in the presence of noisy questionnaire responses.

Marketing

Mixture Proportion Estimation via Kernel Embedding of Distributions

no code implementations · 8 Mar 2016 · Harish G. Ramaswamy, Clayton Scott, Ambuj Tewari

Mixture proportion estimation (MPE) is the problem of estimating the weight of a component distribution in a mixture, given samples from the mixture and component.

Anomaly Detection, Weakly-supervised Learning

A Mutual Contamination Analysis of Mixed Membership and Partial Label Models

no code implementations · 19 Feb 2016 · Julian Katz-Samuels, Clayton Scott

We examine the decontamination problem in two mutual contamination models that describe popular machine learning tasks: recovering the base distributions up to a permutation in a mixed membership model, and recovering the base distributions exactly in a partial label model for classification.

BIG-bench Machine Learning

Optimal change point detection in Gaussian processes

no code implementations · 3 Jun 2015 · Hossein Keshavarz, Clayton Scott, XuanLong Nguyen

By contrast, the standard CUSUM method, which does not account for the covariance structure, is shown to be asymptotically optimal only in the increasing domain.

Change Point Detection, Gaussian Processes +2
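The CUSUM baseline mentioned in the snippet is the textbook scan for a single mean shift at an unknown time; as the paper notes, it ignores covariance structure. The standardized statistic at split k is T(k) = |Σ_{i≤k} x_i − (k/n) Σ_i x_i| / √(k(n−k)/n), and the argmax over k estimates the change point.

```python
import numpy as np

def cusum_changepoint(x):
    """Return the estimated number of pre-change observations via the
    standardized CUSUM scan (i.i.d. model; no covariance correction)."""
    x = np.asarray(x, float)
    n = len(x)
    c = np.cumsum(x)
    k = np.arange(1, n)
    stat = np.abs(c[:-1] - k / n * c[-1]) / np.sqrt(k * (n - k) / n)
    return int(np.argmax(stat)) + 1

x = np.concatenate([np.zeros(50), np.ones(50)])  # noiseless mean shift at k = 50
print(cusum_changepoint(x))                      # -> 50
```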

Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

no code implementations · 21 Oct 2013 · Takanori Watanabe, Daniel Kessler, Clayton Scott, Michael Angstadt, Chandra Sripada

Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection.

Data Augmentation, Disease Prediction +1

Class Proportion Estimation with Application to Multiclass Anomaly Rejection

no code implementations · 21 Jun 2013 · Tyler Sanderson, Clayton Scott

The first problem studied is that of class proportion estimation, which is the problem of estimating the class proportions in an unlabeled testing data set given labeled examples of each class.

Domain Adaptation

Classification with Asymmetric Label Noise: Consistency and Maximal Denoising

no code implementations · 5 Mar 2013 · Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, Clayton Scott

For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions.

Classification, Denoising +1

Extensions of Generalized Binary Search to Group Identification and Exponential Costs

no code implementations · NeurIPS 2010 · Gowtham Bellala, Suresh Bhavnani, Clayton Scott

Generalized Binary Search (GBS) is a well-known greedy algorithm for identifying an unknown object while minimizing the number of "yes" or "no" questions posed about that object, and arises in problems such as active learning and active diagnosis.

Active Learning, Object
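The greedy core of GBS can be sketched in a few lines (the paper's extensions to group identification and exponential costs are not shown): maintain the version space of objects consistent with the answers so far, and repeatedly ask the question that splits it most evenly.

```python
def gbs(objects, answers, oracle):
    """objects: iterable of names; answers[q][o] in {0, 1}; oracle(q) -> 0 or 1.
    Returns (identified object, list of questions asked)."""
    alive = set(objects)
    remaining = list(answers)
    asked = []
    while len(alive) > 1 and remaining:
        # Pick the question whose yes/no split of the version space is
        # most balanced.
        q = min(remaining,
                key=lambda q: abs(sum(answers[q][o] for o in alive)
                                  - len(alive) / 2))
        a = oracle(q)
        alive = {o for o in alive if answers[q][o] == a}
        asked.append(q)
        remaining.remove(q)
    return alive.pop(), asked

# Four objects encoded by two binary questions; identifying "B" takes
# exactly two queries.
answers = {"q0": {"A": 0, "B": 0, "C": 1, "D": 1},
           "q1": {"A": 0, "B": 1, "C": 0, "D": 1}}
target, asked = gbs("ABCD", answers, lambda q: answers[q]["B"])
```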

Performance analysis for L_2 kernel classification

no code implementations · NeurIPS 2008 · Jooseuk Kim, Clayton Scott

We provide statistical performance guarantees for a recently introduced kernel classifier that optimizes the $L_2$ or integrated squared error (ISE) of a difference of densities.

Classification, Density Estimation +1
