Search Results for author: Matthew Nokleby

Found 13 papers, 0 papers with code

Information-Theoretic Bayes Risk Lower Bounds for Realizable Models

no code implementations • 8 Nov 2021 • Matthew Nokleby, Ahmad Beirami

For models that are (roughly) lower Lipschitz in their parameters, we bound the rate-distortion function from below, whereas for VC classes, the mutual information is bounded above by $d_\mathrm{vc}\log(n)$.
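In the abstract's notation, the two regimes combine into a sandwich on the mutual information (the symbols here are illustrative, not necessarily the paper's: $\theta$ is the model parameter, $Z^n$ the $n$ training samples, $R(D)$ the rate-distortion function at target Bayes risk $D$):

```latex
% For a learner achieving Bayes risk D, the rate-distortion argument gives
% the first inequality; the VC-class result stated above gives the second.
R(D) \;\le\; I(\theta; Z^n) \;\le\; d_\mathrm{vc}\log(n)
```

Inverting the outer inequality in $D$ then yields a lower bound on the achievable Bayes risk.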

Scaling-up Distributed Processing of Data Streams for Machine Learning

no code implementations • 18 May 2020 • Matthew Nokleby, Haroon Raja, Waheed U. Bajwa

This paper reviews recently developed methods that focus on large-scale distributed stochastic optimization in the compute- and bandwidth-limited regime, with an emphasis on convergence analysis that explicitly accounts for the mismatch between computation, communication and streaming rates.

BIG-bench Machine Learning, Stochastic Optimization

Learning Furniture Compatibility with Graph Neural Networks

no code implementations • 15 Apr 2020 • Luisa F. Polania, Mauricio Flores, Yiran Li, Matthew Nokleby

We present two GNN models, both of which comprise a deep CNN that extracts a feature representation for each image, a gated recurrent unit (GRU) network that models interactions between the furniture items in a set, and an aggregation function that calculates the compatibility score.
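A minimal PyTorch sketch of that pipeline (the backbone choice, dimensions, and mean aggregation are assumptions for illustration, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CompatibilityNet(nn.Module):
    """CNN features -> GRU over the item set -> aggregate -> compatibility score."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        cnn = models.resnet18(weights=None)                    # deep CNN feature extractor
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # drop the classifier head
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)  # interactions between items
        self.score = nn.Linear(hidden_dim, 1)                  # aggregation -> score

    def forward(self, images):                                 # images: (batch, set_size, 3, H, W)
        b, s = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).flatten(1)      # (b*s, feat_dim)
        hidden, _ = self.gru(feats.view(b, s, -1))             # (b, s, hidden_dim)
        return torch.sigmoid(self.score(hidden.mean(dim=1)))   # mean-aggregate the set

net = CompatibilityNet()
print(net(torch.randn(2, 4, 3, 224, 224)).shape)               # torch.Size([2, 1])
```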

Optimizing Taxi Carpool Policies via Reinforcement Learning and Spatio-Temporal Mining

no code implementations • 11 Nov 2018 • Ishan Jindal, Zhiwei Qin, Xue-wen Chen, Matthew Nokleby, Jieping Ye

In this paper, we develop a reinforcement learning (RL) based system to learn an effective policy for carpooling that maximizes transportation efficiency so that fewer cars are required to fulfill the given amount of trip demand.

Reinforcement Learning (RL) +1
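For flavor, a toy tabular Q-learning loop over a (zone, time-of-day) state space; the grid, reward, and dynamics below are generic stand-ins, not the paper's dispatch system:

```python
import numpy as np

rng = np.random.default_rng(0)
n_zones, n_steps, n_actions = 10, 48, 5      # toy (zone, time-of-day) states, dispatch actions
Q = np.zeros((n_zones, n_steps, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(zone, t, action):
    """Toy environment: reward stands in for trips served per car; dynamics are random."""
    reward = rng.poisson(1 + action % 3)
    return rng.integers(n_zones), (t + 1) % n_steps, reward

zone, t = 0, 0
for _ in range(10_000):
    a = rng.integers(n_actions) if rng.random() < eps else Q[zone, t].argmax()
    nz, nt, r = step(zone, t, a)
    Q[zone, t, a] += alpha * (r + gamma * Q[nz, nt].max() - Q[zone, t, a])  # TD update
    zone, t = nz, nt
```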

Information Bottleneck Methods for Distributed Learning

no code implementations • 26 Oct 2018 • Parinaz Farajiparvar, Ahmad Beirami, Matthew Nokleby

We consider this problem for unsupervised learning on both batch and sequential data.
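For context, the classical information-bottleneck objective these methods build on (standard form; the paper's distributed setting additionally constrains the rate between nodes):

```latex
% Compress X into a representation T that stays informative about Y;
% \beta trades compression against relevance.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```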

Tensor Matched Kronecker-Structured Subspace Detection for Missing Information

no code implementations • 25 Oct 2018 • Ishan Jindal, Matthew Nokleby

We consider the problem of detecting whether a tensor signal with many missing entries lies within a given low-dimensional Kronecker-structured (KS) subspace.
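A minimal NumPy sketch of a matched-subspace test under these assumptions (KS subspace given as a Kronecker product of factor bases, statistic = normalized residual over the observed entries; this is a generic detector, not necessarily the paper's statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((20, 3)))[0]   # factor bases of the KS subspace
B = np.linalg.qr(rng.standard_normal((15, 2)))[0]
U = np.kron(A, B)                                   # 300-dim ambient, 6-dim KS subspace

x = U @ rng.standard_normal(U.shape[1])             # signal lying in the subspace
mask = rng.random(x.size) < 0.5                     # roughly half the entries observed
Uo, xo = U[mask], x[mask]                           # restrict to observed entries

# Residual energy after projecting onto the restricted subspace; small => in-subspace.
proj = Uo @ np.linalg.lstsq(Uo, xo, rcond=None)[0]
stat = np.sum((xo - proj) ** 2) / np.sum(xo ** 2)
print(f"normalized residual: {stat:.2e}")           # near 0 for in-subspace signals
```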

A Unified Neural Network Approach for Estimating Travel Time and Distance for a Taxi Trip

no code implementations • 12 Oct 2017 • Ishan Jindal, Tony Qin, Xue-wen Chen, Matthew Nokleby, Jieping Ye

In building intelligent transportation systems such as taxi or rideshare services, accurate prediction of travel time and distance is crucial for customer experience and resource management.

Feature Engineering, Management +1
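A minimal multi-task sketch of the "unified" idea: a shared trunk with separate travel-time and distance regression heads (the input features and sizes below are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TripNet(nn.Module):
    """Shared trip representation with joint time and distance heads."""
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.time_head = nn.Linear(hidden, 1)      # travel time (e.g. minutes)
        self.dist_head = nn.Linear(hidden, 1)      # travel distance (e.g. km)

    def forward(self, x):                          # x: pickup/dropoff coords, time of day, ...
        h = self.trunk(x)
        return self.time_head(h), self.dist_head(h)

net = TripNet()
t_hat, d_hat = net(torch.randn(32, 8))
loss = nn.functional.mse_loss(t_hat, torch.rand(32, 1)) \
     + nn.functional.mse_loss(d_hat, torch.rand(32, 1))   # joint training objective
```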

Learning Deep Networks from Noisy Labels with Dropout Regularization

no code implementations • 9 May 2017 • Ishan Jindal, Matthew Nokleby, Xue-wen Chen

Large datasets often have unreliable labels, such as those obtained from Amazon's Mechanical Turk or social media platforms, and classifiers trained on mislabeled datasets often exhibit poor performance.
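One plausible reading of the title's approach, sketched below: append a noise-adaptation layer after the classifier's softmax, regularize it with dropout during training, and discard it at test time (the layer placement and details here are assumptions, not the paper's exact method):

```python
import torch
import torch.nn as nn

class NoisyLabelWrapper(nn.Module):
    """Base classifier plus a dropout-regularized noise layer that maps
    true-class probabilities to noisy-label probabilities (train time only)."""
    def __init__(self, base: nn.Module, n_classes: int, p_drop=0.5):
        super().__init__()
        self.base = base
        self.noise = nn.Linear(n_classes, n_classes, bias=False)
        nn.init.eye_(self.noise.weight)            # start from "no label noise"
        self.drop = nn.Dropout(p_drop)             # one plausible placement of the dropout

    def forward(self, x):
        probs = self.base(x).softmax(dim=1)        # predicted true-label distribution
        if self.training:
            return self.drop(self.noise(probs))    # model the corruption on noisy labels
        return probs                               # test time: noise layer is discarded

clf = NoisyLabelWrapper(nn.Linear(20, 5), n_classes=5)
out = clf(torch.randn(4, 20))
```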

Classification and Representation via Separable Subspaces: Performance Limits and Algorithms

no code implementations • 7 May 2017 • Ishan Jindal, Matthew Nokleby

We study the classification performance of Kronecker-structured models in two asymptotic regimes and develop an algorithm for separable, fast, and compact K-S dictionary learning that exploits the structure in multidimensional signals for better classification and representation.

Classification, Dictionary Learning +1
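As a flavor of why separable (Kronecker) structure yields fast, compact dictionaries: the nearest Kronecker-product approximation of a full dictionary reduces to a rank-1 SVD of a rearranged matrix (the Van Loan-Pitsianis construction, shown here as a generic building block rather than the paper's learning algorithm):

```python
import numpy as np

def nearest_kron(D, m1, n1, m2, n2):
    """Best (A, B) with D ~ np.kron(A, B), via rank-1 SVD of a rearrangement."""
    R = np.stack([D[i*m2:(i+1)*m2, j*n2:(j+1)*n2].ravel()
                  for i in range(m1) for j in range(n1)])   # (m1*n1, m2*n2)
    u, s, vt = np.linalg.svd(R, full_matrices=False)
    A = (np.sqrt(s[0]) * u[:, 0]).reshape(m1, n1)
    B = (np.sqrt(s[0]) * vt[0]).reshape(m2, n2)
    return A, B

rng = np.random.default_rng(0)
A0, B0 = rng.standard_normal((4, 6)), rng.standard_normal((3, 5))
A, B = nearest_kron(np.kron(A0, B0), 4, 6, 3, 5)
print(np.allclose(np.kron(A, B), np.kron(A0, B0)))          # True: exact recovery
```

Storing the factors costs $m_1 n_1 + m_2 n_2$ parameters instead of $m_1 m_2 n_1 n_2$, which is the compactness the abstract refers to.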

Stochastic Optimization from Distributed, Streaming Data in Rate-limited Networks

no code implementations • 25 Apr 2017 • Matthew Nokleby, Waheed U. Bajwa

Motivated by machine learning applications in networks of sensors, internet-of-things (IoT) devices, and autonomous agents, we propose techniques for distributed stochastic convex learning from high-rate data streams.

Stochastic Optimization
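A toy serial simulation of that regime: each node drains its stream into a local mini-batch between updates, and the rate-limited network averages models only occasionally (all rates and sizes below are illustrative, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, batch, avg_every, lr = 4, 10, 32, 5, 0.05
w_true = rng.standard_normal(dim)
w = np.zeros((n_nodes, dim))                       # one model copy per node

for t in range(1, 201):
    for i in range(n_nodes):                       # each node drains its stream into a batch
        X = rng.standard_normal((batch, dim))
        y = X @ w_true + 0.1 * rng.standard_normal(batch)
        grad = X.T @ (X @ w[i] - y) / batch        # least-squares stochastic gradient
        w[i] -= lr * grad
    if t % avg_every == 0:                         # rate-limited network: average occasionally
        w[:] = w.mean(axis=0)

print(np.linalg.norm(w.mean(axis=0) - w_true))     # should be small
```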

Rate-Distortion Bounds on Bayes Risk in Supervised Learning

no code implementations • 8 May 2016 • Matthew Nokleby, Ahmad Beirami, Robert Calderbank

We provide lower and upper bounds on the rate-distortion function of a maximum a priori classifier, using $L_p$ loss as the distortion measure, in terms of the differential entropy of the posterior distribution and a quantity called the interpolation dimension, which characterizes the complexity of the parametric distribution family.
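For reference, the object being bounded is the standard rate-distortion function with $L_p$ loss as distortion (the paper's actual bounds, via differential entropy and interpolation dimension, are not reproduced here):

```latex
% Rate-distortion function of the parameter \theta under L_p loss:
R(D) \;=\; \min_{p(\hat\theta \mid \theta)\,:\;\mathbb{E}\,\|\theta - \hat\theta\|_p \le D} \; I(\theta; \hat\theta)
```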
