Search Results for author: Abhimanu Kumar

Found 11 papers, 0 papers with code

Data Transformation Insights in Self-supervision with Clustering Tasks

no code implementations18 Feb 2020 Abhimanu Kumar, Aniket Anand Deshmukh, Urun Dogan, Denis Charles, Eren Manavoglu

We show a faster convergence rate with valid transformations for convex as well as certain families of non-convex objectives, along with a proof of convergence to the original set of optima.

Clustering
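The paper's notion of a "valid" transformation can be illustrated with a minimal numpy sketch (an illustration, not the paper's construction): an orthogonal transformation such as a rigid rotation preserves all pairwise distances, so any distance-based clustering objective like k-means keeps the same value at the (rotated) optima.

```python
import numpy as np

def pairwise_dists(X):
    """All pairwise Euclidean distances between rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs (toy clustering data).
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])

# An example of a "valid" transformation: a rigid rotation by 30 degrees.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X_t = X @ R.T

# Rotation is an isometry: every pairwise distance is preserved, so the
# k-means objective is unchanged and cluster structure survives.
assert np.allclose(pairwise_dists(X), pairwise_dists(X_t))
```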

Learning Latent Space Models with Angular Constraints

no code implementations ICML 2017 Pengtao Xie, Yuntian Deng, Yi Zhou, Abhimanu Kumar, Yao-Liang Yu, James Zou, Eric P. Xing

The large model capacity of latent space models (LSMs) enables them to achieve great performance on various applications, but meanwhile renders LSMs prone to overfitting.
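Angular constraints encourage the latent components to be mutually near-orthogonal, which combats the redundancy that drives overfitting. A minimal sketch of one such diversity penalty follows; the function name and exact formulation are illustrative, and the paper's regularizer may differ.

```python
import numpy as np

def angular_penalty(W, eps=1e-12):
    """Sum of |cos(angle)| over all pairs of rows (components) of W.
    Near 0 when components are mutually orthogonal (diverse),
    large when components point in similar directions (redundant)."""
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    C = Wn @ Wn.T                       # pairwise cosine similarities
    iu = np.triu_indices(len(W), k=1)   # strict upper triangle: each pair once
    return np.abs(C[iu]).sum()

orthogonal = np.eye(3)        # mutually orthogonal components
redundant = np.ones((3, 3))   # identical components

assert angular_penalty(orthogonal) < 1e-9
assert abs(angular_penalty(redundant) - 3.0) < 1e-9  # 3 pairs, cos = 1 each
```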

Scalable Modeling of Conversational-role based Self-presentation Characteristics in Large Online Forums

no code implementations10 Dec 2015 Abhimanu Kumar, Shriphani Palakodety, Chong Wang, Carolyn P. Rose, Eric P. Xing, Miaomiao Wen

Online discussion forums are complex webs of overlapping subcommunities (macrolevel structure, across threads) in which users enact different roles depending on which subcommunity they are participating in at a particular time point (microlevel structure, within threads).

Topic Models · Variational Inference

Distributed Training of Deep Neural Networks with Theoretical Analysis: Under SSP Setting

no code implementations9 Dec 2015 Abhimanu Kumar, Pengtao Xie, Junming Yin, Eric P. Xing

We propose a distributed approach to train deep neural networks (DNNs), which has guaranteed convergence theoretically and great scalability empirically: close to 6 times faster on an instance of the ImageNet data set when run on 6 machines.

General Classification · Image Classification
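The core partition-compute-aggregate step of data-parallel training that this line of work builds on can be sketched as follows (the paper's SSP setting additionally tolerates bounded staleness between workers; this synchronous toy version, with all names illustrative, just shows the shard-and-average structure on a least-squares problem).

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy least-squares problem: recover w_true from noisy observations.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(600, 2))
y = X @ w_true + 0.01 * rng.normal(size=600)

n_workers, lr = 6, 0.1
shards = np.array_split(np.arange(len(X)), n_workers)  # one shard per worker

w = np.zeros(2)
for step in range(200):
    # Each worker computes a gradient on its own data shard (in a real
    # system these run on separate machines and push updates to a
    # parameter server; here we just loop).
    grads = [2 * X[s].T @ (X[s] @ w - y[s]) / len(s) for s in shards]
    w -= lr * np.mean(grads, axis=0)   # aggregate and apply the update

assert np.allclose(w, w_true, atol=0.05)
```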

Distributed Machine Learning via Sufficient Factor Broadcasting

no code implementations26 Nov 2015 Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing

Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.

BIG-bench Machine Learning
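The key observation behind sufficient factor broadcasting is that for matrix-parametrized models such as multiclass logistic regression, the per-example gradient of the K×d weight matrix is rank-1: an outer product u vᵀ of two small "sufficient factors", so workers can exchange u and v (K + d numbers) instead of the full K×d matrix. A hedged numpy check of that rank-1 property (variable names illustrative, verified against finite differences):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
K, d = 5, 8                      # classes, features
W = rng.normal(size=(K, d))      # matrix parameter
x = rng.normal(size=d)           # one training example
y = 3                            # its label

# Cross-entropy loss for this single example.
loss = lambda W_: -np.log(softmax(W_ @ x)[y])

# Sufficient factors: u depends only on the prediction, v is the input.
u = softmax(W @ x)
u[y] -= 1.0
v = x
grad_rank1 = np.outer(u, v)      # K*d gradient entries from K + d numbers

# Finite-difference check that the outer product really is the full
# gradient of the loss with respect to W.
eps = 1e-6
grad_fd = np.zeros_like(W)
for i in range(K):
    for j in range(d):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        grad_fd[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

assert np.allclose(grad_rank1, grad_fd, atol=1e-5)
```

Broadcasting the factors instead of the matrix shrinks per-example communication from K·d values to K + d, which is the source of the method's scalability.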

High-Performance Distributed ML at Scale through Parameter Server Consistency Models

no code implementations29 Oct 2014 Wei Dai, Abhimanu Kumar, Jinliang Wei, Qirong Ho, Garth Gibson, Eric P. Xing

As Machine Learning (ML) applications increase in data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands.


Distributed Machine Learning via Sufficient Factor Broadcasting

no code implementations19 Sep 2014 Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing

Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.

BIG-bench Machine Learning

Petuum: A New Platform for Distributed Machine Learning on Big Data

no code implementations30 Dec 2013 Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yao-Liang Yu

What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)?

BIG-bench Machine Learning · Scheduling

Consistent Bounded-Asynchronous Parameter Servers for Distributed ML

no code implementations30 Dec 2013 Jinliang Wei, Wei Dai, Abhimanu Kumar, Xun Zheng, Qirong Ho, Eric P. Xing

Many ML algorithms fall into the category of iterative convergent algorithms, which start from a randomly chosen initial point and converge to optima by iteratively repeating a set of procedures.
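Under the bounded-asynchronous (stale synchronous) consistency models these parameter-server papers study, a worker may run ahead without waiting only while it stays within a staleness bound of the slowest worker. A minimal sketch of that gating rule (an illustration, not the systems' actual API):

```python
def may_proceed(clocks, worker, staleness):
    """A worker may start its next iteration iff it stays within
    `staleness` clocks of the slowest worker; otherwise it must
    block until the stragglers catch up."""
    return clocks[worker] - min(clocks) <= staleness

clocks = [7, 5, 5, 4]            # per-worker iteration counters

# With staleness 3 the fastest worker (clock 7, slowest at 4) may
# still run ahead; with staleness 2 it must wait.
assert may_proceed(clocks, 0, staleness=3)
assert not may_proceed(clocks, 0, staleness=2)
# Bulk synchronous parallel is the special case staleness = 0.
assert may_proceed(clocks, 3, staleness=0)
```

The staleness knob trades consistency for throughput: larger bounds hide stragglers, while the bound itself is what makes the convergence analysis possible.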
