We discuss a general formulation of the Continual Learning (CL) problem for classification: a stream provides samples to a learner, whose goal is to continually update its knowledge of old classes while learning new ones from the samples it receives.
There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, not just the number of errors.
Multi-object tracking has seen a lot of progress recently, albeit with substantial annotation costs for developing better and larger labeled datasets.
Further, we predict the accuracy of the recommended architecture on the given unseen dataset, without the need to train the model.
The exploding cost and time of data labeling and model training are bottlenecks when developing DNN models on large datasets.
We analyze the binarization tradeoff using a metric that jointly models the input binarization error and the computational cost, and introduce an efficient algorithm to select the layers whose inputs are to be binarized.
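One way such a selection could work is a greedy sweep over layers ranked by compute saving per unit of binarization error, stopping at an error budget. The per-layer numbers, the ratio-based ranking, and the budget below are illustrative assumptions, not the paper's actual metric or algorithm:

```python
# Hypothetical sketch of cost-aware layer selection for input binarization.
# The joint metric (saving/error ratio under an error budget) is an
# illustrative assumption, not the paper's actual formulation.

def select_layers(layers, budget):
    """Greedily pick layers to binarize, taking the best compute-saving
    per unit of binarization error first, until the error budget is spent."""
    ranked = sorted(layers, key=lambda l: l["saving"] / l["error"], reverse=True)
    chosen, spent = [], 0.0
    for layer in ranked:
        if spent + layer["error"] <= budget:
            chosen.append(layer["name"])
            spent += layer["error"]
    return chosen

# Toy per-layer estimates: binarization error and relative compute saving.
layers = [
    {"name": "conv1", "error": 0.30, "saving": 1.0},
    {"name": "conv2", "error": 0.05, "saving": 2.0},
    {"name": "conv3", "error": 0.10, "saving": 4.0},
]
print(select_layers(layers, budget=0.20))  # -> ['conv2', 'conv3']
```

Here conv2 and conv3 offer the best saving-to-error ratios and together fit the 0.20 error budget, while the costly conv1 is left in full precision.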
We present a theoretical analysis of the technique to show the effective representational power of the resulting layers, and explore the forms of data they model best.
Inspired by these techniques, we propose to model connections between filters of a CNN using graphs which are simultaneously sparse and well connected.
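To make "simultaneously sparse and well connected" concrete, one can check both properties on a toy inter-filter graph. The circulant construction below (a ring plus long-range chords) is an illustrative assumption for the sketch, not necessarily the paper's construction:

```python
# Illustrative sketch: a sparse but well-connected graph over CNN filters.
# The circulant construction (ring edges plus chords) is an assumption
# for illustration; the paper's graphs may be built differently.
from collections import deque

def circulant_graph(n, offsets):
    """Connect filter i to filters (i +/- o) mod n for each offset o."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for o in offsets:
            adj[i].add((i + o) % n)
            adj[i].add((i - o) % n)
    return adj

def is_connected(adj):
    """Breadth-first search from node 0; connected iff all nodes are reached."""
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

g = circulant_graph(64, offsets=(1, 9))           # degree 4 of 63 possible
edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(is_connected(g), edges)                     # -> True 128
```

With 64 filters, each node keeps only 4 neighbors (128 edges versus 2016 in the complete graph), yet a single BFS confirms the graph stays connected, which is the sparse-yet-well-connected regime the snippet refers to.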
We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform an empirical analysis comparing the suitability and performance of various state-of-the-art sentiment analysis methods on social media text.
In this paper we describe an end-to-end neural model for Named Entity Recognition (NER) based on a bidirectional RNN-LSTM.
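The core idea of such a model, reading each sentence in both directions and concatenating the two hidden states per token, can be sketched with a toy scalar cell. The scalar tanh update below stands in for a real LSTM cell, and the weights are arbitrary illustrative values:

```python
import math

# Toy sketch of the bidirectional recurrence behind a Bi-RNN/LSTM tagger:
# one pass reads the sentence left-to-right, another right-to-left, and
# the two hidden states are concatenated per token. The scalar tanh
# "cell" is an illustrative stand-in for a real LSTM cell.

def rnn_pass(xs, w=0.5, u=0.3):
    """Run a simple recurrent update over the sequence, one state per token."""
    h, states = 0.0, []
    for x in xs:
        h = math.tanh(w * x + u * h)
        states.append(h)
    return states

def bidirectional(xs):
    fwd = rnn_pass(xs)               # left-to-right states
    bwd = rnn_pass(xs[::-1])[::-1]   # right-to-left states, realigned to tokens
    return list(zip(fwd, bwd))       # per-token (forward, backward) pair

feats = bidirectional([0.2, -1.0, 0.7])  # one feature pair per token
print(len(feats))                        # -> 3
```

Each token's pair of states summarizes both its left and right context; in a full NER model these concatenated features would feed a per-token label classifier (often with a CRF layer on top).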