no code implementations • 14 Mar 2022 • Vibhor Gupta, Jyoti Narwariya, Pankaj Malhotra, Lovekesh Vig, Gautam Shroff
We note that existing continual learning methods do not take into account variability in input dimensions arising due to different subsets of sensors being available across tasks, and struggle to adapt to such variable input dimensions (VID) tasks.
no code implementations • 11 Feb 2022 • Jyoti Narwariya, Chetan Verma, Pankaj Malhotra, Lovekesh Vig, Easwara Subramanian, Sanjay Bhat
One of the objectives of such demand response management is to incentivize consumers to adjust their consumption so that the overall electricity procurement in the wholesale markets is minimized; e.g., it is desirable that consumers consume less during peak hours, when the cost of procurement for brokers from wholesale markets is high.
no code implementations • 1 Jul 2020 • Vibhor Gupta, Jyoti Narwariya, Pankaj Malhotra, Lovekesh Vig, Gautam Shroff
Such a combinatorial generalization is achieved by conditioning the layers of a core neural network-based time series model with a "conditioning vector" that carries information of the available combination of sensors for each time series.
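One way such layer conditioning could work is FiLM-style feature modulation, where a vector encoding the available sensor combination produces a per-unit scale and shift for the core model's hidden activations. This is only an illustrative sketch under that assumption; the function and weight names are hypothetical, and the paper's exact conditioning mechanism may differ:

```python
import numpy as np

def condition_layer(h, sensor_mask, W_gamma, W_beta):
    # FiLM-style conditioning: scale and shift hidden activations using
    # a vector derived from the available-sensor mask (illustrative only).
    gamma = sensor_mask @ W_gamma  # per-unit scale
    beta = sensor_mask @ W_beta    # per-unit shift
    return gamma * h + beta

rng = np.random.default_rng(0)
n_sensors, hidden = 4, 8
h = rng.standard_normal(hidden)            # hidden state of the core model
mask = np.array([1.0, 0.0, 1.0, 1.0])      # sensors 0, 2, 3 available
W_gamma = rng.standard_normal((n_sensors, hidden))
W_beta = rng.standard_normal((n_sensors, hidden))
out = condition_layer(h, mask, W_gamma, W_beta)
```

Because the conditioning vector depends only on which sensors are present, the same core weights can serve any sensor combination.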
no code implementations • 30 Jun 2020 • Jyoti Narwariya, Pankaj Malhotra, Vishnu Tv, Lovekesh Vig, Gautam Shroff
Deep learning models such as those based on recurrent neural networks (RNNs) or convolutional neural networks (CNNs) fail to explicitly leverage this potentially rich source of domain-knowledge into the learning procedure.
no code implementations • 13 Sep 2019 • Jyoti Narwariya, Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, Vishnu Tv
To overcome this limitation and train a common agent across domains, each with a different number of target classes, we utilize a triplet-loss based learning procedure that does not require any constraints to be enforced on the number of classes for the few-shot TSC tasks.
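The class-count independence comes from the structure of the triplet loss itself: it only compares distances between an anchor, a same-class positive, and a different-class negative, so no output layer of fixed width is needed. A minimal sketch of the standard margin-based formulation (not the paper's exact training procedure):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull same-class embeddings together, push different-class embeddings
    # apart; the loss never references how many classes exist in the task.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # same class, already close
n = np.array([2.0, 0.0])  # different class, already far
loss = triplet_loss(a, p, n)  # 0.0: separated by more than the margin
```

At inference, a test series can be assigned to whichever class's support embeddings lie nearest, for any number of classes.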
no code implementations • 16 Jul 2019 • Vishnu TV, Pankaj Malhotra, Jyoti Narwariya, Lovekesh Vig, Gautam Shroff
Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks including derivative-free black-box function optimization.
no code implementations • 29 Apr 2019 • Kathan Kashiparekh, Jyoti Narwariya, Pankaj Malhotra, Lovekesh Vig, Gautam Shroff
We also provide qualitative insights into the working of CTN by: i) analyzing the activations and filters of the first convolution layer, suggesting the filters in CTN are generically useful, ii) analyzing the impact of the design decision to incorporate filters of multiple lengths, and iii) finding regions of time series that affect the final classification decision via occlusion sensitivity analysis.
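The occlusion sensitivity analysis in (iii) can be sketched as follows: mask one window of the series at a time and record how much the model's score drops, so large drops mark the regions the classifier relies on. The function name, windowing scheme, and mean-value masking here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def occlusion_sensitivity(series, predict, window=4):
    # For each window, replace it with the series mean and measure the
    # change in the model's score; larger drops indicate more important
    # regions of the time series.
    base = predict(series)
    drops = np.zeros(len(series))
    for start in range(0, len(series), window):
        occluded = series.copy()
        occluded[start:start + window] = series.mean()
        drops[start:start + window] = base - predict(occluded)
    return drops

# Toy demo: a "model" that only scores one informative region.
demo = np.zeros(32)
demo[10:14] = 1.0
score = lambda x: float(x[10:14].sum())
importance = occlusion_sensitivity(demo, score, window=4)
```

Windows overlapping the informative region show positive drops; the rest stay at zero.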
no code implementations • 1 Apr 2019 • Priyanka Gupta, Pankaj Malhotra, Jyoti Narwariya, Lovekesh Vig, Gautam Shroff
We therefore conclude that pre-trained deep models like TimeNet and HealthNet allow leveraging the advantages of deep learning for clinical time series analysis tasks, while also minimizing dependence on hand-crafted features, dealing robustly with scarce labeled training data without overfitting, and reducing the expertise and resources required to train deep networks from scratch.