no code implementations • 1 Aug 2023 • Mohit Rajpal, Lac Gia Tran, Yehong Zhang, Bryan Kian Hsiang Low
Derivative-free approaches such as Bayesian Optimization mitigate the dependency on the quality of gradient feedback, but are known to scale poorly in the high-dimensional settings of complex decision-making models.
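To make the scaling issue concrete, here is a minimal sketch of a derivative-free optimizer — plain random search, the simplest stand-in for methods like Bayesian Optimization (this is a generic illustration, not the method proposed in the paper). With a fixed evaluation budget, it closes in on the optimum in 2 dimensions but covers a 20-dimensional space far too sparsely:

```python
import random

def random_search(f, dim, iters=2000, low=-5.0, high=5.0, seed=0):
    """Minimize f by uniform random sampling: a derivative-free
    baseline that needs no gradient feedback at all."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(low, high) for _ in range(dim)]
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Sphere function: minimum value 0 at the origin.
sphere = lambda x: sum(v * v for v in x)

# Same evaluation budget, different dimensionality.
_, y_low = random_search(sphere, dim=2)    # near the optimum
_, y_high = random_search(sphere, dim=20)  # far from it
```

The gap between `y_low` and `y_high` under an identical budget is the curse of dimensionality that the abstract alludes to.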
no code implementations • 1 Jan 2021 • Mohit Rajpal, Yehong Zhang, Bryan Kian Hsiang Low
Pruning is an approach to alleviate overparameterization of deep neural networks (DNNs) by zeroing out, or pruning, DNN elements that contribute little to no efficacy on a given task.
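A common instantiation of this idea is magnitude pruning, which treats small-magnitude weights as low-efficacy and zeros them out. The sketch below is a generic illustration of that heuristic, not the criterion used in the paper above:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value (classic magnitude-pruning heuristic)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, sparsity=0.5)  # at least half the entries are zeroed
```

In practice the zeroed weights are then either held at zero during fine-tuning or removed entirely to shrink the model.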
no code implementations • 23 Oct 2019 • Mohit Rajpal, Bryan Kian Hsiang Low
This paper presents a novel unifying framework of bilinear LSTMs that can represent and exploit the nonlinear interactions among the input features of sequence datasets, achieving superior performance over a linear LSTM without incurring additional parameters to be learned.
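The key ingredient is the bilinear form itself: where a linear map treats two feature vectors additively, a bilinear term lets every pair of components interact multiplicatively. A minimal sketch of that form (a generic illustration of bilinearity, not the paper's exact LSTM cell):

```python
import numpy as np

def bilinear(x, h, W):
    """Bilinear interaction x^T W h: every pair (x_i, h_j) contributes
    a multiplicative term x_i * W_ij * h_j to the output scalar."""
    return x @ W @ h

rng = np.random.default_rng(1)
x = rng.normal(size=3)        # e.g. current input features
h = rng.normal(size=4)        # e.g. previous hidden state
W = rng.normal(size=(3, 4))   # interaction weights

s = bilinear(x, h, W)
```

Unlike a linear combination `A @ x + B @ h`, the output is multiplicative in each argument: doubling `x` doubles the interaction, which is what lets the model capture feature interactions a linear LSTM cannot.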
1 code implementation • NeurIPS 2017 • Nikhil Parthasarathy, Eleanor Batty, William Falcon, Thomas Rutten, Mohit Rajpal, E.J. Chichilnisky, Liam Paninski
Decoding sensory stimuli from neural signals can be used to reveal how we sense our physical environment, and is valuable for the design of brain-machine interfaces.
no code implementations • 10 Nov 2017 • Mohit Rajpal, William Blum, Rishabh Singh
Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software.
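The core of a mutation-based fuzzer is a tight loop: perturb a seed input, feed it to the target, and keep anything that crashes. The toy sketch below illustrates that loop against a deliberately buggy parser (a generic illustration, not the learning-augmented system from the paper):

```python
import random

def mutate(data, rng, n_flips=4):
    """Flip a few random bits of the seed input."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def buggy_parser(data):
    """Target under test: crashes on a specific byte pattern."""
    if len(data) >= 2 and data[0] == 0xFF and data[1] == 0xFF:
        raise RuntimeError("crash")

def fuzz(seed_input, trials=50000, seed=0):
    """Repeatedly mutate the seed and return the first crashing input."""
    rng = random.Random(seed)
    for _ in range(trials):
        candidate = mutate(seed_input, rng)
        try:
            buggy_parser(candidate)
        except RuntimeError:
            return candidate
    return None

crash = fuzz(b"\xff\xfe\x00\x00")
```

Real fuzzers add coverage feedback to decide which mutated inputs are worth keeping as new seeds; guiding that choice is where learned models can help.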