no code implementations • 7 Feb 2024 • Rahul Yedida, Snehanshu Saha
We propose a novel white-box approach to hyper-parameter optimization.
1 code implementation • 17 Jan 2024 • Rahul Yedida, Tim Menzies
We therefore conclude that this theory (that hyper-parameter optimization is best viewed as a "smoothing" function for the decision landscape) is both theoretically interesting and practically useful.
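As a hedged illustration of that "smoothing" framing (not the authors' actual algorithm), the sketch below ranks hyper-parameter candidates by a crude smoothness proxy: how stable the model's predicted probabilities are under small input perturbations. The proxy, the candidate grid, and the model choice are all assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative only: score each hyper-parameter candidate by a crude
# "smoothness" proxy -- low variance of predicted probabilities under
# small input perturbations. Higher score = smoother decision function.
def smoothness(model, X, eps=0.05, trials=10, seed=0):
    rng = np.random.default_rng(seed)
    probs = [model.predict_proba(X + eps * rng.normal(size=X.shape))[:, 1]
             for _ in range(trials)]
    return -np.mean(np.var(probs, axis=0))

X, y = make_classification(n_samples=300, random_state=0)
candidates = [LogisticRegression(C=C, max_iter=1000).fit(X, y)
              for C in (0.01, 0.1, 1.0, 10.0)]
best = max(candidates, key=lambda m: smoothness(m, X))
```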
1 code implementation • 21 May 2022 • Rahul Yedida, Hong Jin Kang, Huy Tu, Xueqi Yang, David Lo, Tim Menzies
Automatically generated static code warnings suffer from a large number of false alarms.
no code implementations • 29 Sep 2021 • Rahul Yedida, Rahul Krishna, Anup Kalia, Tim Menzies, Jin Xiao, Maja Vukovic
When services are divided into many independent components, they are easier to update.
1 code implementation • 15 Jan 2021 • Rahul Yedida, Xueqi Yang, Tim Menzies
For these two tasks, we test the hypothesis put forward by Galke and Scherp [18] that feedforward networks suffice for many analytics tasks (which we call the "Old but Gold" hypothesis).
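For context, a rough sketch of what an "Old but Gold" feedforward baseline can look like in practice; the pipeline and the data variables below are illustrative assumptions, not the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# "Old but Gold" style baseline: a plain feedforward network (MLP) over
# TF-IDF features, instead of a deep or pretrained model. The data
# variables below are hypothetical placeholders.
baseline = make_pipeline(
    TfidfVectorizer(max_features=5000),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300),
)
# baseline.fit(train_texts, train_labels)      # hypothetical training data
# predictions = baseline.predict(test_texts)   # hypothetical test data
```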
no code implementations • 22 Oct 2020 • Rahul Yedida, Saad Mohammad Abrar, Cleber Melo-Filho, Eugene Muratov, Rada Chirkova, Alexander Tropsha
Results: We found 30.4% of our proposed pairs in the ROBOKOP database.
1 code implementation • 18 May 2020 • Shailesh Sridhar, Snehanshu Saha, Azhar Shaikh, Rahul Yedida, Sriparna Saha
We leverage the fact that the Mean Squared Error loss is Lipschitz continuous to compute the learning rate for shallow neural networks.
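A minimal sketch of the underlying idea for the simplest case (least squares), where the gradient of 0.5 × MSE is Lipschitz with constant K = σ_max(X)²/n, so η = 1/K is a theoretically safe step size; the paper's derivation for shallow networks is more involved.

```python
import numpy as np

# Sketch for the simplest case: for L(w) = 0.5 * mean((Xw - y)^2), the
# gradient is Lipschitz with constant K = sigma_max(X)^2 / n, so
# eta = 1/K is a theoretically safe gradient-descent step size.
def lipschitz_learning_rate(X):
    n = X.shape[0]
    sigma_max = np.linalg.norm(X, 2)   # largest singular value of X
    return n / sigma_max ** 2          # eta = 1 / K

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)

eta, w = lipschitz_learning_rate(X), np.zeros(5)
for _ in range(500):
    w -= eta * (X.T @ (X @ w - y)) / len(y)   # gradient of 0.5 * MSE
```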
no code implementations • 9 Dec 2019 • Amritanshu Agrawal, Xueqi Yang, Rishabh Agrawal, Rahul Yedida, Xipeng Shen, Tim Menzies
How can we make software analytics simpler and faster?
3 code implementations • 1 Jun 2019 • Snehanshu Saha, Nithin Nagaraj, Archana Mathur, Rahul Yedida
We present an analytical exploration of novel activation functions that integrates several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets.
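For context, a generic sketch of how a custom activation plugs into a small feedforward pass; the parameterized function below is a stand-in for illustration and is not one of the activation functions proposed in the paper.

```python
import numpy as np

# Generic sketch of wiring a custom activation into a small feedforward
# pass; the parameterized function below is a stand-in, NOT one of the
# activation functions proposed in the paper.
def custom_activation(x, k=1.0):
    return 1.0 / (1.0 + k * np.exp(-x))   # a parameterized sigmoid variant

def forward(X, W1, b1, W2, b2):
    h = custom_activation(X @ W1 + b1)    # hidden layer, custom activation
    return h @ W2 + b2                    # linear output layer
```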
5 code implementations • 20 Feb 2019 • Rahul Yedida, Snehanshu Saha, Tejas Prashanth
In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
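One hedged way to realize such a scheme (an illustration, not necessarily the paper's derivation) is to estimate a local Lipschitz constant of the loss gradient empirically and set the step size to its reciprocal; `grad_fn` below is a hypothetical callable returning the mini-batch gradient at a parameter vector.

```python
import numpy as np

# Illustration, not necessarily the paper's derivation: estimate a local
# Lipschitz constant of the loss gradient from two nearby parameter
# vectors, then use eta = 1 / K_hat as the SGD step size. `grad_fn` is a
# hypothetical callable returning the mini-batch gradient at w.
def lipschitz_step_size(grad_fn, w, eps=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    delta = eps * rng.normal(size=w.shape)
    K_hat = np.linalg.norm(grad_fn(w + delta) - grad_fn(w)) / np.linalg.norm(delta)
    return 1.0 / max(K_hat, 1e-12)
```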