no code implementations • 11 Dec 2022 • Ankit Kumar, Priya Shukla, Vandana Kushwaha, G. C. Nandi
In this paper, we present an architecture that, unlike prior work, is context-aware.
no code implementations • 6 Nov 2021 • Priya Shukla, Vandana Kushwaha, G. C. Nandi
In the case of robots, we cannot afford to spend that much time teaching them to grasp objects effectively.
no code implementations • 9 Aug 2021 • Shekhar Gupta, Gaurav Kumar Yadav, G. C. Nandi
We further propose to feed the output of the inception residual block into a Graph Convolutional Network (GCN), owing to its superior spatial feature learning capability.
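The idea of passing per-node CNN features through a graph convolution can be sketched as a single Kipf-Welling-style layer, ReLU(Â X W) with Â = D^{-1/2}(A + I)D^{-1/2}. The graph size, feature dimensions, and random inputs below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, in_dim, out_dim = 5, 8, 4
X = rng.normal(size=(n_nodes, in_dim))         # per-node CNN features (assumed)
A = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
A = np.maximum(A, A.T)                         # symmetric adjacency

A_tilde = A + np.eye(n_nodes)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # symmetric normalization

W = rng.normal(size=(in_dim, out_dim))
H = np.maximum(0.0, A_hat @ X @ W)             # one GCN layer: ReLU(Â X W)
```

Each output row mixes a node's own features with those of its neighbors, which is what lets the GCN aggregate spatial structure on top of the CNN features.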
no code implementations • 15 Jul 2021 • Priya Shukla, Nilotpal Pramanik, Deepesh Mehta, G. C. Nandi
It is trained on the Cornell Grasping Dataset (CGD) and attains 98.87% grasp pose accuracy in detecting both regular and irregularly shaped objects from RGB-Depth (RGB-D) images, while requiring only one-third of the trainable network parameters of existing approaches.
no code implementations • 8 May 2021 • Vijay Bhaskar Semwal, Neha Gaud, G. C. Nandi
In this research article, we report periodic cellular automata rules for gait-state prediction and classify the gait data using an Extreme Learning Machine (ELM).
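An ELM trains only the output layer: input weights are random and fixed, and output weights are solved in closed form by least squares. A minimal sketch on synthetic data (the feature count, hidden size, and class count are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, n_hidden, n_classes = 200, 6, 64, 3
X = rng.normal(size=(n_samples, n_features))    # synthetic gait features
y = rng.integers(0, n_classes, size=n_samples)  # synthetic gait-state labels
T = np.eye(n_classes)[y]                        # one-hot targets

# Random, untrained input weights -- the defining trait of an ELM.
W_in = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)                       # hidden-layer activations

# Output weights via the Moore-Penrose pseudoinverse (least squares fit).
beta = np.linalg.pinv(H) @ T

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
```

Because no iterative backpropagation is needed, training reduces to one matrix pseudoinverse, which is what makes ELMs attractive for fast gait classification.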
no code implementations • 23 Jan 2020 • Mridul Mahajan, Tryambak Bhattacharjee, Arya Krishnan, Priya Shukla, G. C. Nandi
However, vision based robotic grasp detection is hindered by the unavailability of sufficient labelled data.
no code implementations • 15 Jan 2020 • Priya Shukla, Hitesh Kumar, G. C. Nandi
Further, for grasp-orientation learning, we develop a deep reinforcement learning (DRL) model, which we name Grasp Deep Q-Network (GDQN), and benchmark our results against a Modified VGG16 (MVGG16).
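The core idea of Q-learning over grasp orientations can be illustrated with a toy bandit-style version: discretize orientation into bins, treat each bin as an action, and update Q-values from a grasp success/failure reward. The bin count, reward model, and single-state simplification are invented for illustration; this is not the paper's GDQN:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 18                # 10-degree orientation bins (assumption)
best_bin = 7               # hidden "correct" grasp orientation (toy)
Q = np.zeros(n_bins)       # one Q-value per candidate orientation
alpha, eps = 0.1, 0.2      # learning rate, exploration rate

for episode in range(2000):
    # Epsilon-greedy action selection over orientation bins.
    a = int(rng.integers(n_bins)) if rng.random() < eps else int(np.argmax(Q))
    reward = 1.0 if a == best_bin else 0.0   # simulated grasp success signal
    Q[a] += alpha * (reward - Q[a])          # TD-style update toward the reward
```

In the full DRL setting, the Q-table is replaced by a deep network that maps an image of the object to Q-values over orientations, but the update rule follows the same temporal-difference principle.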
no code implementations • 3 Oct 2017 • Naimish Agarwal, G. C. Nandi
Automatic feature learning algorithms are at the forefront of modern day machine learning research.
no code implementations • 17 Sep 2016 • Avinash Kumar Singh, Piyush Joshi, G. C. Nandi
We evaluate the system's performance under two test conditions: (a) poor illumination and (b) minimal eye and mouth movement by a real user.