no code implementations • 21 Feb 2025 • Akshay Kumar, Jarvis Haupt
Recent works exploring the training dynamics of homogeneous neural network weights under gradient flow with small initialization have established that in the early stages of training, the weights remain small and near the origin, but converge in direction.
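As a quick numerical sanity check of this picture (an illustrative sketch, not the paper's analysis: the single-data-point setup, width, step size, and stopping rule below are all my own choices), plain gradient descent on a small two-homogeneous leaky-ReLU network from a tiny initialization keeps the weight norm small while the weight direction stabilizes:

```python
import numpy as np

# Two-homogeneous toy network f(x) = sum_j v_j * leaky_relu(u_j . x),
# trained on a single data point from a tiny random initialization.
rng = np.random.default_rng(0)

x = np.array([0.6, -0.8, 0.0])       # one input, ||x|| = 1
y = 1.0                              # its target
m, alpha, lr = 10, 1e-4, 0.1         # width, init scale, step size

U = alpha * rng.standard_normal((m, 3))
v = alpha * rng.standard_normal(m)

def leaky(z):  return np.where(z > 0, z, 0.1 * z)
def dleaky(z): return np.where(z > 0, 1.0, 0.1)

dirs, norms = [], []
for _ in range(2000):
    z = U @ x                        # pre-activations, shape (m,)
    r = leaky(z) @ v - y             # residual of the 0.5 * r**2 loss
    gv = r * leaky(z)                # d loss / d v
    gU = (r * v * dleaky(z))[:, None] * x[None, :]   # d loss / d U
    v, U = v - lr * gv, U - lr * gU
    w = np.concatenate([U.ravel(), v])
    norms.append(np.linalg.norm(w))
    dirs.append(w / norms[-1])
    if norms[-1] > 0.05:             # stop while the weights are still small
        break

mid = len(dirs) // 2
print(f"steps: {len(dirs)}, final norm: {norms[-1]:.4f}, "
      f"cos(mid, final): {dirs[mid] @ dirs[-1]:.4f}")
```

Typical runs stop after well under a hundred steps with the weight norm still far below 1, while the normalized weight vector at the halfway point is already closely aligned with the final direction.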
no code implementations • 12 Mar 2024 • Akshay Kumar, Jarvis Haupt
This paper studies the gradient flow dynamics that arise when training deep homogeneous neural networks assumed to have locally Lipschitz gradients and an order of homogeneity strictly greater than two.
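Here homogeneity is meant in the weights: writing the network output as $f(x; w)$, it is homogeneous of order $L$ when

$$f(x;\, c\,w) = c^{L} f(x;\, w) \qquad \text{for all } c > 0,$$

and this paper's setting takes $L > 2$; for comparison, a bias-free depth-$L$ ReLU network, with all layers scaled together, is $L$-homogeneous.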
no code implementations • 14 Feb 2024 • Akshay Kumar, Jarvis Haupt
This paper examines gradient flow dynamics of two-homogeneous neural networks for small initializations, where all weights are initialized near the origin.
1 code implementation • NAACL 2021 • Shailaja Keyur Sampat, Akshay Kumar, Yezhou Yang, Chitta Baral
Most existing research on visual question answering (VQA) is limited to information explicitly present in an image or a video.

no code implementations • 14 Jan 2021 • Vinay Damodaran, Sharanya Chakravarthy, Akshay Kumar, Anjana Umapathy, Teruko Mitamura, Yuta Nakashima, Noa Garcia, Chenhui Chu
Visual Question Answering (VQA) is of tremendous interest to the research community with important applications such as aiding visually impaired users and image-based search.
no code implementations • 2 Nov 2020 • Manjot Kaur, Kulwinder Singh, Ishant Chauhan, Hardilraj Singh, Ram K Sharma, Ankush Vij, Anup Thakur, Akshay Kumar
The electrical conductivity analysis shows the presence of three distinct regions as temperature varies.
no code implementations • 15 Jul 2020 • Akshay Kumar, Jarvis Haupt
This work examines the problem of exact data interpolation via sparse (neuron count), infinitely wide, single hidden layer neural networks with leaky rectified linear unit activations.
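As a small generic illustration of the interpolation side of this abstract (not the paper's sparsity construction; the data, slope parameter, and helper names below are my own), a finite leaky-ReLU network can interpolate any one-dimensional dataset exactly, because both the identity map and the plain ReLU are exact linear combinations of leaky-ReLU units:

```python
import numpy as np

a = 0.1  # leaky-ReLU negative slope (illustrative choice)

def leaky(z):
    return np.where(z >= 0, z, a * z)

# Identities: z = (leaky(z) - leaky(-z)) / (1 + a)
# and relu(z) = (leaky(z) - a*z) / (1 - a), so this "relu" is itself
# a linear combination of leaky-ReLU neurons.
def relu_via_leaky(z):
    lin = (leaky(z) - leaky(-z)) / (1 + a)
    return (leaky(z) - a * lin) / (1 - a)

# Data to interpolate (x sorted ascending)
x = np.array([0.0, 0.7, 1.5, 2.2, 3.0])
y = np.array([1.0, -0.5, 2.0, 0.3, 1.1])

# Piecewise-linear interpolant with kinks at the interior data points
s = np.diff(y) / np.diff(x)              # segment slopes
def f(t):
    out = y[0] + s[0] * (t - x[0])
    for i in range(1, len(x) - 1):
        out = out + (s[i] - s[i - 1]) * relu_via_leaky(t - x[i])
    return out

print(np.max(np.abs(f(x) - y)))  # ~0: exact interpolation
```

The telescoping slope corrections reproduce each $y_i$ exactly at its $x_i$; the paper's question, by contrast, is how few neurons such an exact interpolant needs.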