1 code implementation • 9 Feb 2024 • Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Shuang Ma, Hal Daumé III, Huazhe Xu, John Langford, Praveen Palanisamy, Kalyan Shankar Basu, Furong Huang
We present Premier-TACO, a multitask feature representation learning approach designed to improve few-shot policy learning efficiency in sequential decision-making tasks.
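Multitask representation learning of this kind is typically driven by a temporal contrastive objective: embeddings of states that occur close together along a trajectory are pulled together, while embeddings from other trajectories in the batch are pushed apart. The sketch below shows a generic InfoNCE-style temporal contrastive loss in NumPy; it is an illustrative stand-in, not the actual Premier-TACO objective, and the batch shapes and temperature are assumptions.

```python
import numpy as np

def temporal_contrastive_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE-style loss: each anchor (state embedding at time t)
    should match its positive (embedding at t+k from the same trajectory)
    against the other samples in the batch. Illustrative sketch only --
    not the actual Premier-TACO objective."""
    # L2-normalize embeddings so similarities are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching (anchor, positive) pairs lie on the diagonal
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z_t = rng.normal(size=(8, 16))
# Random positives: no temporal structure to exploit
loss_random = temporal_contrastive_loss(z_t, rng.normal(size=(8, 16)))
# Near-identical positives: temporally aligned pairs, so a low loss
loss_aligned = temporal_contrastive_loss(z_t, z_t + 0.01 * rng.normal(size=(8, 16)))
```

Temporally aligned positives yield a much lower loss than random ones, which is the signal that drives the learned features toward temporally predictive structure.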
no code implementations • 8 Nov 2023 • Sakshi Mishra, Praveen Palanisamy
The perspective aims to provide a holistic picture of the autonomous advanced aerial mobility field and its future directions.
no code implementations • 10 Nov 2022 • Mehrnaz Sabet, Praveen Palanisamy, Sakshi Mishra
One major barrier to advancing aerial autonomy has been collecting large-scale aerial datasets for training machine learning models.
1 code implementation • 11 Nov 2019 • Praveen Palanisamy
Our MACAD-Gym platform provides an extensible set of Connected Autonomous Driving (CAD) simulation environments that enable the research and development of Deep RL-based integrated sensing, perception, planning and control algorithms for CAD systems with unlimited operational design domain under realistic, multi-agent settings.
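Multi-agent platforms of this kind typically extend the single-agent Gym interface by keying observations, rewards, and done flags by actor id. The toy stand-in below illustrates that dict-per-agent interaction loop; the environment class, actor ids, and reward scheme are invented for illustration and are not MACAD-Gym's actual environments or API.

```python
# Toy stand-in illustrating the multi-agent Gym-style loop used by platforms
# such as MACAD-Gym: observations, rewards, and dones are dicts keyed by
# actor id, with an "__all__" flag marking episode termination. This is an
# invented example environment, not a real CAD simulator.
class ToyMultiAgentEnv:
    def __init__(self, actor_ids=("car1", "car2")):
        self.actor_ids = actor_ids
        self.t = 0

    def reset(self):
        self.t = 0
        return {aid: 0.0 for aid in self.actor_ids}

    def step(self, actions):
        self.t += 1
        obs = {aid: float(self.t) for aid in self.actor_ids}
        rewards = {aid: 1.0 for aid in self.actor_ids}
        dones = {aid: self.t >= 3 for aid in self.actor_ids}
        dones["__all__"] = self.t >= 3
        return obs, rewards, dones, {}

env = ToyMultiAgentEnv()
obs = env.reset()
total = {aid: 0.0 for aid in env.actor_ids}
done = {"__all__": False}
while not done["__all__"]:
    actions = {aid: 0 for aid in env.actor_ids}  # placeholder policy
    obs, rew, done, info = env.step(actions)
    for aid in env.actor_ids:
        total[aid] += rew[aid]
```

The per-actor dict convention is what lets independent or shared policies be trained for each agent without changing the environment loop.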
1 code implementation • 7 May 2019 • Sakshi Mishra, Praveen Palanisamy
In this work, we propose a unified architecture for multi-time-scale intra-day solar irradiance forecasting using recurrent neural networks (RNNs) and long short-term memory (LSTM) networks.
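The unifying idea is that one shared recurrent encoder summarizes the irradiance history, and per-horizon output heads read predictions off the same hidden state, instead of training a separate model per horizon. The NumPy sketch below shows only the shapes and data flow of such a unified forecaster; the tanh cell, random weights, and chosen horizons are illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

# Minimal sketch of a unified multi-horizon recurrent forecaster: a single
# shared recurrent encoder summarizes the irradiance history, and separate
# linear heads emit a prediction for each horizon. Weights are random here
# purely to show shapes and data flow -- this is not the trained model.
rng = np.random.default_rng(0)
hidden = 16
horizons = (1, 3, 6)                     # e.g. 1h, 3h, 6h ahead (assumed)
W_x = rng.normal(scale=0.1, size=hidden)
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
heads = {h: rng.normal(scale=0.1, size=hidden) for h in horizons}

def forecast(series):
    """Encode a 1-D irradiance history, then predict every horizon at once."""
    h = np.zeros(hidden)
    for x in series:                     # simple tanh recurrent cell
        h = np.tanh(W_x * x + W_h @ h)
    return {k: float(head @ h) for k, head in heads.items()}

preds = forecast(np.sin(np.linspace(0, 3, 24)))  # toy 24-step history
```

Because every head shares the encoder, adding a new horizon only adds one output layer rather than an entire model.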
no code implementations • 19 Dec 2018 • Yilun Chen, Praveen Palanisamy, Priyantha Mudalige, Katharina Muelling, John M. Dolan
In this paper, we leverage auxiliary information aside from raw images and design a novel network structure, called Auxiliary Task Network (ATN), to help boost the driving performance while maintaining the advantage of minimal training data and an End-to-End training method.
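Auxiliary-task setups of this kind usually share one feature extractor between the main driving output and the auxiliary predictions, combining their losses with scalar weights. The sketch below shows that weighted multi-task objective on a stand-in feature vector; the heads, targets, and weight are hypothetical placeholders, not the ATN's actual design.

```python
import numpy as np

# Sketch of auxiliary-task training for end-to-end driving: a shared feature
# vector feeds a main steering head plus an auxiliary head (here, speed),
# and their losses are combined with a scalar weight. Heads, targets, and
# the 0.3 weight are illustrative placeholders, not the ATN's actual design.
rng = np.random.default_rng(1)
features = rng.normal(size=32)           # stand-in for shared CNN features

def make_head():
    w = rng.normal(scale=0.1, size=32)
    return lambda f: w @ f               # linear head -> scalar prediction

steer_head, speed_head = make_head(), make_head()

def total_loss(f, steer_target, speed_target, aux_weight=0.3):
    main = float((steer_head(f) - steer_target) ** 2)   # main driving loss
    aux = float((speed_head(f) - speed_target) ** 2)    # auxiliary loss
    return main + aux_weight * aux       # weighted multi-task objective

loss = total_loss(features, steer_target=0.1, speed_target=0.5)
```

Gradients from the auxiliary term regularize the shared features, which is what lets the main task keep its performance with minimal training data.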
1 code implementation • 14 Jul 2018 • Sakshi Mishra, Praveen Palanisamy
The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-square prediction error than the previous best-performing methods, which use one model per time horizon.