Search Results for author: S P Sharan

Found 5 papers, 3 papers with code

LLM-Assist: Enhancing Closed-Loop Planning with Language-Based Reasoning

no code implementations • 30 Dec 2023 • S P Sharan, Francesco Pittaluga, Vijay Kumar B G, Manmohan Chandraker

Although planning is a crucial component of the autonomous driving stack, researchers have yet to develop robust planning algorithms that are capable of safely handling the diverse range of possible driving scenarios.

Autonomous Driving • Common Sense Reasoning

Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation

1 code implementation • 28 Apr 2023 • Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, Kevin Wang, Yihan Xi, Dejia Xu, Zhangyang Wang

When implementing a complicated algorithm, a human programmer usually starts by outlining a rough control flow and then iteratively enriches it, eventually arriving at carefully crafted syntactic structures and variables organized in a hierarchy. (A minimal sketch of this coarse-to-fine process follows this entry.)

Code Generation • Language Modelling • +1
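
The coarse-to-fine idea described in this entry lends itself to a short illustration. The following is a minimal sketch of a two-stage "outline, then details" generation loop, not the paper's actual syntactically guided decoder; the generate function is a hypothetical stand-in (stubbed here so the sketch runs on its own) for any call to a code-generation language model.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to a code-generation language model (assumed)."""
    return f"# model output for: {prompt[:40]}..."


def outline_then_details(problem: str) -> str:
    # Stage 1: ask for a coarse control-flow outline only
    # (function signatures, loops/branches as comments, no bodies).
    outline = generate(
        "Write only a high-level outline (function signatures and "
        f"control-flow comments) for: {problem}"
    )
    # Stage 2: condition on the outline and fill in the detailed code.
    details = generate(
        "Complete the following outline into full, runnable code.\n"
        f"Problem: {problem}\nOutline:\n{outline}"
    )
    return details


if __name__ == "__main__":
    print(outline_then_details("merge two sorted linked lists"))
```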

Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search

1 code implementation • 30 Dec 2022 • Wenqing Zheng, S P Sharan, Zhiwen Fan, Kevin Wang, Yihan Xi, Zhangyang Wang

Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes.

Reinforcement Learning (RL)

Symbolic Distillation for Learned TCP Congestion Control

1 code implementation • 24 Oct 2022 • S P Sharan, Wenqing Zheng, Kuo-Feng Hsu, Jiarong Xing, Ang Chen, Zhangyang Wang

At the core of our proposal is a novel symbolic branching algorithm that makes the rule aware of its context, i.e., the prevailing network conditions, eventually converting the NN policy into a symbolic tree. (A minimal sketch of such a symbolic tree follows this entry.)

Reinforcement Learning (RL)
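
To make the "symbolic tree" in this entry concrete, here is a minimal hypothetical sketch of the kind of artifact such distillation could produce: a small branching rule over network conditions whose leaves are closed-form expressions for adjusting the congestion window. The feature names, thresholds, and coefficients below are illustrative assumptions, not values or rules from the paper.

```python
from dataclasses import dataclass


@dataclass
class NetState:
    latency_ratio: float   # current RTT / min RTT (assumed feature)
    loss_rate: float       # fraction of packets lost in the last window (assumed)
    send_rate: float       # current sending rate, e.g. in Mbps (assumed)


def cwnd_delta(s: NetState) -> float:
    """Symbolic-tree policy: branch on context, then evaluate a leaf expression."""
    if s.loss_rate > 0.01:                       # heavy loss: back off
        return -0.5 * s.send_rate
    if s.latency_ratio > 1.5:                    # queue building up: be cautious
        return 0.1 * (2.0 - s.latency_ratio)
    return 0.3 + 0.05 * s.send_rate              # healthy link: probe upward


# Example: a healthy link with no loss yields a small positive adjustment.
print(cwnd_delta(NetState(latency_ratio=1.1, loss_rate=0.0, send_rate=10.0)))
```

Unlike the neural-network policy it is distilled from, a rule in this form can be inspected, audited, and deployed without a learned-model runtime.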
