no code implementations • 15 Jun 2024 • Gurusha Juneja, Nagarajan Natarajan, Hua Li, Jian Jiao, Amit Sharma
Given a task in the form of a basic description and its training examples, prompt optimization is the problem of synthesizing the given information into a text prompt for a large language model (LLM).
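As a rough illustration of this setup (not the paper's method), the sketch below assembles a task description and a few training examples into candidate prompts and keeps the one that scores best on held-out examples; the `llm_complete` helper, the candidate instructions, and the exact-match scoring rule are hypothetical placeholders.

```python
# Minimal sketch of the prompt-optimization setup, assuming a hypothetical
# `llm_complete(prompt) -> str` wrapper around an LLM API; not the paper's algorithm.
import random

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real API client."""
    return ""

def build_prompt(task_description: str, examples: list[tuple[str, str]], instruction: str) -> str:
    # Synthesize the task description and a few training examples into one text prompt.
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\nTask: {task_description}\n\n{shots}\n\nInput:"

def score(prompt: str, heldout: list[tuple[str, str]]) -> float:
    # Fraction of held-out examples the prompted LLM answers exactly.
    hits = sum(llm_complete(f"{prompt} {x}\nOutput:").strip() == y for x, y in heldout)
    return hits / max(len(heldout), 1)

def optimize_prompt(task_description, train, heldout, candidate_instructions, seed=0):
    random.seed(seed)
    candidates = [
        build_prompt(task_description, random.sample(train, k=min(3, len(train))), ins)
        for ins in candidate_instructions
    ]
    return max(candidates, key=lambda p: score(p, heldout))
```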
no code implementations • 2 Apr 2024 • Gurusha Juneja, Subhabrata Dutta, Tanmoy Chakraborty
The solver model generates solutions to the subproblems, which are then checked by the verifier module; depending on the verifier's feedback, the reasoning context is constructed from the subproblems and their solutions.
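A schematic of that solver-verifier interaction might look like the sketch below; `solve`, `verify`, and the retry rule are hypothetical placeholders, not the authors' implementation.

```python
# Schematic solver/verifier loop: the solver answers each subproblem, the verifier
# checks the answer, and accepted (subproblem, solution) pairs accumulate into the
# reasoning context used downstream. All model calls are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Step:
    subproblem: str
    solution: str

def solve(subproblem: str, context: list[Step]) -> str:
    """Hypothetical solver-LM call conditioned on the context built so far."""
    return ""

def verify(subproblem: str, solution: str) -> bool:
    """Hypothetical verifier-module call."""
    return True

def build_context(subproblems: list[str], max_retries: int = 2) -> list[Step]:
    context: list[Step] = []
    for sp in subproblems:
        for _ in range(max_retries + 1):
            sol = solve(sp, context)
            if verify(sp, sol):  # keep only verifier-approved solutions
                context.append(Step(sp, sol))
                break
    return context
```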
no code implementations • 23 Dec 2023 • Gurusha Juneja, Sukrit Kumar
Diffusion models, when conditioned on text prompts, generate realistic-looking images with intricate details.
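For context, text-conditioned image generation with a pretrained diffusion model typically looks like the sketch below, using the Hugging Face `diffusers` library; the checkpoint name and prompt are illustrative and not specified by the paper, and a GPU is assumed.

```python
# Text-to-image generation with a pretrained diffusion model via `diffusers`;
# the checkpoint and prompt are examples, not the paper's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```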
1 code implementation • 21 Oct 2023 • Gurusha Juneja, Subhabrata Dutta, Soumen Chakrabarti, Sunny Manchanda, Tanmoy Chakraborty
Additionally, we show that DaSLaM is not limited by the solver's capabilities as a function of scale; e.g., solver LMs of diverse sizes show significant performance improvements with our solver-agnostic decomposition technique.
Ranked #6 on Overall - Test on JEEBench (using extra training data)
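As an illustration of the solver-agnostic idea in the entry above (a sketch under assumed interfaces, not DaSLaM's implementation), the same decomposer output can be fed to solver LMs of different sizes:

```python
# Sketch of solver-agnostic decomposition: one decomposer LM produces subproblems,
# and any solver LM, of any size, consumes them. Model names and the `generate`
# wrapper are hypothetical placeholders, not DaSLaM's actual components.
def generate(model_name: str, prompt: str) -> str:
    """Hypothetical wrapper around an LLM inference call."""
    return ""

def decompose(question: str, decomposer: str = "decomposer-lm") -> list[str]:
    raw = generate(decomposer, f"Break this problem into subproblems:\n{question}")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def answer_with_solver(question: str, solver: str) -> str:
    context = ""
    for sub in decompose(question):
        sol = generate(solver, f"{context}\nSubproblem: {sub}\nSolution:")
        context += f"\nSubproblem: {sub}\nSolution: {sol}"
    return generate(solver, f"{context}\nOriginal question: {question}\nFinal answer:")

# The same decomposition pipeline can plug into solvers of different scales.
for solver in ["small-solver-lm", "large-solver-lm"]:
    answer_with_solver("If 3x + 5 = 20, what is x?", solver)
```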