1 code implementation • 22 Feb 2025 • Shivank Garg, Ayush Singh, Shweta Singh, Paras Chopra
Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences.
no code implementations • 19 Dec 2024 • Pratham Singla, Ayush Singh, Adesh Gupta, Shivank Garg
Urban planning faces a critical challenge in balancing city-wide infrastructure needs with localized demographic preferences, particularly in rapidly developing regions.
no code implementations • 24 Nov 2024 • Ayush Singh, Rajdeep Aher, Shivank Garg
The rapid advancements in large language models (LLMs) have revolutionized natural language processing, creating an increased need for efficient, task-specific fine-tuning methods.
1 code implementation • 29 Oct 2024 • Ayush Singh, Mansi Gupta, Shivank Garg
Vision Language Models excel in handling a wide range of complex tasks, including Optical Character Recognition (OCR), Visual Question Answering (VQA), and advanced geometric reasoning.
no code implementations • 8 Oct 2024 • Ayush Singh, Mansi Gupta, Shivank Garg, Abhinav Kumar, Vansh Agrawal
We incorporated this pipeline for tasks involving geometry, algebra, and counting.
1 code implementation • 8 Oct 2024 • Vansh Agrawal, Pratham Singla, Amitoj Singh Miglani, Shivank Garg, Ayush Mangal
While state-of-the-art LLMs still show weak logical and basic mathematical reasoning, recent works try to improve their problem-solving abilities through prompting techniques.
no code implementations • 6 Oct 2024 • Shivank Garg, Manyana Tiwari
This study investigates the generation of unsafe or harmful content in state-of-the-art generative models, focusing on methods for restricting such generations.
1 code implementation • 19 Jun 2024 • Shivank Garg, Abhishek Baghel, Amit Agarwal, Durga Toshniwal
With the rise of autonomous vehicles and advanced driver-assistance systems (ADAS), ensuring reliable object detection in all weather conditions is crucial for safety and efficiency.
1 code implementation • 18 Jun 2024 • Shivank Garg, Manyana Tiwari
In this paper, we extend the study of concept ablation within pre-trained models as introduced in 'Ablating Concepts in Text-to-Image Diffusion Models' by Kumari et al. (2022).
1 code implementation • 26 Nov 2023 • Abhishek Sinha, Himanshi Tibrewal, Mansi Gupta, Nikhar Waghela, Shivank Garg
In this attack, adversaries aim to determine whether a particular point was used during the training of a target model.
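The membership-inference setting described above can be sketched with a minimal loss-threshold attack: the adversary predicts "member" whenever the target model's loss on a point falls below a calibrated threshold, exploiting the fact that training points tend to incur lower loss. This is an illustrative toy sketch, not the method of the paper; the losses and threshold below are made-up numbers.

```python
# Minimal sketch of a loss-threshold membership-inference attack:
# guess "member" when the target model's loss on a point is below
# a threshold. The losses here are illustrative toy values, not
# real model outputs.

def infer_membership(losses, threshold):
    """Predict True (member) for each loss below the threshold."""
    return [loss < threshold for loss in losses]

# Toy per-example losses: members tend to have lower loss because
# the model has already fit them during training.
member_losses = [0.05, 0.10, 0.20, 0.08]
nonmember_losses = [0.90, 0.60, 1.20, 0.75]

threshold = 0.5  # hypothetically calibrated, e.g. on shadow models
preds_members = infer_membership(member_losses, threshold)
preds_nonmembers = infer_membership(nonmember_losses, threshold)

# Attack accuracy on this toy split: correct means predicting True
# for members and False for non-members.
correct = preds_members.count(True) + preds_nonmembers.count(False)
accuracy = correct / (len(member_losses) + len(nonmember_losses))
print(accuracy)
```

In practice the threshold is chosen without access to the target's training set, for instance from losses of shadow models trained on similar data, which is why real attacks are far noisier than this toy separation suggests.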