Search Results for author: Senthil Purushwalkam

Found 20 papers, 7 papers with code

Trust but Verify: Programmatic VLM Evaluation in the Wild

no code implementations 17 Oct 2024 Viraj Prabhu, Senthil Purushwalkam, An Yan, Caiming Xiong, Ran Xu

Next, to evaluate free-form model responses to queries in PROVE, we propose a programmatic evaluation strategy that measures both the helpfulness and truthfulness of a response within a unified scene graph-based framework.
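The scene-graph-based scoring described above can be sketched as follows. This is a hypothetical illustration, not PROVE's actual implementation: it assumes the response and query have already been parsed upstream into (subject, relation, object) triples, and scores truthfulness as the fraction of response claims supported by the image's scene graph and helpfulness as the fraction of queried facts the response covers.

```python
# Hypothetical sketch of scene-graph-based response scoring. Triple
# extraction from free-form text is assumed to happen upstream; the
# function names and toy data below are illustrative only.

def score_response(response_tuples, query_tuples, scene_graph):
    """All arguments are sets of (subject, relation, object) triples."""
    # Truthfulness: how many of the response's claims the scene graph supports.
    supported = response_tuples & scene_graph
    truthfulness = len(supported) / len(response_tuples) if response_tuples else 0.0
    # Helpfulness: how many of the queried facts the response actually covers.
    covered = query_tuples & response_tuples
    helpfulness = len(covered) / len(query_tuples) if query_tuples else 0.0
    return truthfulness, helpfulness

graph = {("dog", "on", "sofa"), ("sofa", "color", "red")}
resp = {("dog", "on", "sofa"), ("dog", "color", "black")}  # one claim unsupported
query = {("dog", "on", "sofa")}
print(score_response(resp, query, graph))  # (0.5, 1.0)
```

Scoring both axes in one framework is what lets the benchmark penalize hallucinated claims without rewarding uninformative answers.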

Benchmarking Language Modelling +1

FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"

1 code implementation30 Sep 2024 Yifei Ming, Senthil Purushwalkam, Shrey Pandit, Zixuan Ke, Xuan-Phi Nguyen, Caiming Xiong, Shafiq Joty

Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust.

Counterfactual Hallucination +3

SFR-RAG: Towards Contextually Faithful LLMs

no code implementations 16 Sep 2024 Xuan-Phi Nguyen, Shrey Pandit, Senthil Purushwalkam, Austin Xu, Hailin Chen, Yifei Ming, Zixuan Ke, Silvio Savarese, Caiming Xiong, Shafiq Joty

Retrieval Augmented Generation (RAG), a paradigm that integrates external contextual information with large language models (LLMs) to enhance factual accuracy and relevance, has emerged as a pivotal area in generative AI.

Counterfactual Hallucination +3

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models

1 code implementation 25 Jan 2024 Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik

We propose a novel architecture (BootPIG) that allows a user to provide reference images of an object in order to guide the appearance of a concept in the generated images.

Image Segmentation Personalized Image Generation +2

Diffusion Model Alignment Using Direct Preference Optimization

1 code implementation CVPR 2024 Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, Nikhil Naik

Large language models (LLMs) are fine-tuned using human comparison data with Reinforcement Learning from Human Feedback (RLHF) methods to make them better aligned with users' preferences.
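The DPO objective that this work adapts to diffusion models can be written down in a few lines. The sketch below is the standard (language-model) DPO loss, not the paper's diffusion-specific variant: it takes log-probabilities of a preferred ("w") and dispreferred ("l") sample under the policy and a frozen reference model, and pushes the policy's implicit reward margin toward the human preference. The toy values are illustrative.

```python
# Minimal sketch of the standard DPO loss (Rafailov et al.), which this
# paper extends to diffusion models. Variable names are illustrative.
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Implicit reward of each sample is the beta-scaled log-ratio between
    # the policy and the frozen reference model; the loss is the negative
    # log-sigmoid of the preferred-minus-dispreferred reward margin.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# When the policy already favors the preferred sample relative to the
# reference, the margin is positive and the loss is small:
print(round(dpo_loss(-10.0, -30.0, -20.0, -20.0, beta=0.1), 4))  # 0.1269
```

Because the reward is implicit in the log-ratios, no separately trained reward model is needed, which is what makes the approach attractive compared to full RLHF pipelines.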

XGen-7B Technical Report

1 code implementation 7 Sep 2023 Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong

Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.


The Challenges of Continuous Self-Supervised Learning

no code implementations 23 Mar 2022 Senthil Purushwalkam, Pedro Morgado, Abhinav Gupta

As a result, SSL holds the promise to learn representations from data in-the-wild, i.e., without the need for finite and static datasets.

Representation Learning Self-Supervised Learning

The Unsurprising Effectiveness of Pre-Trained Vision Models for Control

no code implementations 7 Mar 2022 Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, Abhinav Gupta

In this context, we revisit and study the role of pre-trained visual representations for control, and in particular representations trained on large-scale computer vision datasets.

The Functional Correspondence Problem

no code implementations ICCV 2021 Zihang Lai, Senthil Purushwalkam, Abhinav Gupta

For example, what are the correspondences between a bottle and a shoe for the task of pounding or the task of pouring?

Aligning Videos in Space and Time

no code implementations ECCV 2020 Senthil Purushwalkam, Tian Ye, Saurabh Gupta, Abhinav Gupta

During training, given a pair of videos, we compute cycles that connect patches in a given frame in the first video by matching through frames in the second video.
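The cycle construction described above can be illustrated with a nearest-neighbor sketch. This is not the paper's exact formulation (which operates on patches within video frames and trains through the matches): it simply matches each patch feature from the first video to its nearest neighbor in the second, matches back, and reports where each cycle lands, so a consistent cycle returns to its starting patch.

```python
# Illustrative nearest-neighbor cycle check, not the paper's exact method.
# Features are assumed to be L2-normalized, so dot product = cosine similarity.
import numpy as np

def cycle_endpoints(feats1, feats2):
    """feats1: (N, D), feats2: (M, D) L2-normalized patch features."""
    fwd = (feats1 @ feats2.T).argmax(axis=1)  # video1 -> video2 matches
    bwd = (feats2 @ feats1.T).argmax(axis=1)  # video2 -> video1 matches
    return bwd[fwd]                           # index where each cycle lands

rng = np.random.default_rng(0)
f1 = rng.normal(size=(4, 8))
f1 /= np.linalg.norm(f1, axis=1, keepdims=True)
f2 = f1 + 0.01 * rng.normal(size=(4, 8))      # video2 ~ slightly perturbed video1
f2 /= np.linalg.norm(f2, axis=1, keepdims=True)
print(cycle_endpoints(f1, f2))  # consistent cycles return [0 1 2 3]
```

Penalizing cycles that fail to return to their start gives a supervision signal without any manual correspondence labels.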

Task-Driven Modular Networks for Zero-Shot Compositional Learning

1 code implementation ICCV 2019 Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, Marc'Aurelio Ranzato

When extending the evaluation to the generalized setting which accounts also for pairs seen during training, we discover that naive baseline methods perform similarly or better than current approaches.

Attribute Novel Concepts +1

Bounce and Learn: Modeling Scene Dynamics with Real-World Bounces

no code implementations ICLR 2019 Senthil Purushwalkam, Abhinav Gupta, Danny M. Kaufman, Bryan Russell

To achieve our results, we introduce the Bounce Dataset comprising 5K RGB-D videos of bouncing trajectories of a foam ball to probe surfaces of varying shapes and materials in everyday scenes including homes and offices.

Pose from Action: Unsupervised Learning of Pose Features based on Motion

no code implementations 18 Sep 2016 Senthil Purushwalkam, Abhinav Gupta

We propose an unsupervised method to learn pose features from videos that exploits a signal which is complementary to appearance and can be used as supervision: motion.

Action Recognition In Videos Optical Flow Estimation +2

Stochastic Multiple Choice Learning for Training Diverse Deep Ensembles

no code implementations NeurIPS 2016 Stefan Lee, Senthil Purushwalkam, Michael Cogswell, Viresh Ranjan, David Crandall, Dhruv Batra

Many practical perception systems exist within larger processes that include interactions with users or additional components capable of evaluating the quality of predicted solutions.

Multiple-choice

Combining the Best of Graphical Models and ConvNets for Semantic Segmentation

no code implementations 14 Dec 2014 Michael Cogswell, Xiao Lin, Senthil Purushwalkam, Dhruv Batra

We present a two-module approach to semantic segmentation that incorporates Convolutional Networks (CNNs) and Graphical Models.

Segmentation Semantic Segmentation
