Search Results for author: Samar Khanna

Found 11 papers, 7 papers with code

ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts

no code implementations • 16 Jun 2024 • Samar Khanna, Medhanie Irgau, David B. Lobell, Stefano Ermon

An under-explored question in PEFT is how to extend the pre-training phase without supervised labels; that is, can we adapt a pre-trained foundation model to a new domain via efficient self-supervised pre-training on that domain?

parameter-efficient fine-tuning • Transfer Learning +1
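
The general recipe — continuing pre-training on the new domain with a self-supervised objective while updating only a small set of added parameters — can be sketched with off-the-shelf tools. The snippet below is a minimal sketch only, assuming the Hugging Face `transformers` and `peft` libraries, an MAE-style backbone, and LoRA adapters on the attention projections; the checkpoint name, adapter placement, and training loop are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of parameter-efficient *extended pre-training* in the spirit of ExPLoRA.
# The MAE backbone, LoRA placement, and data handling are placeholders, not the paper's recipe.
import torch
from transformers import ViTMAEForPreTraining
from peft import LoraConfig, get_peft_model

model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

# Wrap the frozen backbone with low-rank adapters on the attention projections.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"], lora_dropout=0.1)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA weights are trainable

optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)

def extended_pretrain_step(pixel_values):
    """One self-supervised step on unlabeled new-domain images (e.g. satellite tiles)."""
    outputs = model(pixel_values=pixel_values)  # MAE reconstruction loss, no labels needed
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```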

SpotNet: An Image Centric, Lidar Anchored Approach To Long Range Perception

no code implementations • 24 May 2024 • Louis Foucard, Samar Khanna, Yi Shi, Chi-Kuei Liu, Quinn Z Shen, Thuyen Ngo, Zi-Xiang Xia

In this paper, we propose SpotNet: a fast, single-stage, image-centric but LiDAR-anchored approach to long-range 3D object detection.

3D Object Detection • object-detection +1

Large Language Models are Geographically Biased

1 code implementation • 5 Feb 2024 • Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon

Initially, we demonstrate that LLMs are capable of making accurate zero-shot geospatial predictions in the form of ratings that show strong monotonic correlation with ground truth (Spearman's $\rho$ of up to 0.89).

Fairness
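
For context, the monotonic agreement reported above is an ordinary Spearman rank correlation between model-assigned ratings and ground-truth values. The snippet below is purely illustrative, with made-up numbers and `scipy` assumed available; it only shows how such a $\rho$ would be computed.

```python
# Illustrative only: measuring the monotonic (Spearman) correlation between LLM-assigned
# ratings and ground-truth indicator values. The numbers here are made up.
from scipy.stats import spearmanr

llm_ratings  = [7.5, 2.0, 9.0, 4.5, 6.0]        # hypothetical zero-shot ratings from an LLM
ground_truth = [0.71, 0.18, 0.93, 0.40, 0.55]   # hypothetical ground-truth values

rho, p_value = spearmanr(llm_ratings, ground_truth)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```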

DiffusionSat: A Generative Foundation Model for Satellite Imagery

1 code implementation • 6 Dec 2023 • Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David Lobell, Stefano Ermon

Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale generative foundation model for satellite imagery.

Crop Yield Prediction • Image Generation +1

GeoLLM: Extracting Geospatial Knowledge from Large Language Models

1 code implementation • 10 Oct 2023 • Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon

With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset.
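
As a rough illustration of the prompting setup, the sketch below builds a location-grounded prompt from coordinates and nearby map context and leaves the model call as a placeholder. The prompt template, the example location details, and the `query_llm` helper are assumptions for illustration, not the paper's released implementation.

```python
# Hypothetical illustration of a GeoLLM-style geospatial prompt; the template and
# the `query_llm` placeholder are assumptions, not the paper's code.
def build_geo_prompt(lat: float, lon: float, address: str, nearby_places: list[str]) -> str:
    places = "\n".join(f"- {p}" for p in nearby_places)
    return (
        f"Coordinates: ({lat:.4f}, {lon:.4f})\n"
        f"Address: {address}\n"
        f"Nearby places:\n{places}\n"
        "On a scale of 0.0 to 9.9, rate the population density at this location:"
    )

prompt = build_geo_prompt(
    37.4275, -122.1697,
    "Stanford, Santa Clara County, California, USA",
    ["Palo Alto (3 km)", "Menlo Park (5 km)", "San Jose (25 km)"],
)
# rating = query_llm(prompt)  # placeholder for a call to GPT-3.5, Llama 2, etc.
```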

Denoising Diffusion Bridge Models

4 code implementations • 29 Sep 2023 • Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon

However, for many applications such as image editing, the model input comes from a distribution that is not random noise.

Denoising • Image Generation
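
The bridge idea can be illustrated with a toy Brownian-bridge marginal that injects noise along an interpolation between paired endpoints, instead of diffusing data all the way to Gaussian noise. The snippet below is a generic sketch under that assumption, not the forward process actually defined in the paper.

```python
# Toy sketch of the *bridge* idea: noise is added along an interpolation between paired
# endpoints x0 (e.g. the target image) and x1 (e.g. the source image to edit). This is a
# generic Brownian-bridge marginal, not DDBM's exact forward process.
import torch

def brownian_bridge_sample(x0, x1, t, sigma=1.0):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and x1 (t=1)."""
    t = t.view(-1, *([1] * (x0.dim() - 1)))   # broadcast the timestep over image dims
    mean = (1.0 - t) * x0 + t * x1            # linear interpolation of the endpoints
    std = sigma * torch.sqrt(t * (1.0 - t))   # noise vanishes at both endpoints
    return mean + std * torch.randn_like(x0)

# Usage: a batch of paired 3x64x64 images and random timesteps in [0, 1).
x0, x1 = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
t = torch.rand(4)
xt = brownian_bridge_sample(x0, x1, t)
```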

Differentiable Weight Masks for Domain Transfer

no code implementations • 26 Aug 2023 • Samar Khanna, Skanda Vaidyanath, Akash Velu

For instance, given a network that has been trained on a source task, we would like to re-train this network on a similar, yet different, target task while maintaining its performance on the source task.
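
One common way to realize such a scheme is a per-weight mask, learned on the target task while the pretrained weights stay frozen. The sketch below assumes a simple sigmoid-gated mask on a linear layer; the paper's actual parameterization and training objective may differ.

```python
# Toy sketch of a differentiable weight mask: the pretrained weight is frozen and a
# per-weight mask (logits passed through a sigmoid) is learned on the target task.
# Illustrative only; not necessarily the paper's exact parameterization.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(pretrained.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(pretrained.bias.detach(), requires_grad=False)
        # One logit per weight; sigmoid(logits) in (0, 1) softly gates each connection.
        self.mask_logits = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        masked_weight = self.weight * torch.sigmoid(self.mask_logits)
        return nn.functional.linear(x, masked_weight, self.bias)

# Usage: wrap a layer from the source-task network and train only the mask logits.
layer = MaskedLinear(nn.Linear(128, 64))
opt = torch.optim.Adam([layer.mask_logits], lr=1e-3)
```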

Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting

no code implementations • 20 Jul 2023 • Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo

We find that logically invalid reasoning prompts do indeed achieve performance gains on BBH tasks similar to those of logically valid reasoning prompts.

Language Modeling • Language Modelling +1
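
To make the contrast concrete, here is a purely hypothetical pair of prompts in the spirit of the comparison above (not examples taken from the paper): both reach the same final answer, but the second one's intermediate steps are logically wrong.

```python
# Hypothetical BBH-style (date understanding) example, for illustration only.
question = "Q: Today is 3 Jan 1990. What is the date one week from today?"

valid_reasoning = (
    "A: One week is 7 days. 3 Jan 1990 plus 7 days is 10 Jan 1990. "
    "So the answer is 01/10/1990."
)

invalid_reasoning = (
    "A: One week is 5 days. 3 Jan 1990 plus 5 days is 10 Jan 1990. "
    "So the answer is 01/10/1990."
)
```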
