Search Results for author: Samar Khanna

Found 8 papers, 4 papers with code

Large Language Models are Geographically Biased

1 code implementation • 5 Feb 2024 • Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon

Initially, we demonstrate that LLMs are capable of making accurate zero-shot geospatial predictions in the form of ratings that show strong monotonic correlation with ground truth (Spearman's $\rho$ of up to 0.89; see the sketch below).

Fairness
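
The rank-correlation metric quoted above can be computed mechanically; here is a minimal sketch using scipy's `spearmanr` on toy data (the ratings and ground-truth values below are invented for illustration, not taken from the paper):

```python
# Minimal sketch: rank correlation between zero-shot LLM ratings and
# ground-truth values, the metric reported as Spearman's rho.
# The toy data below is illustrative only.
from scipy.stats import spearmanr

llm_ratings = [2.0, 5.5, 3.0, 9.0, 7.5]        # hypothetical 0-10 ratings from an LLM
ground_truth = [0.11, 0.42, 0.25, 0.93, 0.68]  # hypothetical ground-truth values

rho, p_value = spearmanr(llm_ratings, ground_truth)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")  # rho = 1.00 for this toy data
```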

DiffusionSat: A Generative Foundation Model for Satellite Imagery

no code implementations • 6 Dec 2023 • Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David Lobell, Stefano Ermon

Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale generative foundation model for satellite imagery.

Crop Yield Prediction • Image Generation

GeoLLM: Extracting Geospatial Knowledge from Large Language Models

1 code implementation • 10 Oct 2023 • Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon

With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset.

Denoising Diffusion Bridge Models

1 code implementation • 29 Sep 2023 • Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon

However, for many applications such as image editing, the model input comes from a distribution that is not random noise (see the bridge-process sketch below).

Denoising • Image Generation
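
The quoted sentence motivates bridge processes, which pin a diffusion to both endpoints so that the input end need not be pure noise. Below is a minimal numpy sketch of a Brownian bridge, the simplest such process; it illustrates the general idea only and is not the paper's parameterization:

```python
# Minimal sketch of a Brownian bridge between two endpoints x0 and x1.
# At t=0 the sample equals x0, at t=1 it equals x1; in between it is
# Gaussian with mean (1-t)*x0 + t*x1 and variance t*(1-t)*sigma^2.
# Bridge-style diffusion models build on processes like this, so the
# input end can be a structured image rather than noise. Illustrative only.
import numpy as np

def brownian_bridge_sample(x0, x1, t, sigma=1.0, rng=None):
    """Sample the bridge at time t in [0, 1], given endpoints x0 and x1."""
    rng = rng or np.random.default_rng()
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

x0 = np.zeros((8, 8))   # e.g., a "source" image (here, all zeros)
x1 = np.ones((8, 8))    # e.g., a "target" image (here, all ones)
xt = brownian_bridge_sample(x0, x1, t=0.5)
print(xt.mean())        # close to 0.5: halfway between the endpoints
```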

Differentiable Weight Masks for Domain Transfer

no code implementations • 26 Aug 2023 • Samar Khanna, Skanda Vaidyanath, Akash Velu

For instance, given a network that has been trained on a source task, we would like to re-train this network on a similar, yet different, target task while maintaining its performance on the source task.
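
One common way to realize this idea is to freeze the source-task weights and learn an elementwise, differentiable mask over them; here is a minimal PyTorch sketch under that assumption (the class and setup are illustrative, not necessarily the authors' exact formulation):

```python
# Minimal sketch: a frozen source-task linear layer whose weights are
# modulated by a learnable, differentiable mask for the target task.
# Only the mask logits receive gradients, so the source weights stay
# intact. Illustrative assumption, not necessarily the paper's method.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, source_layer: nn.Linear):
        super().__init__()
        self.weight = source_layer.weight.detach()  # frozen source weights
        self.bias = source_layer.bias.detach()
        self.mask_logits = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)      # soft mask in (0, 1)
        return nn.functional.linear(x, self.weight * mask, self.bias)

source = nn.Linear(16, 4)        # pretend this was trained on the source task
layer = MaskedLinear(source)
out = layer(torch.randn(2, 16))  # gradients flow only into mask_logits
print(out.shape)                 # torch.Size([2, 4])
```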

Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting

no code implementations • 20 Jul 2023 • Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo

We find that logically invalid reasoning prompts do indeed achieve performance gains on BBH tasks similar to those of logically valid reasoning prompts (see the illustrative pair below).

Language Modelling
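
For concreteness, here is a hypothetical pair of few-shot exemplars of the kind being contrasted; both strings are invented for illustration and do not appear in the paper:

```python
# Hypothetical illustration of the two prompt conditions being compared:
# a logically valid chain of thought versus a logically invalid one that
# still reaches the correct answer. Neither string is from the paper.
valid_prompt = (
    "Q: Ann has 3 apples and buys 2 more. How many apples does she have?\n"
    "A: She starts with 3 and gains 2, and 3 + 2 = 5. The answer is 5."
)
invalid_prompt = (
    "Q: Ann has 3 apples and buys 2 more. How many apples does she have?\n"
    "A: Apples are fruit, and fruit comes in pairs, so 3 + 2 = 5. The answer is 5."
)
# The reported finding: few-shot exemplars with invalid reasoning like the
# second string yield gains comparable to valid-reasoning exemplars.
```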
