1 code implementation • 8 Oct 2024 • Jeremy Andrew Irvin, Emily Ruoyu Liu, Joyce Chuyi Chen, Ines Dormoy, Jinyoung Kim, Samar Khanna, Zhuo Zheng, Stefano Ermon
Large vision and language assistants have enabled new capabilities for interpreting natural images.
no code implementations • 16 Jun 2024 • Samar Khanna, Medhanie Irgau, David B. Lobell, Stefano Ermon
An under-explored question in PEFT is whether the pre-training phase can be extended without supervised labels; that is, can we adapt a pre-trained foundation model to a new domain via efficient self-supervised pre-training on that domain?
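A minimal, hypothetical sketch of the idea raised above: continue self-supervised pre-training on a new domain while updating only a small set of low-rank (LoRA-style) parameters. The tiny encoder, masking scheme, and reconstruction loss here are illustrative placeholders, not the authors' actual architecture or training recipe.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze pre-trained weights
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero update at init: start from the base layer

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))

# Stand-in "pre-trained" encoder; in practice this would be a large foundation model.
encoder = nn.Sequential(nn.Linear(256, 512), nn.GELU(), nn.Linear(512, 256))
encoder[0] = LoRALinear(encoder[0])
encoder[2] = LoRALinear(encoder[2])

# Only the low-rank adapter parameters are optimized.
optimizer = torch.optim.AdamW(
    [p for p in encoder.parameters() if p.requires_grad], lr=1e-4
)

# Self-supervised objective on the new domain: reconstruct masked features
# from unlabeled target-domain samples (random tensors used as placeholders).
x = torch.randn(32, 256)
mask = (torch.rand_like(x) > 0.5).float()
loss = nn.functional.mse_loss(encoder(x * mask), x)
loss.backward()
optimizer.step()
```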
no code implementations • 24 May 2024 • Louis Foucard, Samar Khanna, Yi Shi, Chi-Kuei Liu, Quinn Z Shen, Thuyen Ngo, Zi-Xiang Xia
In this paper, we propose SpotNet: a fast, single-stage, image-centric yet LiDAR-anchored approach for long-range 3D object detection.
1 code implementation • 5 Feb 2024 • Rohin Manvi, Samar Khanna, Marshall Burke, David Lobell, Stefano Ermon
Initially, we demonstrate that LLMs are capable of making accurate zero-shot geospatial predictions in the form of ratings that show strong monotonic correlation with ground truth (Spearman's ρ of up to 0.89).
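A short sketch of the metric cited above: Spearman's rank correlation between model-predicted ratings and ground-truth values. The arrays below are hypothetical placeholders, not data from the paper.

```python
from scipy.stats import spearmanr

predicted_ratings = [3.1, 7.8, 5.2, 9.0, 1.4, 6.6]   # hypothetical LLM ratings
ground_truth      = [2.9, 8.1, 4.7, 9.5, 1.1, 6.0]   # hypothetical target values

# Spearman's rho measures how well the ranking of predictions matches the
# ranking of the ground truth, regardless of the absolute scale of either.
rho, p_value = spearmanr(predicted_ratings, ground_truth)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```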
1 code implementation • 6 Dec 2023 • Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David Lobell, Stefano Ermon
Our method outperforms previous state-of-the-art methods for satellite image generation and is the first large-scale generative foundation model for satellite imagery.
1 code implementation • 10 Oct 2023 • Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon
With GeoLLM, we observe that GPT-3.5 outperforms Llama 2 and RoBERTa by 19% and 51% respectively, suggesting that the performance of our method scales well with the size of the model and its pretraining dataset.
4 code implementations • 29 Sep 2023 • Linqi Zhou, Aaron Lou, Samar Khanna, Stefano Ermon
However, for many applications such as image editing, the model input comes from a distribution that is not random noise.
no code implementations • 26 Aug 2023 • Samar Khanna, Skanda Vaidyanath, Akash Velu
For instance, given a network that has been trained on a source task, we would like to re-train this network on a similar, yet different, target task while maintaining its performance on the source task.
no code implementations • 20 Jul 2023 • Rylan Schaeffer, Kateryna Pistunova, Samar Khanna, Sarthak Consul, Sanmi Koyejo
We find that the logically invalid reasoning prompts do indeed achieve similar performance gains on BBH tasks as logically valid reasoning prompts.
1 code implementation • 17 Jul 2022 • Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon
Unsupervised pre-training methods for large vision models have been shown to enhance performance on downstream supervised tasks.
1 code implementation • 14 Apr 2022 • Samar Khanna, Bram Wallace, Kavita Bala, Bharath Hariharan
Geographic variance in satellite imagery impacts the ability of machine learning models to generalise to new regions.