Search Results for author: Giscard Biamby

Found 5 papers, 2 papers with code

See, Say, and Segment: Teaching LMMs to Overcome False Premises

no code implementations • 13 Dec 2023 • Tsung-Han Wu, Giscard Biamby, David Chan, Lisa Dunlap, Ritwik Gupta, Xudong Wang, Joseph E. Gonzalez, Trevor Darrell

Current open-source Large Multimodal Models (LMMs) excel at tasks such as open-vocabulary language grounding and segmentation but can suffer under false premises when queries imply the existence of something that is not actually present in the image.

G^3: Geolocation via Guidebook Grounding

1 code implementation • 28 Nov 2022 • Grace Luo, Giscard Biamby, Trevor Darrell, Daniel Fried, Anna Rohrbach

We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game.

Twitter-COMMs: Detecting Climate, COVID, and Military Multimodal Misinformation

1 code implementation • NAACL 2022 • Giscard Biamby, Grace Luo, Trevor Darrell, Anna Rohrbach

Detecting out-of-context media, such as "mis-captioned" images on Twitter, is a relevant problem, especially in domains of high public significance.

Tasks: Misinformation

Region-level Active Detector Learning

no code implementations • 20 Aug 2021 • Michael Laielli, Giscard Biamby, Dian Chen, Ritwik Gupta, Adam Loeffler, Phat Dat Nguyen, Ross Luo, Trevor Darrell, Sayna Ebrahimi

Active learning for object detection is conventionally achieved by applying techniques developed for classification in a way that aggregates individual detections into image-level selection criteria.

Tasks: Active Learning, Object, +2
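The abstract above refers to the conventional baseline of aggregating per-detection scores into an image-level selection criterion. Below is a minimal sketch of that baseline, not the paper's region-level method; the entropy scoring, the mean/max/sum reductions, and all names and values are illustrative assumptions.

```python
import numpy as np

def detection_entropy(confidences):
    """Binary entropy of per-detection confidence scores, used as an uncertainty proxy."""
    p = np.clip(np.asarray(confidences, dtype=float), 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def image_level_score(detections, reduce="mean"):
    """Aggregate per-detection uncertainties into a single image-level score.

    `detections` holds the detector confidences for one image; images with no
    detections get a zero score. The reduction (mean/max/sum) is a design
    choice that conventional approaches vary.
    """
    if not detections:
        return 0.0
    ent = detection_entropy(detections)
    return {"mean": ent.mean(), "max": ent.max(), "sum": ent.sum()}[reduce]

def select_images_to_label(unlabeled_pool, budget, reduce="mean"):
    """Rank unlabeled images by aggregated uncertainty and pick the top `budget`."""
    scored = sorted(
        unlabeled_pool.items(),
        key=lambda kv: image_level_score(kv[1], reduce),
        reverse=True,
    )
    return [image_id for image_id, _ in scored[:budget]]

# Hypothetical per-image detector confidences
pool = {"img_001": [0.55, 0.62], "img_002": [0.97, 0.99, 0.95], "img_003": [0.51]}
print(select_images_to_label(pool, budget=2))
```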

Minimax Active Learning

no code implementations • 18 Dec 2020 • Sayna Ebrahimi, William Gan, Dian Chen, Giscard Biamby, Kamyar Salahi, Michael Laielli, Shizhan Zhu, Trevor Darrell

Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.

Tasks: Active Learning, Clustering, +2
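For readers unfamiliar with the setup described in the abstract above, here is a generic pool-based active-learning loop. It is not the paper's minimax method: the entropy-based query rule, the sklearn-style `predict_proba` interface, and the `oracle` callback are all placeholder assumptions standing in for the representativeness/uncertainty criteria real methods use.

```python
import numpy as np

def query_batch(model, unlabeled_X, batch_size):
    """Select the samples the current model is least certain about (highest predictive entropy)."""
    probs = model.predict_proba(unlabeled_X)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-batch_size:]

def active_learning_loop(model, labeled, unlabeled_X, oracle, rounds=5, batch_size=32):
    """Alternate between fitting on labeled data and querying the oracle (human annotator) for new labels."""
    X_lab, y_lab = labeled
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        idx = query_batch(model, unlabeled_X, batch_size)
        X_new = unlabeled_X[idx]
        y_new = oracle(X_new)  # placeholder for human annotation
        X_lab = np.concatenate([X_lab, X_new])
        y_lab = np.concatenate([y_lab, y_new])
        unlabeled_X = np.delete(unlabeled_X, idx, axis=0)
    return model
```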
