Search Results for author: Omid Poursaeed

Found 16 papers, 4 papers with code

Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs

no code implementations 11 Apr 2024 Kanchana Ranasinghe, Satya Narayan Shukla, Omid Poursaeed, Michael S. Ryoo, Tsung-Yu Lin

Integration of Large Language Models (LLMs) into visual domain tasks, resulting in visual-LLMs (V-LLMs), has enabled exceptional performance in vision-language tasks, particularly for visual question answering (VQA).

Descriptive Hallucination +2

Universal Pyramid Adversarial Training for Improved ViT Performance

no code implementations 26 Dec 2023 Ping-Yeh Chiang, Yipin Zhou, Omid Poursaeed, Satya Narayan Shukla, Ashish Shah, Tom Goldstein, Ser-Nam Lim

Recently, Pyramid Adversarial training (Herrmann et al., 2022) has been shown to be very effective for improving clean accuracy and distribution-shift robustness of vision transformers.

Revisiting Kernel Temporal Segmentation as an Adaptive Tokenizer for Long-form Video Understanding

no code implementations 20 Sep 2023 Mohamed Afham, Satya Narayan Shukla, Omid Poursaeed, Pengchuan Zhang, Ashish Shah, Ser-Nam Lim

While most modern video understanding models operate on short-range clips, real-world videos are often several minutes long with semantically consistent segments of variable length.

Temporal Action Localization Video Classification +1

Open Vocabulary Semantic Segmentation with Patch Aligned Contrastive Learning

no code implementations CVPR 2023 Jishnu Mukhoti, Tsung-Yu Lin, Omid Poursaeed, Rui Wang, Ashish Shah, Philip H. S. Torr, Ser-Nam Lim

We introduce Patch Aligned Contrastive Learning (PACL), a modified compatibility function for CLIP's contrastive loss, intending to train an alignment between the patch tokens of the vision encoder and the CLS token of the text encoder.

Contrastive Learning Image Classification +5
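The compatibility function described above can be illustrated with a small sketch: cosine similarities between every vision patch token and the text CLS embedding are pooled into a single image-text score that feeds a CLIP-style contrastive loss. This is a minimal, hypothetical reading of the abstract, not the authors' implementation; all shapes, names, and the pooling choice are assumptions.

```python
import torch
import torch.nn.functional as F

def patch_aligned_compatibility(patch_tokens, text_cls, temperature=0.07):
    """patch_tokens: [B, N, D] vision patch embeddings; text_cls: [B, D] text CLS embeddings."""
    patches = F.normalize(patch_tokens, dim=-1)           # [B, N, D]
    text = F.normalize(text_cls, dim=-1)                  # [B, D]
    # Similarity of every image's patches to every caption: [B_img, B_txt, N]
    sim = torch.einsum("ind,jd->ijn", patches, text)
    # Patch weights from the per-pair similarities, then weighted pooling over patches.
    weights = sim.softmax(dim=-1)                         # [B_img, B_txt, N]
    compat = (weights * sim).sum(dim=-1)                  # [B_img, B_txt]
    return compat / temperature

def clip_style_loss(compat):
    """Symmetric InfoNCE over the image-text compatibility matrix."""
    targets = torch.arange(compat.size(0), device=compat.device)
    return 0.5 * (F.cross_entropy(compat, targets) + F.cross_entropy(compat.t(), targets))

# Example with random features standing in for encoder outputs:
logits = patch_aligned_compatibility(torch.randn(4, 49, 512), torch.randn(4, 512))
loss = clip_style_loss(logits)
```

Because the score is built from per-patch similarities rather than a single pooled image embedding, the same weights can be read out at inference time as a coarse patch-to-text alignment, which is what makes an open-vocabulary segmentation readout plausible.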

Unifying Tracking and Image-Video Object Detection

no code implementations 20 Nov 2022 Peirong Liu, Rui Wang, Pengchuan Zhang, Omid Poursaeed, Yipin Zhou, Xuefei Cao, Sreya Dutta Roy, Ashish Shah, Ser-Nam Lim

We propose TrIVD (Tracking and Image-Video Detection), the first framework that unifies image OD, video OD, and MOT within one end-to-end model.

Multi-Object Tracking Object +2

Robustness and Generalization via Generative Adversarial Training

no code implementations ICCV 2021 Omid Poursaeed, Tianxing Jiang, Harry Yang, Serge Belongie, Ser-Nam Lim

Adversarial training with these examples enables the model to withstand a wide range of attacks by observing a variety of input alterations during training.

Object Detection
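As a rough illustration of the training recipe sketched in the abstract above, the step below mixes clean examples with examples altered by a pretrained generative model. The `adv_generator` is a hypothetical placeholder, not the paper's architecture, and the loss weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, adv_generator, optimizer, images, labels, adv_weight=1.0):
    """One update on a mix of clean and generated adversarial examples."""
    model.train()
    with torch.no_grad():
        adv_images = adv_generator(images)        # generated input alterations
    logits_clean = model(images)
    logits_adv = model(adv_images)
    loss = F.cross_entropy(logits_clean, labels) + adv_weight * F.cross_entropy(logits_adv, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```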

Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling

no code implementations ECCV 2020 Omid Poursaeed, Matthew Fisher, Noam Aigerman, Vladimir G. Kim

We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations: (i) an explicit representation via an atlas, i.e., embeddings of 2D domains into 3D; (ii) an implicit-function representation, i.e., a scalar function over the 3D volume, with its levels denoting surfaces.

Surface Reconstruction
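The two representations named in the abstract can be sketched as a pair of latent-conditioned MLPs: an atlas decoder embedding 2D samples into 3D, and a scalar implicit function over 3D whose level set denotes the surface. Layer sizes, the zero-level-set convention, and the consistency loss below are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class AtlasDecoder(nn.Module):
    """Explicit representation: embeds 2D UV samples into 3D, conditioned on a shape code."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, uv, z):
        # uv: [B, N, 2] samples in the 2D domain; z: [B, latent_dim] shape code
        z = z.unsqueeze(1).expand(-1, uv.size(1), -1)
        return self.mlp(torch.cat([uv, z], dim=-1))      # [B, N, 3] surface points

class ImplicitFunction(nn.Module):
    """Implicit representation: scalar field over 3D whose level set denotes the surface."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz, z):
        z = z.unsqueeze(1).expand(-1, xyz.size(1), -1)
        return self.mlp(torch.cat([xyz, z], dim=-1)).squeeze(-1)   # [B, N] scalar values

# One way to couple them: points produced by the atlas should lie on the implicit surface
# (taken here as the zero level set), giving a simple consistency loss.
atlas, implicit = AtlasDecoder(), ImplicitFunction()
uv, z = torch.rand(2, 1024, 2), torch.randn(2, 128)
surface_pts = atlas(uv, z)
consistency_loss = implicit(surface_pts, z).abs().mean()
```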

Fine-grained Synthesis of Unrestricted Adversarial Examples

no code implementations 20 Nov 2019 Omid Poursaeed, Tianxing Jiang, Yordanos Goshu, Harry Yang, Serge Belongie, Ser-Nam Lim

We propose a novel approach for generating unrestricted adversarial examples by manipulating fine-grained aspects of image generation.

Image Generation Object Detection +2

Neural Puppet: Generative Layered Cartoon Characters

no code implementations 4 Oct 2019 Omid Poursaeed, Vladimir G. Kim, Eli Shechtman, Jun Saito, Serge Belongie

We capture these subtle changes by applying an image translation network to refine the mesh rendering, providing an end-to-end model to generate new animations of a character with high visual quality.

Generative Adversarial Perturbations

1 code implementation CVPR 2018 Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie

In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models.

General Classification Semantic Segmentation
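One hedged reading of the approach above is a perturbation generator trained against a frozen pre-trained classifier: it emits a small, norm-bounded additive perturbation and is optimized so the perturbed image is misclassified. The generator architecture, bound, and loss below are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Maps an image to an L-infinity-bounded additive perturbation."""
    def __init__(self, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        delta = self.epsilon * torch.tanh(self.net(x))   # bounded perturbation
        return (x + delta).clamp(0.0, 1.0)

def fooling_step(generator, target_model, optimizer, images, labels):
    """Maximize the target model's loss on perturbed images; only the generator is updated."""
    adv = generator(images)
    logits = target_model(adv)
    loss = -F.cross_entropy(logits, labels)   # negative: we want misclassification
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Dropping the dependence on the input (generating a single perturbation tensor) turns the same setup into a universal-perturbation variant.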

Vision-based Real Estate Price Estimation

no code implementations 18 Jul 2017 Omid Poursaeed, Tomas Matera, Serge Belongie

Using deep convolutional neural networks on a large dataset of photos of home interiors and exteriors, we develop a method for estimating the luxury level of real estate photos.
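A minimal sketch of the kind of pipeline the abstract describes, assuming a pretrained backbone fine-tuned to regress a scalar luxury score from a photo; the backbone choice, output head, and loss are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained CNN with its classifier head replaced by a single regression output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)    # scalar luxury score

images = torch.randn(4, 3, 224, 224)             # stand-in for interior/exterior photos
scores = model(images).squeeze(-1)               # predicted luxury levels
loss = nn.functional.mse_loss(scores, torch.rand(4))
```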

Stacked Generative Adversarial Networks

2 code implementations CVPR 2017 Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, Serge Belongie

In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network.

Ranked #11 on Conditional Image Generation on CIFAR-10 (Inception score metric)

Conditional Image Generation
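A toy two-level sketch of the stacking idea described above: each generator maps the representation from the level above (plus noise) down one level, and a frozen encoder level can check that generated outputs encode back to their conditioning features. Dimensions, the encoder stub, and the single consistency loss shown are assumptions; the adversarial and entropy losses used in the paper are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelGenerator(nn.Module):
    """Generates the representation one level below from the level above plus noise."""
    def __init__(self, in_dim, noise_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, h_above, noise):
        return self.net(torch.cat([h_above, noise], dim=-1))

# Top generator: class label -> intermediate feature; bottom generator: feature -> image.
g_top = LevelGenerator(in_dim=10, noise_dim=64, out_dim=256)
g_bottom = LevelGenerator(in_dim=256, noise_dim=64, out_dim=28 * 28)

# A frozen encoder level standing in for one stage of the pretrained bottom-up network.
encoder_bottom = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU()).requires_grad_(False)

y = F.one_hot(torch.randint(0, 10, (8,)), num_classes=10).float()
h1 = g_top(y, torch.randn(8, 64))                       # intermediate representation
x = torch.sigmoid(g_bottom(h1, torch.randn(8, 64)))     # generated (flattened) images
# Consistency: the generated image should encode back to its conditioning feature.
cond_loss = F.mse_loss(encoder_bottom(x), h1)
```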
