Search Results for author: Yumin Suh

Found 18 papers, 2 papers with code

Progressive Token Length Scaling in Transformer Encoders for Efficient Universal Segmentation

no code implementations • 23 Apr 2024 • Abhishek Aich, Yumin Suh, Samuel Schulter, Manmohan Chandraker

With efficiency being a high priority for scaling such models, we observed that the state-of-the-art method Mask2Former spends ~50% of its compute on the transformer encoder alone.

Efficient Transformer Encoders for Mask2Former-style models

no code implementations • 23 Apr 2024 • Manyi Yao, Abhishek Aich, Yumin Suh, Amit Roy-Chowdhury, Christian Shelton, Manmohan Chandraker

The third step is to use the aforementioned derived dataset to train a gating network that predicts the number of encoder layers to be used, conditioned on the input image.
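The idea of a gate that chooses how many encoder layers to run per image can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scalar "complexity score" and the threshold values are hypothetical stand-ins for the learned gating network described above.

```python
def gate_predict(image_features, thresholds=(0.3, 0.6)):
    """Toy gating rule: map a scalar complexity score for the input
    image to the number of encoder layers to execute.
    (Hypothetical stand-in for the learned gating network.)"""
    score = sum(image_features) / len(image_features)  # crude complexity proxy
    if score < thresholds[0]:
        return 2   # easy image: run few encoder layers
    elif score < thresholds[1]:
        return 4
    return 6       # hard image: run the full encoder depth

def run_encoder(tokens, num_layers):
    """Run only the first `num_layers` encoder layers (identity stubs here)."""
    for _ in range(num_layers):
        tokens = [t for t in tokens]  # placeholder for a real encoder layer
    return tokens
```

In the actual method the gate is trained on a derived dataset of (image, sufficient-depth) pairs; the sketch only shows the inference-time control flow.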

Generating Enhanced Negatives for Training Language-Based Object Detectors

1 code implementation • 29 Dec 2023 • Shiyu Zhao, Long Zhao, Vijay Kumar B. G, Yumin Suh, Dimitris N. Metaxas, Manmohan Chandraker, Samuel Schulter

The recent progress in language-based open-vocabulary object detection can be largely attributed to finding better ways of leveraging large-scale data with free-form text annotations.

Object · Object Detection · +1

Efficient Controllable Multi-Task Architectures

no code implementations • ICCV 2023 • Abhishek Aich, Samuel Schulter, Amit K. Roy-Chowdhury, Manmohan Chandraker, Yumin Suh

Further, we present a simple but effective search algorithm that translates user constraints to runtime width configurations of both the shared encoder and task decoders, for sampling the sub-architectures.

Knowledge Distillation

Taming Self-Training for Open-Vocabulary Object Detection

2 code implementations • 11 Aug 2023 • Shiyu Zhao, Samuel Schulter, Long Zhao, Zhixing Zhang, Vijay Kumar B. G, Yumin Suh, Manmohan Chandraker, Dimitris N. Metaxas

This work identifies two challenges of using self-training in OVD: noisy pseudo-labels (PLs) from vision-language models (VLMs) and frequent distribution changes of PLs.

Object · Object Detection · +1

OmniLabel: A Challenging Benchmark for Language-Based Object Detection

no code implementations • ICCV 2023 • Samuel Schulter, Vijay Kumar B G, Yumin Suh, Konstantinos M. Dafnis, Zhixing Zhang, Shiyu Zhao, Dimitris Metaxas

With more than 28K unique object descriptions on over 25K images, OmniLabel provides a challenging benchmark with diverse and complex object descriptions in a naturally open-vocabulary setting.

Object · Object Detection · +1

Confidence and Dispersity Speak: Characterising Prediction Matrix for Unsupervised Accuracy Estimation

no code implementations • 2 Feb 2023 • Weijian Deng, Yumin Suh, Stephen Gould, Liang Zheng

This work aims to assess how well a model performs under distribution shifts without using labels.
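The title's two signals can be computed directly from a prediction matrix (one softmax row per sample). A minimal sketch, assuming "confidence" means the mean max-probability and "dispersity" means the entropy of the predicted-class distribution; the exact statistics the paper characterises may differ:

```python
import math

def confidence(pred_matrix):
    """Mean max-probability across samples: how confident the model is."""
    return sum(max(row) for row in pred_matrix) / len(pred_matrix)

def dispersity(pred_matrix):
    """Entropy of the marginal predicted-class distribution:
    how evenly predictions spread across classes."""
    n, k = len(pred_matrix), len(pred_matrix[0])
    counts = [0] * k
    for row in pred_matrix:
        counts[row.index(max(row))] += 1
    probs = [c / n for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)
```

Intuitively, a model that is both confident and spreads its predictions across classes (high dispersity) is more likely to be performing well under distribution shift than one collapsing onto a few classes.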

PU GNN: Chargeback Fraud Detection in P2E MMORPGs via Graph Attention Networks with Imbalanced PU Labels

no code implementations • 16 Nov 2022 • Jiho Choi, Junghoon Park, Woocheol Kim, Jin-Hyeok Park, Yumin Suh, Minchang Sung

The recent advent of play-to-earn (P2E) systems in massively multiplayer online role-playing games (MMORPGs) has made in-game goods exchangeable for real-world value more than ever before.

Fraud Detection · Graph Attention

Controllable Dynamic Multi-Task Architectures

no code implementations • CVPR 2022 • Dripta S. Raychaudhuri, Yumin Suh, Samuel Schulter, Xiang Yu, Masoud Faraki, Amit K. Roy-Chowdhury, Manmohan Chandraker

In contrast to the existing dynamic multi-task approaches that adjust only the weights within a fixed architecture, our approach affords the flexibility to dynamically control the total computational cost and match the user-preferred task importance better.

Multi-Task Learning

On Generalizing Beyond Domains in Cross-Domain Continual Learning

no code implementations • CVPR 2022 • Christian Simon, Masoud Faraki, Yi-Hsuan Tsai, Xiang Yu, Samuel Schulter, Yumin Suh, Mehrtash Harandi, Manmohan Chandraker

Humans have the ability to accumulate knowledge of new tasks in varying conditions, but deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.

Continual Learning · Knowledge Distillation

Learning Semantic Segmentation from Multiple Datasets with Label Shifts

no code implementations • 28 Feb 2022 • Dongwan Kim, Yi-Hsuan Tsai, Yumin Suh, Masoud Faraki, Sparsh Garg, Manmohan Chandraker, Bohyung Han

First, a gradient conflict in training due to mismatched label spaces is identified and a class-independent binary cross-entropy loss is proposed to alleviate such label conflicts.

Semantic Segmentation

Cross-Domain Similarity Learning for Face Recognition in Unseen Domains

no code implementations • CVPR 2021 • Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker

Intuitively, it discriminatively correlates explicit metrics derived from one domain with triplet samples from another domain in a unified loss function, minimized within a single network, which leads to better alignment of the training domains.

Face Recognition · Metric Learning

Learning to Optimize Domain Specific Normalization for Domain Generalization

no code implementations • ECCV 2020 • Seonguk Seo, Yumin Suh, Dongwan Kim, Geeho Kim, Jongwoo Han, Bohyung Han

We propose a simple but effective multi-source domain generalization technique based on deep neural networks by incorporating optimized normalization layers that are specific to individual domains.
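A common form of domain-specific normalization blends batch statistics with instance statistics using a per-domain mixing weight. The sketch below illustrates that idea in plain Python; the mixing weight `w` corresponds to what such a method would optimize per domain, but the exact parameterization here is an assumption, not the paper's formulation.

```python
def normalize(x, mean, var, eps=1e-5):
    """Standardize a 1-D feature vector with the given statistics."""
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

def domain_specific_norm(x, domain, mix_weights, batch_stats):
    """Blend running batch statistics with instance statistics computed
    from x, using a per-domain weight w in [0, 1]. w = 1 recovers
    batch normalization; w = 0 recovers instance normalization."""
    w = mix_weights[domain]                      # optimized per domain
    inst_mean = sum(x) / len(x)
    inst_var = sum((v - inst_mean) ** 2 for v in x) / len(x)
    bn_mean, bn_var = batch_stats
    mean = w * bn_mean + (1 - w) * inst_mean
    var = w * bn_var + (1 - w) * inst_var
    return normalize(x, mean, var)
```

Letting each source domain learn its own `w` is what makes the normalization "domain specific": domains with strong style shifts can lean toward instance statistics, others toward batch statistics.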

Domain Generalization · Unsupervised Domain Adaptation

Subgraph Matching Using Compactness Prior for Robust Feature Correspondence

no code implementations • CVPR 2015 • Yumin Suh, Kamil Adamczewski, Kyoung Mu Lee

By constructing a Markov chain on the restricted search space instead of the original solution space, our method approximates the solution effectively.

Graph Matching
