Search Results for author: K J Joseph

Found 16 papers, 11 papers with code

CoPL: Contextual Prompt Learning for Vision-Language Understanding

no code implementations · 3 Jul 2023 · Koustava Goswami, Srikrishna Karanam, Prateksha Udhayanan, K J Joseph, Balaji Vasan Srinivasan

Our key innovations over earlier works include using local image features as part of the prompt learning process, and more crucially, learning to weight these prompts based on local features that are appropriate for the task at hand.
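The prompt-weighting idea described above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function name, shapes, and the softmax scoring are assumptions.

```python
import numpy as np

def weight_prompts(local_feats, prompt_embs):
    """Hypothetical sketch of CoPL-style prompt weighting: score each
    learned prompt against local image features and form a per-patch
    weighted combination of the prompts.
    local_feats: (n_patches, d) local image features.
    prompt_embs: (n_prompts, d) learned prompt embeddings."""
    scores = local_feats @ prompt_embs.T                # (n_patches, n_prompts)
    # Softmax over prompts, so each patch gets a distribution of weights.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ prompt_embs                        # (n_patches, d)
```

The key point is that the weights depend on local features, so different image regions emphasize different prompts.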

A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis

no code implementations · ICCV 2023 · Aishwarya Agarwal, Srikrishna Karanam, K J Joseph, Apoorv Saxena, Koustava Goswami, Balaji Vasan Srinivasan

First, our attention segregation loss reduces the cross-attention overlap between attention maps of different concepts in the text prompt, thereby reducing the confusion/conflict among various concepts and the eventual capture of all concepts in the generated output.

Denoising · Image Generation +1
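The attention-segregation idea above — penalizing overlap between the cross-attention maps of different text concepts — can be illustrated with a minimal sketch. This is not the paper's loss; the normalized pairwise element-wise minimum used here is an assumption.

```python
import numpy as np

def attention_segregation_loss(attn_maps):
    """Illustrative overlap penalty: for each pair of concept attention
    maps, penalize their shared mass (element-wise minimum), normalized
    by the total mass of the pair.
    attn_maps: (num_concepts, H, W) non-negative attention maps."""
    n = attn_maps.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            overlap = np.minimum(attn_maps[i], attn_maps[j]).sum()
            total = attn_maps[i].sum() + attn_maps[j].sum()
            loss += overlap / (total + 1e-8)   # 0 when maps are disjoint
    return loss
```

Minimizing such a term at test time pushes concepts to attend to disjoint image regions, so no concept is dropped from the generated output.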

Data-Free Class-Incremental Hand Gesture Recognition

1 code implementation · ICCV 2023 · Shubhra Aich, Jesus Ruiz-Santaquiteria, Zhenyu Lu, Prachi Garg, K J Joseph, Alvaro Fernandez Garcia, Vineeth N Balasubramanian, Kenrick Kin, Chengde Wan, Necati Cihan Camgoz, Shugao Ma, Fernando de la Torre

Our sampling scheme outperforms SOTA methods significantly on two 3D skeleton gesture datasets, the publicly available SHREC 2017, and EgoGesture3D -- which we extract from a publicly available RGBD dataset.

Class Incremental Learning · Hand Gesture Recognition +3

Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer

1 code implementation · 7 Aug 2022 · Arjun Ashok, K J Joseph, Vineeth Balasubramanian

This allows the model to learn classes in such a way that it maximizes positive forward transfer from similar prior classes, thus increasing plasticity, and minimizes negative backward transfer on dissimilar prior classes, thereby strengthening stability.

Class Incremental Learning · Clustering +1

D3Former: Debiased Dual Distilled Transformer for Incremental Learning

1 code implementation · 25 Jul 2022 · Abdelrahman Mohamed, Rushali Grandhe, K J Joseph, Salman Khan, Fahad Khan

In contrast to a recent ViT based CIL approach, our $\textrm{D}^3\textrm{Former}$ does not dynamically expand its architecture when new tasks are learned and remains suitable for a large number of incremental tasks.

Incremental Learning

Novel Class Discovery without Forgetting

no code implementations · 21 Jul 2022 · K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories.

Novel Class Discovery

Spacing Loss for Discovering Novel Categories

1 code implementation · 22 Apr 2022 · K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data, by utilizing labeled instances from a disjoint set of classes.

Novel Class Discovery
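The grouping step at the heart of the NCD setting described above can be sketched minimally. Real NCD methods, including Spacing Loss, transfer structure from the labeled classes; this sketch only shows plain k-means grouping of unlabeled features, and all names and shapes are assumptions.

```python
import numpy as np

def discover_novel_classes(unlabeled_feats, num_novel, iters=10, seed=0):
    """Toy NCD-flavored grouping: cluster unlabeled feature vectors
    with plain k-means into `num_novel` candidate novel classes.
    unlabeled_feats: (n, d) features, e.g. from a backbone trained
    on the disjoint labeled classes."""
    rng = np.random.default_rng(seed)
    # Initialize centers from random data points.
    idx = rng.choice(len(unlabeled_feats), num_novel, replace=False)
    centers = unlabeled_feats[idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((unlabeled_feats[:, None] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for c in range(num_novel):
            pts = unlabeled_feats[assign == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return assign
```

In the actual NCD setting, the labeled classes shape the feature space so that such grouping of the unlabeled data becomes semantically meaningful.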

OW-DETR: Open-world Detection Transformer

2 code implementations · CVPR 2022 · Akshita Gupta, Sanath Narayan, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah

In the case of incremental object detection, OW-DETR outperforms the state-of-the-art for all settings on PASCAL VOC.

Inductive Bias · Object +3

Meta-Consolidation for Continual Learning

1 code implementation · NeurIPS 2020 · K J Joseph, Vineeth N. Balasubramanian

The ability to continuously learn and adapt to new tasks without losing grasp of already acquired knowledge is a hallmark of biological learning systems, which current deep learning systems fall short of.

Continual Learning

Submodular Batch Selection for Training Deep Neural Networks

1 code implementation · 20 Jun 2019 · K J Joseph, Vamshi Teja R, Krishnakant Singh, Vineeth N. Balasubramanian

Mini-batch gradient descent based methods are the de facto algorithms for training neural network architectures today.

Combinatorial Optimization · Informativeness
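Submodular batch selection replaces random mini-batch sampling with a greedy maximization of a submodular set function. The sketch below uses a generic facility-location objective as the submodular surrogate; the paper's actual objective (combining informativeness and diversity terms) differs, and all names here are assumptions.

```python
import numpy as np

def greedy_submodular_batch(similarity, k):
    """Greedily maximize the facility-location function
    f(S) = sum_i max_{j in S} sim[i, j],
    a standard submodular surrogate for picking a mini-batch that
    represents the whole pool.
    similarity: (n, n) pairwise similarity matrix; k: batch size."""
    n = similarity.shape[0]
    selected = []
    coverage = np.zeros(n)   # best similarity of each point to the batch so far
    for _ in range(k):
        # Marginal gain of adding candidate j: improvement in coverage.
        gains = np.maximum(similarity, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf          # never pick the same point twice
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, similarity[:, best])
    return selected
```

Because facility location is monotone submodular, this greedy procedure carries the classic (1 - 1/e) approximation guarantee, which is what makes greedy batch selection tractable during training.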

MASON: A Model AgnoStic ObjectNess Framework

1 code implementation · 20 Sep 2018 · K J Joseph, Vineeth N. Balasubramanian

This paper proposes a simple, yet very effective method to localize dominant foreground objects in an image, to pixel-level precision.
