1 code implementation • ICCV 2023 • Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, Jitendra Malik
To analyze video, we use 3D reconstructions from HMR 2.0 as input to a tracking system that operates in 3D.
Ranked #1 on 3D Human Pose Estimation on Human3.6M (MPJPE metric)
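The entry above feeds per-frame 3D reconstructions into a tracker that reasons directly in 3D. As a loose illustration of that idea (not the paper's actual pipeline), the sketch below associates per-frame 3D joint estimates to existing tracklets by Hungarian matching on 3D distance; the `max_dist` threshold and joint format are assumptions.

```python
# Hypothetical sketch: associate per-frame 3D pose estimates (e.g. joint sets
# from a model like HMR 2.0) to tracklets by distance in 3D. Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracklets, detections, max_dist=0.5):
    """tracklets: list of (J, 3) arrays, the last known 3D joints of each track.
    detections: list of (J, 3) arrays, 3D joints estimated in the current frame.
    Returns a list of (track_idx, det_idx) matches."""
    if not tracklets or not detections:
        return []
    # Pairwise mean per-joint Euclidean distance in 3D (metres, by assumption).
    cost = np.array([[np.linalg.norm(t - d, axis=-1).mean() for d in detections]
                     for t in tracklets])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
```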
1 code implementation • CVPR 2023 • Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Christoph Feichtenhofer, Jitendra Malik
Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets.
Ranked #1 on Action Recognition on AVA v2.2 (using extra training data)
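As a hedged sketch of what "fusing 3D pose and contextualized appearance over tracklets" could look like, the module below sums projected pose and appearance tokens and runs a small transformer over the tracklet; the class name, feature dimensions, and fusion-by-summation choice are illustrative assumptions, not the paper's architecture.

```python
# Illustrative tracklet-level action head: fuse per-frame 3D pose and appearance
# features, add temporal context with a transformer, classify the tracklet.
import torch
import torch.nn as nn

class TrackletActionHead(nn.Module):
    def __init__(self, pose_dim=229, app_dim=256, d_model=256, num_classes=80):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.app_proj = nn.Linear(app_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.cls = nn.Linear(d_model, num_classes)

    def forward(self, pose, app):       # pose: (B, T, pose_dim), app: (B, T, app_dim)
        tokens = self.pose_proj(pose) + self.app_proj(app)   # fuse by summation
        feats = self.encoder(tokens)                         # temporal context
        return self.cls(feats.mean(dim=1))                   # tracklet-level logits
```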
no code implementations • 1 Feb 2022 • Jathushan Rajasegaran, Chelsea Finn, Sergey Levine
In this paper, we study how meta-learning can be applied to tackle online problems of this nature: simultaneously adapting to changing tasks and input distributions while meta-training the model so that it adapts more quickly in the future.
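A minimal first-order sketch of the general recipe described above (adapt quickly to the current task while also updating the meta-parameters), written in the spirit of Reptile rather than as the paper's actual algorithm; the function name and hyperparameters are assumptions.

```python
# Toy online meta-learning step: inner-loop adaptation to the current task,
# then a first-order (Reptile-style) meta-update of the slow weights.
import copy
import torch
import torch.nn as nn

def online_meta_step(model, task_batches, inner_lr=1e-2, meta_lr=1e-3, inner_steps=5):
    """task_batches: a list of (x, y) tensor pairs drawn from the current task."""
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    loss_fn = nn.MSELoss()
    for _ in range(inner_steps):                  # inner loop: adapt to the current task
        for x, y in task_batches:
            opt.zero_grad()
            loss_fn(fast(x), y).backward()
            opt.step()
    # Outer (meta) update: nudge the slow weights toward the adapted weights.
    with torch.no_grad():
        for slow, adapted in zip(model.parameters(), fast.parameters()):
            slow.add_(meta_lr * (adapted - slow))
    return fast                                   # adapted model for current predictions
```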
no code implementations • CVPR 2022 • Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Jitendra Malik
For a future frame, we compute the similarity between the predicted state of a tracklet and the single-frame observations in a probabilistic manner.
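One way to read "probabilistic similarity" is as the log-likelihood of an observation under the tracklet's predicted distribution. The toy scoring function below uses a diagonal Gaussian; the actual probability model, the attributes it covers, and their weighting in the paper may differ.

```python
# Hypothetical sketch: score a single-frame observation against a tracklet's
# predicted state with a diagonal-Gaussian log-likelihood.
import numpy as np

def log_likelihood(pred_mean, pred_var, obs):
    """pred_mean, pred_var, obs: (D,) arrays for one attribute (e.g. 3D pose)."""
    var = np.maximum(pred_var, 1e-6)
    return -0.5 * np.sum((obs - pred_mean) ** 2 / var + np.log(2 * np.pi * var))

# An association cost between tracklet i and detection j could then be the
# negative sum of such log-likelihoods over attributes (pose, appearance, location).
```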
1 code implementation • NeurIPS 2021 • Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Jitendra Malik
We find that 3D representations are more effective than 2D representations for tracking in these settings, and we obtain state-of-the-art performance.
no code implementations • 1 Jan 2021 • Karttikeya Mangalam, Rohin Garg, Jathushan Rajasegaran, Taesung Park
Generative Adversarial Networks (GANs) are a class of generative models used in a wide range of applications, but they are known to suffer from mode collapse, in which the generator ignores some modes of the target distribution.
no code implementations • 19 Oct 2020 • Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
This demonstrates their ability to acquire transferable knowledge, a capability that is central to human learning.
1 code implementation • 17 Jun 2020 • Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second-stage distillation process.
Ranked #12 on Few-Shot Image Classification on FC100 5-way (5-shot)
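To make the two-stage recipe above concrete, here is a loose sketch of a self-supervised pretext loss (rotation prediction, used here only as a stand-in) followed by a standard soft-label distillation loss; the pretext task, temperature, and loss forms are assumptions rather than the paper's exact setup.

```python
# Stage 1: a self-supervised pretext loss (rotation prediction, illustrative).
# Stage 2: distill the stage-1 teacher into a student with a soft-label KL loss.
import torch
import torch.nn.functional as F

def rotation_pretext_loss(model, images):
    """Rotate each image by 0/90/180/270 degrees and predict the rotation class."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    x, y = torch.cat(rotated), torch.cat(labels)
    return F.cross_entropy(model(x), y)        # model has a 4-way rotation head

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-label KL divergence between teacher and student (Hinton-style)."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```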
no code implementations • 2 Jun 2020 • Naveen Karunanayake, Jathushan Rajasegaran, Ashanie Gunathillake, Suranga Seneviratne, Guillaume Jourjon
We show that a novel approach combining content embeddings and style embeddings outperforms baseline image-similarity methods such as SIFT, SURF, and various image hashing methods.
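A hedged sketch of the content-plus-style idea: pool deep CNN features as a "content" embedding, take a Gram matrix of an earlier layer as a "style" embedding, concatenate, and compare with cosine similarity. The VGG-16 backbone, layer indices, and normalization below are assumptions, not the paper's exact encoder.

```python
# Illustrative content + style embedding for image similarity.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg16().features.eval()   # in practice, pretrained weights would be loaded

def embeddings(img):                   # img: (1, 3, H, W), normalized
    feats, x = {}, img
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == 4:                 # early block -> style (Gram matrix)
                c = x.flatten(2)                                   # (1, C, H*W)
                gram = torch.bmm(c, c.transpose(1, 2)) / c.shape[-1]
                feats["style"] = F.normalize(gram.flatten(1), dim=1)
            if i == 30:                # last conv block -> content (pooled features)
                feats["content"] = F.normalize(x.mean(dim=(2, 3)), dim=1)
    return torch.cat([feats["content"], feats["style"]], dim=1)

def similarity(img_a, img_b):
    return F.cosine_similarity(embeddings(img_a), embeddings(img_b)).item()
```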
1 code implementation • CVPR 2020 • Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
In this paper, we hypothesize that this problem can be avoided by learning a set of generalized parameters that are specific to neither the old nor the new tasks.
2 code implementations • 17 Mar 2020 • K J Joseph, Jathushan Rajasegaran, Salman Khan, Fahad Shahbaz Khan, Vineeth N Balasubramanian
In a real-world setting, object instances from new classes can be continuously encountered by object detectors.
1 code implementation • NeurIPS 2019 • Jathushan Rajasegaran, Munawar Hayat, Salman H. Khan, Fahad Shahbaz Khan, Ling Shao
In order to maintain an equilibrium between previous and newly acquired knowledge, we propose a simple controller to dynamically balance the model plasticity.
Ranked #7 on Continual Learning on F-CelebA (10 tasks)
1 code implementation • 26 Nov 2019 • Hirunima Jayasekara, Vinoj Jayasundara, Mohamed Athif, Jathushan Rajasegaran, Sandaru Jayasekara, Suranga Seneviratne, Ranga Rodrigo
Capsule networks excel at understanding spatial relationships in 2D data for vision-related tasks.
1 code implementation • 3 Jun 2019 • Jathushan Rajasegaran, Munawar Hayat, Salman Khan, Fahad Shahbaz Khan, Ling Shao, Ming-Hsuan Yang
In a conventional supervised learning setting, a machine learning model has access to examples of all the object classes that it must recognize at inference time.
5 code implementations • CVPR 2019 • Jathushan Rajasegaran, Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Suranga Seneviratne, Ranga Rodrigo
The Capsule Network is a promising concept in deep learning, yet its true potential has not been fully realized thus far, and it yields sub-par performance on several key benchmark datasets with complex data.
3 code implementations • 17 Apr 2019 • Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Jathushan Rajasegaran, Suranga Seneviratne, Ranga Rodrigo
Our system is useful for character recognition in localized languages that lack large amounts of labeled training data, and even in more general related contexts such as object recognition.
Ranked #4 on Image Classification on EMNIST-Letters
no code implementations • 16 Oct 2018 • Sameera Ramasinghe, Jathushan Rajasegaran, Vinoj Jayasundara, Kanchana Ranasinghe, Ranga Rodrigo, Ajith A. Pasqual
We propose three schemas for combining static and motion components, based on a variance ratio, principal components, and the Cholesky decomposition.
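The three combination schemas named above could look roughly like the toy functions below, which mix a static feature vector and a motion feature vector by variance-ratio weighting, shared principal components, and a Cholesky-factor mixing weight respectively; the exact formulations in the paper differ in detail.

```python
# Loose illustration of three ways to combine static and motion feature vectors.
import numpy as np

def combine_variance_ratio(static, motion):
    """Weight each stream by its share of the total variance."""
    vs, vm = static.var(), motion.var()
    w = vs / (vs + vm + 1e-8)
    return w * static + (1 - w) * motion

def combine_principal_components(static, motion):
    """Project the two streams onto their shared principal components
    (a toy two-sample PCA; in practice this would be fit over a whole dataset)."""
    X = np.stack([static, motion])                   # (2, D)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (X @ Vt.T).ravel()                        # combined representation

def combine_cholesky(static, motion, rho=0.5):
    """Mix the streams with weights from the Cholesky factor of a 2x2
    correlation matrix, giving a tunable correlation between them."""
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    return L[1, 0] * static + L[1, 1] * motion
```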
no code implementations • 26 Apr 2018 • Jathushan Rajasegaran, Suranga Seneviratne, Guillaume Jourjon
We show that further performance increases can be achieved by combining style embeddings with content embeddings.