Search Results for author: Sathyanarayanan N. Aakur

Found 18 papers, 2 papers with code

ProtoKD: Learning from Extremely Scarce Data for Parasite Ova Recognition

no code implementations18 Sep 2023 Shubham Trehan, Udhav Ramachandran, Ruth Scimeca, Sathyanarayanan N. Aakur

Developing reliable computational frameworks for early parasite detection, particularly at the ova (or egg) stage, is crucial for advancing healthcare and effectively managing potential public health crises.

Shape-Graph Matching Network (SGM-net): Registration for Statistical Shape Analysis

no code implementations14 Aug 2023 Shenyuan Liang, Mauricio Pamplona Segundo, Sathyanarayanan N. Aakur, Sudeep Sarkar, Anuj Srivastava

This, in turn, requires optimization over the permutation group, made challenging by differences in nodes (in terms of numbers, locations) and edges (in terms of shapes, placements, and sizes) across objects.

Graph Matching
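
At its core, the registration problem described above searches over node permutations for the alignment that best matches node and edge attributes. A minimal brute-force sketch of that search on toy adjacency matrices (illustrative only; the toy cost and exhaustive search are assumptions, not the paper's SGM-net):

```python
import itertools
import numpy as np

def matching_cost(A1, A2, perm):
    """Edge disagreement between graph 1 and graph 2 after permuting graph 2's nodes."""
    P = np.eye(len(perm))[list(perm)]          # permutation matrix for `perm`
    return np.linalg.norm(A1 - P @ A2 @ P.T)

def brute_force_match(A1, A2):
    """Exhaustive search over the permutation group (tractable only for tiny graphs)."""
    n = A1.shape[0]
    return min(itertools.permutations(range(n)),
               key=lambda p: matching_cost(A1, A2, p))

# Toy example: a 4-node weighted graph and a node-shuffled copy of it.
A1 = np.array([[0., 1., 0., 2.],
               [1., 0., 3., 0.],
               [0., 3., 0., 1.],
               [2., 0., 1., 0.]])
idx = [2, 0, 3, 1]
A2 = A1[np.ix_(idx, idx)]
print(brute_force_match(A1, A2))  # the permutation that undoes the shuffle (zero cost)
```

The factorial growth of this search is exactly why a learned matching network is needed once graphs have more than a handful of nodes.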

Discovering Novel Actions in an Open World with Object-Grounded Visual Commonsense Reasoning

no code implementations26 May 2023 Sathyanarayanan N. Aakur, Sanjoy Kundu, Shubham Trehan

Learning to infer labels in an open world, i.e., in an environment where the target "labels" are unknown, is an important characteristic for achieving autonomy.

Object Recognition Visual Commonsense Reasoning

IS-GGT: Iterative Scene Graph Generation With Generative Transformers

no code implementations CVPR 2023 Sanjoy Kundu, Sathyanarayanan N. Aakur

Current approaches follow a generation-by-classification scheme, where the scene graph is produced by labeling every possible edge between objects in a scene, which adds considerable computational overhead.

Graph Generation Link Prediction +5
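
To make that overhead concrete: labeling every possible edge means classifying every ordered pair of detected objects, which grows quadratically with the number of detections. A small sketch (the object list below is made up for illustration):

```python
from itertools import permutations

# Hypothetical detections from a single scene.
objects = ["person", "horse", "hat", "fence", "grass"]

# Generation-by-classification: every ordered (subject, object) pair is a candidate
# edge that must be scored, even though most pairs carry no meaningful relation.
candidate_edges = list(permutations(objects, 2))
print(len(candidate_edges))   # 20 candidate edges for 5 objects, i.e. n * (n - 1)
print(50 * 49)                # 2450 edge classifications for a scene with 50 detections
```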

Iterative Scene Graph Generation with Generative Transformers

no code implementations30 Nov 2022 Sanjoy Kundu, Sathyanarayanan N. Aakur

Current approaches follow a generation-by-classification scheme, where the scene graph is produced by labeling every possible edge between objects in a scene, which adds considerable computational overhead.

Graph Generation Link Prediction +5

Scalable Pathogen Detection from Next Generation DNA Sequencing with Deep Learning

no code implementations30 Nov 2022 Sai Narayanan, Sathyanarayanan N. Aakur, Priyadharsini Ramamurthy, Arunkumar Bagavathi, Vishalini Ramnath, Akhilesh Ramachandran

The emergence of zoonotic diseases from novel pathogens that can jump species barriers and lead to pandemics, such as the influenza virus in 1918 and SARS-CoV-2 in 2019, underscores the need for scalable metagenome analysis.

Representation Learning

Metagenome2Vec: Building Contextualized Representations for Scalable Metagenome Analysis

no code implementations9 Nov 2021 Sathyanarayanan N. Aakur, Vineela Indla, Vennela Indla, Sai Narayanan, Arunkumar Bagavathi, Vishalini Laguduva Ramnath, Akhilesh Ramachandran

There is an increased need for learning robust representations from metagenome reads, since pathogens within a family can have highly similar genome structures (some more than 90% similar); such representations are what enable the segmentation and identification of novel pathogen sequences with limited labeled data.

Representation Learning
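
A common way to build such representations (assumed here for illustration; the paper's exact pipeline may differ) is to tokenize each read into overlapping k-mers and train a contextual, word2vec-style embedding over those tokens:

```python
def kmer_tokens(read, k=4):
    """Split a DNA read into overlapping k-mer 'words' for an embedding model."""
    read = read.upper()
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Toy reads (sequences are made up); note the first two differ by a single base.
reads = [
    "ATGCGTACGTTAGC",
    "ATGCGTACGTTGGC",
    "TTTTAAAACCCCGG",
]
corpus = [kmer_tokens(r) for r in reads]
print(corpus[0][:3])  # ['ATGC', 'TGCG', 'GCGT']

# Any skip-gram or transformer-style model can be trained on `corpus`; because
# near-identical reads share almost all of their k-mers, contextualized
# representations are what make closely related pathogens separable.
```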

Towards Active Vision for Action Localization with Reactive Control and Predictive Learning

1 code implementation9 Nov 2021 Shubham Trehan, Sathyanarayanan N. Aakur

We formulate an energy-based mechanism that combines predictive learning and reactive control to perform active action localization without rewards, which can be sparse or non-existent in real-world environments.

Action Localization Object Tracking
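
A heavily simplified sketch of that idea (an assumption for illustration, not the released implementation): use per-region prediction error as the energy signal and reactively shift the attended window toward the region where the frame predictor is most surprised, with no reward involved:

```python
import numpy as np

def prediction_error_map(pred_frame, true_frame, grid=4):
    """Energy signal: mean squared prediction error in each cell of a coarse grid."""
    h, w = true_frame.shape
    errs = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            errs[i, j] = np.mean((pred_frame[ys, xs] - true_frame[ys, xs]) ** 2)
    return errs

def reactive_step(window, errs, step=1):
    """Shift the attended window one cell toward the highest-energy (most surprising) cell."""
    target = np.unravel_index(np.argmax(errs), errs.shape)
    di = int(np.clip(target[0] - window[0], -step, step))
    dj = int(np.clip(target[1] - window[1], -step, step))
    return (window[0] + di, window[1] + dj)

# Toy frames: the "action" is a bright patch the (here random) predictor fails to anticipate.
rng = np.random.default_rng(0)
true_frame = rng.normal(size=(64, 64))
true_frame[34:46, 34:46] += 5.0            # unexpected activity in grid cell (2, 2)
pred_frame = rng.normal(size=(64, 64))     # stand-in for a learned frame predictor
window = reactive_step((0, 0), prediction_error_map(pred_frame, true_frame))
print(window)  # (1, 1): one step from (0, 0) toward the high-error cell (2, 2)
```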

MG-NET: Leveraging Pseudo-Imaging for Multi-Modal Metagenome Analysis

no code implementations21 Jul 2021 Sathyanarayanan N. Aakur, Sai Narayanan, Vineela Indla, Arunkumar Bagavathi, Vishalini Laguduva Ramnath, Akhilesh Ramachandran

However, there are significant challenges in developing such an approach, the chief among which is to learn self-supervised representations that can help detect novel pathogen signatures with very low amounts of labeled data.

Representation Learning
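
One widely used way to turn a read into a pseudo-image is the frequency chaos-game representation (FCGR), sketched below; whether this matches MG-NET's exact construction is an assumption, but it illustrates how sequence statistics become a 2-D input for a standard CNN:

```python
import numpy as np

# Corner bits for the chaos-game representation (one common convention).
X_BIT = {"A": 0, "C": 0, "G": 1, "T": 1}
Y_BIT = {"A": 0, "C": 1, "G": 0, "T": 1}

def fcgr_pseudo_image(read, k=4):
    """Frequency chaos-game representation: a 2^k x 2^k count image of a DNA read.

    Each k-mer maps to one pixel, so a read becomes a small grayscale image that
    a standard CNN backbone can consume.
    """
    size = 2 ** k
    img = np.zeros((size, size), dtype=np.float32)
    read = read.upper()
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if any(b not in X_BIT for b in kmer):
            continue                        # skip ambiguous bases such as 'N'
        x = y = 0
        for b in kmer:                      # build the pixel index bit by bit
            x = (x << 1) | X_BIT[b]
            y = (y << 1) | Y_BIT[b]
        img[y, x] += 1.0
    return img / max(img.sum(), 1.0)        # normalize counts to frequencies

img = fcgr_pseudo_image("ATGCGTACGTTAGCATGCGT")
print(img.shape)  # (16, 16) pseudo-image for k = 4
```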

Actor-centered Representations for Action Localization in Streaming Videos

no code implementations29 Apr 2021 Sathyanarayanan N. Aakur, Sudeep Sarkar

We tackle the problem of learning actor-centered representations through the notion of continual hierarchical predictive learning to localize actions in streaming videos without the need for training labels and outlines for the objects in the video.

Action Localization

Knowledge Guided Learning: Towards Open Domain Egocentric Action Recognition with Zero Supervision

no code implementations16 Sep 2020 Sathyanarayanan N. Aakur, Sanjoy Kundu, Nikhil Gunti

Building upon the compositional representation offered by Grenander's Pattern Theory formalism, we show that attention and commonsense knowledge can be used to enable the self-supervised discovery of novel actions in egocentric videos in an open-world setting, where data from the observed environment (the target domain) is open, i.e., the vocabulary is partially known and training examples (both labeled and unlabeled) are not available.

Action Recognition Domain Adaptation +4

Abductive Reasoning as Self-Supervision for Common Sense Question Answering

no code implementations6 Sep 2019 Sathyanarayanan N. Aakur, Sudeep Sarkar

We find that large amounts of training data are necessary, both for pre-training as well as fine-tuning to a task, for the models to perform well on the designated task.

Common Sense Reasoning Domain Adaptation +1

A Perceptual Prediction Framework for Self Supervised Event Segmentation

1 code implementation CVPR 2019 Sathyanarayanan N. Aakur, Sudeep Sarkar

We also show that the proposed approach is able to learn highly discriminative features that help improve action recognition when used in a representation learning paradigm.

Action Recognition Event Segmentation +1
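
The self-supervised segmentation signal in the title can be sketched as follows (an assumed simplification, not the paper's exact gating mechanism): event boundaries are placed wherever the frame-prediction error rises sharply above its recent running statistics:

```python
import numpy as np

def segment_by_prediction_error(errors, window=10, sensitivity=3.0):
    """Mark an event boundary wherever the per-frame prediction error exceeds the
    recent mean by `sensitivity` standard deviations (adaptive threshold)."""
    boundaries = []
    for t in range(window, len(errors)):
        recent = errors[t - window:t]
        if errors[t] > recent.mean() + sensitivity * recent.std():
            boundaries.append(t)
    return boundaries

# Toy per-frame error signal from a frame predictor: mostly predictable, with two
# sudden spikes where a new event begins.
rng = np.random.default_rng(1)
errors = rng.normal(1.0, 0.05, size=100)
errors[40] += 2.0
errors[75] += 2.0
print(segment_by_prediction_error(errors))  # detects the spikes at t = 40 and t = 75
```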

Going Deeper with Semantics: Video Activity Interpretation using Semantic Contextualization

no code implementations11 Aug 2017 Sathyanarayanan N. Aakur, Fillipe DM de Souza, Sudeep Sarkar

Through extensive experiments, we show that the use of commonsense knowledge from ConceptNet allows the proposed approach to handle various challenges such as training data imbalance, weak features, and complex semantic relationships and visual scenes.
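
As a concrete illustration of plugging ConceptNet into such a pipeline, the sketch below scores relatedness between a detected object and a candidate action using ConceptNet's public REST API (the /relatedness endpoint of api.conceptnet.io and its response shape are assumptions here; verify against the current API documentation):

```python
import requests

def conceptnet_relatedness(term1, term2, lang="en"):
    """Query ConceptNet's public API for a relatedness score between two concepts.

    Assumes the /relatedness endpoint returns JSON with a 'value' field in [-1, 1];
    check the live API docs, as this endpoint and schema are assumptions here.
    """
    resp = requests.get(
        "https://api.conceptnet.io/relatedness",
        params={"node1": f"/c/{lang}/{term1}", "node2": f"/c/{lang}/{term2}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["value"]

# Hypothetical use: rank candidate action labels for a detected object so that
# commonsense knowledge can compensate for weak visual features.
print(conceptnet_relatedness("knife", "cut"))   # expected to be relatively high
print(conceptnet_relatedness("knife", "swim"))  # expected to be relatively low
```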
