Search Results for author: Harsh Rangwani

Found 15 papers, 10 papers with code

IIT (BHU) Submission for the ACL Shared Task on Named Entity Recognition on Code-switched Data

no code implementations • WS 2018 • Shashwat Trivedi, Harsh Rangwani, Anil Kumar Singh

This paper describes the best performing system for the shared task on Named Entity Recognition (NER) on code-switched data for the language pair Spanish-English (ENG-SPA).

Named Entity Recognition +2

S3VAADA: Submodular Subset Selection for Virtual Adversarial Active Domain Adaptation

1 code implementation • ICCV 2021 • Harsh Rangwani, Arihant Jain, Sumukh K Aithal, R. Venkatesh Babu

Unsupervised domain adaptation (DA) methods have focused on achieving maximal performance through aligning features from source and target domains without using labeled data in the target domain.

Unsupervised Domain Adaptation
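
The snippet above gives only the motivation; per the title, the method's key ingredient is submodular subset selection of target samples to label. Below is a minimal, hypothetical sketch of greedy submodular maximization with a facility-location objective; the scoring function is illustrative, not necessarily the paper's exact selection criterion.

```python
import numpy as np

def greedy_submodular_select(similarity: np.ndarray, budget: int) -> list:
    """Greedy maximization of a facility-location function.

    similarity[i, j] is the similarity between unlabeled points i and j.
    Facility location f(S) = sum_i max_{j in S} similarity[i, j] is
    monotone submodular, so greedy selection enjoys the classic
    (1 - 1/e) approximation guarantee.
    """
    n = similarity.shape[0]
    selected: list = []
    best_cover = np.zeros(n)  # current max similarity to the selected set
    for _ in range(budget):
        # Marginal gain of adding each candidate point to the set
        gains = np.maximum(similarity, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf  # never pick a point twice
        best = int(np.argmax(gains))
        selected.append(best)
        best_cover = np.maximum(best_cover, similarity[:, best])
    return selected

# Example: pick 5 of 100 points to label from pairwise feature similarities
feats = np.random.randn(100, 16)
sim = feats @ feats.T
print(greedy_submodular_select(sim, budget=5))
```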

Class Balancing GAN with a Classifier in the Loop

1 code implementation • 17 Jun 2021 • Harsh Rangwani, Konda Reddy Mopuri, R. Venkatesh Babu

However, the majority of these developments focus on the performance of GANs on balanced datasets.

A Closer Look at Smoothness in Domain Adversarial Training

1 code implementation • 16 Jun 2022 • Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, R. Venkatesh Babu

Based on the analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks.

Domain Adaptation • Object Detection
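
SDAT's core idea is smoothing the task loss landscape during domain adversarial training. Below is a minimal sketch of the generic SAM-style two-step update this builds on, assuming a standard PyTorch model and optimizer; the domain-adversarial loss and the paper's exact schedule are omitted.

```python
import torch

def sdat_style_sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """Generic sharpness-aware (SAM-style) update on the task loss.

    This shows only the smoothing step; the domain discriminator loss
    used by SDAT is left out for brevity.
    """
    optimizer.zero_grad()

    # Step 1: perturb weights toward the local worst case (loss ascent)
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = []
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append((p, e))
    optimizer.zero_grad()

    # Step 2: gradient at the perturbed weights, restore, then update
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)
    optimizer.step()
```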

Hierarchical Semantic Regularization of Latent Spaces in StyleGANs

no code implementations • 7 Aug 2022 • Tejan Karmali, Rishubh Parihar, Susmit Agrawal, Harsh Rangwani, Varun Jampani, Maneesh Singh, R. Venkatesh Babu

The quality of the generated images is predicated on two assumptions: (a) the richness of the hierarchical representations learnt by the generator, and (b) the linearity and smoothness of the style spaces.

Attribute

Improving GANs for Long-Tailed Data through Group Spectral Regularization

1 code implementation • 21 Aug 2022 • Harsh Rangwani, Naman Jaswani, Tejan Karmali, Varun Jampani, R. Venkatesh Babu

Deep long-tailed learning aims to train useful deep networks on practical, real-world imbalanced distributions, wherein most labels of the tail classes are associated with a few samples.

Conditional Image Generation
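
Per the title, the method adds a spectral regularizer over grouped class-conditional parameters. Below is a hypothetical sketch of such a penalty; the choice of parameters (conditional BatchNorm gains) and the grouping scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def group_spectral_penalty(param_matrix: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Illustrative group spectral regularizer.

    param_matrix: (num_classes, dim) class-conditional parameters,
    e.g. conditional BatchNorm gains. Split the columns into groups and
    penalize the largest singular value of each group, discouraging the
    class-conditional parameters from collapsing onto a few directions.
    """
    groups = torch.chunk(param_matrix, num_groups, dim=1)
    # torch.linalg.matrix_norm(..., ord=2) returns the spectral norm
    return sum(torch.linalg.matrix_norm(g, ord=2) for g in groups)

# Example: penalty on the gains of a conditional BatchNorm, 1000 classes
gamma = torch.randn(1000, 256, requires_grad=True)
loss = group_spectral_penalty(gamma, num_groups=4)
loss.backward()
```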

Certified Adversarial Robustness Within Multiple Perturbation Bounds

1 code implementation • 20 Apr 2023 • Soumalya Nandi, Sravanti Addepalli, Harsh Rangwani, R. Venkatesh Babu

We further propose a novel \textit{training noise distribution} along with a \textit{regularized training scheme} to improve the certification within both $\ell_1$ and $\ell_2$ perturbation norms simultaneously.

Adversarial Robustness
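
This line of work builds on randomized smoothing. Below is a minimal sketch of the generic smoothed-classifier prediction step (Gaussian noise, majority vote); the paper's specific noise distribution and regularized training scheme for joint $\ell_1$/$\ell_2$ certification are not reproduced here.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
    """Prediction of a randomly smoothed classifier under Gaussian noise.

    Certified-robustness methods of this family classify an input by
    majority vote over noisy copies; the certified radius then follows
    from the vote margin (omitted here).
    """
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            counts[model(noisy).argmax(dim=-1)] += 1
    return int(counts.argmax())
```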

Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics

1 code implementation • 28 Apr 2023 • Harsh Rangwani, Shrinivas Ramasubramanian, Sho Takemori, Kato Takashi, Yuhei Umeda, Venkatesh Babu Radhakrishnan

Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks.
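
Below is a hypothetical sketch of the cost-sensitive pseudo-labeling idea: re-score class probabilities with per-class gains derived from the target metric before assigning pseudo-labels. The gain vector and confidence threshold are illustrative, not CSST's exact formulation.

```python
import torch

def cost_sensitive_pseudo_labels(logits: torch.Tensor, gain: torch.Tensor,
                                 threshold: float = 0.95):
    """Illustrative cost-sensitive pseudo-labeling step.

    logits: (batch, num_classes) predictions on unlabeled samples.
    gain:   (num_classes,) per-class gains derived from the target
            non-decomposable metric, e.g. up-weighting classes with low
            recall. Pseudo-labels come from gain-weighted scores; only
            confident samples are kept, FixMatch-style.
    """
    probs = torch.softmax(logits, dim=-1)
    weighted = probs * gain  # re-score classes by their gain
    pseudo = weighted.argmax(dim=-1)
    keep = probs.max(dim=-1).values >= threshold
    return pseudo[keep], keep

# Example: boost a rare class (index 2) during pseudo-labeling
logits = torch.randn(8, 4)
gain = torch.tensor([1.0, 1.0, 3.0, 1.0])
labels, mask = cost_sensitive_pseudo_labels(logits, gain)
```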

Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives

no code implementations • 27 Mar 2024 • Shrinivas Ramasubramanian, Harsh Rangwani, Sho Takemori, Kunal Samanta, Yuhei Umeda, Venkatesh Babu Radhakrishnan

We find that current state-of-the-art empirical techniques offer sub-optimal performance on these practical, non-decomposable performance objectives.

Fairness • imbalanced classification

DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets

1 code implementation • 3 Apr 2024 • Harsh Rangwani, Pradipto Mondal, Mayank Mishra, Ashish Ramayee Asokan, R. Venkatesh Babu

In DeiT-LT, we introduce an efficient and effective way of distillation from a CNN via the distillation (DIST) token, using out-of-distribution images and re-weighting the distillation loss to enhance focus on tail classes.

Ranked #1 on Image Classification on iNaturalist (Overall metric)

Image Classification • Inductive Bias +1
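
Below is a minimal sketch of a DeiT-style two-token training loss with a re-weighted distillation term, matching the abstract's description: the CLS token learns from ground-truth labels while the DIST token matches a CNN teacher's hard labels. The equal weighting and the per-class weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def deit_lt_style_loss(cls_logits, dist_logits, targets, teacher_logits,
                       class_weights):
    """Illustrative two-token ViT loss with re-weighted distillation.

    cls_logits:  predictions from the ViT's CLS token (ground-truth branch).
    dist_logits: predictions from the distillation (DIST) token, matched to
                 a CNN teacher's hard labels; in the paper the teacher sees
                 out-of-distribution (heavily augmented) inputs.
    class_weights: per-class weights up-weighting tail classes in the
                 distillation term; the exact weighting/schedule is assumed.
    """
    ce = F.cross_entropy(cls_logits, targets)
    teacher_labels = teacher_logits.argmax(dim=-1)  # hard distillation
    dist = F.cross_entropy(dist_logits, teacher_labels, weight=class_weights)
    return 0.5 * ce + 0.5 * dist
```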
