Search Results for author: Senthilnath Jayavelu

Found 7 papers, 1 paper with code

DO-GAN: A Double Oracle Framework for Generative Adversarial Networks

no code implementations · CVPR 2022 · Aye Phyu Phyu Aung, Xinrun Wang, Runsheng Yu, Bo An, Senthilnath Jayavelu, XiaoLi Li

In this paper, we propose a new approach to training Generative Adversarial Networks (GANs) in which we deploy a double-oracle framework using generator and discriminator oracles.

Continual Learning
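
The DO-GAN entry above describes growing pools of generator and discriminator oracles inside a double-oracle loop. Below is a minimal sketch of that idea on toy 1-D data, assuming small MLP players and a uniform meta-strategy over the collected oracles (the paper computes a meta-game equilibrium, omitted here); every function and variable name is illustrative, not taken from the authors' code.

```python
# Hedged sketch of a double-oracle GAN loop on toy 1-D Gaussian data.
import torch
import torch.nn as nn

def make_g():  # generator oracle: noise -> sample
    return nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

def make_d():  # discriminator oracle: sample -> logit
    return nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0  # toy "real" distribution

bce = nn.BCEWithLogitsLoss()

def best_response_g(d_pool, steps=50):
    """Train a new generator against a uniform mixture of discriminator oracles."""
    g = make_g()
    opt = torch.optim.Adam(g.parameters(), lr=1e-3)
    for _ in range(steps):
        fake = g(torch.randn(64, 4))
        # average the adversarial loss over the discriminator pool (uniform meta-strategy)
        loss = sum(bce(d(fake), torch.ones(64, 1)) for d in d_pool) / len(d_pool)
        opt.zero_grad(); loss.backward(); opt.step()
    return g

def best_response_d(g_pool, steps=50):
    """Train a new discriminator against a uniform mixture of generator oracles."""
    d = make_d()
    opt = torch.optim.Adam(d.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = bce(d(real_batch()), torch.ones(64, 1))
        for g in g_pool:
            with torch.no_grad():
                fake = g(torch.randn(64, 4))
            loss = loss + bce(d(fake), torch.zeros(64, 1)) / len(g_pool)
        opt.zero_grad(); loss.backward(); opt.step()
    return d

# Double-oracle outer loop: grow both pools with best responses each round.
g_pool, d_pool = [make_g()], [make_d()]
for round_idx in range(3):
    g_pool.append(best_response_g(d_pool))
    d_pool.append(best_response_d(g_pool))
    print(f"round {round_idx}: {len(g_pool)} generators, {len(d_pool)} discriminators")
```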

Does Adversarial Oversampling Help us?

no code implementations · 20 Aug 2021 · Tanmoy Dam, Md Meftahul Ferdaus, Sreenatha G. Anavatti, Senthilnath Jayavelu, Hussein A. Abbass

Rather than adversarial minority oversampling, we propose an adversarial oversampling (AO) and a data-space oversampling (DO) approach.

Robust classification
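
The snippet above names adversarial oversampling (AO) and data-space oversampling (DO) but does not spell out their mechanics. The sketch below is not the paper's method; it only illustrates the baseline notion of oversampling a minority class in data space via SMOTE-style interpolation, with all names and numbers chosen for the example.

```python
# Generic illustration of oversampling a minority class in data space
# (SMOTE-style interpolation). NOT the paper's AO/DO approach.
import numpy as np

def interpolate_minority(x_min, n_new, rng):
    """Create n_new synthetic minority samples by interpolating random pairs."""
    i = rng.integers(0, len(x_min), size=n_new)
    j = rng.integers(0, len(x_min), size=n_new)
    lam = rng.random((n_new, 1))
    return x_min[i] + lam * (x_min[j] - x_min[i])

# Toy imbalanced data: 200 majority vs. 20 minority samples in 2-D.
rng = np.random.default_rng(0)
x_major = rng.normal(0.0, 1.0, size=(200, 2))
x_minor = rng.normal(3.0, 0.5, size=(20, 2))
x_synth = interpolate_minority(x_minor, n_new=180, rng=rng)
print(x_synth.shape)  # (180, 2) -> balances the two classes
```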

Robust Representation Learning with Self-Distillation for Domain Generalization

no code implementations · 14 Feb 2023 · Ankur Singh, Senthilnath Jayavelu

Despite the recent success of deep neural networks, there remains a need for effective methods to enhance domain generalization using vision transformers.

Domain Generalization · Representation Learning
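
The abstract above refers to self-distillation for domain generalization. A minimal sketch of the generic self-distillation pattern is shown below: the same network's prediction under stochastic perturbation (dropout) is pulled toward its own deterministic prediction, on top of the usual cross-entropy. The temperature, loss weighting, and small MLP stand-in for a vision transformer are all assumptions for illustration, not the paper's setup.

```python
# Minimal self-distillation loss sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))

model.train()
logits_student = model(x)            # stochastic pass (dropout active)
with torch.no_grad():
    model.eval()
    logits_teacher = model(x)        # deterministic pass acts as the "teacher"
    model.train()

T = 2.0  # distillation temperature (assumed hyper-parameter)
loss_ce = F.cross_entropy(logits_student, y)
loss_kd = F.kl_div(F.log_softmax(logits_student / T, dim=-1),
                   F.softmax(logits_teacher / T, dim=-1),
                   reduction="batchmean") * T * T
loss = loss_ce + 0.5 * loss_kd       # 0.5 is an illustrative weighting
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```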

S-REINFORCE: A Neuro-Symbolic Policy Gradient Approach for Interpretable Reinforcement Learning

no code implementations · 12 May 2023 · Rajdeep Dutta, Qincheng Wang, Ankur Singh, Dhruv Kumarjiguda, Li Xiaoli, Senthilnath Jayavelu

This paper presents a novel RL algorithm, S-REINFORCE, which is designed to generate interpretable policies for dynamic decision-making tasks.

Decision Making · reinforcement-learning
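
The S-REINFORCE entry above pairs a policy-gradient learner with a symbolic, interpretable policy. The sketch below shows a plain REINFORCE update for a small neural policy on a toy task, followed by fitting a low-degree polynomial surrogate to the learned action probabilities as a stand-in for the paper's symbolic step. The environment, network sizes, and polynomial surrogate are illustrative assumptions only.

```python
# Hedged sketch: REINFORCE + a simple interpretable surrogate of the policy.
import numpy as np
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def step_env(state, action):
    """Toy 1-D task: action 1 is rewarded when state > 0, action 0 otherwise."""
    return 1.0 if (action == 1) == (state > 0) else 0.0

for _ in range(300):  # REINFORCE: maximise E[R * log pi(a|s)]
    s = np.random.uniform(-1, 1)
    logits = policy(torch.tensor([[s]], dtype=torch.float32))
    dist = torch.distributions.Categorical(logits=logits)
    a = dist.sample()
    r = step_env(s, a.item())
    loss = -(dist.log_prob(a) * r).sum()
    opt.zero_grad(); loss.backward(); opt.step()

# "Symbolic" surrogate: fit a low-degree polynomial to pi(a=1|s) over a state grid.
grid = torch.linspace(-1, 1, 101).unsqueeze(1)
with torch.no_grad():
    p1 = torch.softmax(policy(grid), dim=-1)[:, 1].numpy()
coeffs = np.polyfit(grid.squeeze(1).numpy(), p1, deg=3)
print("interpretable surrogate pi(a=1|s) ~ poly coeffs:", np.round(coeffs, 3))
```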

Cross-Problem Learning for Solving Vehicle Routing Problems

no code implementations · 17 Apr 2024 · Zhuoyi Lin, Yaoxin Wu, Bangjian Zhou, Zhiguang Cao, Wen Song, Yingqian Zhang, Senthilnath Jayavelu

Accordingly, we propose to pre-train the backbone Transformer on TSP and then apply it when fine-tuning the Transformer models for each target VRP variant.
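
The abstract above describes a transfer pattern: pre-train a Transformer backbone on TSP, then reuse it when fine-tuning for each VRP variant. Below is a minimal sketch of that pattern, assuming a small Transformer encoder over node coordinates, a frozen backbone, and a lightweight problem-specific head; the module names, sizes, freezing scheme, and placeholder objective are illustrative, not the paper's code.

```python
# Hedged sketch of cross-problem transfer: TSP-pretrained backbone, VRP head.
import torch
import torch.nn as nn

class RoutingEncoder(nn.Module):
    def __init__(self, d_model=128, in_dim=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)          # node-coordinate embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=3)

    def forward(self, coords):                            # coords: (B, N, in_dim)
        return self.backbone(self.embed(coords))          # (B, N, d_model)

# 1) Pretraining stage (TSP) would train `encoder` end to end; here we just
#    keep its weights as if that had already happened.
encoder = RoutingEncoder()
pretrained_state = encoder.state_dict()

# 2) Fine-tuning stage for a VRP variant: load the TSP backbone, freeze it,
#    and train only a small problem-specific head.
ft_encoder = RoutingEncoder()
ft_encoder.load_state_dict(pretrained_state)
for p in ft_encoder.parameters():
    p.requires_grad_(False)                               # frozen backbone

vrp_head = nn.Linear(128, 1)                              # per-node score head (illustrative)
opt = torch.optim.Adam(vrp_head.parameters(), lr=1e-4)

coords = torch.rand(4, 20, 2)                             # 4 instances, 20 customer nodes
with torch.no_grad():
    node_emb = ft_encoder(coords)
scores = vrp_head(node_emb).squeeze(-1)                   # (4, 20)
loss = scores.pow(2).mean()                               # placeholder objective
opt.zero_grad(); loss.backward(); opt.step()
print(scores.shape)
```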
