Search Results for author: Shivchander Sudalairaj

Found 5 papers, 1 paper with code

LAB: Large-Scale Alignment for ChatBots

no code implementations • 2 Mar 2024 • Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Kai Xu, David D. Cox, Akash Srivastava

This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training.

Instruction Following • Language Modelling • +2
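As a rough illustration of the kind of pipeline LAB-style alignment builds on, the sketch below shows taxonomy-guided prompt construction for synthetic instruction generation. This is a hypothetical sketch, not the paper's method: the names (TaxonomyNode, make_generation_prompt) and the taxonomy structure are illustrative assumptions, and no teacher model is actually called.

```python
# Hypothetical sketch: walk a skills taxonomy and build prompts that a
# teacher LLM could use to generate synthetic instruction-tuning data.
# All names and structure here are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaxonomyNode:
    name: str
    seed_examples: List[str] = field(default_factory=list)  # human-curated seeds
    children: List["TaxonomyNode"] = field(default_factory=list)

def leaves(node: TaxonomyNode):
    """Yield leaf nodes; each leaf is one skill/knowledge area to cover."""
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)

def make_generation_prompt(leaf: TaxonomyNode, n: int = 5) -> str:
    """Build a few-shot prompt asking a teacher model for n new
    instructions in the style of the leaf's seed examples."""
    shots = "\n".join(f"- {ex}" for ex in leaf.seed_examples)
    return (
        f"Skill: {leaf.name}\n"
        f"Existing examples:\n{shots}\n"
        f"Write {n} new, diverse instructions for this skill."
    )

root = TaxonomyNode("skills", children=[
    TaxonomyNode("writing", ["Summarize this article.", "Draft a formal email."]),
    TaxonomyNode("reasoning", ["Solve: if x + 2 = 5, what is x?"]),
])

for leaf in leaves(root):
    print(make_generation_prompt(leaf), end="\n\n")
```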

Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries

1 code implementation • 4 Mar 2023 • Charlotte Loh, Seungwook Han, Shivchander Sudalairaj, Rumen Dangovski, Kai Xu, Florian Wenzel, Marin Soljacic, Akash Srivastava

In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters.

Representation Learning • Uncertainty Quantification
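A minimal PyTorch sketch of the ensembling step this abstract describes: averaging predictive distributions across members trained under different symmetry hypotheses. The two toy classifiers below merely stand in for, say, a rotation-invariant and a rotation-sensitive encoder; the actual MSE training procedure is not shown.

```python
# Minimal sketch: combine ensemble members by averaging class
# probabilities. Each member would, in an MSE-style setup, be trained
# under a different symmetry prior; the members here are untrained stand-ins.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim=32, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        return self.net(x)

members = [TinyClassifier() for _ in range(2)]  # one per symmetry hypothesis

@torch.no_grad()
def ensemble_predict(models, x):
    """Average class probabilities across members; disagreement between
    symmetry hypotheses is what supplies the diversity benefit."""
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

x = torch.randn(4, 32)
print(ensemble_predict(members, x).shape)  # torch.Size([4, 10])
```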

Grafting Vision Transformers

no code implementations • 28 Oct 2022 • Jongwoo Park, Kumara Kahatapitiya, Donghyun Kim, Shivchander Sudalairaj, Quanfu Fan, Michael S. Ryoo

In this paper, we present a simple and efficient add-on component (termed GrafT) that considers global dependencies and multi-scale information throughout the network, in both high- and low-resolution features alike.

Image Classification • Instance Segmentation • +3
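To make the "global dependencies on high-resolution features" idea concrete, here is a hypothetical add-on branch in the spirit the abstract describes: pool a feature map to a coarse grid, apply global self-attention there, upsample, and fuse back residually. The layer choices (pooling size, attention setup, class name GlobalBranch) are illustrative assumptions, not the paper's exact GrafT design.

```python
# Hypothetical GrafT-style add-on: cheap global attention on pooled
# tokens, grafted back onto the high-resolution feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalBranch(nn.Module):
    def __init__(self, dim, pooled=8, heads=4):
        super().__init__()
        self.pooled = pooled
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        coarse = F.adaptive_avg_pool2d(x, self.pooled)   # (B, C, p, p)
        tokens = coarse.flatten(2).transpose(1, 2)       # (B, p*p, C)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)       # global mixing
        out = out.transpose(1, 2).reshape(b, c, self.pooled, self.pooled)
        out = F.interpolate(out, size=(h, w), mode="bilinear",
                            align_corners=False)
        return x + out                                   # graft back residually

feat = torch.randn(2, 64, 32, 32)
print(GlobalBranch(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```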

On the Importance of Calibration in Semi-supervised Learning

no code implementations • 10 Oct 2022 • Charlotte Loh, Rumen Dangovski, Shivchander Sudalairaj, Seungwook Han, Ligong Han, Leonid Karlinsky, Marin Soljacic, Akash Srivastava

State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data by combining techniques of consistency regularization and pseudo-labeling.
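Pseudo-labeling, one of the two techniques named above, typically keeps only unlabeled examples whose predicted confidence clears a threshold; that is exactly where calibration matters, since overconfident models admit noisy labels. The sketch below shows this mechanism with a temperature parameter as an illustrative calibration knob; it is a minimal stand-in, not the authors' method.

```python
# Minimal sketch: confidence-thresholded pseudo-labeling. The temperature
# is an illustrative calibration knob (as in temperature scaling); a
# miscalibrated model lets noisy pseudo-labels through the threshold.
import torch

def pseudo_labels(logits, threshold=0.95, temperature=1.0):
    """Return pseudo-labels and a keep-mask for unlabeled examples whose
    (temperature-scaled) confidence clears the threshold."""
    probs = torch.softmax(logits / temperature, dim=-1)
    conf, labels = probs.max(dim=-1)
    mask = conf >= threshold
    return labels[mask], mask

logits = torch.randn(8, 10) * 3          # stand-in model outputs
labels, mask = pseudo_labels(logits, threshold=0.9, temperature=1.5)
print(f"kept {mask.sum().item()} of {len(mask)} unlabeled examples")
```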
