Search Results for author: Gopinath Chennupati

Found 16 papers, 4 papers with code

Federated Self-Learning with Weak Supervision for Speech Recognition

no code implementations • 21 Jun 2023 • Milind Rao, Gopinath Chennupati, Gautam Tiwari, Anit Kumar Sahu, Anirudh Raju, Ariya Rastrow, Jasha Droppo

Low-footprint automatic speech recognition (ASR) models are increasingly being deployed on edge devices for conversational agents, which enhances privacy.

Automatic Speech Recognition (ASR) +4

Learning When to Trust Which Teacher for Weakly Supervised ASR

no code implementations • 21 Jun 2023 • Aakriti Agrawal, Milind Rao, Anit Kumar Sahu, Gopinath Chennupati, Andreas Stolcke

We show the efficacy of our approach using LibriSpeech and LibriLight benchmarks and find an improvement of 4 to 25% over baselines that uniformly weight all the experts, use a single expert model, or combine experts using ROVER.

Automatic Speech Recognition (ASR) +1
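
The gain quoted above comes from replacing uniform expert weighting with learned trust in individual teachers. A minimal, hypothetical sketch of that idea (the names and weighting model here are illustrative, not the paper's): score each teacher's pseudo-label hypothesis by a per-utterance confidence times a learned per-teacher trust weight, and keep the best-scoring one.

```python
import numpy as np

def select_pseudo_label(hypotheses, confidences, trust):
    """Pick one teacher hypothesis per utterance.

    hypotheses  : list of transcripts, one per teacher
    confidences : per-utterance confidence reported by each teacher
    trust       : learned per-teacher trust weights (uniform = baseline)
    """
    scores = np.asarray(confidences) * np.asarray(trust)
    return hypotheses[int(np.argmax(scores))]

# Toy usage: three teachers transcribe the same utterance.
hyps = ["the cat sat", "the cat sad", "a cat sat"]
confs = [0.81, 0.64, 0.77]       # teacher-reported confidences
trust = [1.2, 0.6, 1.0]          # learned trust, vs. uniform [1, 1, 1]
print(select_pseudo_label(hyps, confs, trust))  # -> "the cat sat"
```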

Can Calibration Improve Sample Prioritization?

no code implementations • 12 Oct 2022 • Ganesh Tata, Gautham Krishna Gudur, Gopinath Chennupati, Mohammad Emtiyaz Khan

Calibration can reduce overconfident predictions of deep neural networks, but can calibration also accelerate training?
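A minimal sketch of the pairing the question implies, assuming temperature scaling as the calibration method and max-probability as the difficulty signal (both standard choices, not necessarily the paper's exact setup): calibrate logits, then prioritize the least-confident samples for training.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T is fit on a held-out set."""
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def prioritize(logits, T, budget):
    """Return indices of the `budget` least-confident samples,
    scored with temperature-calibrated probabilities."""
    conf = softmax(logits, T).max(axis=1)
    return np.argsort(conf)[:budget]   # hardest samples first

# Toy usage: 5 samples, 3 classes.
logits = np.random.randn(5, 3)
print(prioritize(logits, T=1.5, budget=2))
```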

BB-ML: Basic Block Performance Prediction using Machine Learning Techniques

no code implementations • 16 Feb 2022 • Hamdy Abdelkhalik, Shamminuj Aktar, Yehia Arafa, Atanu Barai, Gopinath Chennupati, Nandakishore Santhi, Nishant Panda, Nirmal Prajapati, Nazmul Haque Turja, Stephan Eidenbenz, Abdel-Hameed Badawy

We extrapolate the basic block execution counts of GPU applications from smaller input sizes and use them to predict performance at larger input sizes.

BIG-bench Machine Learning
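
The abstract describes extrapolating per-block execution counts from small inputs to large ones. A minimal sketch of that idea, assuming a simple per-block polynomial fit of count versus input size (the paper's actual learned model may differ):

```python
import numpy as np

def extrapolate_counts(input_sizes, counts, target_size, degree=2):
    """Fit count-vs-input-size per basic block, predict a larger size.

    input_sizes : 1-D array of small input sizes that were profiled
    counts      : (n_sizes, n_blocks) execution counts per basic block
    """
    counts = np.asarray(counts, dtype=float)
    preds = []
    for block in counts.T:                  # one model per basic block
        coeffs = np.polyfit(input_sizes, block, degree)
        preds.append(np.polyval(coeffs, target_size))
    return np.array(preds)

# Toy usage: two basic blocks profiled at three small input sizes.
sizes = [64, 128, 256]
counts = [[64, 10], [128, 40], [256, 160]]  # block 0 linear, block 1 superlinear
print(extrapolate_counts(sizes, counts, target_size=1024))
```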

An Effective Baseline for Robustness to Distributional Shift

1 code implementation • 15 May 2021 • Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, Jeff Bilmes

In this work, we present a simple but highly effective approach to out-of-distribution detection that uses the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting.

 Ranked #1 on Out-of-Distribution Detection on CIFAR-100 (using extra training data)

Out-of-Distribution Detection Robust classification +1
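
The abstention principle above turns OOD detection into a thresholding problem on an extra abstain output. A minimal inference-time sketch, assuming a classifier trained with a (k+1)-th abstain class (names and the threshold are illustrative):

```python
import numpy as np

def ood_scores(probs):
    """probs: (n, k+1) softmax outputs; the last column is the abstain
    class. The abstain probability itself serves as the OOD score."""
    return probs[:, -1]

def flag_ood(probs, threshold=0.5):
    return ood_scores(probs) > threshold

# Toy usage: two in-distribution samples, one likely OOD.
p = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.1, 0.1, 0.8]])   # high mass on the abstain column
print(flag_ood(p))                # -> [False False  True]
```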

A Simple and Effective Baseline for Out-of-Distribution Detection using Abstention

no code implementations • 1 Jan 2021 • Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, Jeff Bilmes

In this work, we present a simple but highly effective approach to out-of-distribution detection that uses the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting.

Out-of-Distribution Detection text-classification +1

Decoy Selection for Protein Structure Prediction Via Extreme Gradient Boosting and Ranking

no code implementations • 3 Oct 2020 • Nasrin Akhter, Gopinath Chennupati, Hristo Djidjev, Amarda Shehu

Consensus methods show varied success in handling the challenge of decoy selection, in part because of issues with clustering large decoy sets and decoy sets that exhibit little structural similarity.

BIG-bench Machine Learning Clustering +1
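
Per the title, the method leans on gradient-boosted ranking rather than clustering-based consensus. A minimal sketch with xgboost's XGBRanker, assuming per-decoy structural features and query groups formed per target protein (the feature and label choices here are illustrative):

```python
import numpy as np
from xgboost import XGBRanker

# Toy data: 2 targets, 4 decoys each, 3 structural features per decoy.
X = np.random.rand(8, 3)                 # e.g. energy terms, contact stats
y = np.array([3, 2, 1, 0, 3, 1, 2, 0])   # relevance: lower RMSD -> higher label
groups = [4, 4]                          # decoys are ranked within each target

ranker = XGBRanker(objective="rank:pairwise", n_estimators=50)
ranker.fit(X, y, group=groups)

# Rank the first target's decoys: higher score = better predicted decoy.
scores = ranker.predict(X[:4])
print(np.argsort(-scores))
```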

Distributed Non-Negative Tensor Train Decomposition

no code implementations • 4 Aug 2020 • Manish Bhattarai, Gopinath Chennupati, Erik Skau, Raviteja Vangara, Hristo Djidjev, Boian Alexandrov

Tensor train (TT) is a state-of-the-art tensor network introduced for the factorization of high-dimensional tensors.
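
For reference, a tensor train represents an order-d tensor as a chain of 3-way cores contracted along shared rank indices. A minimal sketch of reconstructing a full tensor from TT cores (the paper's distributed, non-negative decomposition algorithm is not reproduced here):

```python
import numpy as np

def tt_to_full(cores):
    """Contract TT cores G_i of shape (r_{i-1}, n_i, r_i) into a full
    tensor. Boundary ranks satisfy r_0 = r_d = 1."""
    full = cores[0]                                  # shape (1, n_1, r_1)
    for core in cores[1:]:
        full = np.einsum('...a,abc->...bc', full, core)
    return full.squeeze(axis=(0, -1))                # drop boundary ranks

# Toy usage: a 3x4x5 tensor with TT ranks (1, 2, 3, 1).
cores = [np.random.rand(1, 3, 2),
         np.random.rand(2, 4, 3),
         np.random.rand(3, 5, 1)]
print(tt_to_full(cores).shape)                       # -> (3, 4, 5)
```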

Combating Label Noise in Deep Learning Using Abstention

2 code implementations • 27 May 2019 • Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, Jamal Mohd-Yusof

In the case of unstructured (arbitrary) label noise, abstention during training enables the DAC to be used as an effective data cleaner by identifying samples that are likely to have label noise.

General Classification Image Classification +1
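
The data-cleaning use described above follows directly from training-time abstention: samples the DAC persistently abstains on are flagged as likely label noise. A minimal sketch (the threshold and names are illustrative):

```python
import numpy as np

def flag_noisy_samples(abstain_probs, threshold=0.5):
    """abstain_probs: per-sample probability mass the trained DAC puts
    on the abstain class; persistent abstention suggests a bad label."""
    return np.where(abstain_probs > threshold)[0]

# Toy usage: sample 2 is abstained on and sent for re-labeling/removal.
p_abstain = np.array([0.05, 0.10, 0.92, 0.30])
print(flag_noisy_samples(p_abstain))   # -> [2]
```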

Knows When it Doesn’t Know: Deep Abstaining Classifiers

no code implementations • ICLR 2019 • Sunil Thulasidasan, Tanmoy Bhattacharya, Jeffrey Bilmes, Gopinath Chennupati, Jamal Mohd-Yusof

We introduce the deep abstaining classifier -- a deep neural network trained with a novel loss function that provides an abstention option during training.
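
A sketch of the abstention loss the abstract refers to, written from the paper's description as best understood here (a modified cross-entropy over k classes plus an abstain class, with a penalty α that trades accuracy against abstention; treat the exact form as an assumption and consult the paper):

```python
import numpy as np

def dac_loss(probs, labels, alpha):
    """Deep Abstaining Classifier loss (sketch).

    probs  : (n, k+1) softmax outputs; the last column is the abstain class
    labels : (n,) true class indices in [0, k)
    alpha  : abstention penalty; larger alpha -> abstain less often
    """
    p_abstain = probs[:, -1]
    p_true = probs[np.arange(len(labels)), labels]
    # Cross-entropy on the renormalized k-class distribution, damped by
    # (1 - p_abstain), plus a penalty for choosing to abstain.
    ce = -(1.0 - p_abstain) * np.log(p_true / (1.0 - p_abstain))
    return (ce + alpha * np.log(1.0 / (1.0 - p_abstain))).mean()

# Toy usage: two samples, k = 2 classes + abstain column.
p = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.2, 0.6]])
print(dac_loss(p, labels=np.array([0, 1]), alpha=1.0))
```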

eAnt-Miner : An Ensemble Ant-Miner to Improve the ACO Classification

no code implementations • 9 Sep 2014 • Gopinath Chennupati

To address this issue, this paper applies a well-known machine learning technique, an ensemble of classifiers, in which an ACO classifier serves as the base classifier for the ensemble.

Classification General Classification
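
The ensemble idea above is bagging-style voting with an ACO rule-based classifier as the base learner. A minimal, hypothetical sketch with bootstrap resampling and majority voting (the base classifier here is a stand-in, not an Ant-Miner implementation):

```python
import numpy as np
from collections import Counter

class VotingEnsemble:
    """Majority-vote ensemble. `base_fit(X, y)` must return a fitted
    classifier exposing `.predict(X)` (an Ant-Miner ACO classifier in
    the paper; any scikit-learn-style classifier works here)."""

    def __init__(self, base_fit, n_members=10, seed=0):
        self.base_fit = base_fit
        self.n_members = n_members
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        n = len(X)
        # Each member is trained on a bootstrap resample of the data.
        self.members = []
        for _ in range(self.n_members):
            idx = self.rng.integers(0, n, n)
            self.members.append(self.base_fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        votes = np.array([m.predict(X) for m in self.members])
        # Majority vote per sample across the ensemble members.
        return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```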
