Search Results for author: Anish Acharya

Found 16 papers, 4 papers with code

Contrastive Approach to Prior Free Positive Unlabeled Learning

no code implementations • 8 Feb 2024 • Anish Acharya, Sujay Sanghavi

Positive Unlabeled (PU) learning refers to the task of learning a binary classifier given a few labeled positive samples, and a set of unlabeled samples (which could be positive or negative).

Representation Learning
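As a quick illustration of the PU setting described above, here is a toy sketch (not the paper's prior-free method; for contrast it uses the classical non-negative PU risk correction of Kiryo et al., which assumes the class prior is known):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: positives centered at +2, negatives at -2.
pos = rng.normal(+2.0, 1.0, size=500)
neg = rng.normal(-2.0, 1.0, size=500)

# PU setting: only a handful of positives are labeled; everything else
# (remaining positives + all negatives) goes into one unlabeled pool.
labeled_pos = pos[:20]
unlabeled = np.concatenate([pos[20:], neg])

# Class prior pi = P(y = +1). Prior-free methods avoid needing it, but the
# classical unbiased PU risk estimator below assumes it is known.
pi = len(pos) / (len(pos) + len(neg))

def risk(threshold):
    """Non-negative PU estimate of the 0-1 risk of the rule 1[x > threshold]."""
    loss_pos = np.mean(labeled_pos <= threshold)        # positives misclassified
    loss_unl_as_neg = np.mean(unlabeled > threshold)    # unlabeled treated as negative
    loss_pos_as_neg = np.mean(labeled_pos > threshold)  # correction term
    # The max(0, .) clamp (Kiryo et al.) keeps the estimate non-negative.
    return pi * loss_pos + max(0.0, loss_unl_as_neg - pi * loss_pos_as_neg)

best = min(np.linspace(-4, 4, 81), key=risk)
print(f"selected threshold: {best:.2f}")
```

With positives at +2 and negatives at -2, minimizing the PU risk over thresholds recovers a decision boundary near the midpoint, despite never seeing a labeled negative.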

Positive Unlabeled Contrastive Learning

no code implementations • 1 Jun 2022 • Anish Acharya, Sujay Sanghavi, Li Jing, Bhargav Bhushanam, Michael Rabbat, Inderjit Dhillon

We extend this paradigm to the classical positive unlabeled (PU) setting, where the task is to learn a binary classifier given only a few labeled positive samples, and (often) a large amount of unlabeled samples (which could be positive or negative).

Contrastive Learning • Pseudo Label

DISCO: efficient unsupervised decoding for discrete natural language problems via convex relaxation

no code implementations • 7 Jul 2021 • Anish Acharya, Rudrajit Das

In this paper we study test-time decoding, a ubiquitous step in almost all sequential text generation tasks spanning a wide array of natural language processing (NLP) problems.

Adversarial Text • Text Generation

Robust Training in High Dimensions via Block Coordinate Geometric Median Descent

2 code implementations • 16 Jun 2021 • Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu

Geometric median (GM) is a classical statistical method for robustly estimating the uncorrupted data; under gross corruption it achieves the optimal breakdown point of 0.5.

Ranked #19 on Image Classification on MNIST (Accuracy metric)

Image Classification • Vocal Bursts Intensity Prediction
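The geometric median behind this paper's estimator can be computed with the classical Weiszfeld fixed-point iteration; a minimal sketch (not the paper's block-coordinate variant) shows its robustness next to the plain mean:

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld iteration: fixed-point scheme for the geometric
    median of the rows of X (the point minimizing summed distances)."""
    y = X.mean(axis=0)  # start from the (non-robust) coordinate-wise mean
    for _ in range(iters):
        d = np.linalg.norm(X - y, axis=1)
        w = 1.0 / np.maximum(d, eps)      # inverse-distance weights
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

# 9 clean points near the origin plus 3 gross outliers (25% corruption,
# safely below the 0.5 breakdown point):
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 0.1, size=(9, 2))
corrupt = np.full((3, 2), 100.0)
X = np.vstack([clean, corrupt])

gm = geometric_median(X)
print("mean:  ", X.mean(axis=0))  # dragged far toward the outliers
print("median:", gm)              # stays near the clean cluster
```

The mean lands around (25, 25) because the outliers contribute linearly, while the geometric median stays near the origin since each point's influence is bounded.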

Neural Distributed Source Coding

no code implementations • 5 Jun 2021 • Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis

Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder.

GupShup: An Annotated Corpus for Abstractive Summarization of Open-Domain Code-Switched Conversations

no code implementations • 17 Apr 2021 • Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle Lee, Anish Acharya, Rajiv Ratn Shah

Towards this objective, we introduce abstractive summarization of Hindi-English code-switched conversations and develop the first code-switched conversation summarization dataset, GupShup, which contains over 6,831 conversations in Hindi-English and their corresponding human-annotated summaries in English and Hindi-English.

Abstractive Text Summarization

Faster Non-Convex Federated Learning via Global and Local Momentum

no code implementations • 7 Dec 2020 • Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu

We propose \texttt{FedGLOMO}, a novel federated learning (FL) algorithm with an iteration complexity of $\mathcal{O}(\epsilon^{-1.5})$ to converge to an $\epsilon$-stationary point (i.e., $\mathbb{E}[\|\nabla f(\bm{x})\|^2] \leq \epsilon$) for smooth non-convex functions -- under arbitrary client heterogeneity and compressed communication -- compared to the $\mathcal{O}(\epsilon^{-2})$ complexity of most prior works.

Federated Learning
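The idea of combining local client updates with a server-side momentum buffer can be illustrated on a toy problem. This is not \texttt{FedGLOMO} itself (the paper's variance-reduced global and local momentum is more involved); it is a plain FedAvg loop with global momentum, on quadratics whose joint optimum is the mean of the client centers:

```python
import numpy as np

# Toy federated setup: client i holds f_i(x) = 0.5 * ||x - c_i||^2,
# so the global optimum is the mean of the client centers (here, the origin).
centers = np.array([[4.0, 0.0], [0.0, 4.0], [-4.0, 0.0], [0.0, -4.0]])

def local_update(x, c, lr=0.1, steps=5):
    """A few steps of local gradient descent on f_i, starting from x."""
    for _ in range(steps):
        x = x - lr * (x - c)  # gradient of 0.5 * ||x - c||^2 is (x - c)
    return x

x = np.array([10.0, 10.0])    # server model
m = np.zeros_like(x)          # server-side (global) momentum buffer
beta, server_lr = 0.9, 1.0

for _ in range(100):
    # Each round, clients run local steps and return their model deltas.
    deltas = np.stack([local_update(x, c) - x for c in centers])
    avg_delta = deltas.mean(axis=0)
    m = beta * m + (1 - beta) * avg_delta  # momentum smooths client drift
    x = x + server_lr * m

print("final iterate:", x)
```

Because the client objectives are heterogeneous (different centers), each round's averaged delta is noisy; the momentum buffer averages across rounds and the iterate still converges to the shared optimum at the origin.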

On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization

1 code implementation • 20 Nov 2020 • Abolfazl Hashemi, Anish Acharya, Rudrajit Das, Haris Vikalo, Sujay Sanghavi, Inderjit Dhillon

In this paper, we show that in such compressed decentralized optimization settings there are benefits to having multiple gossip steps between subsequent gradient iterations, even when the cost of doing so is appropriately accounted for, e.g., by reducing the precision of the compressed information.
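A gossip step is one multiplication of the node states by a doubly-stochastic mixing matrix; repeating it drives the nodes toward consensus. A minimal sketch on a ring of 8 nodes (illustrating gossip averaging only, not the paper's compressed-optimization algorithm):

```python
import numpy as np

# Ring of 8 nodes, each starting with a different scalar value.
n = 8
x = np.arange(n, dtype=float)
target = x.mean()  # exact consensus value

# Doubly-stochastic mixing matrix for a ring: keep half your value,
# take a quarter from each neighbour.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def consensus_error(x, gossip_steps):
    """Worst-case deviation from the average after repeated mixing."""
    y = x.copy()
    for _ in range(gossip_steps):
        y = W @ y  # one gossip step: exchange with neighbours and average
    return np.abs(y - target).max()

for q in (1, 2, 4, 8):
    print(f"{q} gossip step(s): max error {consensus_error(x, q):.3f}")
```

The error shrinks geometrically with the number of gossip steps (at the rate of the mixing matrix's second eigenvalue), which is the quantity traded off against communication precision in the paper's setting.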

Detecting the Trend in Musical Taste over the Decade -- A Novel Feature Extraction Algorithm to Classify Musical Content with Simple Features

no code implementations • 19 Dec 2018 • Anish Acharya

The main idea of this article is a new feature selection scheme that achieves high classification accuracy with a small, carefully chosen set of simple features, and is therefore less prone to over-fitting.

Classification • Dimensionality Reduction • +2

Online Embedding Compression for Text Classification using Low Rank Matrix Factorization

no code implementations • 1 Nov 2018 • Anish Acharya, Rahul Goel, Angeliki Metallinou, Inderjit Dhillon

Empirically, we show that the proposed method can achieve 90% compression with minimal impact in accuracy for sentence classification tasks, and outperforms alternative methods like fixed-point quantization or offline word embedding compression.

General Classification • Quantization • +3
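The compression arithmetic behind low-rank embedding factorization is easy to sketch. This is a generic truncated-SVD factorization on a random matrix, not the paper's online method; the shapes and the rank here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "embedding table": a vocabulary of 10,000 words, 256-dim vectors.
vocab, dim, rank = 10_000, 256, 32
E = rng.normal(size=(vocab, dim))

# Truncated SVD gives the best rank-r approximation E ~= A @ B,
# replacing vocab*dim parameters with rank*(vocab + dim).
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]      # shape (vocab, rank)
B = Vt[:rank]                   # shape (rank, dim)

orig = vocab * dim
compressed = rank * (vocab + dim)
print(f"compression: {1 - compressed / orig:.1%} fewer parameters")
```

An embedding lookup then becomes `A[word_id] @ B` instead of `E[word_id]`; with these toy shapes the factorization stores roughly 87% fewer parameters, in the same regime as the 90% figure quoted above.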

On Image segmentation using Fractional Gradients-Learning Model Parameters using Approximate Marginal Inference

1 code implementation • 7 May 2016 • Anish Acharya, Uddipan Mukherjee, Charless Fowlkes

Estimates of image gradients play a ubiquitous role in image segmentation and classification problems since gradients directly relate to the boundaries or the edges of a scene.

Edge Detection • General Classification • +3

Are We Ready for Driver-less Vehicles? Security vs. Privacy- A Social Perspective

no code implementations • 16 Dec 2014 • Anish Acharya

At this moment, autonomous cars are probably the biggest and most talked-about technology in the robotics research community.

Autonomous Driving

Template Matching based Object Detection Using HOG Feature Pyramid

no code implementations • 27 Jun 2014 • Anish Acharya

This article provides a step-by-step development of an object detection scheme using an HOG-based feature pyramid combined with template matching.

Object • object-detection • +2
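The core idea, matching gradient-orientation statistics rather than raw pixels, can be sketched with a crude single-cell HOG-style descriptor and a sliding-window search. This is a toy stand-in for the article's full feature pyramid; the descriptor, test image, and planted template are all illustrative:

```python
import numpy as np

def hog_descriptor(patch, bins=8):
    """Crude HOG-style descriptor: one magnitude-weighted histogram of
    gradient orientations over the whole patch, L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def match(image, template, step=1):
    """Slide the template over the image; return the top-left corner of the
    window whose descriptor is most similar to the template's."""
    th, tw = template.shape
    t_desc = hog_descriptor(template)
    best, best_score = (0, 0), -np.inf
    for r in range(0, image.shape[0] - th + 1, step):
        for c in range(0, image.shape[1] - tw + 1, step):
            score = hog_descriptor(image[r:r + th, c:c + tw]) @ t_desc
            if score > best_score:
                best, best_score = (r, c), score
    return best

# Plant a strong diagonal-edge template into a noisy image, then search.
rng = np.random.default_rng(0)
image = rng.normal(0, 0.05, size=(40, 40))
template = np.tril(np.ones((8, 8))) * 5.0
image[12:20, 25:33] += template
print("best window at:", match(image, template))
```

Because the descriptor depends on edge orientation rather than absolute intensity, the planted patch scores far higher than pure-noise windows; a real detector would add a cell grid and multiple pyramid scales for spatial and scale invariance.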
