Search Results for author: Harsh Mishra

Found 3 papers, 1 paper with code

Accelerated Neural Network Training with Rooted Logistic Objectives

no code implementations • 5 Oct 2023 • Zhu Wang, Praveen Raj Veluswami, Harsh Mishra, Sathya N. Ravi

Furthermore, we illustrate the use of our novel rooted loss function in downstream generative-modeling applications, such as fine-tuning a StyleGAN model with the rooted loss.

Binary Classification • Data Augmentation
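The excerpt above mentions the rooted loss only by name. As a hedged illustration, the sketch below implements one plausible "rooted" reading of the binary logistic objective, replacing the log in log(1 + exp(-margin)) with a k-th root; the exact functional form and the root parameter k are assumptions made here for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def rooted_logistic_loss(logits, targets, k=4.0):
    """Illustrative "rooted" variant of the binary logistic loss.

    Standard logistic loss per sample:  log(1 + exp(-y * z))
    Rooted variant sketched here:       (1 + exp(-y * z)) ** (1 / k)

    NOTE: this exact form and the root parameter `k` are assumptions
    for illustration; consult the paper for the actual objective.
    """
    y = 2.0 * targets - 1.0                 # map {0, 1} labels to {-1, +1}
    margin = y * logits
    # softplus(-margin) = log(1 + exp(-margin)), computed stably
    log_term = F.softplus(-margin)
    # exp(log_term / k) = (1 + exp(-margin)) ** (1 / k)
    return torch.exp(log_term / k).mean()

# Example usage with random logits and binary labels:
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
loss = rooted_logistic_loss(logits, targets, k=4.0)
```

Since the k-th root is a monotone transform, each per-sample term is minimized by the same predictions as the logistic loss; the sum over samples weights them differently, however, which is where the training dynamics can change.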

Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization

1 code implementation • 12 Feb 2023 • Hamidreza Almasi, Harsh Mishra, Balajee Vamanan, Sathya N. Ravi

Therefore, to scale computation and data, these models are inevitably trained in a distributed manner across clusters of nodes, and their updates are aggregated before being applied to the model.

Data Augmentation
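The excerpt above refers to aggregating workers' updates in distributed training. The sketch below shows the baseline data-parallel aggregation step (a plain mean over workers via all-reduce) that robust aggregators such as the paper's Flag Aggregator are designed to replace; it assumes an already-initialized torch.distributed process group and is not the paper's method.

```python
import torch
import torch.distributed as dist

def average_gradients(model):
    """Baseline aggregation in data-parallel training: each worker
    computes local gradients, gradients are summed across workers with
    all_reduce, then divided by the world size before the optimizer
    step. Failure-tolerant aggregators replace this plain mean with a
    more robust rule.

    Assumes dist.init_process_group(...) has already been called on
    every worker.
    """
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```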

Using Intermediate Forward Iterates for Intermediate Generator Optimization

no code implementations • 5 Feb 2023 • Harsh Mishra, Jurijs Nazarovs, Manmohan Dogra, Sathya N. Ravi

In score-based models, a generative task is formulated using a parametric model (such as a neural network) to directly learn the gradient of the log-density (the score) of high-dimensional distributions, instead of the density functions themselves, as is done traditionally.

Denoising
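The excerpt above describes learning the score directly with a parametric model. As general background rather than this paper's specific intermediate-iterate method, the sketch below shows a standard denoising score matching objective for training a score network; score_net here is a hypothetical module mapping a batch of vectors to same-shaped score estimates.

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    """Generic denoising score matching objective: perturb the data
    with Gaussian noise and regress the network output onto the score
    of the perturbation kernel. For x_noisy = x + sigma * eps, the
    target is -(x_noisy - x) / sigma**2 = -eps / sigma.

    General background only, not this paper's specific method.
    """
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    target = -eps / sigma                 # score of the Gaussian kernel
    pred = score_net(x_noisy)             # predicted score at x_noisy
    return ((pred - target) ** 2).sum(dim=-1).mean()

# Example usage with a toy score network on 16-dimensional data:
score_net = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16)
)
x = torch.randn(32, 16)
loss = denoising_score_matching_loss(score_net, x, sigma=0.1)
```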
