Search Results for author: Tanya Marwah

Found 9 papers, 4 papers with code

Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures

1 code implementation • 30 Nov 2016 • Gaurav Mittal, Tanya Marwah, Vineeth N. Balasubramanian

This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW).

Text-to-Video Generation • Video Generation

UPS: Towards Foundation Models for PDE Solving via Cross-Modal Adaptation

1 code implementation • 11 Mar 2024 • Junhong Shen, Tanya Marwah, Ameet Talwalkar

We introduce UPS (Unified PDE Solver), an effective and data-efficient approach to solve diverse spatiotemporal PDEs defined over various domains, dimensions, and resolutions.

Multi-Task Learning

Disentangling the Mechanisms Behind Implicit Regularization in SGD

1 code implementation • 29 Nov 2022 • Zachary Novack, Simran Kaur, Tanya Marwah, Saurabh Garg, Zachary C. Lipton

A number of competing hypotheses have been proposed to explain why small-batch Stochastic Gradient Descent (SGD) leads to improved generalization over the full-batch regime, with recent work crediting the implicit regularization of various quantities throughout training.
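To make the two regimes contrasted here concrete, below is a minimal sketch (not the paper's experimental setup): small-batch SGD and full-batch gradient descent on a toy least-squares problem, differing only in the batch size passed to the same training loop.

```python
# Minimal sketch, for illustration only: small-batch SGD vs. full-batch
# gradient descent on a toy least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n, d = 512, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def train(batch_size, lr=0.01, epochs=200):
    """Run (mini-batch) gradient descent on the squared loss."""
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

w_small = train(batch_size=8)   # small-batch SGD: noisy per-step updates
w_full = train(batch_size=n)    # full-batch regime: exact gradient each step
print("train loss (small batch):", np.mean((X @ w_small - y) ** 2))
print("train loss (full batch): ", np.mean((X @ w_full - y) ** 2))
```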

Sentiment Dynamics in Social Media News Channels

no code implementations • 21 Aug 2019 • Nagendra Kumar, Rakshita Nagalla, Tanya Marwah, Manish Singh

We also analyze users' reactions and the sentiment of their opinions on news posts with different sentiments.

Parametric Complexity Bounds for Approximating PDEs with Neural Networks

no code implementations • NeurIPS 2021 • Tanya Marwah, Zachary C. Lipton, Andrej Risteski

Recent experiments have shown that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality.

Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective

no code implementations • 21 Oct 2022 • Tanya Marwah, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski

We show that if composing a function with Barron norm $b$ with partial derivatives of $L$ produces a function of Barron norm at most $B_L b^p$, then the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{\max\{p \log(1/\epsilon),\, p^{\log(1/\epsilon)}\}}\right)$.
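To unpack how this bound scales, here is an illustrative instantiation (our reading of the formula, not a claim quoted from the paper):

$$p = 1:\quad \max\{p\log(1/\epsilon),\ p^{\log(1/\epsilon)}\} = \max\{\log(1/\epsilon),\ 1\} = \log(1/\epsilon) \quad \text{for } \epsilon < 1/e,$$

so the bound reads $O\left((dB_L)^{\log(1/\epsilon)}\right)$, quasi-polynomial in $d$ and $1/\epsilon$. For $p > 1$, the term $p^{\log(1/\epsilon)} = (1/\epsilon)^{\log p}$ eventually dominates $p\log(1/\epsilon)$, so the bound grows exponentially in a power of $1/\epsilon$.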

Deep Equilibrium Based Neural Operators for Steady-State PDEs

no code implementations • NeurIPS 2023 • Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski

Motivated by this observation, we propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE as the infinite-depth fixed point of an implicit operator layer, using a black-box root solver, and differentiates analytically through this fixed point, resulting in $\mathcal{O}(1)$ training memory.
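The implicit-layer mechanism can be sketched as follows. This is a hedged toy example, not the authors' FNO-DEQ code: the FNO block is replaced by a small stand-in network, the fixed point is found by plain iteration with gradients disabled, and the backward pass goes through a single re-attached step, a cheap stand-in for the analytic differentiation through the fixed point described above. The point is that memory does not grow with the number of solver iterations.

```python
# Toy deep-equilibrium layer: solve z* = f(z*, x) by fixed-point iteration,
# then backpropagate through only one re-attached application of f.
import torch
import torch.nn as nn

class SimpleDEQ(nn.Module):
    def __init__(self, dim, iters=50):
        super().__init__()
        # Stand-in for an FNO operator block (hypothetical, for illustration).
        self.f = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        self.iters = iters

    def forward(self, x):
        z = torch.zeros_like(x)
        # Root solving without building a graph: memory is O(1) in solver depth.
        with torch.no_grad():
            for _ in range(self.iters):
                z = self.f(torch.cat([z, x], dim=-1))
        # One differentiable step re-attaches the fixed point to the graph
        # (a one-step approximation of the exact implicit gradient).
        z = self.f(torch.cat([z, x], dim=-1))
        return z

x = torch.randn(8, 16)
model = SimpleDEQ(dim=16)
out = model(x)
loss = out.pow(2).mean()
loss.backward()  # gradients flow through the final step, not the solver loop
```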
