1 code implementation • 30 Nov 2016 • Gaurav Mittal, Tanya Marwah, Vineeth N. Balasubramanian
This paper introduces a novel approach to video generation called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW).
1 code implementation • ICCV 2017 • Tanya Marwah, Gaurav Mittal, Vineeth N. Balasubramanian
This paper proposes a network architecture to perform variable length semantic video generation using captions.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Gaurav Mittal, Shubham Agrawal, Anuva Agarwal, Sushant Mehta, Tanya Marwah
We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions (scene-graphs).
no code implementations • 21 Aug 2019 • Nagendra Kumar, Rakshita Nagalla, Tanya Marwah, Manish Singh
We also analyze users' reactions and opinion sentiment on news posts with different sentiments.
no code implementations • NeurIPS 2021 • Tanya Marwah, Zachary C. Lipton, Andrej Risteski
Recent experiments have shown that deep networks can approximate solutions to high-dimensional PDEs, seemingly escaping the curse of dimensionality.
no code implementations • 21 Oct 2022 • Tanya Marwah, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski
We show that if composing a function with Barron norm $b$ with partial derivatives of $L$ produces a function of Barron norm at most $B_L b^p$, the solution to the PDE can be $\epsilon$-approximated in the $L^2$ sense by a function with Barron norm $O\left(\left(dB_L\right)^{\max\{p \log(1/ \epsilon), p^{\log(1/\epsilon)}\}}\right)$.
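To make the exponent concrete (an illustrative reading of the stated bound, not an additional claim): for $\epsilon < 1/e$ and $p = 1$, $\max\{p \log(1/\epsilon), p^{\log(1/\epsilon)}\} = \log(1/\epsilon)$, so the bound specializes to $O\left(\left(dB_L\right)^{\log(1/\epsilon)}\right)$ — i.e. the Barron norm of the approximating function grows quasi-polynomially in $1/\epsilon$ and $d$, rather than exponentially in the dimension $d$.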
1 code implementation • 29 Nov 2022 • Zachary Novack, Simran Kaur, Tanya Marwah, Saurabh Garg, Zachary C. Lipton
A number of competing hypotheses have been proposed to explain why small-batch Stochastic Gradient Descent (SGD) leads to improved generalization over the full-batch regime, with recent work crediting the implicit regularization of various quantities throughout training.
no code implementations • NeurIPS 2023 • Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary C. Lipton, Jianfeng Lu, Andrej Risteski
Motivated by this observation, we propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE as the infinite-depth fixed point of an implicit operator layer, using a black-box root solver and differentiating analytically through this fixed point, resulting in $\mathcal{O}(1)$ training memory.
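The deep-equilibrium mechanism described here can be sketched in a toy setting (a hypothetical illustration, not the paper's actual FNO-DEQ architecture: the map `f`, the weights `W`, and the fixed-point iteration are assumptions for demonstration). The "infinite-depth" output $z^*$ satisfies $z^* = f(z^*, x)$; a black-box root solver finds $z^*$, and the gradient is obtained analytically through the fixed point via the implicit function theorem, so memory does not grow with the number of solver iterations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.normal(scale=0.1, size=(d, d))  # small weights so f is a contraction

def f(z, x):
    """One implicit 'layer': z -> tanh(W z + x)."""
    return np.tanh(W @ z + x)

def solve_fixed_point(x, tol=1e-12, max_iter=1000):
    """Black-box root solve of z = f(z, x) by plain fixed-point iteration."""
    z = np.zeros(d)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next

def jacobian_x(z_star, x):
    """Implicit differentiation: z* = f(z*, x) implies
    dz*/dx = (I - df/dz)^{-1} df/dx,
    with df/dz = diag(s) W and df/dx = diag(s), where s = 1 - tanh(.)^2."""
    s = 1.0 - np.tanh(W @ z_star + x) ** 2
    return np.linalg.solve(np.eye(d) - s[:, None] * W, np.diag(s))

x = rng.normal(size=d)
z_star = solve_fixed_point(x)        # z_star satisfies z* = f(z*, x)

# Sanity check: implicit gradient vs. central finite differences in x[0].
eps = 1e-6
e0 = np.zeros(d); e0[0] = eps
fd = (solve_fixed_point(x + e0) - solve_fixed_point(x - e0)) / (2 * eps)
```

Because only the fixed point $z^*$ is kept (never the solver's intermediate iterates), the backward pass reduces to one linear solve, which is the source of the $\mathcal{O}(1)$ training-memory claim.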
1 code implementation • 11 Mar 2024 • Junhong Shen, Tanya Marwah, Ameet Talwalkar
We introduce UPS (Unified PDE Solver), an effective and data-efficient approach to solve diverse spatiotemporal PDEs defined over various domains, dimensions, and resolutions.