no code implementations • 21 May 2023 • Guy Lorberbom, Itai Gat, Yossi Adi, Alex Schwing, Tamir Hazan
We show that the current version of the forward-forward algorithm is suboptimal in terms of information flow through the network, resulting in a lack of collaboration between its layers.
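As a rough illustration of the per-layer training the paper analyzes, here is a minimal PyTorch sketch of a forward-forward layer objective, using the common squared-activation "goodness" and a threshold theta (illustrative names, not this paper's code). Each layer optimizes its own loss with no gradient flowing through other layers, which is the isolation the paper identifies as limiting information flow:

```python
import torch
import torch.nn.functional as F

def ff_layer_loss(layer, x_pos, x_neg, theta=2.0):
    """Layer-local forward-forward objective (sketch): push the 'goodness'
    (sum of squared activations) above theta for positive data and below
    theta for negative data, with no backward pass through other layers."""
    g_pos = layer(x_pos).pow(2).sum(dim=1)
    g_neg = layer(x_neg).pow(2).sum(dim=1)
    return F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()

# Each layer is trained independently on (detached) inputs from the layer below.
layer = torch.nn.Sequential(torch.nn.Linear(784, 500), torch.nn.ReLU())
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)
opt.zero_grad()
ff_layer_loss(layer, x_pos, x_neg).backward()
opt.step()
```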
1 code implementation • CVPR 2023 • Chen Ziwen, Kaushik Patnaik, Shuangfei Zhai, Alvin Wan, Zhile Ren, Alex Schwing, Alex Colburn, Li Fuxin
To achieve this, we propose AutoFocusFormer (AFF), a local-attention transformer image recognition backbone, which performs adaptive downsampling by learning to retain the most important pixels for the task.
Ranked #4 on Instance Segmentation on Cityscapes val
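A simplified sketch of adaptive downsampling via a learned importance score with hard top-k retention (the actual AFF downsampler uses a learned, differentiable merging of local neighborhoods; `scorer` and `keep_ratio` here are illustrative):

```python
import torch

def adaptive_downsample(tokens, scorer, keep_ratio=0.25):
    """Keep the highest-scoring tokens instead of pooling on a fixed grid.
    tokens: (B, N, C); scorer: module mapping C -> 1 importance logit."""
    B, N, C = tokens.shape
    scores = scorer(tokens).squeeze(-1)   # (B, N) learned importance
    k = max(1, int(N * keep_ratio))
    idx = scores.topk(k, dim=1).indices   # indices of retained tokens
    kept = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(B, k, C))
    return kept, idx  # note: hard top-k is non-differentiable w.r.t. scores

scorer = torch.nn.Linear(96, 1)
kept, idx = adaptive_downsample(torch.randn(2, 1024, 96), scorer)
print(kept.shape)  # torch.Size([2, 256, 96])
```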
no code implementations • 1 Mar 2023 • Peiye Zhuang, Samira Abnar, Jiatao Gu, Alex Schwing, Joshua M. Susskind, Miguel Ángel Bautista
Diffusion probabilistic models have quickly become a major approach for generative modeling of images, 3D geometry, video and other domains.
1 code implementation • 27 Nov 2022 • Tiantian Fang, Ruoyu Sun, Alex Schwing
In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN.
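A sketch of a discriminator-gradient-gap penalty in the spirit of DigGAN: penalize the squared difference between the discriminator's input-gradient norms at real and fake samples, added on top of any base GAN loss (the exact form and weighting used in the paper may differ):

```python
import torch

def dig_gap_penalty(disc, x_real, x_fake):
    """Squared gap between D's gradient norms at real vs. fake samples (sketch)."""
    def grad_norm(x):
        x = x.detach().requires_grad_(True)
        (g,) = torch.autograd.grad(disc(x).sum(), x, create_graph=True)
        return g.flatten(1).norm(2, dim=1)
    return (grad_norm(x_real).mean() - grad_norm(x_fake).mean()).pow(2)

D = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, 1))
x_real, x_fake = torch.randn(8, 64), torch.randn(8, 64)
# Added to any existing GAN discriminator objective:
# d_loss = d_loss_base + lambda_dig * dig_gap_penalty(D, x_real, x_fake)
print(dig_gap_penalty(D, x_real, x_fake))
```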
no code implementations • 1 Jan 2021 • Tiantian Fang, Alex Schwing, Ruoyu Sun
We use this PC-layer in two ways: 1) fixed preconditioning (FPC) adds a fixed PC-layer to all layers, and 2) adaptive preconditioning (APC) adaptively controls the strength of preconditioning.
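A hypothetical sketch of the FPC/APC distinction only (the paper's actual PC-layer construction is not reproduced here; `PCLayer`, its normalization, and `alpha` are illustrative stand-ins): the preconditioning strength is a fixed constant under FPC and a learned parameter under APC:

```python
import torch

class PCLayer(torch.nn.Module):
    """Illustrative preconditioning wrapper: blend a layer's raw output with a
    normalized version; alpha is the preconditioning strength."""
    def __init__(self, layer, alpha=1.0, adaptive=False):
        super().__init__()
        self.layer = layer
        a = torch.tensor(float(alpha))
        if adaptive:                        # APC: learn the strength
            self.alpha = torch.nn.Parameter(a)
        else:                               # FPC: keep it fixed
            self.register_buffer('alpha', a)

    def forward(self, x):
        y = self.layer(x)
        y_pc = y / (y.norm(dim=-1, keepdim=True) + 1e-8)  # crude preconditioning
        return self.alpha * y_pc + (1 - self.alpha) * y

fpc = PCLayer(torch.nn.Linear(64, 64), alpha=1.0, adaptive=False)
apc = PCLayer(torch.nn.Linear(64, 64), alpha=0.5, adaptive=True)
```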
no code implementations • 1 Jan 2021 • Iou-Jen Liu, Unnat Jain, Alex Schwing
Exploration is critical to the performance of deep reinforcement learning algorithms and has drawn much attention.
1 code implementation • NeurIPS 2020 • Ruoyu Sun, Tiantian Fang, Alex Schwing
We also perform experiments to support our theory that RpGAN has a better landscape than separable-GAN.
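For concreteness, here are the separable and paired (RpGAN) discriminator losses under the common logistic instantiation (a sketch, not the paper's code): in the separable loss the real and fake terms decouple, whereas RpGAN couples them through the score difference:

```python
import torch
import torch.nn.functional as F

def separable_gan_d_loss(d_real, d_fake):
    """Standard separable GAN loss: real and fake terms decouple."""
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()

def rpgan_d_loss(d_real, d_fake):
    """RpGAN loss: scores enter only through the real-fake difference."""
    return F.softplus(-(d_real - d_fake)).mean()

d_real, d_fake = torch.randn(8), torch.randn(8)
print(separable_gan_d_loss(d_real, d_fake).item(),
      rpgan_d_loss(d_real, d_fake).item())
```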
no code implementations • 28 Sep 2020 • Jyoti Aneja, Alex Schwing, Jan Kautz, Arash Vahdat
To tackle this issue, we propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior.
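A sketch of such a reweighted prior's unnormalized log-density, p(z) ∝ p_base(z) · r(z), with r parameterized by a small network (how r is trained is not shown; `reweight_net` is an illustrative name):

```python
import torch

def reweighted_log_prior(z, base_dist, reweight_net):
    """Unnormalized log-density of p(z) ∝ p_base(z) * exp(reweight_net(z))."""
    return base_dist.log_prob(z).sum(-1) + reweight_net(z).squeeze(-1)

base = torch.distributions.Normal(torch.zeros(16), torch.ones(16))
r_net = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
z = base.sample((8,))
print(reweighted_log_prior(z, base, r_net).shape)  # torch.Size([8])
```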
1 code implementation • ECCV 2020 • Yash Kant, Dhruv Batra, Peter Anderson, Alex Schwing, Devi Parikh, Jiasen Lu, Harsh Agrawal
Further, each head in our multi-head self-attention layer focuses on a different subset of relations.
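A sketch of per-head relation masking: each head may only attend along its own subset of relation labels, with self-attention always permitted so every attention row stays well-defined (`rel` and `head_rels` are illustrative; the paper's spatial-relation encoding may differ):

```python
import torch

def relation_masked_attention(q, k, v, rel, head_rels):
    """q, k, v: (B, H, N, d); rel: (B, N, N) integer relation labels;
    head_rels[h]: relation labels head h is allowed to attend over."""
    B, H, N, d = q.shape
    att = (q @ k.transpose(-2, -1)) / d ** 0.5          # (B, H, N, N)
    eye = torch.eye(N, dtype=torch.bool)                # keep self-attention
    for h, allowed in enumerate(head_rels):
        mask = torch.isin(rel, torch.tensor(allowed)) | eye
        att[:, h] = att[:, h].masked_fill(~mask, float('-inf'))
    return att.softmax(-1) @ v

q = k = v = torch.randn(2, 4, 10, 16)
rel = torch.randint(0, 8, (2, 10, 10))
out = relation_masked_attention(q, k, v, rel, [[0, 1], [2, 3], [4, 5], [6, 7]])
print(out.shape)  # torch.Size([2, 4, 10, 16])
```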
no code implementations • 23 Feb 2020 • Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan
Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively.
no code implementations • 26 Nov 2019 • E. A. Huerta, Gabrielle Allen, Igor Andreoni, Javier M. Antelis, Etienne Bachelet, Bruce Berriman, Federica Bianco, Rahul Biswas, Matias Carrasco, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Maya Fishbach, Francisco Förster, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Robert Gruendl, Anushri Gupta, Roland Haas, Sarah Habib, Elise Jennings, Margaret W. G. Johnson, Erik Katsavounidis, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Zsuzsa Marka, Kenton McHenry, Jonah Miller, Claudia Moreno, Mark Neubauer, Steve Oberlin, Alexander R. Olivas, Donald Petravick, Adam Rebei, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard F. Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Leo Singer, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
Multi-messenger astrophysics is a fast-growing, interdisciplinary field that combines data, which vary in volume and speed of data processing, from many different instruments that probe the Universe using different cosmic messengers: electromagnetic waves, cosmic rays, gravitational waves and neutrinos.
no code implementations • 25 Sep 2019 • Ruoyu Sun, Tiantian Fang, Alex Schwing
In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics.
no code implementations • 25 Sep 2019 • Yossi Adi, Alex Schwing, Tamir Hazan
Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to "effortlessly" extract desired representations from many large-scale datasets.
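The prediction-averaging half of that recipe, as a sketch: draw parameter samples from the learned posterior and average the resulting predictive distributions (the weight perturbation below is only a stand-in for real posterior samples):

```python
import copy
import torch

def posterior_predictive(sampled_models, x):
    """Average class probabilities over models drawn from the posterior."""
    return torch.stack([m(x).softmax(-1) for m in sampled_models]).mean(0)

base = torch.nn.Linear(10, 3)
samples = [copy.deepcopy(base) for _ in range(5)]
for m in samples:                      # stand-in for posterior sampling
    for p in m.parameters():
        p.data += 0.01 * torch.randn_like(p)
print(posterior_predictive(samples, torch.randn(4, 10)).shape)  # (4, 3)
```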
no code implementations • 1 Feb 2019 • Gabrielle Allen, Igor Andreoni, Etienne Bachelet, G. Bruce Berriman, Federica B. Bianco, Rahul Biswas, Matias Carrasco Kind, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Anushri Gupta, Roland Haas, E. A. Huerta, Elise Jennings, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Kenton McHenry, J. M. Miller, M. S. Neubauer, Steve Oberlin, Alexander R. Olivas Jr, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
We discuss key aspects to realize this endeavor, namely (i) the design and exploitation of scalable and computationally efficient AI algorithms for Multi-Messenger Astrophysics; (ii) cyberinfrastructure requirements to numerically simulate astrophysical sources, and to process and interpret Multi-Messenger Astrophysics data; (iii) management of gravitational wave detections and triggers to enable electromagnetic and astro-particle follow-ups; (iv) a vision to harness future developments of machine and deep learning and cyberinfrastructure resources to cope with the scale of discovery in the Big Data Era; and (v) the need to build a community that brings domain experts together with data scientists on equal footing to maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics.
1 code implementation • NeurIPS 2016 • Renjie Liao, Alex Schwing, Richard Zemel, Raquel Urtasun
In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations.
no code implementations • NeurIPS 2016 • Yaniv Tenzer, Alex Schwing, Kevin Gimpel, Tamir Hazan
Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications.
no code implementations • NeurIPS 2015 • Ofer Meshi, Mehrdad Mahdavi, Alex Schwing
Maximum a-posteriori (MAP) inference is an important task for many applications.
no code implementations • NeurIPS 2014 • Shenlong Wang, Alex Schwing, Raquel Urtasun
In this paper, we prove that every multivariate polynomial with even degree can be decomposed into a sum of convex and concave polynomials.
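An illustrative instance of such a decomposition (this example is ours, not taken from the paper): the quartic x²y² is non-convex, yet splits into a convex plus a concave polynomial:

```latex
\[
x^2 y^2 \;=\; \underbrace{\tfrac{1}{2}\,(x^2 + y^2)^2}_{\text{convex}}
\;+\; \underbrace{\Bigl(-\tfrac{1}{2}\,\bigl(x^4 + y^4\bigr)\Bigr)}_{\text{concave}} .
\]
```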
no code implementations • NeurIPS 2014 • Jian Zhang, Alex Schwing, Raquel Urtasun
To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.
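In the generic dual-decomposition form (the standard setup; the paper's specific parallel variant may differ), the MRF energy is split into subproblems r that communicate only through Lagrange multipliers λ, so each inner minimization can run in parallel:

```latex
\[
\max_{\lambda}\; \sum_{r}\, \min_{x^{r}} \Bigl( E_{r}(x^{r})
  + \sum_{i \in r} \lambda_{r,i}(x^{r}_{i}) \Bigr)
\quad \text{s.t.} \quad \sum_{r \ni i} \lambda_{r,i}(\cdot) = 0 \;\; \forall i .
\]
```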
no code implementations • NeurIPS 2013 • Wenjie Luo, Alex Schwing, Raquel Urtasun
In this paper we present active learning algorithms in the context of structured prediction problems.
no code implementations • NeurIPS 2012 • Alex Schwing, Tamir Hazan, Marc Pollefeys, Raquel Urtasun
While finding the exact solution for the MAP inference problem is intractable for many real-world tasks, MAP LP relaxations have been shown to be very effective in practice.
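For reference, the standard pairwise MAP LP relaxation these methods target: optimize over pseudo-marginals μ in the local polytope instead of over integral assignments:

```latex
\begin{align*}
\max_{\mu \ge 0}\;\; & \sum_{i}\sum_{x_i} \theta_i(x_i)\,\mu_i(x_i)
  + \sum_{(i,j)}\sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\,\mu_{ij}(x_i, x_j) \\
\text{s.t.}\;\; & \sum_{x_i} \mu_i(x_i) = 1 \;\;\forall i, \qquad
  \sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i) \;\;\forall (i,j),\, x_i .
\end{align*}
```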