Search Results for author: Alex Schwing

Found 21 papers, 5 papers with code

Layer Collaboration in the Forward-Forward Algorithm

no code implementations 21 May 2023 Guy Lorberbom, Itai Gat, Yossi Adi, Alex Schwing, Tamir Hazan

We show that the current version of the forward-forward algorithm is suboptimal when considering information flow in the network, resulting in a lack of collaboration between layers of the network.
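
For context, the forward-forward algorithm trains each layer with its own local "goodness" objective on positive and negative inputs, detaching activations between layers; that per-layer isolation is exactly the missing collaboration the paper analyzes. Below is a minimal, hypothetical sketch of such a layer-local update in PyTorch (layer sizes, threshold, and optimizer settings are illustrative, not taken from the paper).

import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a purely local forward-forward objective."""
    def __init__(self, dim_in, dim_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def goodness(self, h):
        # "Goodness" of a representation: sum of squared activations per sample.
        return h.pow(2).sum(dim=1)

    def train_step(self, x_pos, x_neg):
        h_pos = F.relu(self.linear(x_pos))
        h_neg = F.relu(self.linear(x_neg))
        # Push goodness above the threshold for positive data, below for negative.
        loss = F.softplus(torch.cat([self.threshold - self.goodness(h_pos),
                                     self.goodness(h_neg) - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs: no gradient flows back to earlier layers, so layers
        # do not "collaborate" -- the property the paper analyzes.
        return h_pos.detach(), h_neg.detach()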

AutoFocusFormer: Image Segmentation off the Grid

1 code implementation CVPR 2023 Chen Ziwen, Kaushik Patnaik, Shuangfei Zhai, Alvin Wan, Zhile Ren, Alex Schwing, Alex Colburn, Li Fuxin

To achieve this, we propose AutoFocusFormer (AFF), a local-attention transformer image recognition backbone, which performs adaptive downsampling by learning to retain the most important pixels for the task.

Image Segmentation Instance Segmentation +2
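
The adaptive downsampling described above amounts to a learned choice of which tokens to keep once the token set no longer lives on a regular grid. The snippet below is a hedged, generic sketch of learned top-k token retention in PyTorch; it is not the actual AFF module, and the module name, scoring head, and keep ratio are all hypothetical.

import torch
import torch.nn as nn

class AdaptiveDownsample(nn.Module):
    def __init__(self, dim, keep_ratio=0.25):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learned per-token importance
        self.keep_ratio = keep_ratio

    def forward(self, tokens):
        # tokens: (batch, num_tokens, dim), an irregular (off-grid) token set
        b, n, d = tokens.shape
        k = max(1, int(n * self.keep_ratio))
        scores = self.score(tokens).squeeze(-1)            # (b, n)
        topk = scores.topk(k, dim=1).indices               # indices of kept tokens
        kept = torch.gather(tokens, 1, topk.unsqueeze(-1).expand(-1, -1, d))
        # Scale by (sigmoid of) the scores so the scoring head stays differentiable.
        weights = torch.sigmoid(torch.gather(scores, 1, topk)).unsqueeze(-1)
        return kept * weights

Stacking such a module between local-attention stages yields progressively sparser, content-dependent token sets instead of uniform grid downsampling.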

Diffusion Probabilistic Fields

no code implementations 1 Mar 2023 Peiye Zhuang, Samira Abnar, Jiatao Gu, Alex Schwing, Joshua M. Susskind, Miguel Ángel Bautista

Diffusion probabilistic models have quickly become a major approach for generative modeling of images, 3D geometry, video and other domains.

Denoising

DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data

1 code implementation 27 Nov 2022 Tiantian Fang, Ruoyu Sun, Alex Schwing

In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN.

Data Augmentation
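
A rough sketch of a discriminator-gradient-gap penalty in the spirit of DigGAN: it penalizes the difference between the discriminator's input-gradient norms on real and on generated samples, and can simply be added to an existing GAN discriminator loss. The exact functional form and weighting used in the paper may differ; the helper names below are hypothetical.

import torch

def grad_norm(discriminator, x):
    # Norm of the discriminator's gradient with respect to its input.
    x = x.detach().requires_grad_(True)
    out = discriminator(x).sum()
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    return grad.flatten(1).norm(2, dim=1)

def dig_penalty(discriminator, x_real, x_fake, weight=1.0):
    # Squared gap between average gradient norms on real vs. generated data.
    gap = grad_norm(discriminator, x_real).mean() - grad_norm(discriminator, x_fake).mean()
    return weight * gap.pow(2)

# Usage: add dig_penalty(D, x_real, x_fake) to the usual discriminator loss.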

Coordinated Multi-Agent Exploration Using Shared Goals

no code implementations 1 Jan 2021 Iou-Jen Liu, Unnat Jain, Alex Schwing

Exploration is critical to the success of deep reinforcement learning algorithms and has drawn much attention.

Reinforcement Learning (RL) +2

Precondition Layer and Its Use for GANs

no code implementations 1 Jan 2021 Tiantian Fang, Alex Schwing, Ruoyu Sun

We use this PC-layer in two ways: 1) fixed preconditioning (FPC) adds a fixed PC-layer to all layers, and 2) adaptive preconditioning (APC) adaptively controls the strength of preconditioning.
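
As a very rough, hypothetical illustration of the two usages named above, the sketch below applies a placeholder preconditioning operator either at fixed strength (FPC) or with a learned strength parameter (APC). The actual PC-layer construction in the paper is more specific than this generic example.

import torch
import torch.nn as nn

class PCLayer(nn.Module):
    def __init__(self, dim, adaptive=False):
        super().__init__()
        # Placeholder preconditioning operator; in practice it would be built
        # from layer/weight statistics rather than set to the identity.
        self.register_buffer("precond", torch.eye(dim))
        # APC: the strength alpha is learnable; FPC: it is fixed at 1.
        self.alpha = nn.Parameter(torch.ones(1)) if adaptive else None

    def forward(self, h):
        strength = self.alpha if self.alpha is not None else 1.0
        h_pc = h @ self.precond.t()
        # Interpolate between the raw and the preconditioned activations.
        return (1 - strength) * h + strength * h_pc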

Towards a Better Global Loss Landscape of GANs

1 code implementation NeurIPS 2020 Ruoyu Sun, Tiantian Fang, Alex Schwing

We also perform experiments to support our theory that RpGAN has a better landscape than separable-GAN.
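
For reference, the relativistic paired (RpGAN) losses analyzed here, contrasted with the standard "separable" non-saturating loss, are commonly written as below; batching and pairing details may differ from the paper's exact setup. The inputs d_real and d_fake are raw discriminator logits.

import torch.nn.functional as F

def rpgan_d_loss(d_real, d_fake):
    # Paired objective: the discriminator should rank a real sample above a fake one.
    return F.softplus(-(d_real - d_fake)).mean()

def rpgan_g_loss(d_real, d_fake):
    # Symmetric generator objective: make fakes rank above reals.
    return F.softplus(-(d_fake - d_real)).mean()

def separable_d_loss(d_real, d_fake):
    # Standard GAN discriminator loss, handled sample-by-sample ("separable").
    return F.softplus(-d_real).mean() + F.softplus(d_fake).mean()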

NCP-VAE: Variational Autoencoders with Noise Contrastive Priors

no code implementations 28 Sep 2020 Jyoti Aneja, Alex Schwing, Jan Kautz, Arash Vahdat

To tackle this issue, we propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior.
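
Paraphrasing the construction above in symbols (a sketch, not the paper's exact parameterization): the prior is the base prior reweighted by a factor obtained from a noise-contrastive binary classifier d(z) that separates aggregate-posterior samples from base-prior samples,

\[
p(z) \;=\; \frac{1}{Z}\, p_{\mathrm{base}}(z)\, r(z),
\qquad
r(z) \;\approx\; \frac{d(z)}{1 - d(z)},
\]

so that p(z) is pulled toward the aggregate posterior q(z) = \mathbb{E}_{x}[\,q(z \mid x)\,].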

On the Generalization of Bayesian Deep Nets for Multi-Class Classification

no code implementations 23 Feb 2020 Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan

Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively.

General Classification Generalization Bounds +1
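
The quantity such bounds control is the standard generalization gap between the true risk and the empirical risk (a textbook definition, not specific to this paper):

\[
R(h) - \hat{R}_n(h)
\;=\;
\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell(h(x),y)\big]
- \frac{1}{n}\sum_{i=1}^{n}\ell\big(h(x_i),y_i\big).
\]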

PAC-Bayesian Neural Network Bounds

no code implementations 25 Sep 2019 Yossi Adi, Alex Schwing, Tamir Hazan

Bayesian neural networks, which both use the negative log-likelihood loss function and average their predictions using a learned posterior over the parameters, have been used successfully across many scientific fields, partly due to their ability to 'effortlessly' extract desired representations from many large-scale datasets.

Generalization Bounds
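
As a reference point, the classical McAllester-style PAC-Bayes bound states that, with probability at least 1 - \delta over an i.i.d. sample of size n, for every posterior Q and any fixed prior P,

\[
\mathbb{E}_{h\sim Q}\big[R(h)\big]
\;\le\;
\mathbb{E}_{h\sim Q}\big[\hat{R}_n(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
\]

The bounds derived in the paper itself differ in form, but they control the same kind of gap for posteriors learned by Bayesian neural networks.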

CP-GAN: Towards a Better Global Landscape of GANs

no code implementations 25 Sep 2019 Ruoyu Sun, Tiantian Fang, Alex Schwing

In this work, we perform a global analysis of GANs from two perspectives: the global landscape of the outer-optimization problem and the global behavior of the gradient descent dynamics.

Deep Learning for Multi-Messenger Astrophysics: A Gateway for Discovery in the Big Data Era

no code implementations 1 Feb 2019 Gabrielle Allen, Igor Andreoni, Etienne Bachelet, G. Bruce Berriman, Federica B. Bianco, Rahul Biswas, Matias Carrasco Kind, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Anushri Gupta, Roland Haas, E. A. Huerta, Elise Jennings, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Kenton McHenry, J. M. Miller, M. S. Neubauer, Steve Oberlin, Alexander R. Olivas Jr, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao

We discuss key aspects to realize this endeavor, namely (i) the design and exploitation of scalable and computationally efficient AI algorithms for Multi-Messenger Astrophysics; (ii) cyberinfrastructure requirements to numerically simulate astrophysical sources, and to process and interpret Multi-Messenger Astrophysics data; (iii) management of gravitational wave detections and triggers to enable electromagnetic and astro-particle follow-ups; (iv) a vision to harness future developments of machine and deep learning and cyberinfrastructure resources to cope with the scale of discovery in the Big Data Era; and (v) the need to build a community that brings domain experts together with data scientists on equal footing to maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics.

Astronomy Management

Constraints Based Convex Belief Propagation

no code implementations NeurIPS 2016 Yaniv Tenzer, Alex Schwing, Kevin Gimpel, Tamir Hazan

Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications.

Efficient Inference of Continuous Markov Random Fields with Polynomial Potentials

no code implementations NeurIPS 2014 Shenlong Wang, Alex Schwing, Raquel Urtasun

In this paper, we prove that every multivariate polynomial with even degree can be decomposed into a sum of convex and concave polynomials.

3D Reconstruction Image Denoising
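
A one-dimensional instance of the decomposition claimed above:

\[
f(x) \;=\; x^{4} - 3x^{2}
\;=\;
\underbrace{x^{4}}_{\text{convex}}
+ \underbrace{\left(-3x^{2}\right)}_{\text{concave}},
\]

which makes f amenable to concave-convex style optimization; the paper establishes the analogous decomposition constructively for arbitrary even-degree multivariate polynomials.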

Message Passing Inference for Large Scale Graphical Models with High Order Potentials

no code implementations NeurIPS 2014 Jian Zhang, Alex Schwing, Raquel Urtasun

To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.

Semantic Segmentation
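
The generic dual-decomposition template behind such parallel solvers (the standard formulation, not this paper's specific high-order construction) splits the score over subproblems t, gives each subproblem its own copy x^t of the variables in an overcomplete indicator representation, and couples the copies through Lagrange multipliers \lambda^t:

\[
\max_{x}\,\sum_{t} E_t(x)
\;\le\;
\min_{\{\lambda^t\}:\,\sum_t \lambda^t = 0}\;
\sum_{t}\,\max_{x^{t}}\Big(E_t(x^{t}) + \lambda^{t\top} x^{t}\Big),
\]

where every inner maximization is independent of the others and can therefore be carried out in parallel.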

Latent Structured Active Learning

no code implementations NeurIPS 2013 Wenjie Luo, Alex Schwing, Raquel Urtasun

In this paper we present active learning algorithms in the context of structured prediction problems.

Active Learning Structured Prediction

Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins

no code implementations NeurIPS 2012 Alex Schwing, Tamir Hazan, Marc Pollefeys, Raquel Urtasun

While finding the exact solution for the MAP inference problem is intractable for many real-world tasks, MAP LP relaxations have been shown to be very effective in practice.
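
For context, the MAP LP relaxation referred to above replaces the discrete labeling problem with a linear program over local pseudo-marginals \mu constrained to the local polytope \mathcal{L}_G (the standard formulation):

\[
\max_{\mu \in \mathcal{L}_G}\;
\sum_{i}\sum_{x_i}\theta_i(x_i)\,\mu_i(x_i)
+ \sum_{(i,j)\in E}\sum_{x_i,x_j}\theta_{ij}(x_i,x_j)\,\mu_{ij}(x_i,x_j),
\]

with \mathcal{L}_G enforcing normalization, \sum_{x_i}\mu_i(x_i) = 1, and marginalization consistency, \sum_{x_j}\mu_{ij}(x_i,x_j) = \mu_i(x_i); the solvers in the paper operate on a dual of this kind of program.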
