Search Results for author: Shaojie Bai

Found 21 papers, 14 papers with code

Fast Registration of Photorealistic Avatars for VR Facial Animation

no code implementations 19 Jan 2024 Chaitanya Patel, Shaojie Bai, Te-Li Wang, Jason Saragih, Shih-En Wei

In this work, we first show that the domain gap between the avatar and headset-camera images is one of the primary sources of difficulty: a transformer-based architecture achieves high accuracy on domain-consistent data, but degrades when the domain gap is reintroduced.

Style Transfer

From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations

1 code implementation 3 Jan 2024 Evonne Ng, Javier Romero, Timur Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, Alexander Richard

We present a framework for generating full-bodied photorealistic avatars that gesture according to the conversational dynamics of a dyadic interaction.

Quantization

Path Independent Equilibrium Models Can Better Exploit Test-Time Computation

no code implementations 18 Nov 2022 Cem Anil, Ashwini Pokle, Kaiqu Liang, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, Zico Kolter, Roger Grosse

Designing networks capable of attaining better performance with an increased inference budget is important to facilitate generalization to harder problem instances.

Stability of Weighted Majority Voting under Estimated Weights

no code implementations 13 Jul 2022 Shaojie Bai, Dongxia Wang, Tim Muller, Peng Cheng, Jiming Chen

To formally analyse the uncertainty in the decision process, we introduce and analyse two important properties of such unbiased trust values: stability of correctness and stability of optimality.

Decision Making
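
As a concrete illustration of the setting above: a minimal sketch of binary weighted majority voting with weights derived from estimated voter reliabilities. The classical log-odds weighting and the Gaussian estimation noise are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_majority_vote(votes, trust):
    """Binary weighted majority vote.

    votes: (n_voters,) array of +1/-1 votes.
    trust: (n_voters,) estimated probability that each voter is correct.
    Uses the classical log-odds weights w_i = log(p_i / (1 - p_i)),
    which are optimal when the true reliabilities are known.
    """
    weights = np.log(trust / (1.0 - trust))
    return int(np.sign(np.dot(weights, votes)))

votes = np.array([+1, -1, +1, +1, -1])
trust_true = np.array([0.9, 0.6, 0.7, 0.8, 0.55])
# Estimated trust values are noisy; the paper asks when the resulting
# decision remains stable (correct/optimal) under such estimation error.
rng = np.random.default_rng(0)
trust_est = np.clip(trust_true + rng.normal(0, 0.05, 5), 0.01, 0.99)
print(weighted_majority_vote(votes, trust_true),
      weighted_majority_vote(votes, trust_est))
```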

Deep Equilibrium Optical Flow Estimation

1 code implementation CVPR 2022 Shaojie Bai, Zhengyang Geng, Yash Savani, J. Zico Kolter

Many recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms by encouraging iterative refinements toward a stable flow estimation.

Optical Flow Estimation

$(\textrm{Implicit})^2$: Implicit Layers for Implicit Representations

no code implementations NeurIPS 2021 Zhichun Huang, Shaojie Bai, J. Zico Kolter

Recent research in deep learning has investigated two very different forms of "implicitness": implicit representations model high-frequency data such as images or 3D shapes directly via a low-dimensional neural network (often using, e.g., sinusoidal bases or nonlinearities); implicit layers, in contrast, refer to techniques where the forward pass of a network is computed via non-linear dynamical systems, such as fixed-point or differential equation solutions, with the backward pass computed via the implicit function theorem.
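
To make the first notion concrete: a minimal sketch of an implicit representation, a small coordinate MLP with sinusoidal nonlinearities (SIREN-style initialization; omega_0 = 30 follows common convention). This illustrates the general idea, not the architecture proposed in the paper; the fixed-point flavor of implicit layers is sketched under the DEQ entries below.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, first=False, omega_0=30.0):
    # SIREN-style init: uniform in +/- sqrt(6/fan_in)/omega_0 (wider first layer).
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega_0
    return rng.uniform(-bound, bound, (fan_in, fan_out)), np.zeros(fan_out)

def siren(coords, layers, omega_0=30.0):
    # Implicit representation: map coordinates directly to signal values.
    h = coords
    for W, b in layers[:-1]:
        h = np.sin(omega_0 * (h @ W + b))
    W, b = layers[-1]
    return h @ W + b

layers = [init_layer(2, 64, first=True), init_layer(64, 64), init_layer(64, 1)]
xy = rng.uniform(-1.0, 1.0, (5, 2))   # five (x, y) query coordinates
print(siren(xy, layers).shape)        # (5, 1) predicted intensities
```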

Joint inference and input optimization in equilibrium networks

1 code implementation NeurIPS 2021 Swaminathan Gurumurthy, Shaojie Bai, Zachary Manchester, J. Zico Kolter

Many tasks in deep learning involve optimizing over the \emph{inputs} to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance.

Denoising · Meta-Learning
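
The inner problem described above is easy to state in code. A minimal sketch of optimizing the input of a frozen network to match a target output; a plain feedforward net stands in for the equilibrium network, and target matching is just one example objective.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.Tanh(), torch.nn.Linear(64, 8))
for p in net.parameters():
    p.requires_grad_(False)   # the network is frozen; only the input moves

target = torch.randn(8)
z = torch.zeros(16, requires_grad=True)   # the input being optimized
opt = torch.optim.Adam([z], lr=1e-1)
for step in range(200):
    loss = (net(z) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())   # small if the target is (approximately) reachable
```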

Neural Deep Equilibrium Solvers

no code implementations ICLR 2022 Shaojie Bai, Vladlen Koltun, J Zico Kolter

A deep equilibrium (DEQ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer $f_\theta$.
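
The fixed-point computation at the heart of a DEQ fits in a few lines. Below, a plain damped iteration stands in for the learned solvers the paper studies; the layer f is an arbitrary toy choice for illustration.

```python
import torch

torch.manual_seed(0)
lin_z = torch.nn.Linear(32, 32)
lin_x = torch.nn.Linear(32, 32)
f = lambda z, x: torch.tanh(lin_z(z) + lin_x(x))   # the single layer f_theta

def solve_fixed_point(x, iters=100, tol=1e-5, damping=0.5):
    # Damped iteration z <- (1 - a) z + a f(z, x) until the update stalls.
    z = torch.zeros_like(x)
    for _ in range(iters):
        z_next = (1 - damping) * z + damping * f(z, x)
        if (z_next - z).norm() < tol * (z.norm() + 1e-8):
            return z_next
        z = z_next
    return z

x = torch.randn(32)
with torch.no_grad():
    z_star = solve_fixed_point(x)
    print((z_star - f(z_star, x)).norm())   # residual; small if converged
```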

Stabilizing Equilibrium Models by Jacobian Regularization

1 code implementation 28 Jun 2021 Shaojie Bai, Vladlen Koltun, J. Zico Kolter

Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.

Language Modelling
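
The regularizer can be estimated without ever forming the Jacobian. A minimal sketch of a Hutchinson-style estimate of ||J_f||_F^2 via one vector-Jacobian product; the toy layer, the weight 0.1, and the placement in the loss are simplifications, not the paper's exact recipe.

```python
import torch

torch.manual_seed(0)
lin = torch.nn.Linear(32, 32)
f = lambda z: torch.tanh(lin(z))

z = torch.randn(32, requires_grad=True)   # stand-in for the equilibrium z*
y = f(z)
eps = torch.randn_like(y)
# vector-Jacobian product eps^T J_f(z); E_eps ||eps^T J||^2 = ||J||_F^2
vjp, = torch.autograd.grad(y, z, grad_outputs=eps, create_graph=True)
jac_reg = vjp.pow(2).sum()

task_loss = y.pow(2).mean()               # placeholder for the real task loss
(task_loss + 0.1 * jac_reg).backward()    # regularized training objective
```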

SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models

2 code implementations ICLR 2022 Zaccharie Ramzi, Florian Mannel, Shaojie Bai, Jean-Luc Starck, Philippe Ciuciu, Thomas Moreau

In Deep Equilibrium Models (DEQs), the training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.

Hyperparameter Optimization
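
For context, the linear system behind that Jacobian inversion: the implicit-function-theorem backward pass of a DEQ needs u = (I - J_f(z*))^{-T} dl/dz*, normally obtained with an iterative solver; SHINE's proposal is to reuse the inverse approximation the quasi-Newton forward solver already builds. A toy dense version of the system it avoids (forming the Jacobian like this is only feasible at toy scale):

```python
import torch

torch.manual_seed(0)
lin = torch.nn.Linear(8, 8)
f = lambda z: torch.tanh(lin(z))

z_star = torch.randn(8)      # stands in for a solved equilibrium point
grad_z = torch.randn(8)      # upstream gradient dl/dz*
J = torch.autograd.functional.jacobian(f, z_star)   # (8, 8); toy scale only
u = torch.linalg.solve((torch.eye(8) - J).T, grad_z)
print(u.shape)               # u then backpropagates into theta through f
```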

A Note on Connecting Barlow Twins with Negative-Sample-Free Contrastive Learning

2 code implementations 28 Apr 2021 Yao-Hung Hubert Tsai, Shaojie Bai, Louis-Philippe Morency, Ruslan Salakhutdinov

In this report, we relate the algorithmic design of Barlow Twins' method to the Hilbert-Schmidt Independence Criterion (HSIC), thus establishing it as a contrastive learning approach that is free of negative samples.

Contrastive Learning · Self-Supervised Learning
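
A minimal sketch of the Barlow Twins objective the note analyses: push the cross-correlation matrix of two views' normalized embeddings toward the identity, so no negative samples are needed. The HSIC connection is the report's contribution; the loss below is the standard published form, lightly simplified.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (batch, dim) embeddings of two augmented views."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)       # normalize along the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.T @ z2) / n                      # (d, d) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()        # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy
    return on_diag + lam * off_diag

z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(barlow_twins_loss(z1, z2).item())
```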

Multiscale Deep Equilibrium Models

4 code implementations NeurIPS 2020 Shaojie Bai, Vladlen Koltun, J. Zico Kolter

These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation.

General Classification · Image Classification +2

Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel

1 code implementation EMNLP 2019 Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov

This new formulation gives us a better way to understand individual components of the Transformer's attention, such as how to better integrate the positional embedding.

Machine Translation · Translation
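
The kernel view is easy to state: attention is a kernel smoother over (key, value) pairs, and the exponential dot-product kernel with 1/sqrt(d) scaling recovers standard softmax attention. A minimal sketch (single head, no masking):

```python
import numpy as np

def kernel_attention(Q, K, V, kernel=lambda q, k: np.exp(q @ k.T)):
    """Attention as kernel smoothing:
    Attn(q) = sum_i k(q, k_i) v_i / sum_j k(q, k_j).
    With the exponential dot-product kernel and 1/sqrt(d) scaling,
    this is exactly softmax attention."""
    d = Q.shape[-1]
    W = kernel(Q / np.sqrt(d), K)            # (n_q, n_k) kernel weights
    return (W / W.sum(-1, keepdims=True)) @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(kernel_attention(Q, K, V).shape)       # (4, 8)
```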

Trellis Networks for Sequence Modeling

1 code implementation ICLR 2019 Shaojie Bai, J. Zico Kolter, Vladlen Koltun

On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with a special sparsity structure in their weight matrices.

Language Modelling · Sequential Image Classification
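
A heavily simplified sketch of the trellis network structure: one weight-tied causal convolution applied repeatedly, with the raw input re-injected at every level. The real model uses LSTM-style gated activations and further refinements, all omitted here.

```python
import torch

torch.manual_seed(0)
conv = torch.nn.Conv1d(16 + 32, 32, kernel_size=2)   # shared across all levels

def trellis(x, depth=8):                  # x: (batch, 16, time)
    z = torch.zeros(x.size(0), 32, x.size(2))
    for _ in range(depth):                # same weights at every level
        h = torch.nn.functional.pad(torch.cat([x, z], dim=1), (1, 0))  # causal
        z = torch.tanh(conv(h))           # input x is re-injected each time
    return z

print(trellis(torch.randn(1, 16, 50)).shape)   # (1, 32, 50)
```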

An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

32 code implementations 4 Mar 2018 Shaojie Bai, J. Zico Kolter, Vladlen Koltun

Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.

Audio Synthesis · Language Modelling +5
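
The core building block of the evaluated convolutional architecture (TCN) is compact enough to sketch: a dilated causal convolution, left-padded so outputs never see the future. Stacking blocks with dilations 1, 2, 4, ... grows the receptive field exponentially, which is where the long effective memory comes from; the published TCN adds residual connections, weight normalization, and dropout, omitted here.

```python
import torch

class CausalConv1d(torch.nn.Module):
    """Dilated causal convolution: left-pad by (k-1)*dilation so each
    output position depends only on current and past inputs."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = torch.nn.Conv1d(channels, channels, kernel_size,
                                    dilation=dilation)

    def forward(self, x):                 # x: (batch, channels, time)
        return self.conv(torch.nn.functional.pad(x, (self.pad, 0)))

# Dilations 1, 2, 4, 8 give an exponentially growing receptive field.
net = torch.nn.Sequential(*[CausalConv1d(16, dilation=2**i) for i in range(4)])
x = torch.randn(1, 16, 100)
print(net(x).shape)   # (1, 16, 100): length is preserved
```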
