Search Results for author: Steven Walton

Found 6 papers, 5 papers with code

StyleNAT: Giving Each Head a New Perspective

2 code implementations • 10 Nov 2022 • Steven Walton, Ali Hassani, Xingqian Xu, Zhangyang Wang, Humphrey Shi

Image generation has been a long sought-after but challenging task, and performing the generation task in an efficient manner is similarly difficult.

Face Generation

Design Amortization for Bayesian Optimal Experimental Design

no code implementations • 7 Oct 2022 • Noble Kennamer, Steven Walton, Alexander Ihler

Bayesian optimal experimental design is a sub-field of statistics focused on developing methods to make efficient use of experimental resources.

Computational Efficiency · Experimental Design

Neighborhood Attention Transformer

5 code implementations • CVPR 2023 • Ali Hassani, Steven Walton, Jiachen Li, Shen Li, Humphrey Shi

We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision.

Image Classification · Object Detection · +1
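
The sketch below is a naive, illustration-only take on the sliding-window idea behind Neighborhood Attention: each query attends only to a small window centred on it. It uses zero padding at image borders and omits the learned projections and multi-head split, so it is not the paper's implementation; the authors distribute optimized kernels separately.

```python
# Naive sliding-window (neighborhood-style) attention sketch, for intuition only.
import torch
import torch.nn.functional as F

def neighborhood_attention(x, kernel_size=3):
    """x: (B, H, W, C). Each pixel attends only to a kernel_size x kernel_size
    window centred on it. Borders are zero-padded here, which differs from the
    paper's behaviour of keeping the window inside the image."""
    B, H, W, C = x.shape
    pad = kernel_size // 2
    scale = C ** -0.5

    # queries: one per pixel; keys/values: the local window around each pixel
    q = x.reshape(B, H * W, 1, C)                                 # (B, HW, 1, C)
    kv = F.pad(x.permute(0, 3, 1, 2), (pad, pad, pad, pad))       # (B, C, H+2p, W+2p)
    kv = kv.unfold(2, kernel_size, 1).unfold(3, kernel_size, 1)   # (B, C, H, W, k, k)
    kv = kv.permute(0, 2, 3, 4, 5, 1).reshape(B, H * W, kernel_size * kernel_size, C)

    attn = torch.softmax((q * scale) @ kv.transpose(-2, -1), dim=-1)  # (B, HW, 1, k*k)
    out = (attn @ kv).reshape(B, H, W, C)
    return out

out = neighborhood_attention(torch.randn(2, 8, 8, 32), kernel_size=3)
print(out.shape)  # torch.Size([2, 8, 8, 32])
```

Because each query only touches k² neighbours instead of all H·W tokens, cost grows linearly with image size, which is the scalability argument the abstract refers to.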

SeMask: Semantically Masked Transformers for Semantic Segmentation

1 code implementation • arXiv 2021 • Jitesh Jain, Anukriti Singh, Nikita Orlov, Zilong Huang, Jiachen Li, Steven Walton, Humphrey Shi

We propose SeMask, a simple and effective framework that incorporates semantic information into the encoder with the help of a semantic attention operation.

Semantic Segmentation
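
The snippet below is a hypothetical reading of what a "semantic attention" gate could look like: encoder features are projected to coarse per-pixel class scores, which are then used to reweight those same features. The class count, gating function, and placement are assumptions for illustration, not the SeMask architecture itself.

```python
# Hypothetical semantic gating sketch (assumed design, not the SeMask layer).
import torch
import torch.nn as nn

class SemanticGate(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.to_semantic = nn.Linear(dim, num_classes)  # coarse per-pixel class scores
        self.to_gate = nn.Linear(num_classes, dim)      # map scores back to feature space

    def forward(self, x):
        # x: (B, N, C) patch features from an encoder stage
        sem = self.to_semantic(x)                        # (B, N, num_classes)
        gate = torch.sigmoid(self.to_gate(sem.softmax(dim=-1)))
        return x * gate, sem                             # gated features + semantic map

feats = torch.randn(2, 196, 96)
gated, sem_map = SemanticGate(96, 21)(feats)
print(gated.shape, sem_map.shape)  # torch.Size([2, 196, 96]) torch.Size([2, 196, 21])
```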

ConvMLP: Hierarchical Convolutional MLPs for Vision

4 code implementations • 9 Sep 2021 • Jiachen Li, Ali Hassani, Steven Walton, Humphrey Shi

MLP-based architectures, which consist of a sequence of consecutive multi-layer perceptron blocks, have recently been found to reach comparable results to convolutional and transformer-based methods.

Ranked #8 on Image Classification on Flowers-102 (using extra training data)

Image Classification · Instance Segmentation · +3
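
Below is a generic sketch of the kind of multi-layer perceptron block such architectures stack. The LayerNorm/GELU choices and the dimensions are assumptions for illustration; ConvMLP itself additionally combines blocks like this with convolutional stages, which the sketch does not show.

```python
# Generic residual MLP block of the sort stacked by MLP-based vision models.
import torch
import torch.nn as nn

class MLPBlock(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        # x: (B, N, C) token features; residual connection around the channel MLP
        return x + self.mlp(self.norm(x))

blocks = nn.Sequential(*[MLPBlock(64, 256) for _ in range(4)])
print(blocks(torch.randn(2, 49, 64)).shape)  # torch.Size([2, 49, 64])
```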

Escaping the Big Data Paradigm with Compact Transformers

8 code implementations • 12 Apr 2021 • Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen Li, Humphrey Shi

Our models are flexible in terms of model size, and can have as little as 0.28M parameters while achieving competitive results.

Ranked #1 on Image Classification on Flowers-102 (using extra training data)

Fine-Grained Image Classification · Superpixel Image Classification
