Search Results for author: Shengbang Tong

Found 15 papers, 11 papers with code

Ctrl123: Consistent Novel View Synthesis via Closed-Loop Transcription

no code implementations • 16 Mar 2024 • Hongxiang Zhao, Xili Dai, Jianan Wang, Shengbang Tong, Jingyuan Zhang, Weida Wang, Lei Zhang, Yi Ma

This consequently limits the performance of downstream tasks, such as image-to-multiview generation and 3D reconstruction.

3D Reconstruction • Novel View Synthesis

Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs

1 code implementation • 11 Jan 2024 • Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, Saining Xie

To understand the roots of these errors, we explore the gap between the visual embedding space of CLIP and vision-only self-supervised learning.

Representation Learning • Self-Supervised Learning +1
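
A rough way to probe the gap described above is to embed the same image pair with CLIP's vision encoder and with a vision-only self-supervised encoder, then compare how similar the pair looks to each. The sketch below assumes OpenAI CLIP ViT-L/14 via open_clip and DINOv2 ViT-L/14 via torch.hub; these model choices, and the idea of flagging pairs that CLIP sees as near-identical while the SSL model does not, are illustrative rather than the paper's exact protocol.

```python
# Illustrative sketch: compare an image pair's similarity under CLIP's vision encoder
# versus a vision-only self-supervised encoder (DINOv2). Model choices are assumptions.
import torch
import torch.nn.functional as F
from PIL import Image
import open_clip

clip_model, _, clip_preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai")
dino = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
clip_model.eval(); dino.eval()

def pair_similarities(path_a, path_b):
    batch = torch.stack([clip_preprocess(Image.open(p).convert("RGB"))
                         for p in (path_a, path_b)])
    with torch.no_grad():
        clip_feats = F.normalize(clip_model.encode_image(batch), dim=-1)
        dino_feats = F.normalize(dino(batch), dim=-1)
    # Cosine similarity of the pair in each embedding space.
    return (clip_feats[0] @ clip_feats[1]).item(), (dino_feats[0] @ dino_feats[1]).item()

# A pair with high CLIP similarity but low DINOv2 similarity is a candidate
# "visually confusable" pair for CLIP-based multimodal systems.
```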

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?

1 code implementation • 22 Nov 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma

This leads to a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable.

Data Compression • Denoising +1

Investigating the Catastrophic Forgetting in Multimodal Large Language Models

no code implementations • 19 Sep 2023 • Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma

However, catastrophic forgetting, the notorious phenomenon in which a fine-tuned model fails to retain the performance of the pre-trained model, remains an inherent problem in multimodal LLMs (MLLMs).

Image Classification • Language Modelling +1

Emergence of Segmentation with Minimalistic White-Box Transformers

1 code implementation • 30 Aug 2023 • Yaodong Yu, Tianzhe Chu, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma

Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection.

Segmentation • Self-Supervised Learning

Mass-Producing Failures of Multimodal Systems with Language Models

1 code implementation • NeurIPS 2023 • Shengbang Tong, Erik Jones, Jacob Steinhardt

Because CLIP is the backbone for most state-of-the-art multimodal systems, these inputs produce failures in Midjourney 5.1, DALL-E, VideoFusion, and others.

Language Modelling • Self-Driving Cars

Image Clustering via the Principle of Rate Reduction in the Age of Pretrained Models

1 code implementation • 8 Jun 2023 • Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, René Vidal, Yi Ma

In this paper, we propose a novel image clustering pipeline that leverages the powerful feature representations of large pre-trained models such as CLIP to cluster images effectively and efficiently at scale.

Clustering • Image Clustering +1
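
The shape of that pipeline can be sketched as: embed every image with a large pre-trained model, then cluster the embeddings. In the sketch below, CLIP ViT-B/32 and k-means are simple stand-ins; the paper's own pipeline builds its clustering on the rate-reduction principle rather than k-means, so this is only the scaffolding, not the method itself.

```python
# Illustrative scaffolding only: CLIP features + k-means as a stand-in clustering step.
import torch
import torch.nn.functional as F
from PIL import Image
import open_clip
from sklearn.cluster import KMeans

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
model.eval()

def cluster_images(paths, n_clusters):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    with torch.no_grad():
        feats = F.normalize(model.encode_image(batch), dim=-1).cpu().numpy()
    # Cluster the pre-trained features; the paper replaces this step with a
    # rate-reduction-based clustering objective.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```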

White-Box Transformers via Sparse Rate Reduction

1 code implementation • NeurIPS 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma

Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens.

Representation Learning
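
Read operationally, each block in that derivation alternates a compression-like step (attention) and a sparsification-like step (the role played by the MLP). The sketch below is a deliberately simplified stand-in for that two-step structure, not the exact CRATE parameterization: it uses standard multi-head self-attention for the first step and a single ISTA-style proximal step toward a sparse token code for the second; dimensions, step size, and threshold are illustrative.

```python
# Simplified stand-in for the "compress, then sparsify" block structure; not the
# actual CRATE layer. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class CompressThenSparsifyBlock(nn.Module):
    def __init__(self, dim, num_heads, step_size=0.1, threshold=0.05):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.dictionary = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)  # square dictionary D
        self.step_size = step_size
        self.threshold = threshold

    def forward(self, z):  # z: (batch, tokens, dim)
        # Step 1: self-attention, read as a step that compresses the token set.
        zn = self.norm1(z)
        z = z + self.attn(zn, zn, zn, need_weights=False)[0]
        # Step 2: one ISTA-style proximal gradient step toward a sparse code a of the
        # tokens under D, initialized at the tokens: a <- shrink(a - eta * (a D - z) D^T).
        zn = self.norm2(z)
        grad = (zn @ self.dictionary - zn) @ self.dictionary.t()
        a = zn - self.step_size * grad
        return torch.relu(a - self.threshold)  # non-negative soft-thresholding
```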

EMP-SSL: Towards Self-Supervised Learning in One Training Epoch

2 code implementations • 8 Apr 2023 • Shengbang Tong, Yubei Chen, Yi Ma, Yann LeCun

Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representations.

Quantization • Self-Supervised Learning

Closed-Loop Transcription via Convolutional Sparse Coding

no code implementations • 18 Feb 2023 • Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel M. Ni, Yi Ma

Our method is arguably the first to demonstrate that a concatenation of multiple convolutional sparse coding/decoding layers leads to an interpretable and effective autoencoder for modeling the distribution of large-scale natural image datasets.

Rolling Shutter Correction
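
For context, a single convolutional sparse coding layer of the kind referred to above can be sketched as follows: the forward pass infers a sparse feature map by unrolling a few ISTA iterations against a convolutional dictionary, and the same dictionary decodes the features back into an image. Kernel size, step size, iteration count, and initialization below are illustrative assumptions, not the paper's exact layer.

```python
# Illustrative convolutional sparse coding layer via unrolled ISTA; hyperparameters
# and initialization are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSparseCodingLayer(nn.Module):
    def __init__(self, in_ch, code_ch, kernel=3, n_iters=5, step=0.1, lam=0.05):
        super().__init__()
        # Convolutional dictionary D: codes -> image via conv_transpose2d,
        # and its adjoint D^T: image -> codes via conv2d with the same weight.
        self.weight = nn.Parameter(0.1 * torch.randn(code_ch, in_ch, kernel, kernel))
        self.pad = kernel // 2
        self.n_iters, self.step, self.lam = n_iters, step, lam

    def decode(self, codes):
        return F.conv_transpose2d(codes, self.weight, padding=self.pad)

    def forward(self, x):
        # Unrolled ISTA: a <- soft_threshold(a - step * D^T (D a - x), step * lam).
        codes = torch.zeros_like(F.conv2d(x, self.weight, padding=self.pad))
        for _ in range(self.n_iters):
            residual = self.decode(codes) - x
            codes = codes - self.step * F.conv2d(residual, self.weight, padding=self.pad)
            codes = torch.sign(codes) * torch.clamp(codes.abs() - self.step * self.lam, min=0.0)
        return codes
```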

Unsupervised Manifold Linearizing and Clustering

no code implementations • ICCV 2023 • Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin D. Haeffele

We consider the problem of simultaneously clustering and learning a linear representation of data lying close to a union of low-dimensional manifolds, a fundamental task in machine learning and computer vision.

Clustering • Deep Clustering

Unsupervised Learning of Structured Representations via Closed-Loop Transcription

1 code implementation • 30 Oct 2022 • Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma

This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.

Revisiting Sparse Convolutional Model for Visual Recognition

1 code implementation • 24 Oct 2022 • Xili Dai, Mingyang Li, Pengyuan Zhai, Shengbang Tong, Xingjian Gao, Shao-Lun Huang, Zhihui Zhu, Chong You, Yi Ma

We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100, and ImageNet datasets when compared to conventional neural networks.

Image Classification

Incremental Learning of Structured Memory via Closed-Loop Transcription

1 code implementation • 11 Feb 2022 • Shengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, Yi Ma

Our method is simpler than existing approaches for incremental learning, and more efficient in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network with a feature space that is used for both discriminative and generative purposes.

Incremental Learning

Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction

1 code implementation • 12 Nov 2021 • Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Michael Psenka, Kwan Ho Ryan Chan, Pengyuan Zhai, Yaodong Yu, Xiaojun Yuan, Heung-Yeung Shum, Yi Ma

In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space that consists of multiple independent multi-dimensional linear subspaces.
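
For reference, the coding-rate and rate-reduction quantities this line of work builds on are usually written as below (the standard MCR² form); the paper's minimax game between the encoder and decoder is stated in terms of such quantities, though its exact objective may differ in detail.

```latex
% Lossy coding rate of features Z = [z_1, ..., z_n] \in \mathbb{R}^{d \times n}, up to distortion \varepsilon:
R(Z) = \tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\varepsilon^{2}}\, Z Z^{\top}\Big)

% Coding rate when the features are coded per class, with diagonal membership matrices \Pi_j:
R_c(Z;\Pi) = \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2n}\,
  \log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\varepsilon^{2}}\, Z \Pi_j Z^{\top}\Big)

% Rate reduction: the gap between coding all features together and coding them per class.
\Delta R(Z;\Pi) = R(Z) - R_c(Z;\Pi)
```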
