Search Results for author: Ziyang Wu

Found 18 papers, 13 papers with code

Spatial-Temporal Mixture-of-Graph-Experts for Multi-Type Crime Prediction

no code implementations • 24 Sep 2024 • Ziyang Wu, Fan Liu, Jindong Han, Yuxuan Liang, Hao Liu

As various types of crime continue to threaten public safety and economic development, predicting the occurrence of multiple types of crimes becomes increasingly vital for effective prevention measures.

Contrastive Learning • Crime Prediction

A Survey of Foundation Models for Music Understanding

no code implementations • 15 Sep 2024 • Wenjun Li, Ying Cai, Ziyang Wu, Wenyi Zhang, Yifan Chen, Rundong Qi, Mengqi Dong, Peigen Chen, Xiao Dong, Fenghao Shi, Lei Guo, Junwei Han, Bao Ge, Tianming Liu, Lin Gan, Tuo Zhang

Music is essential in daily life, fulfilling emotional and entertainment needs, and connecting us personally, socially, and culturally.

Survey

LLoCO: Learning Long Contexts Offline

1 code implementation • 11 Apr 2024 • Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E. Gonzalez, Raluca Ada Popa

Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation.

4k • In-Context Learning • +1
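The quadratic attention cost and KV-cache growth that the LLoCO abstract points to are easy to make concrete. The back-of-the-envelope sketch below uses a hypothetical 7B-class decoder configuration (32 layers, 32 KV heads, head dimension 128, fp16); the numbers are chosen for illustration and are not taken from the paper.

# Back-of-the-envelope estimate of the KV-cache growth the LLoCO abstract
# refers to. The model configuration is hypothetical, not the paper's.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128,
                   bytes_per_value=2):  # 2 bytes per value for fp16/bf16
    # Two tensors (K and V) are cached per layer, per head, per token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * seq_len

for tokens in (4_096, 32_768, 131_072):
    print(f"{tokens:>7} tokens -> {kv_cache_bytes(tokens) / 2**30:.1f} GiB per sequence")

# The cache grows linearly with context length, while self-attention compute
# grows quadratically -- hence the appeal of learning long contexts offline.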

Masked Completion via Structured Diffusion with White-Box Transformers

1 code implementation • 3 Apr 2024 • Druv Pai, Ziyang Wu, Sam Buchanan, Yaodong Yu, Yi Ma

We do this by exploiting a fundamental connection between diffusion, compression, and (masked) completion, deriving a deep transformer-like masked autoencoder architecture, called CRATE-MAE, in which the role of each layer is mathematically fully interpretable: they transform the data distribution to and from a structured representation.

Representation Learning
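For context on the masked-completion setup that CRATE-MAE builds on, here is a generic masked-autoencoding training step: mask most patch tokens, encode only the visible ones, and reconstruct the rest. The encoder and decoder are placeholder callables and the shapes are illustrative; this sketch is not the paper's structured, white-box architecture.

# Generic masked-completion training step; `encoder` and `decoder` are
# placeholder callables, not CRATE-MAE's white-box layers.
import torch

def masked_completion_loss(encoder, decoder, patches, mask_ratio=0.75):
    # patches: (B, N, D) patch-token embeddings for one image batch.
    B, N, D = patches.shape
    n_keep = max(1, int(N * (1 - mask_ratio)))
    # Randomly keep a subset of token positions per sample.
    keep_idx = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :n_keep]
    visible = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    latent = encoder(visible)                 # encode only the visible tokens
    recon = decoder(latent, keep_idx, N)      # decoder fills in all N positions
    return ((recon - patches) ** 2).mean()    # reconstruct the full token sequence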

When Do We Not Need Larger Vision Models?

2 code implementations • 19 Mar 2024 • Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell

Our results show that a multi-scale smaller model has comparable learning capacity to a larger model, and pre-training smaller models with S$^2$ can match or even exceed the advantage of larger models.

Depth Estimation
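A minimal sketch of the multi-scale recipe (S$^2$) behind the result above: run one small, frozen backbone both on the image at its base resolution and on tiles of an up-scaled copy, then combine features across scales. The backbone interface, tiling scheme, and pooling below are assumptions for illustration, not the paper's exact implementation.

# Hedged sketch of multi-scale feature extraction in the spirit of S^2.
import torch
import torch.nn.functional as F

def multi_scale_features(backbone, image, base_size=224, scales=(1, 2)):
    # backbone: callable mapping (B, 3, base_size, base_size) -> (B, C) features.
    # image:    tensor of shape (B, 3, H, W).
    feats = []
    for s in scales:
        size = base_size * s
        resized = F.interpolate(image, size=(size, size),
                                mode="bilinear", align_corners=False)
        # Split the up-scaled image into s*s tiles of the base resolution,
        # so the backbone always sees inputs of the size it was trained on.
        tiles = []
        for i in range(s):
            for j in range(s):
                tile = resized[:, :,
                               i * base_size:(i + 1) * base_size,
                               j * base_size:(j + 1) * base_size]
                tiles.append(backbone(tile))              # (B, C) per tile
        # Average tile features within a scale, keeping one vector per scale.
        feats.append(torch.stack(tiles, dim=0).mean(dim=0))
    # Concatenate across scales -> (B, C * len(scales)).
    return torch.cat(feats, dim=-1)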

White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?

1 code implementation • 22 Nov 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma

This leads to a family of white-box transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable.

Data Compression • Denoising • +1

Emergence of Segmentation with Minimalistic White-Box Transformers

1 code implementation • 30 Aug 2023 • Yaodong Yu, Tianzhe Chu, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma

Transformer-like models for vision tasks have recently proven effective for a wide range of downstream applications such as segmentation and detection.

Segmentation • Self-Supervised Learning

White-Box Transformers via Sparse Rate Reduction

1 code implementation • NeurIPS 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma

Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens.

Representation Learning
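The alternating-optimization view in the abstract above corresponds, roughly, to a sparse rate reduction objective. The summary below is paraphrased from memory and omits normalization details, so treat it as a sketch of the objective rather than the paper's exact statement.

% Sparse rate reduction (paraphrased; constants omitted): expand the coding
% rate of the whole token set Z, compress it against K learned subspaces
% U_1, ..., U_K, and keep the representation sparse.
\max_{Z}\;\; R(Z) \;-\; R^{c}\!\big(Z;\, U_{[K]}\big) \;-\; \lambda \lVert Z \rVert_0,
\qquad
R(Z) \;=\; \tfrac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big).
% Per the abstract: the multi-head self-attention step acts as a gradient
% step on the compression term, and the subsequent MLP acts as a
% sparsification step on the last term.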

Efficient Maximal Coding Rate Reduction by Variational Forms

no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele

The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data to allow for more robust training than standard approaches, such as cross-entropy minimization.

Image Classification
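To make the objective concrete, here is a minimal PyTorch sketch of the vanilla log-det form of MCR$^2$ (expansion of all features minus class-wise compression). The epsilon value and shapes are illustrative, and the sketch deliberately does not include the paper's contribution, the variational reformulation that avoids these expensive log-determinants.

# Minimal sketch of the vanilla MCR^2 rate-reduction objective (log-det form).
import torch

def coding_rate(Z, eps=0.5):
    # Z: (d, n) matrix whose n columns are feature vectors (ideally unit-norm).
    d, n = Z.shape
    I = torch.eye(d, dtype=Z.dtype, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps**2)) * (Z @ Z.T))

def mcr2_loss(Z, labels, num_classes, eps=0.5):
    # Negate Delta R = R(Z) - R^c(Z | labels) so that minimizing this loss
    # expands the whole feature set while compressing each class.
    d, n = Z.shape
    expand = coding_rate(Z, eps)
    compress = Z.new_zeros(())
    I = torch.eye(d, dtype=Z.dtype, device=Z.device)
    for j in range(num_classes):
        Zj = Z[:, labels == j]
        nj = Zj.shape[1]
        if nj == 0:
            continue
        compress = compress + (nj / (2 * n)) * torch.logdet(
            I + (d / (nj * eps**2)) * (Zj @ Zj.T))
    return -(expand - compress)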

Incremental Learning of Structured Memory via Closed-Loop Transcription

1 code implementation • 11 Feb 2022 • Shengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, Yi Ma

Our method is simpler than existing approaches for incremental learning, and more efficient in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network with a feature space that is used for both discriminative and generative purposes.

Incremental Learning

Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction

1 code implementation • 12 Nov 2021 • Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Michael Psenka, Kwan Ho Ryan Chan, Pengyuan Zhai, Yaodong Yu, Xiaojun Yuan, Heung Yeung Shum, Yi Ma

In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space that consists of multiple independent multi-dimensional linear subspaces.

Decoder
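The closed loop described in the abstract can be summarized as follows; the notation is paraphrased from the title and abstract rather than reproduced from the paper.

% Closed-loop transcription, paraphrased: an encoder f and a decoder g are
% trained as a minimax game on rate-reduction measures that compare the
% features Z with their re-encoded reconstructions \hat{Z}, class by class,
% so that the learned features occupy independent linear subspaces (the LDR).
X \;\xrightarrow{\;f\;}\; Z \;\xrightarrow{\;g\;}\; \hat{X} \;\xrightarrow{\;f\;}\; \hat{Z}.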

Can We Characterize Tasks Without Labels or Features?

1 code implementation • CVPR 2021 • Bram Wallace, Ziyang Wu, Bharath Hariharan

The problem of expert model selection deals with choosing the appropriate pretrained network ("expert") to transfer to a target task.

Model Selection

Incremental Learning via Rate Reduction

no code implementations • CVPR 2021 • Ziyang Wu, Christina Baek, Chong You, Yi Ma

Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.

Deep Learning • Incremental Learning

Efficient AutoML Pipeline Search with Matrix and Tensor Factorization

1 code implementation • 7 Jun 2020 • Chengrun Yang, Jicong Fan, Ziyang Wu, Madeleine Udell

Data scientists seeking a good supervised learning model on a new dataset have many choices to make: they must preprocess the data, select features, possibly reduce the dimension, select an estimation algorithm, and choose hyperparameters for each of these pipeline components.

AutoML
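The combinatorial choices the abstract enumerates can be seen in a single concrete pipeline. The example below (library, steps, and hyperparameter values all chosen for illustration, not drawn from the paper) shows one point in the search space that such an AutoML system must explore.

# One concrete instance of the pipeline choices the abstract enumerates:
# preprocessing, feature selection, dimensionality reduction, estimator,
# and hyperparameters. Illustrative only; not the paper's search method.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

pipeline = Pipeline([
    ("scale", StandardScaler()),                      # preprocessing
    ("select", SelectKBest(f_classif, k=20)),         # feature selection
    ("reduce", PCA(n_components=10)),                 # dimensionality reduction
    ("model", RandomForestClassifier(                 # estimator + hyperparameters
        n_estimators=200, max_depth=8, random_state=0)),
])
# Each step and each hyperparameter is one coordinate of the search space.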

PARN: Position-Aware Relation Networks for Few-Shot Learning

1 code implementation • ICCV 2019 • Ziyang Wu, Yuwei Li, Lihua Guo, Kui Jia

However, due to the inherent local connectivity of CNN, the CNN-based relation network (RN) can be sensitive to the spatial position relationship of semantic objects in two compared images.

Few-Shot Learning • Position • +3
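To make the stated limitation concrete, below is a minimal sketch of the kind of CNN-based relation head the abstract refers to: it concatenates two feature maps and scores their similarity with a small convolutional network, and because every convolution sees only a local window, the score can shift when the same objects appear at different spatial positions. Channel counts and depth are illustrative, not PARN's architecture.

# Minimal sketch of a CNN-based relation head, the kind of baseline whose
# position sensitivity PARN addresses. Shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Takes the channel-wise concatenation of two feature maps and
        # predicts a scalar relation score in [0, 1].
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid(),
        )

    def forward(self, support_feat, query_feat):
        # Every convolution sees only a local window, so the score depends on
        # where the compared objects sit within the two feature maps.
        return self.net(torch.cat([support_feat, query_feat], dim=1))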
