Search Results for author: Kwan Ho Ryan Chan

Found 10 papers, 5 papers with code

Knowledge Pursuit Prompting for Zero-Shot Multimodal Synthesis

no code implementations • 29 Nov 2023 • Jinqi Luo, Kwan Ho Ryan Chan, Dimitris Dimos, René Vidal

We propose Knowledge Pursuit Prompting (KPP), a zero-shot framework that iteratively incorporates external knowledge to help generators produce reliable visual content.

Language Modelling
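To make the pursuit loop described in the abstract concrete, here is a minimal Python sketch of the iterative pattern; the callables `retrieve_facts` and `score_fact` and the greedy aggregation are hypothetical stand-ins for the paper's knowledge querier, not its exact procedure.

```python
def knowledge_pursuit_prompt(prompt, knowledge_base, retrieve_facts, score_fact,
                             n_rounds=3, top_k=5):
    """Iteratively pursue external facts and fold them into the prompt.
    `retrieve_facts` and `score_fact` are hypothetical callables standing
    in for the paper's knowledge querier and relevance scoring."""
    context = []
    for _ in range(n_rounds):
        candidates = retrieve_facts(knowledge_base, prompt, context, k=top_k)
        best = max(candidates, key=lambda f: score_fact(f, prompt, context))
        context.append(best)  # greedily keep the most helpful fact
    # Aggregate the pursued facts into an enriched prompt for the generator.
    return f"{prompt} Facts: {' '.join(context)}"
```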

Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions

no code implementations • 24 Aug 2023 • Kwan Ho Ryan Chan, Aditya Chattopadhyay, Benjamin David Haeffele, René Vidal

Variational Information Pursuit (V-IP) is a framework for making predictions that are interpretable by design: it sequentially selects a short chain of task-relevant, user-defined, and interpretable queries about the data that are most informative for the task.

Semantic Similarity • Semantic Textual Similarity
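The sequential selection can be illustrated with a greedy information-pursuit toy: at each step, ask the query whose answer most reduces posterior entropy over the label. The naive-Bayes answer model below is an assumption made for illustration; V-IP itself amortizes this selection with learned networks rather than enumerating queries.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_queries = 3, 8
# Toy answer model: p_ans[q, y] = P(answer to query q is "yes" | class y).
p_ans = rng.uniform(0.1, 0.9, size=(n_queries, n_classes))

def posterior(history):
    """P(y | answered queries), assuming answers independent given y."""
    log_p = np.zeros(n_classes)                       # uniform prior
    for q, a in history:
        log_p += np.log(p_ans[q] if a else 1.0 - p_ans[q])
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def next_query(history, asked):
    """Greedy step: pick the query with the largest expected entropy drop."""
    p_y = posterior(history)
    best_q, best_gain = None, -np.inf
    for q in set(range(n_queries)) - asked:
        p_yes = float((p_ans[q] * p_y).sum())         # P(answer = yes | history)
        h_after = (p_yes * entropy(posterior(history + [(q, 1)]))
                   + (1 - p_yes) * entropy(posterior(history + [(q, 0)])))
        gain = entropy(p_y) - h_after
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q

print(next_query(history=[], asked=set()))            # most informative first query
```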

Unsupervised Manifold Linearizing and Clustering

no code implementations • ICCV 2023 • Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin D. Haeffele

We consider the problem of simultaneously clustering and learning a linear representation of data lying close to a union of low-dimensional manifolds, a fundamental task in machine learning and computer vision.

Clustering • Deep Clustering

Efficient Maximal Coding Rate Reduction by Variational Forms

no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele

The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data, allowing for more robust training than standard approaches such as cross-entropy minimization.

Image Classification
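The cost of MCR$^2$ comes from the log-determinants in its definition (stated further down this page). Variational reformulations of the kind this paper pursues build on identities such as the following standard one, shown as a sketch since the paper's exact form may differ:

$$\log\det(X) \;=\; \min_{S \succ 0} \; \operatorname{tr}(SX) - \log\det(S) - d, \qquad X \in \mathbb{R}^{d \times d},\; X \succ 0,$$

with the minimum attained at $S = X^{-1}$. For a fixed auxiliary variable $S$, each $\log\det$ term becomes a cheap trace term in the features, and $S$ can be updated alternately.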

Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction

1 code implementation • 12 Nov 2021 • Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Michael Psenka, Kwan Ho Ryan Chan, Pengyuan Zhai, Yaodong Yu, Xiaojun Yuan, Heung-Yeung Shum, Yi Ma

In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space, consisting of multiple independent multi-dimensional linear subspaces.
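The closed loop amounts to a minimax game: the encoder $f$ maximizes, and the decoder $g$ minimizes, a rate-reduction based distance between the features of the data, $Z = f(x)$, and of its transcription, $\hat{Z} = f(g(Z))$. Below is a minimal PyTorch-style sketch of that alternation; the modules `f`, `g` and the `distance` utility are hypothetical, and the paper's actual objective sums per-class rate-reduction terms.

```python
def closed_loop_step(f, g, x, opt_f, opt_g, distance):
    """One alternating minimax step: f ascends and g descends the same
    rate-reduction distance between Z = f(x) and Z_hat = f(g(Z))."""
    # Encoder (max) step.
    z = f(x)
    z_hat = f(g(z))
    loss_f = -distance(z, z_hat)      # ascend w.r.t. the encoder
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # Decoder (min) step, with the encoder's target features held fixed.
    z = f(x).detach()
    z_hat = f(g(z))
    loss_g = distance(z, z_hat)       # descend w.r.t. the decoder
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```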

ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction

2 code implementations • 21 May 2021 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma

This work provides a plausible theoretical framework for interpreting modern deep (convolutional) networks from the principles of data compression and discriminative representation.

Data Compression
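Each layer of such a network is one projected gradient-ascent step on the rate reduction objective, built in closed form from the data. The NumPy sketch below follows my reading of the construction; the soft membership estimate in particular is a simplified assumption.

```python
import numpy as np

def redunet_layer(Z, Pi, eps=0.5, eta=0.5, lam=500.0):
    """One white-box layer: a gradient step on rate reduction followed by
    projection back to the unit sphere (simplified sketch).
    Z: (d, m) features as columns; Pi: (k, m) binary class memberships."""
    d, m = Z.shape
    alpha = d / (m * eps ** 2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)         # expansion
    Cs, gammas = [], []
    for pi_j in Pi:
        mj = pi_j.sum()
        aj = d / (mj * eps ** 2)
        Zj = Z * pi_j                                              # class-j columns
        Cs.append(aj * np.linalg.inv(np.eye(d) + aj * Zj @ Zj.T))  # compression
        gammas.append(mj / m)
    Z_new = np.empty_like(Z)
    for i in range(m):
        z = Z[:, i]
        # Soft membership from compression residuals (simplified assumption).
        res = np.array([np.linalg.norm(C @ z) for C in Cs])
        w = np.exp(-lam * (res - res.min()))
        w /= w.sum()
        grad = E @ z - sum(g * p * (C @ z) for g, p, C in zip(gammas, w, Cs))
        z_next = z + eta * grad
        Z_new[:, i] = z_next / np.linalg.norm(z_next)              # back to sphere
    return Z_new
```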

Deep Networks from the Principle of Rate Reduction

3 code implementations • 27 Oct 2020 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma

The layered architectures, linear and nonlinear operators, and even the parameters of the network are all explicitly constructed layer by layer in a forward-propagation fashion by emulating the gradient scheme of the rate reduction objective.
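In equation form, the forward construction amounts to the following per-layer update (my paraphrase, with notation matching the sketch given under the ReduNet entry above):

$$z_{\ell+1} \;\propto\; z_\ell + \eta \Big( E_\ell\, z_\ell - \sum_{j=1}^{k} \gamma_j\, \hat{\pi}_j(z_\ell)\, C_\ell^{\,j}\, z_\ell \Big), \qquad \|z_{\ell+1}\|_2 = 1,$$

where $E_\ell$ expands all features, each $C_\ell^{\,j}$ compresses the features of class $j$, $\gamma_j$ is the class proportion, and $\hat{\pi}_j$ is an estimated membership; all are computed in closed form from the layer's input features, so no back propagation is needed.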

Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction

2 code implementations • NeurIPS 2020 • Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, Yi Ma

To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction ($\text{MCR}^2$), an information-theoretic measure that maximizes the difference between the coding rate of the whole dataset and the sum of the coding rates of the individual classes.

Clustering • Contrastive Learning • +1
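The measure is short enough to state in code. A NumPy sketch of the objective as defined in the paper, where rows of `Z` are (unit-norm) features and `eps` is the allowed distortion:

```python
import numpy as np

def mcr2_gap(Z, labels, eps=0.5):
    """Delta R = R(Z) - R_c(Z | labels): the coding rate of the whole
    dataset minus the sum of the coding rates of the classes.
    Z: (m, d) feature matrix; labels: (m,) class labels."""
    m, d = Z.shape
    I = np.eye(d)
    # Coding rate of the whole dataset.
    R = 0.5 * np.linalg.slogdet(I + (d / (m * eps ** 2)) * Z.T @ Z)[1]
    # Sum of the coding rates of each class.
    Rc = 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mc = len(Zc)
        Rc += (mc / (2 * m)) * np.linalg.slogdet(
            I + (d / (mc * eps ** 2)) * Zc.T @ Zc)[1]
    return R - Rc
```

Maximizing this gap expands the span of all features while compressing each class toward its own low-dimensional subspace.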
