Search Results for author: Yunfan Li

Found 17 papers, 5 papers with code

Contrastive Clustering

1 code implementation • 21 Sep 2020 • Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Joey Tianyi Zhou, Xi Peng

In this paper, we propose a one-stage online clustering method called Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning.

Ranked #4 on Image Clustering on STL-10 (using extra training data)

Clustering Contrastive Learning +1
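As a rough illustration of the instance- and cluster-level objective described in the entry above, here is a minimal PyTorch sketch (not the authors' implementation; the tensor shapes and loss pairing are assumptions): instance features of two augmented views are contrasted row-wise, while the columns of the soft cluster-assignment matrices are contrasted as cluster representations.

```python
import torch
import torch.nn.functional as F

def nt_xent(a, b, temperature=0.5):
    """Symmetric NT-Xent loss between two batches of L2-normalized vectors."""
    n = a.size(0)
    z = torch.cat([a, b], dim=0)                                   # (2N, d)
    sim = z @ z.t() / temperature                                  # pairwise similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]) # positive = same sample, other view
    return F.cross_entropy(sim, targets)

def contrastive_clustering_loss(h1, h2, p1, p2):
    """h1, h2: instance features of two augmented views, shape (N, d).
    p1, p2: soft cluster assignments of the two views, shape (N, K)."""
    instance_loss = nt_xent(F.normalize(h1, dim=1), F.normalize(h2, dim=1))
    # Cluster level: each of the K columns is treated as a cluster representation
    # and contrasted across the two views.
    cluster_loss = nt_xent(F.normalize(p1.t(), dim=1), F.normalize(p2.t(), dim=1))
    return instance_loss + cluster_loss
```

In practice, an entropy regularizer on the cluster assignments is commonly added to such objectives to avoid collapsing all samples into a single cluster.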

Twin Contrastive Learning for Online Clustering

2 code implementations • 21 Oct 2022 • Yunfan Li, Mouxing Yang, Dezhong Peng, Taihao Li, Jiantao Huang, Xi Peng

Specifically, we find that when the data is projected into a feature space with a dimensionality of the target cluster number, the rows and columns of its feature matrix correspond to the instance and cluster representation, respectively.

Clustering Contrastive Learning +3
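The row/column duality described above can be made concrete with a small, assumption-laden sketch (the projection head and the loss details are omitted):

```python
import torch
import torch.nn.functional as F

# Hypothetical feature matrix: N samples projected into K dimensions,
# where K equals the target number of clusters.
N, K = 256, 10
z = F.softmax(torch.randn(N, K), dim=1)   # soft assignments for one augmented view

rows = z        # each row is an instance representation (a distribution over K clusters)
cols = z.t()    # each column is a cluster representation (its profile over the N samples)

# Instance- and cluster-level contrastive losses are then built by contrasting
# the rows (respectively, columns) of the matrices from two augmented views.
```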

Partially View-aligned Representation Learning with Noise-robust Contrastive Loss

1 code implementation • CVPR 2021 • Mouxing Yang, Yunfan Li, Zhenyu Huang, Zitao Liu, Peng Hu, Xi Peng

To solve such a less-touched problem without the help of labels, we propose simultaneously learning representation and aligning data using a noise-robust contrastive loss.

Clustering Contrastive Learning +2

UVMBench: A Comprehensive Benchmark Suite for Researching Unified Virtual Memory in GPUs

1 code implementation • 20 Jul 2020 • Yongbin Gu, Wenxuan Wu, Yunfan Li, Lizhong Chen

The recent introduction of Unified Virtual Memory (UVM) in GPUs offers a new programming model that allows GPUs and CPUs to share the same virtual memory space, shifts the complex memory management from programmers to the GPU driver/hardware, and enables kernel execution even when memory is oversubscribed.

Hardware Architecture
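For intuition, here is a minimal sketch of the UVM programming model from Python via Numba's CUDA bindings (this assumes a CUDA-capable GPU and Numba's `cuda.managed_array` API; it is not taken from the UVMBench suite): a single allocation is written by the host, updated by a kernel, and read back without any explicit memory copies.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    i = cuda.grid(1)                 # absolute thread index
    if i < arr.size:
        arr[i] *= factor

n = 1 << 20
data = cuda.managed_array(n, dtype=np.float32)   # one allocation visible to CPU and GPU
data[:] = 1.0                                    # host writes directly
scale[(n + 255) // 256, 256](data, 3.0)          # kernel reads/writes the same memory
cuda.synchronize()
print(data[:4])                                  # host reads results, no explicit copies
```

Because the driver migrates pages on demand, such code can keep running even when the working set exceeds GPU memory, which is the oversubscription scenario mentioned above.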

Deep Fair Clustering via Maximizing and Minimizing Mutual Information: Theory, Algorithm and Metric

1 code implementation • CVPR 2023 • Pengxin Zeng, Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Xi Peng

Fair clustering aims to divide data into distinct clusters while preventing sensitive attributes (e.g., gender, race, RNA sequencing technique) from dominating the clustering.

Clustering Fairness +1
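One way to make the fairness criterion above concrete is to measure the mutual information between the sensitive attribute and the cluster assignment; the sketch below estimates it from empirical counts (an illustrative reading of the title, not the paper's exact objective or algorithm).

```python
import numpy as np

def mutual_information(g, c):
    """Empirical mutual information I(G; C) between a sensitive attribute g
    and cluster assignments c (both 1-D integer arrays)."""
    joint = np.zeros((g.max() + 1, c.max() + 1))
    for gi, ci in zip(g, c):
        joint[gi, ci] += 1
    joint /= joint.sum()
    pg = joint.sum(axis=1, keepdims=True)          # marginal of the sensitive attribute
    pc = joint.sum(axis=0, keepdims=True)          # marginal of the cluster assignment
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pg @ pc)[mask])).sum())

# A fairness-aware objective could, for example, maximize I(X; C) (informative
# clusters) while minimizing I(G; C) (clusters independent of the sensitive
# attribute); this is an illustrative formulation, not the paper's.
```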

Horseshoe Regularization for Machine Learning in Complex and Deep Models

no code implementations • 24 Apr 2019 • Anindya Bhadra, Jyotishka Datta, Yunfan Li, Nicholas G. Polson

We also outline the recent computational developments in horseshoe shrinkage for complex models along with a list of available software implementations that allows one to venture out beyond the comfort zone of the canonical linear regression problems.

BIG-bench Machine Learning regression
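For reference, the canonical horseshoe hierarchy for regression coefficients (the standard formulation, not specific to this survey) is:

```latex
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}\!\left(0,\; \lambda_j^{2}\tau^{2}\right), \qquad
\lambda_j \sim \mathrm{C}^{+}(0, 1), \qquad
\tau \sim \mathrm{C}^{+}(0, 1),
```

where $\mathrm{C}^{+}(0,1)$ denotes the standard half-Cauchy distribution; the heavy-tailed local scales $\lambda_j$ let large signals escape shrinkage while the global scale $\tau$ pulls the bulk of the coefficients toward zero.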

Large Scale Many-Objective Optimization Driven by Distributional Adversarial Networks

no code implementations • 16 Mar 2020 • Zhenyu Liang, Yunfan Li, Zhongwei Wan

In this paper, we propose a novel algorithm based on the RVEA [1] framework that uses Distributional Adversarial Networks (DAN) [2] to generate new offspring.

Stochastic Optimization

Many-Objective Estimation of Distribution Optimization Algorithm Based on WGAN-GP

no code implementations • 16 Mar 2020 • Zhenyu Liang, Yunfan Li, Zhongwei Wan

EDA uses statistical learning to build a probability model that describes the distribution of solutions at the population level, and then randomly samples this model to generate a new population.

Stochastic Optimization
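The model-then-sample loop described above can be illustrated with a minimal univariate-Gaussian EDA on a toy single-objective problem (an illustrative sketch; the paper replaces this simple probability model with a WGAN-GP and targets many-objective problems):

```python
import numpy as np

def gaussian_eda(objective, dim=10, pop_size=100, elite_frac=0.3, iters=50):
    """Minimal EDA: fit an independent Gaussian to the elite solutions,
    then sample the next population from that model."""
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        pop = np.random.normal(mean, std, size=(pop_size, dim))            # sample the model
        scores = np.array([objective(x) for x in pop])
        elite = pop[np.argsort(scores)[: int(elite_frac * pop_size)]]      # select the best
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6           # refit the model
    return mean

# Example: minimize the sphere function.
best = gaussian_eda(lambda x: float(np.sum(x ** 2)))
```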

Incomplete Multi-view Clustering via Prototype-based Imputation

no code implementations • 26 Jan 2023 • Haobin Li, Yunfan Li, Mouxing Yang, Peng Hu, Dezhong Peng, Xi Peng

Thanks to our dual-stream model, both cluster- and view-specific information can be captured, and thus the instance commonality and view versatility can be preserved to facilitate IMvC.

Clustering Contrastive Learning +2
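As a hypothetical illustration of prototype-based imputation in general (not the authors' dual-stream model; the names and shapes below are assumptions), a sample's missing view can be filled with the prototype of the cluster indicated by its observed view:

```python
import numpy as np

def impute_missing_view(obs_view, prototypes_obs, prototypes_missing):
    """obs_view: (N, d1) features in the observed view.
    prototypes_obs: (K, d1) cluster prototypes in the observed view.
    prototypes_missing: (K, d2) prototypes in the missing view.
    Returns (N, d2) imputations: each sample borrows the prototype of its
    nearest cluster, transferred to the missing view."""
    dists = ((obs_view[:, None, :] - prototypes_obs[None, :, :]) ** 2).sum(-1)
    nearest = dists.argmin(axis=1)          # cluster assignment from the observed view
    return prototypes_missing[nearest]      # prototype-based imputation in the other view
```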

Low-Switching Policy Gradient with Exploration via Online Sensitivity Sampling

no code implementations • 15 Jun 2023 • Yunfan Li, Yiran Wang, Yu Cheng, Lin Yang

We show that, our algorithm obtains an $\varepsilon$-optimal policy with only $\widetilde{O}(\frac{\text{poly}(d)}{\varepsilon^3})$ samples, where $\varepsilon$ is the suboptimality gap and $d$ is a complexity measure of the function class approximating the policy.

Reinforcement Learning (RL)

On the Model-Misspecification in Reinforcement Learning

no code implementations • 19 Jun 2023 • Yunfan Li, Lin Yang

However, in the face of model misspecification (a disparity between the ground truth and the optimal function approximator), it is shown that policy-based approaches can remain robust even under a large, locally bounded misspecification error of the policy function approximation: the function class may exhibit an $\Omega(1)$ approximation error at specific states and actions, yet the error remains small on average within a policy-induced state distribution.

Open-Ended Question Answering reinforcement-learning +1

Automated Assessment of Critical View of Safety in Laparoscopic Cholecystectomy

no code implementations • 13 Sep 2023 • Yunfan Li, Himanshu Gupta, Haibin Ling, IV Ramakrishnan, Prateek Prasanna, Georgios Georgakis, Aaron Sasson

Compared with classical open cholecystectomy, laparoscopic cholecystectomy (LC) is associated with a significantly shorter recovery period, and hence is the preferred method.

Semantic Segmentation

Image Clustering with External Guidance

no code implementations • 18 Oct 2023 • Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Jianping Fan, Xi Peng

The core of clustering is incorporating prior knowledge to construct supervision signals.

Clustering Image Clustering

Achieving Near-Optimal Regret for Bandit Algorithms with Uniform Last-Iterate Guarantee

no code implementations • 20 Feb 2024 • Junyan Liu, Yunfan Li, Lin Yang

This paper introduces a stronger performance measure, the uniform last-iterate (ULI) guarantee, capturing both cumulative and instantaneous performance of bandit algorithms.

Prognostic Covariate Adjustment for Logistic Regression in Randomized Controlled Trials

no code implementations • 29 Feb 2024 • Yunfan Li, Arman Sabbaghi, Jonathan R. Walsh, Charles K. Fisher

We demonstrate that prognostic score adjustment in logistic regression increases the power of the Wald test for the conditional odds ratio under a fixed sample size, or alternatively reduces the necessary sample size to achieve a desired power, compared to the unadjusted analysis.

regression
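A small simulation sketch of the comparison above, using statsmodels (the data-generating process and variable names are hypothetical, not taken from the paper): fit the logistic regression for the treatment effect with and without the prognostic score and compare the Wald statistics for the treatment coefficient.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
treatment = rng.integers(0, 2, n)                 # randomized assignment
prog_score = rng.normal(size=n)                   # prognostic score from a historical model
logit_p = -0.5 + 0.4 * treatment + 1.0 * prog_score
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Unadjusted analysis: outcome ~ treatment
X0 = sm.add_constant(treatment.astype(float))
fit0 = sm.Logit(outcome, X0).fit(disp=0)

# Prognostic-score-adjusted analysis: outcome ~ treatment + prog_score
X1 = sm.add_constant(np.column_stack([treatment, prog_score]))
fit1 = sm.Logit(outcome, X1).fit(disp=0)

# A larger |z| for the treatment coefficient means higher power of the Wald test.
print("unadjusted z:", fit0.tvalues[1], "adjusted z:", fit1.tvalues[1])
```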

An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model

no code implementations • 13 Mar 2024 • Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv

A natural expectation for PEFTs is that their performance is positively related to the data size and the number of fine-tunable parameters.
