Search Results for author: Zhizhong Li

Found 25 papers, 11 papers with code

THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models

no code implementations CVPR 2024 Prannay Kaul, Zhizhong Li, Hao Yang, Yonatan Dukler, Ashwin Swaminathan, C. J. Taylor, Stefano Soatto

By evaluating a large selection of recent LVLMs on public datasets, we show that improvements in existing metrics do not lead to a reduction in Type I hallucinations, and that established benchmarks for measuring Type I hallucinations are incomplete.

Attribute Data Augmentation +2

Get the Best of Both Worlds: Improving Accuracy and Transferability by Grassmann Class Representation

1 code implementation ICCV 2023 Haoqi Wang, Zhizhong Li, Wayne Zhang

We generalize the class vectors found in neural networks to linear subspaces (i.e., points in the Grassmann manifold) and show that the Grassmann Class Representation (GCR) enables the simultaneous improvement in accuracy and feature transferability.
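The abstract's idea of representing each class as a subspace rather than a vector can be sketched as follows; here the logit is taken to be the norm of a feature's projection onto a per-class orthonormal basis. Shapes, the subspace dimension `k`, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, num_classes, k = 16, 10, 4

# One k-dimensional basis per class, orthonormalized via QR.
bases = rng.standard_normal((num_classes, feat_dim, k))
q = np.stack([np.linalg.qr(b)[0] for b in bases])   # (C, feat_dim, k)

def subspace_logits(x):
    # Projection coefficients of each feature onto each class subspace;
    # the projection norm serves as the class logit.
    coeff = np.einsum("bd,cdk->bck", x, q)
    return np.linalg.norm(coeff, axis=-1)           # (B, C)

x = rng.standard_normal((8, feat_dim))
print(subspace_logits(x).shape)  # (8, 10)
```

Because the logit depends only on the subspace spanned by the basis, not the basis itself, the class representation lives on the Grassmann manifold.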

Collaborative Anomaly Detection

no code implementations 20 Sep 2022 Ke Bai, Aonan Zhang, Zhizhong Li, Ricardo Henao, Chong Wang, Lawrence Carin

In recommendation systems, items are likely to be exposed to various users and we would like to learn about the familiarity of a new user with an existing item.

Anomaly Detection Density Estimation +1

Class-Incremental Learning with Strong Pre-trained Models

1 code implementation CVPR 2022 Tz-Ying Wu, Gurumurthy Swaminathan, Zhizhong Li, Avinash Ravichandran, Nuno Vasconcelos, Rahul Bhotika, Stefano Soatto

We hypothesize that a strong base model can provide a good representation for novel classes and incremental learning can be done with small adaptations.

Class Incremental Learning Incremental Learning

ViM: Out-Of-Distribution with Virtual-logit Matching

2 code implementations CVPR 2022 Haoqi Wang, Zhizhong Li, Litong Feng, Wayne Zhang

Most of the existing Out-Of-Distribution (OOD) detection algorithms depend on single input source: the feature, the logit, or the softmax probability.

Out-of-Distribution Detection
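A rough sketch of the virtual-logit idea described in the abstract: the norm of a feature's residual against a principal subspace of training features is scaled into a "virtual logit" and appended to the real logits before softmax, so the virtual-logit probability acts as an OOD score. The subspace size and the scaling constant `alpha` are my illustrative choices, not the official ViM code.

```python
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.standard_normal((500, 32))

# Principal subspace from (centered) training features: top-8 right
# singular vectors.
_, _, vt = np.linalg.svd(train_feats - train_feats.mean(0),
                         full_matrices=False)
P = vt[:8].T                                    # (32, 8)

def vim_score(feat, logits, alpha=1.0):
    residual = feat - P @ (P.T @ feat)          # part outside the subspace
    virtual = alpha * np.linalg.norm(residual)  # the virtual logit
    z = np.append(logits, virtual)
    z = np.exp(z - z.max())
    return (z / z.sum())[-1]   # softmax mass on the virtual logit

score = vim_score(rng.standard_normal(32), rng.standard_normal(10))
print(0.0 < score < 1.0)  # True
```

This combines all three input sources the abstract mentions: the feature (via the residual), the logits, and the softmax probability.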

Representation Consolidation from Multiple Expert Teachers

no code implementations29 Sep 2021 Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto

Indeed, we observe experimentally that standard distillation of task-specific teachers, or using these teacher representations directly, reduces downstream transferability compared to a task-agnostic generalist model.

Knowledge Distillation

MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding

1 code implementation14 Aug 2021 Zhanghui Kuang, Hongbin Sun, Zhizhong Li, Xiaoyu Yue, Tsui Hin Lin, Jianyong Chen, Huaqiang Wei, Yiqin Zhu, Tong Gao, Wenwei Zhang, Kai Chen, Wayne Zhang, Dahua Lin

We present MMOCR, an open-source toolbox that provides a comprehensive pipeline for text detection and recognition, as well as downstream tasks such as named entity recognition and key information extraction.

Key Information Extraction named-entity-recognition +4

Representation Consolidation for Training Expert Students

no code implementations16 Jul 2021 Zhizhong Li, Avinash Ravichandran, Charless Fowlkes, Marzia Polito, Rahul Bhotika, Stefano Soatto

Traditionally, distillation has been used to train a student model to emulate the input/output functionality of a teacher.

Learning Curves for Analysis of Deep Networks

1 code implementation21 Oct 2020 Derek Hoiem, Tanmay Gupta, Zhizhong Li, Michal M. Shlapentokh-Rothman

Learning curves model a classifier's test error as a function of the number of training samples.

Data Augmentation Image Classification
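In the spirit of the learning-curve analysis described above, one can fit test error as a function of training-set size and extrapolate; a simple power-law fit is sketched below. The data points and the exact functional form are illustrative and may differ from the paper's.

```python
import numpy as np

# Hypothetical (n, test-error) measurements.
n = np.array([100, 200, 400, 800, 1600])
err = np.array([0.40, 0.31, 0.24, 0.185, 0.145])

# Fit err ~ a * n^b by linear regression in log-log space
# (b should come out negative for a decreasing curve).
b, log_a = np.polyfit(np.log(n), np.log(err), 1)
a = np.exp(log_a)

def predict(m):
    return a * m ** b

print(round(predict(3200), 3))  # extrapolated error at n = 3200
```

Fits like this make it easy to compare architectures by their asymptotic error and data efficiency rather than a single accuracy number.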

Regularizing Reasons for Outfit Evaluation with Gradient Penalty

no code implementations2 Feb 2020 Xingxing Zou, Zhizhong Li, Ke Bai, Dahua Lin, Waikeung Wong

In this paper, we build an outfit evaluation system that provides feedback consisting of a judgment with a convincing explanation.

Sentence

Policy Continuation with Hindsight Inverse Dynamics

1 code implementation NeurIPS 2019 Hao Sun, Zhizhong Li, Xiaotong Liu, Dahua Lin, Bolei Zhou

This approach learns from Hindsight Inverse Dynamics, based on Hindsight Experience Replay, enabling learning in a self-imitating manner so that the policy can be trained with supervised learning.

Reinforcement Learning (RL)

Convolutional Sequence Generation for Skeleton-Based Action Synthesis

no code implementations ICCV 2019 Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan

It captures the temporal structure at multiple scales through the GP prior and the temporal convolutions; and establishes the spatial connection between the latent vectors and the skeleton graphs via a novel graph refining scheme.

Human action generation

Regulatory Focus: Promotion and Prevention Inclinations in Policy Search

no code implementations25 Sep 2019 Lanxin Lei, Zhizhong Li, Xiaoyang Li, Cong Qiu, Dahua Lin

The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths.

Atari Games Continuous Control +1

Biased Estimates of Advantages over Path Ensembles

no code implementations15 Sep 2019 Lanxin Lei, Zhizhong Li, Dahua Lin

The estimation of advantage is crucial for a number of reinforcement learning algorithms, as it directly influences the choices of future paths.

Atari Games Continuous Control +1

Task-Assisted Domain Adaptation with Anchor Tasks

no code implementations16 Aug 2019 Zhizhong Li, Linjie Luo, Sergey Tulyakov, Qieyun Dai, Derek Hoiem

Our key idea to improve domain adaptation is to introduce a separate anchor task (such as facial landmarks) whose annotations can be obtained at no cost or are already available on both synthetic and real datasets.

Depth Estimation Domain Adaptation +2

Reducing Overconfident Errors outside the Known Distribution

no code implementations ICLR 2019 Zhizhong Li, Derek Hoiem

We compare a number of methods from related fields such as calibration and epistemic uncertainty modeling, as well as two proposed methods that reduce overconfident errors on samples from an unknown novel distribution without drastically increasing evaluation time: (1) G-distillation, which trains an ensemble of classifiers and then distills it into a single model using both labeled and unlabeled examples, and (2) NCR, which reduces prediction confidence based on a novelty detection score.

Domain Adaptation Novelty Detection
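The novelty-based confidence reduction (NCR) described in the abstract can be sketched as interpolating predictions toward the uniform distribution in proportion to a novelty score in [0, 1]. The linear interpolation rule is my illustrative choice, not necessarily the paper's exact formula.

```python
import numpy as np

def reduce_confidence(probs, novelty):
    # novelty = 0: trust the classifier; novelty = 1: fall back to uniform.
    uniform = np.full_like(probs, 1.0 / probs.size)
    return (1.0 - novelty) * probs + novelty * uniform

probs = np.array([0.9, 0.05, 0.05])
print(reduce_confidence(probs, novelty=0.5))  # pulled toward uniform
```

The output still sums to one, so it remains a valid distribution while its maximum confidence drops for novel-looking inputs.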

Improving Confidence Estimates for Unfamiliar Examples

1 code implementation CVPR 2020 Zhizhong Li, Derek Hoiem

In this paper, we compare and evaluate several methods to improve confidence estimates for unfamiliar and familiar samples.

Attribute Domain Adaptation

Deep Nearest Class Mean Model for Incremental Odor Classification

no code implementations8 Jan 2018 Yu Cheng, Angus Wong, Kevin Hung, Zhizhong Li, Weitong Li, Jun Zhang

That is, the odor datasets grow dynamically, with both the number of training samples and the number of classes increasing over time.

Classification General Classification

Integrating Specialized Classifiers Based on Continuous Time Markov Chain

no code implementations7 Sep 2017 Zhizhong Li, Dahua Lin

Specialized classifiers, namely those dedicated to a subset of classes, are often adopted in real-world recognition systems.

PolyNet: A Pursuit of Structural Diversity in Very Deep Networks

3 code implementations CVPR 2017 Xingcheng Zhang, Zhizhong Li, Chen Change Loy, Dahua Lin

A number of studies have shown that increasing the depth or width of convolutional networks is a rewarding approach to improve the performance of image recognition.

Diversity Image Classification

Learning without Forgetting

10 code implementations29 Jun 2016 Zhizhong Li, Derek Hoiem

We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities.

Class Incremental Learning Disjoint 10-1 +9
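The objective described in the abstract, training on new-task data while preserving old capabilities, can be sketched as a cross-entropy term on the new task plus a distillation term that keeps old-task outputs close to responses recorded before training. The temperature and weighting below are illustrative, not the paper's exact hyperparameters.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def lwf_loss(new_logits, new_label, old_logits, recorded_old_logits,
             T=2.0, lam=1.0):
    # Cross-entropy on the new task's ground-truth label.
    ce = -np.log(softmax(new_logits)[new_label])
    # Distillation: keep the network's old-task outputs close to the
    # responses recorded before training on the new task.
    p_rec = softmax(recorded_old_logits, T)
    kd = -np.sum(p_rec * np.log(softmax(old_logits, T)))
    return ce + lam * kd

loss = lwf_loss([2.0, 0.1], 0, [1.0, 0.2, -0.3], [1.1, 0.1, -0.2])
print(loss > 0)  # True
```

Only new-task data is needed: the distillation targets are the network's own recorded outputs, not old-task labels.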
