Search Results for author: Zizhao Zhang

Found 37 papers, 15 papers with code

Metaheuristic Algorithms in Artificial Intelligence with Applications to Bioinformatics, Biostatistics, Ecology and, the Manufacturing Industries

1 code implementation 8 Aug 2023 Elvis Han Cui, Zizhao Zhang, Culsome Junwen Chen, Weng Kee Wong

Nature-inspired metaheuristic algorithms are important components of artificial intelligence, and are increasingly used across disciplines to tackle various types of challenging optimization problems.

Matrix Completion

Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning

no code implementations 16 Mar 2023 Zhuowei Li, Long Zhao, Zizhao Zhang, Han Zhang, Di Liu, Ting Liu, Dimitris N. Metaxas

Prototypes, as representations of class embeddings, have been explored to reduce the memory footprint or mitigate forgetting in continual learning scenarios.

class-incremental learning · Class Incremental Learning +2

StraIT: Non-autoregressive Generation with Stratified Image Transformer

no code implementations 1 Mar 2023 Shengju Qian, Huiwen Chang, Yuanzhen Li, Zizhao Zhang, Jiaya Jia, Han Zhang

We propose Stratified Image Transformer (StraIT), a pure non-autoregressive (NAR) generative model that demonstrates superiority in high-quality image synthesis over existing autoregressive (AR) and diffusion models (DMs).

Image Generation

QueryForm: A Simple Zero-shot Form Entity Query Framework

no code implementations 14 Nov 2022 Zifeng Wang, Zizhao Zhang, Jacob Devlin, Chen-Yu Lee, Guolong Su, Hao Zhang, Jennifer Dy, Vincent Perot, Tomas Pfister

Zero-shot transfer learning for document understanding is a crucial yet under-investigated scenario to help reduce the high cost involved in annotating document entities.

document understanding · Transfer Learning

UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation

1 code implementation 28 Sep 2022 Xin Yu, Qi Yang, Yinchi Zhou, Leon Y. Cai, Riqiang Gao, Ho Hin Lee, Thomas Li, Shunxing Bao, Zhoubing Xu, Thomas A. Lasko, Richard G. Abramson, Zizhao Zhang, Yuankai Huo, Bennett A. Landman, Yucheng Tang

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis.

Brain Segmentation · Image Segmentation +2

Deep Hypergraph Structure Learning

no code implementations 26 Aug 2022 Zizhao Zhang, Yifan Feng, Shihui Ying, Yue Gao

To address this issue, we design a general paradigm of deep hypergraph structure learning, namely DeepHGSL, to optimize the hypergraph structure for hypergraph-based representation learning.

Representation Learning

Exploit Customer Life-time Value with Memoryless Experiments

no code implementations 17 Jan 2022 Zizhao Zhang, Yifei Zhao, Guangda Huzhang

As a measure of the long-term contribution produced by customers in a service or product relationship, life-time value (LTV) offers a more comprehensive basis for finding the optimal strategy for service delivery.

Learning to Prompt for Continual Learning

3 code implementations CVPR 2022 Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister

The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.

Continual Learning · Image Classification
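
For readers unfamiliar with the prompt-based alternative this paper proposes, the sketch below illustrates the general idea of a learnable prompt pool queried by features from a frozen backbone; the class name, pool sizes, and selection rule are illustrative assumptions, not the paper's released code.

```python
# Hypothetical sketch of a prompt-pool lookup in the spirit of L2P:
# a frozen backbone produces a query feature, which selects the top-N
# prompts from a learnable pool by cosine similarity with learnable keys.
import torch
import torch.nn.functional as F


class PromptPool(torch.nn.Module):
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_n=5):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_n = top_n

    def forward(self, query):                       # query: [B, dim] from a frozen encoder
        sim = F.cosine_similarity(                  # [B, pool_size]
            query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        idx = sim.topk(self.top_n, dim=-1).indices  # [B, top_n]
        selected = self.prompts[idx]                # [B, top_n, prompt_len, dim]
        return selected.reshape(query.shape[0], -1, query.shape[-1])


pool = PromptPool()
query = torch.randn(4, 768)          # e.g. [CLS] features of 4 images
prompt_tokens = pool(query)          # [4, 25, 768] tokens to prepend to the input sequence
```

In such a scheme only the prompts, keys, and a classifier head would be trained, which is what avoids rewriting the backbone's parameters for every new task.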

Unifying Distribution Alignment as a Loss for Imbalanced Semi-supervised Learning

no code implementations 29 Sep 2021 Justin Lazarow, Kihyuk Sohn, Chun-Liang Li, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister

While remarkable progress in imbalanced supervised learning has been made recently, less attention has been given to the setting of imbalanced semi-supervised learning (SSL), where not only is labeled data scarce, but the underlying data distribution can also be severely imbalanced.

Pseudo Label

Learning Fast Sample Re-weighting Without Reward Data

1 code implementation ICCV 2021 Zizhao Zhang, Tomas Pfister

Training sample re-weighting is an effective approach for tackling data biases such as imbalanced and corrupted labels.

Meta-Learning
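
As context for the abstract above, the snippet below shows what per-example re-weighting means in its simplest form. It is a generic illustration with hand-set weights, not this paper's method, which learns the weights without extra reward data.

```python
# Generic weighted-loss illustration: each training example gets its own
# non-negative weight, so suspected noisy or over-represented examples
# contribute less to the gradient.
import torch
import torch.nn.functional as F

def weighted_ce(logits, targets, weights):
    """Cross-entropy with one scalar weight per example."""
    per_example = F.cross_entropy(logits, targets, reduction="none")   # [B]
    return (weights * per_example).sum() / weights.sum().clamp(min=1e-8)

logits = torch.randn(4, 10)
targets = torch.tensor([1, 3, 3, 7])
weights = torch.tensor([1.0, 1.0, 0.1, 0.0])   # down-weight / drop suspect examples
loss = weighted_ce(logits, targets, weights)
```

In re-weighting methods like this one, the weights would be produced automatically during training rather than set by hand.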

Improved Transformer for High-Resolution GANs

1 code implementation NeurIPS 2021 Long Zhao, Zizhao Zhang, Ting Chen, Dimitris N. Metaxas, Han Zhang

Attention-based models, exemplified by the Transformer, can effectively model long-range dependencies, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs).

Ranked #2 on Image Generation on CelebA 256x256 (FID metric)

Image Generation · Vocal Bursts Intensity Prediction
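
The quadratic-complexity claim in the abstract can be made concrete in a few lines: plain self-attention materializes an n × n score matrix over the n image tokens, which is exactly the cost a high-resolution generator needs to avoid. The snippet is a generic illustration, not the proposed model.

```python
# Minimal self-attention over n tokens: the score matrix Q @ K^T is [n, n],
# so a 256x256 image flattened to 65,536 tokens would need a ~65k x 65k
# attention matrix per head.
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v                       # each [n, d]
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5     # [n, n]  <-- quadratic in n
    return torch.softmax(scores, dim=-1) @ v                  # [n, d]

n, d = 1024, 64                                   # e.g. a 32x32 feature map, head dim 64
x = torch.randn(n, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)            # memory and compute grow with n^2
```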

Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding

6 code implementations 26 May 2021 Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, Sercan O. Arik, Tomas Pfister

Hierarchical structures are popular in recent vision transformers; however, they require sophisticated designs and massive datasets to work well.

Image Classification · Image Generation

Learning from Weakly-labeled Web Videos via Exploring Sub-Concepts

no code implementations 11 Jan 2021 Kunpeng Li, Zizhao Zhang, Guanhang Wu, Xuehan Xiong, Chen-Yu Lee, Zhichao Lu, Yun Fu, Tomas Pfister

To address this issue, we introduce a new method for pre-training video action recognition models using queried web videos.

Action Recognition · Pseudo Label +1

Exploring Sub-Pseudo Labels for Learning from Weakly-Labeled Web Videos

no code implementations 1 Jan 2021 Kunpeng Li, Zizhao Zhang, Guanhang Wu, Xuehan Xiong, Chen-Yu Lee, Yun Fu, Tomas Pfister

To address this issue, we introduce a new method for pre-training video action recognition models using queried web videos.

Action Recognition · Pseudo Label +1

Image Augmentations for GAN Training

no code implementations 4 Jun 2020 Zhengli Zhao, Zizhao Zhang, Ting Chen, Sameer Singh, Han Zhang

We provide new state-of-the-art results for conditional generation on CIFAR-10 with both consistency loss and contrastive loss as additional regularizations.

Image Augmentation · Image Generation

A Simple Semi-Supervised Learning Framework for Object Detection

7 code implementations 10 May 2020 Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, Tomas Pfister

Semi-supervised learning (SSL) has the potential to improve the predictive performance of machine learning models using unlabeled data.

Ranked #12 on Semi-Supervised Object Detection on COCO 100% labeled data (using extra training data)

Data Augmentation · Image Classification +3
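
As a rough illustration of the self-training recipe that SSL detectors of this kind build on, the sketch below keeps only high-confidence teacher detections as pseudo-labels for unlabeled images; the threshold value and data structures are assumptions, not the released implementation.

```python
# Confidence-thresholded pseudo-labeling: a teacher detector's predictions on
# unlabeled images become training targets only if they are confident enough.
TAU = 0.9  # assumed confidence threshold

def pseudo_label(detections, tau=TAU):
    """detections: list of (box, class_id, score) tuples from a teacher detector."""
    return [(box, cls) for box, cls, score in detections if score >= tau]

# Dummy teacher output for one unlabeled image: two confident boxes, one noisy one.
teacher_out = [((10, 10, 50, 80), 1, 0.97),
               ((60, 20, 90, 70), 3, 0.92),
               ((5, 5, 15, 15), 2, 0.40)]
targets = pseudo_label(teacher_out)   # the low-confidence box is discarded
# A student detector would then be trained on strongly augmented copies of the
# unlabeled image against these retained pseudo-boxes, together with the labeled data.
```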

Improved Consistency Regularization for GANs

no code implementations 11 Feb 2020 Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang

Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.

Image Generation
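
The "consistency cost on the discriminator" mentioned above can be sketched as an L2 penalty between the discriminator's outputs on an image and on an augmented copy of it; the loss weight and the augmentation below are illustrative assumptions, not the paper's exact formulation.

```python
# Toy consistency penalty for a GAN discriminator: the discriminator should
# give similar outputs for an image and its augmented version.
import torch

def consistency_cost(disc, x, augment, lam=10.0):
    """L2 penalty between D(x) and D(augment(x)); lam is an assumed weight."""
    return lam * ((disc(x) - disc(augment(x))) ** 2).mean()

# Toy example with a linear "discriminator" and horizontal flip as augmentation.
disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
x = torch.randn(8, 3, 32, 32)
flip = lambda t: torch.flip(t, dims=[-1])
loss_cr = consistency_cost(disc, x, flip)   # added to the usual discriminator loss
```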

Distance-Based Learning from Errors for Confidence Calibration

no code implementations ICLR 2020 Chen Xing, Sercan Arik, Zizhao Zhang, Tomas Pfister

To circumvent this by inferring the distance for every test sample, we propose to train a confidence model jointly with the classification model.

Classification · General Classification

Consistency-based Semi-supervised Active Learning: Towards Minimizing Labeling Cost

no code implementations ECCV 2020 Mingfei Gao, Zizhao Zhang, Guo Yu, Sercan O. Arik, Larry S. Davis, Tomas Pfister

Active learning (AL) combines data labeling and model training to minimize the labeling cost by prioritizing the selection of high value data that can best improve model performance.

Active Learning · Image Classification +1
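
One way to turn this idea into an acquisition rule, shown below as a hedged sketch rather than the paper's exact criterion, is to score each unlabeled example by how much the model's prediction varies under random augmentations and send the least consistent examples to annotators.

```python
# Consistency-based scoring for active learning: higher variance across
# augmented views means the model is less certain, so the example is a
# stronger candidate for labeling. The augmentation here is an assumption.
import torch

def inconsistency_score(model, x, augment, n_views=5):
    """Mean variance of softmax outputs over augmented views (higher = less consistent)."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(augment(x)), dim=-1)
                             for _ in range(n_views)])        # [views, B, classes]
    return probs.var(dim=0).mean(dim=-1)                      # [B]

model = torch.nn.Linear(32, 10)                               # toy classifier
unlabeled = torch.randn(100, 32)
noise_aug = lambda t: t + 0.1 * torch.randn_like(t)           # assumed augmentation
scores = inconsistency_score(model, unlabeled, noise_aug)
to_label = scores.topk(10).indices                            # query these for labels
```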

Distilling Effective Supervision from Severe Label Noise

2 code implementations CVPR 2020 Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister

For instance, on CIFAR100 with a $40\%$ uniform noise ratio and only 10 trusted labeled data per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, where the error rate is only $1.4\%$ higher than a neural network trained without label noise.

Image Classification
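
(Taking the quoted figures at face value, 80.2% accuracy corresponds to a 19.8% error rate, so the noise-free baseline referred to here would sit at roughly 18.4% error, i.e., about 81.6% accuracy.)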

Consistency-Based Semi-Supervised Active Learning: Towards Minimizing Labeling Budget

no code implementations 25 Sep 2019 Mingfei Gao, Zizhao Zhang, Guo Yu, Sercan O. Arik, Larry S. Davis, Tomas Pfister

Active learning (AL) aims to integrate data labeling and model training in a unified way, and to minimize the labeling budget by prioritizing the selection of high value data that can best improve model performance.

Active Learning · Representation Learning

Hypergraph Neural Networks

2 code implementations 25 Sep 2018 Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, Yue Gao

In this paper, we present a hypergraph neural network (HGNN) framework for data representation learning, which can encode high-order data correlation in a hypergraph structure.

Object Recognition · Representation Learning
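
For readers new to hypergraph learning, the sketch below shows the kind of hypergraph convolution such models use to propagate features along hyperedges, i.e., edges that may join any number of vertices. The toy incidence matrix and shapes are illustrative assumptions, not the released code.

```python
# One hypergraph convolution step: features are gathered from vertices to
# hyperedges and scattered back, with degree normalization on both sides.
import torch

def hypergraph_conv(X, H, W, Theta):
    """X: [n_vertices, d] features, H: [n_vertices, n_edges] incidence matrix,
    W: [n_edges] hyperedge weights, Theta: [d, d_out] learnable weights."""
    Dv = torch.diag((H * W).sum(dim=1).pow(-0.5))     # vertex degree^(-1/2)
    De = torch.diag(H.sum(dim=0).pow(-1.0))           # hyperedge degree^(-1)
    A = Dv @ H @ torch.diag(W) @ De @ H.T @ Dv        # normalized propagation matrix
    return torch.relu(A @ X @ Theta)

# Toy hypergraph: 4 vertices, 2 hyperedges ({0,1,2} and {2,3}).
H = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.]])
X = torch.randn(4, 8)
W = torch.ones(2)
Theta = torch.randn(8, 16)
out = hypergraph_conv(X, H, W, Theta)                 # [4, 16] updated vertex features
```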

GVCNN: Group-View Convolutional Neural Networks for 3D Shape Recognition

no code implementations CVPR 2018 Yifan Feng, Zizhao Zhang, Xibin Zhao, Rongrong Ji, Yue Gao

The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level, and the shape level, which are organized using a grouping strategy.

3D Shape Classification · 3D Shape Recognition +2

Translating and Segmenting Multimodal Medical Volumes with Cycle- and Shape-Consistency Generative Adversarial Network

no code implementations CVPR 2018 Zizhao Zhang, Lin Yang, Yefeng Zheng

In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 3D images using unpaired training data, 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis, and 3) improving volume segmentation by using synthetic data for modalities with limited training samples.

Computed Tomography (CT) · Image Generation +1

Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network

1 code implementation CVPR 2018 Zizhao Zhang, Yuanpu Xie, Lin Yang

This paper presents a novel method to deal with the challenging task of generating photographic images conditioned on semantic image descriptions.

Image Generation · Semantic Similarity +1

Recent Advances in the Applications of Convolutional Neural Networks to Medical Image Contour Detection

no code implementations 24 Aug 2017 Zizhao Zhang, Fuyong Xing, Hai Su, Xiaoshuang Shi, Lin Yang

Then we review their recent applications in medical image analysis and point out limitations, with the goal of highlighting some potential directions for the field.

Contour Detection

TandemNet: Distilling Knowledge from Medical Images Using Diagnostic Reports as Optional Semantic References

no code implementations 10 Aug 2017 Zizhao Zhang, Pingjun Chen, Manish Sapkota, Lin Yang

In this paper, we introduce the semantic knowledge of medical images from their diagnostic reports to provide guidance for network training and an interpretable prediction mechanism with our proposed novel multimodal neural network, namely TandemNet.

Language Modelling

MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network

no code implementations CVPR 2017 Zizhao Zhang, Yuanpu Xie, Fuyong Xing, Mason McGough, Lin Yang

In this paper, we propose MDNet to establish a direct multimodal mapping between medical images and diagnostic reports that can read images, generate diagnostic reports, retrieve images by symptom descriptions, and visualize attention, to provide justifications of the network diagnosis process.

Language Modelling

SemiContour: A Semi-supervised Learning Approach for Contour Detection

no code implementations CVPR 2016 Zizhao Zhang, Fuyong Xing, Xiaoshuang Shi, Lin Yang

In this paper, we investigate the usage of semi-supervised learning (SSL) to obtain competitive detection accuracy with very limited training data (three labeled images).

Contour Detection · Ensemble Learning
