Search Results for author: Bingzhe Wu

Found 52 papers, 19 papers with code

S2DNAS: Transforming Static CNN Model for Dynamic Inference via Neural Architecture Search

no code implementations ECCV 2020 Zhihang Yuan, Bingzhe Wu, Guangyu Sun, Zheng Liang, Shiwan Zhao, Weichen Bi

To this end, based on a given CNN model, we first generate a CNN architecture space in which each architecture is a multi-stage CNN generated from the given model using some predefined transformations.

Neural Architecture Search

Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model

1 code implementation 1 Mar 2024 Huan Ma, Yan Zhu, Changqing Zhang, Peilin Zhao, Baoyuan Wu, Long-Kai Huang, Qinghua Hu, Bingzhe Wu

Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.

Fine-Grained Image Classification Language Modelling +1

LLM Inference Unveiled: Survey and Roofline Model Insights

2 code implementations 26 Feb 2024 Zhihang Yuan, Yuzhang Shang, Yang Zhou, Zhen Dong, Zhe Zhou, Chenhao Xue, Bingzhe Wu, Zhikai Li, Qingyi Gu, Yong Jae Lee, Yan Yan, Beidi Chen, Guangyu Sun, Kurt Keutzer

Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model for systematic analysis of LLM inference techniques.

Knowledge Distillation Language Modelling +3
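
The roofline analysis this survey builds on is compact enough to sketch: an operator's attainable throughput is bounded by either peak compute or memory bandwidth times arithmetic intensity (FLOPs per byte moved). The hardware numbers and matrix shapes below are illustrative assumptions, not figures from the paper.

```python
def attainable_flops(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline bound: min(peak compute, bandwidth * arithmetic intensity)."""
    intensity = flops / bytes_moved  # FLOPs per byte moved
    return min(peak_flops, peak_bw * intensity)

# Illustrative accelerator: 312 TFLOP/s peak compute, 2 TB/s memory bandwidth.
PEAK_FLOPS, PEAK_BW = 312e12, 2e12

# A decode-phase matrix-vector product moves ~1 byte per FLOP: memory-bound.
decode = attainable_flops(flops=2 * 4096 * 4096, bytes_moved=2 * 4096 * 4096,
                          peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW)

# A prefill GEMM reuses each weight across many tokens: compute-bound.
prefill = attainable_flops(flops=2 * 4096 * 4096 * 2048, bytes_moved=2 * 4096 * 4096,
                           peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW)
```

Under these assumed numbers, decode throughput is capped by bandwidth (2e12 FLOP/s) while prefill hits the compute ceiling, which is the kind of phase-by-phase contrast a roofline analysis makes visible.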

Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping

no code implementations 12 Feb 2024 Haoyu Wang, Guozheng Ma, Ziqiao Meng, Zeyu Qin, Li Shen, Zhong Zhang, Bingzhe Wu, Liu Liu, Yatao Bian, Tingyang Xu, Xueqian Wang, Peilin Zhao

To further exploit the capabilities of bootstrapping, we investigate and adjust the training order of data, which yields improved performance of the model.

In-Context Learning

Rethinking and Simplifying Bootstrapped Graph Latents

1 code implementation 5 Dec 2023 Wangbin Sun, Jintang Li, Liang Chen, Bingzhe Wu, Yatao Bian, Zibin Zheng

Graph contrastive learning (GCL) has emerged as a representative paradigm in graph self-supervised learning, where negative samples are commonly regarded as the key to preventing model collapse and producing distinguishable representations.

Contrastive Learning Self-Supervised Learning

PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection

1 code implementation 31 Oct 2023 Tao Yang, Tianyuan Shi, Fanqi Wan, Xiaojun Quan, Qifan Wang, Bingzhe Wu, Jiaxiang Wu

Drawing inspiration from Psychological Questionnaires, which are carefully designed by psychologists to evaluate individual personality traits through a series of targeted items, we argue that these items can be regarded as a collection of well-structured chain-of-thought (CoT) processes.

Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale

no code implementations 18 Oct 2023 Qichao Wang, Tian Bian, Yian Yin, Tingyang Xu, Hong Cheng, Helen M. Meng, Zibin Zheng, Liang Chen, Bingzhe Wu

The recent surge in the research of diffusion models has accelerated the adoption of text-to-image models in various Artificial Intelligence Generated Content (AIGC) commercial products.

Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators

1 code implementation 11 Oct 2023 Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong

Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks when prompted to generate world knowledge.

Information Retrieval Informativeness +4

Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning

no code implementations 5 Oct 2023 Huan Ma, Changqing Zhang, Huazhu Fu, Peilin Zhao, Bingzhe Wu

Specifically, we discuss the differences between discriminative and generative models using content moderation as an example.

Is GPT4 a Good Trader?

no code implementations 20 Sep 2023 Bingzhe Wu

Recently, large language models (LLMs), particularly GPT-4, have demonstrated significant capabilities in various planning and reasoning tasks [cheng2023gpt4, bubeck2023sparks].

Prompt Engineering

SAILOR: Structural Augmentation Based Tail Node Representation Learning

1 code implementation 13 Aug 2023 Jie Liao, Jintang Li, Liang Chen, Bingzhe Wu, Yatao Bian, Zibin Zheng

In pursuit of promoting the expressiveness of GNNs for tail nodes, we explore how the deficiency of structural information deteriorates their performance, and propose a general Structural Augmentation based taIL nOde Representation learning framework, dubbed SAILOR, which jointly learns to augment the graph structure and extract more informative representations for tail nodes.

Representation Learning

Semantic Equivariant Mixup

no code implementations 12 Aug 2023 Zongbo Han, Tianchi Xie, Bingzhe Wu, Qinghua Hu, Changqing Zhang

Then a generic mixup regularization at the representation level is proposed, which can further regularize the model with the semantic information in mixed samples.

Data Augmentation
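
Plain mixup, on which the representation-level variant above builds, can be sketched in a few lines; the `alpha` default and the function interface here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Return a convex combination of two samples and their one-hot labels."""
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))   # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # interpolated input
    y = lam * y1 + (1.0 - lam) * y2       # interpolated soft label
    return x, y, lam
```

The semantic-equivariant idea is that the same interpolation can be enforced at the representation level, so that mixed inputs map to correspondingly mixed features.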

Uncertainty in Natural Language Processing: Sources, Quantification, and Applications

no code implementations 5 Jun 2023 Mengting Hu, Zhen Zhang, Shiwan Zhao, Minlie Huang, Bingzhe Wu

Therefore, in this survey, we provide a comprehensive review of uncertainty-relevant works in the NLP field.

Uncertainty Quantification

Calibrating Multimodal Learning

no code implementations 2 Jun 2023 Huan Ma, Qingyang Zhang, Changqing Zhang, Bingzhe Wu, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu

Specifically, we find that the confidence estimated by current models could even increase when some modalities are corrupted.

E-NER: Evidential Deep Learning for Trustworthy Named Entity Recognition

1 code implementation 29 May 2023 Zhen Zhang, Mengting Hu, Shiwan Zhao, Minlie Huang, Haotian Wang, Lemao Liu, Zhirui Zhang, Zhe Liu, Bingzhe Wu

Most named entity recognition (NER) systems focus on improving model performance, ignoring the need to quantify model uncertainty, which is critical to the reliability of NER systems in open environments.

Named Entity Recognition +1

Attention Paper: How Generative AI Reshapes Digital Shadow Industry?

no code implementations 26 May 2023 Qichao Wang, Huan Ma, WenTao Wei, Hangyu Li, Liang Chen, Peilin Zhao, Binwen Zhao, Bo Hu, Shu Zhang, Zibin Zheng, Bingzhe Wu

The rapid development of the digital economy has led to the emergence of various black and shadow internet industries, which pose potential risks that can be identified and managed through digital risk management (DRM) using techniques such as machine learning and deep learning.

Management

Reweighted Mixup for Subpopulation Shift

no code implementations 9 Apr 2023 Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Qinghua Hu, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Subpopulation shift exists widely in many real-world applications, and refers to settings where the training and test distributions contain the same subpopulation groups but in different proportions.

Fairness Generalization Bounds

SLPerf: a Unified Framework for Benchmarking Split Learning

1 code implementation 4 Apr 2023 Tianchen Zhou, Zhanyi Hu, Bingzhe Wu, Cen Chen

Data privacy concerns have made centralized training of data scattered across silos infeasible, leading to the need for collaborative learning frameworks.

Benchmarking Diversity +1

RPTQ: Reorder-based Post-training Quantization for Large Language Models

1 code implementation 3 Apr 2023 Zhihang Yuan, Lin Niu, Jiawei Liu, Wenyu Liu, Xinggang Wang, Yuzhang Shang, Guangyu Sun, Qiang Wu, Jiaxiang Wu, Bingzhe Wu

In this paper, we identify that the challenge in quantizing activations in LLMs arises from varying ranges across channels, rather than solely the presence of outliers.

Quantization
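
A toy illustration of the channel-range problem identified above: if channels with very different ranges share one quantization scale, the small-range channels lose all precision. Grouping channels by range before assigning scales (a deliberate simplification of the paper's reorder-based scheme, with made-up data) bounds each channel's error by its own group's scale.

```python
import numpy as np

def range_grouped_quantize(acts, n_groups=2, n_bits=8):
    """Quantize activations with one shared scale per range-sorted channel group."""
    ranges = acts.max(axis=0) - acts.min(axis=0)
    order = np.argsort(ranges)                      # reorder channels by range
    qmax = 2 ** (n_bits - 1) - 1
    deq = np.empty_like(acts, dtype=float)
    for idx in np.array_split(order, n_groups):     # one scale per group
        scale = max(np.abs(acts[:, idx]).max() / qmax, 1e-12)
        q = np.clip(np.round(acts[:, idx] / scale), -qmax - 1, qmax)
        deq[:, idx] = q * scale                     # dequantize for comparison
    return deq

# Channels 0-1 are tiny, channels 2-3 are huge: a single global scale of
# ~100/127 would round the tiny channels to zero; per-group scales keep them.
acts = np.array([[ 0.01, -0.02,  80.0, -100.0],
                 [-0.01,  0.02, -80.0,  100.0]])
deq = range_grouped_quantize(acts)
```

With the grouped scales, the small channels are reconstructed to within about 1e-4, which a global scale could not achieve.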

Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance

no code implementations 23 Mar 2023 Zhihang Yuan, Jiawei Liu, Jiaxiang Wu, Dawei Yang, Qiang Wu, Guangyu Sun, Wenyu Liu, Xinggang Wang, Bingzhe Wu

Post-training quantization (PTQ) is a popular method for compressing deep neural networks (DNNs) without modifying their original architecture or training procedures.

Benchmarking Data Augmentation +1

Federated Nearest Neighbor Machine Translation

no code implementations 23 Feb 2023 Yichao Du, Zhirui Zhang, Bingzhe Wu, Lemao Liu, Tong Xu, Enhong Chen

To protect user privacy and meet legal regulations, federated learning (FL) is attracting significant attention.

Federated Learning Machine Translation +4

Post-training Quantization on Diffusion Models

1 code implementation CVPR 2023 Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan

These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise.

Denoising Noise Estimation +1
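
The forward process referred to above has a standard closed form in DDPM-style diffusion, where x_t can be sampled directly from x_0 rather than step by step; this sketch assumes that formulation, and the beta schedule is illustrative.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    rng = rng or np.random.default_rng(0)
    abar = np.cumprod(1.0 - betas)[t]           # cumulative signal retention
    eps = rng.standard_normal(np.shape(x0))     # the noise a model learns to predict
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
```

With betas of zero no noise is added; as t grows, abar shrinks toward zero and x_t approaches pure Gaussian noise, which is the state the backward denoising process samples from.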

Learning with Noisy Labels over Imbalanced Subpopulations

no code implementations 16 Nov 2022 Mingcai Chen, Yu Zhao, Bing He, Zongbo Han, Bingzhe Wu, Jianhua Yao

Then, we refurbish the noisy labels using the estimated clean probabilities and the pseudo-labels from the model's predictions.

Learning with noisy labels

Vertical Federated Linear Contextual Bandits

no code implementations 20 Oct 2022 Zeyu Cao, Zhipeng Liang, Shu Zhang, Hangyu Li, Ouyang Wen, Yu Rong, Peilin Zhao, Bingzhe Wu

In this paper, we investigate a novel problem of building contextual bandits in the vertical federated setting, i.e., contextual information is vertically distributed over different departments.

Multi-Armed Bandits

UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup

1 code implementation 19 Sep 2022 Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Importance reweighting is a common way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset.

Generalization Bounds
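
The "constant" reweighting baseline mentioned above can be sketched as inverse-frequency weights over subpopulation groups; the uncertainty-aware, mixup-based part of UMIX is not reproduced here, and the group labels in the example are made up.

```python
import numpy as np

def inverse_frequency_weights(group_ids):
    """Weight each sample so every subpopulation contributes equally to the loss."""
    ids, counts = np.unique(group_ids, return_counts=True)
    w = {g: len(group_ids) / (len(ids) * c) for g, c in zip(ids, counts)}
    return np.array([w[g] for g in group_ids])

def weighted_mean_loss(losses, weights):
    """Weighted average of per-sample losses."""
    return float(np.sum(weights * losses) / np.sum(weights))

# A majority group (3 samples) and a minority group (1 sample):
weights = inverse_frequency_weights(np.array([0, 0, 0, 1]))
```

Here the lone minority sample gets weight 2.0 while each majority sample gets 2/3, so both groups contribute equally to the reweighted objective.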

Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack

no code implementations 15 Feb 2022 Jintang Li, Bingzhe Wu, Chengbin Hou, Guoji Fu, Yatao Bian, Liang Chen, Junzhou Huang, Zibin Zheng

Despite the progress, applying DGL to real-world applications faces a series of reliability threats including inherent noise, distribution shift, and adversarial attacks.

Adversarial Attack Graph Learning

HASCO: Towards Agile HArdware and Software CO-design for Tensor Computation

1 code implementation 4 May 2021 Qingcheng Xiao, Size Zheng, Bingzhe Wu, Pengcheng Xu, Xuehai Qian, Yun Liang

Second, the overall design space composed of HW/SW partitioning, hardware optimization, and software optimization is huge.

Bayesian Optimization Q-Learning

Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design

no code implementations 17 Dec 2020 Jun Zhou, Longfei Zheng, Chaochao Chen, Yan Wang, Xiaolin Zheng, Bingzhe Wu, Cen Chen, Li Wang, Jianwei Yin

In this paper, we propose SPNN - a Scalable and Privacy-preserving deep Neural Network learning framework, from algorithmic-cryptographic co-perspective.

Privacy Preserving

ASFGNN: Automated Separated-Federated Graph Neural Network

no code implementations 6 Nov 2020 Longfei Zheng, Jun Zhou, Chaochao Chen, Bingzhe Wu, Li Wang, Benyu Zhang

Specifically, to solve the data Non-IID problem, we first propose a separated-federated GNN learning model, which decouples the training of the GNN into two parts: the message-passing part, performed by each client separately, and the loss-computation part, learned jointly by the clients in a federated manner.

Bayesian Optimization Graph Neural Network

ENAS4D: Efficient Multi-stage CNN Architecture Search for Dynamic Inference

no code implementations 19 Sep 2020 Zhihang Yuan, Xin Liu, Bingzhe Wu, Guangyu Sun

The inference of an input sample can exit from an early stage if that stage's prediction is confident enough.
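
Confidence-based early exit, as described, amounts to running the stages in order and stopping at the first sufficiently confident prediction. The stage interface, the threshold, and the toy stages below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def early_exit_predict(x, stages, threshold=0.9):
    """Return (predicted class, index of the stage that produced it)."""
    probs = None
    for i, stage in enumerate(stages):
        probs = stage(x)                      # each stage returns class probabilities
        if probs.max() >= threshold:          # confident enough: exit early
            return int(np.argmax(probs)), i
    return int(np.argmax(probs)), len(stages) - 1  # fall back to the last stage

# Toy stages: an "easy" input exits at the cheap first stage,
# while a "hard" one falls through to the second.
stages = [lambda x: np.array([0.55, 0.45]) if x == "hard" else np.array([0.97, 0.03]),
          lambda x: np.array([0.10, 0.90])]
```

The computational saving comes from easy inputs never reaching the later, more expensive stages.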

A Comprehensive Analysis of Information Leakage in Deep Transfer Learning

no code implementations 4 Sep 2020 Cen Chen, Bingzhe Wu, Minghui Qiu, Li Wang, Jun Zhou

To the best of our knowledge, our study is the first to provide a thorough analysis of the information leakage issues in deep transfer learning methods and provide potential solutions to the issue.

Transfer Learning

Vertically Federated Graph Neural Network for Privacy-Preserving Node Classification

no code implementations 25 May 2020 Chaochao Chen, Jun Zhou, Longfei Zheng, Huiwen Wu, Lingjuan Lyu, Jia Wu, Bingzhe Wu, Ziqi Liu, Li Wang, Xiaolin Zheng

Recently, Graph Neural Networks (GNNs) have achieved remarkable progress in various real-world tasks on graph data, which consist of node features and adjacency information between different nodes.

Classification General Classification +3

Industrial Scale Privacy Preserving Deep Neural Network

no code implementations 11 Mar 2020 Longfei Zheng, Chaochao Chen, Yingting Liu, Bingzhe Wu, Xibin Wu, Li Wang, Lei Wang, Jun Zhou, Shuang Yang

Deep Neural Networks (DNNs) have been showing great potential in various real-world applications such as fraud detection and distress prediction.

Fraud Detection Privacy Preserving

A Survey of Adversarial Learning on Graphs

2 code implementations 10 Mar 2020 Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, Zibin Zheng, Bingzhe Wu

To bridge this gap, we investigate and summarize the existing works on graph adversarial learning tasks systematically.

Clustering Graph Clustering +2

Practical Privacy Preserving POI Recommendation

no code implementations 5 Mar 2020 Chaochao Chen, Jun Zhou, Bingzhe Wu, Wenjin Fang, Li Wang, Yuan Qi, Xiaolin Zheng

Meanwhile, the public data that need to be accessed by all users are kept by the recommender to reduce the storage costs of users' devices.

Federated Learning Privacy Preserving

Secure Social Recommendation based on Secret Sharing

no code implementations 6 Feb 2020 Chaochao Chen, Liang Li, Bingzhe Wu, Cheng Hong, Li Wang, Jun Zhou

It is well known that social information, which is abundant on social platforms such as Facebook, is useful to recommender systems.

Privacy Preserving Recommendation Systems

S2DNAS: Transforming Static CNN Model for Dynamic Inference via Neural Architecture Search

no code implementations 16 Nov 2019 Zhihang Yuan, Bingzhe Wu, Zheng Liang, Shiwan Zhao, Weichen Bi, Guangyu Sun

Recently, dynamic inference has emerged as a promising way to reduce the computational cost of deep convolutional neural networks (CNNs).

Neural Architecture Search

Characterizing Membership Privacy in Stochastic Gradient Langevin Dynamics

no code implementations 5 Oct 2019 Bingzhe Wu, Chaochao Chen, Shiwan Zhao, Cen Chen, Yuan Yao, Guangyu Sun, Li Wang, Xiaolu Zhang, Jun Zhou

Based on this framework, we demonstrate that SGLD can prevent the information leakage of the training dataset to a certain extent.

Generalization Bounds

Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection

no code implementations NeurIPS 2019 Bingzhe Wu, Shiwan Zhao, Chaochao Chen, Haoyang Xu, Li Wang, Xiaolu Zhang, Guangyu Sun, Jun Zhou

In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from a new perspective of privacy protection.

BAYHENN: Combining Bayesian Deep Learning and Homomorphic Encryption for Secure DNN Inference

no code implementations 3 Jun 2019 Peichen Xie, Bingzhe Wu, Guangyu Sun

Specifically, we use homomorphic encryption to protect a client's raw data and use Bayesian neural networks to protect the DNN weights in a cloud server.

Privacy Preserving

G2C: A Generator-to-Classifier Framework Integrating Multi-Stained Visual Cues for Pathological Glomerulus Classification

no code implementations 30 Jun 2018 Bingzhe Wu, Xiaolu Zhang, Shiwan Zhao, Lingxi Xie, Caihong Zeng, Zhihong Liu, Guangyu Sun

Given an input image from a specified stain, several generators are first applied to estimate its appearances in other staining methods, and a classifier follows to combine visual cues from different stains for prediction (whether it is pathological, or which type of pathology it has).

Classification Decision Making +2
