Search Results for author: Baochun Li

Found 22 papers, 4 papers with code

Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance

no code implementations9 Mar 2024 Hengyuan Xu, Liyao Xiang, Xingjun Ma, Borui Yang, Baochun Li

The permutation equivariance ensures minimal interference between these two sets of model weights and thus high fidelity on downstream tasks.
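The permutation-equivariance property behind this can be illustrated with a toy two-layer MLP (a minimal sketch; dimensions and the ReLU network are illustrative, not the paper's transformer): permuting the hidden units, with matching permutations on the adjacent weight matrices, leaves the network function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 8
W1 = rng.normal(size=(h, d))   # first-layer weights (d -> h)
W2 = rng.normal(size=(d, h))   # second-layer weights (h -> d)
x = rng.normal(size=d)

def mlp(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)  # simple ReLU MLP

P = np.eye(h)[rng.permutation(h)]        # random permutation matrix
y = mlp(x, W1, W2)
y_perm = mlp(x, P @ W1, W2 @ P.T)        # permute hidden units consistently

print(np.allclose(y, y_perm))  # True: the function is invariant
```

Because ReLU is elementwise, `relu(P @ v) == P @ relu(v)`, so the permutations cancel: `W2 @ P.T @ P == W2`.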

FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning

no code implementations26 Feb 2024 Tianhang Zheng, Baochun Li

Federated learning has recently emerged as a decentralized approach to learning a high-performance model without access to user data.

Federated Learning

Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models

no code implementations17 Feb 2024 Sijia Chen, Baochun Li, Di Niu

The reasoning performance of Large Language Models (LLMs) on a wide range of problems critically relies on chain-of-thought prompting, which involves providing a few chain-of-thought demonstrations as exemplars in prompts.
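Chain-of-thought prompting as described here amounts to prepending worked demonstrations to the query. A minimal sketch (the exemplar text is invented for illustration; the paper's actual prompts are not shown in this listing):

```python
# Hypothetical worked exemplar: (question, step-by-step answer).
exemplars = [
    ("If there are 3 cars and 2 arrive, how many cars are there?",
     "There are 3 cars. 2 more arrive. 3 + 2 = 5. The answer is 5."),
]

def build_cot_prompt(question, exemplars):
    """Prepend chain-of-thought demonstrations to a new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")   # leave the answer for the model
    return "\n\n".join(parts)

prompt = build_cot_prompt("If 4 birds sit and 3 fly away, how many remain?",
                          exemplars)
print(prompt)
```

The model then continues the final `A:` line, imitating the step-by-step style of the exemplars.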

Language-Guided Diffusion Model for Visual Grounding

no code implementations18 Aug 2023 Sijia Chen, Baochun Li

Specifically, we propose a language-guided diffusion framework for visual grounding, LG-DVG, which trains the model to progressively reason about queried object boxes by denoising a set of noisy boxes under language guidance.

Denoising Visual Grounding

Feature Reconstruction Attacks and Countermeasures of DNN training in Vertical Federated Learning

no code implementations13 Oct 2022 Peng Ye, Zhifeng Jiang, Wei Wang, Bo Li, Baochun Li

To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.

Reconstruction Attack Vertical Federated Learning

Pisces: Efficient Federated Learning via Guided Asynchronous Training

no code implementations18 Jun 2022 Zhifeng Jiang, Wei Wang, Baochun Li, Bo Li

Current FL systems employ a participant selection strategy to select fast clients with quality data in each iteration.
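Such a selection strategy can be pictured as ranking clients by a utility that combines speed and data quality. A toy sketch, with an illustrative multiplicative score that is not the paper's actual scoring function:

```python
# Hypothetical per-client measurements (names and values are illustrative).
clients = {
    "a": {"speed": 0.9, "quality": 0.8},
    "b": {"speed": 0.4, "quality": 0.9},
    "c": {"speed": 0.8, "quality": 0.3},
    "d": {"speed": 0.7, "quality": 0.7},
}

def select(clients, k):
    """Pick the k clients with the highest speed * quality utility."""
    score = lambda c: clients[c]["speed"] * clients[c]["quality"]
    return sorted(clients, key=score, reverse=True)[:k]

print(select(clients, 2))  # ['a', 'd']
```

A real system would measure speed from past round latencies and estimate data quality from, e.g., training loss statistics.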

Federated Learning Navigate

OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks

1 code implementation CVPR 2022 WanYu Lin, Hao Lan, Hao Wang, Baochun Li

This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for any graph neural networks (GNNs) based on learned latent causal factors.

Graph Learning

Multi-Modal Dynamic Graph Transformer for Visual Grounding

1 code implementation CVPR 2022 Sijia Chen, Baochun Li

We found that existing VG methods are trapped by the single-stage grounding process that performs a sole evaluate-and-rank for meticulously prepared regions.

Visual Grounding

Towards Generalizable Personalized Federated Learning with Adaptive Local Adaptation

no code implementations29 Sep 2021 Sijia Chen, Baochun Li

In this paper, we point out that this issue can be addressed by balancing information flow from the initial model and training dataset to the local adaptation.

Meta-Learning Personalized Federated Learning

PGD-2 can be better than FGSM + GradAlign

no code implementations29 Sep 2021 Tianhang Zheng, Baochun Li

In this paper, we show that PGD-2 AT with random initialization (PGD-2-RS AT) and attack step size $\alpha=1.25\epsilon/2$ requires only approximately half the computational cost of FGSM + GradAlign AT, and can actually avoid catastrophic overfitting for large $\ell_\infty$ perturbations.
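The attack in question, two PGD steps from a random start inside the $\ell_\infty$ ball, can be sketched as follows (a NumPy toy with a stand-in gradient function; the real setting would backpropagate through a network's loss):

```python
import numpy as np

def pgd2_rs(x, grad_fn, eps, rng):
    """Two-step PGD with random initialization (PGD-2-RS sketch).

    Uses alpha = 1.25 * eps / 2, the step size quoted above.
    """
    alpha = 1.25 * eps / 2
    delta = rng.uniform(-eps, eps, size=x.shape)       # random start
    for _ in range(2):
        delta = delta + alpha * np.sign(grad_fn(x + delta))  # ascent step
        delta = np.clip(delta, -eps, eps)              # project to l_inf ball
    return x + delta

rng = np.random.default_rng(0)
x = np.zeros(3)
grad_fn = lambda z: np.ones_like(z)   # stand-in loss gradient for the demo
x_adv = pgd2_rs(x, grad_fn, 0.1, rng)
```

By construction the perturbation stays within the $\ell_\infty$ budget `eps`, and only two gradient evaluations are needed per example.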

Interpreting Graph Neural Networks via Unrevealed Causal Learning

no code implementations29 Sep 2021 WanYu Lin, Hao Lan, Hao Wang, Baochun Li

This paper proposes a new explanation framework, called OrphicX, for generating causal explanations for any graph neural networks (GNNs) based on learned latent causal factors.

Graph Learning

Generative Causal Explanations for Graph Neural Networks

1 code implementation14 Apr 2021 WanYu Lin, Hao Lan, Baochun Li

Specifically, we formulate the problem of providing explanations for the decisions of GNNs as a causal learning task.

Graph Learning

Context-aware deep model compression for edge cloud computing

no code implementations International Conference on Distributed Computing Systems 2020 Lingdong Wang, Liyao Xiang, Jiayu Xu, Jiaju Chen, Xing Zhao, Dixi Yao, Xinbing Wang, Baochun Li

While deep neural networks (DNNs) have led to a paradigm shift, their exorbitant computational requirements have long been a roadblock to deployment at the edge, such as on wearable devices and smartphones.

Cloud Computing Image Classification +1

Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness

no code implementations15 May 2020 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).

Adversarial Robustness

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition

no code implementations14 May 2020 Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, Kui Ren

We first formulate generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations.

Action Recognition Skeleton Based Action Recognition

Shoestring: Graph-Based Semi-Supervised Learning with Severely Limited Labeled Data

1 code implementation28 Oct 2019 Wan-Yu Lin, Zhaolin Gao, Baochun Li

More specifically, we address the problem of graph-based semi-supervised learning in the presence of severely limited labeled samples, and propose a new framework, called {\em Shoestring}, that improves the learning performance through semantic transfer from these very few labeled samples to large numbers of unlabeled samples.
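The underlying graph-based semi-supervised setup, spreading information from very few labeled nodes to many unlabeled ones, can be illustrated with classic label propagation on a tiny chain graph (a generic sketch, not Shoestring's semantic-transfer mechanism):

```python
import numpy as np

# 4-node chain 0-1-2-3; only the endpoints are labeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
P = np.diag(1.0 / A.sum(1)) @ A      # row-normalized transition matrix

Y = np.zeros((4, 2))
Y[0, 0] = 1.0                        # node 0 labeled class 0
Y[3, 1] = 1.0                        # node 3 labeled class 1

F = Y.copy()
for _ in range(50):                  # propagate, clamping labeled nodes
    F = P @ F
    F[0], F[3] = Y[0], Y[3]

pred = F.argmax(1)
print(pred)  # [0 0 1 1]: the two unlabeled middle nodes pick up labels
```

With severely limited labels, plain propagation of this kind degrades; the framework above augments it with semantic transfer from the few labeled samples.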

Few-Shot Learning General Classification +4

A Unified framework for randomized smoothing based certified defenses

no code implementations25 Sep 2019 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

We answer the above two questions by first demonstrating that the Gaussian and Exponential mechanisms are the (near) optimal options for certifying $\ell_2$- and $\ell_\infty$-normed robustness, respectively.
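For the Gaussian case, the standard certified $\ell_2$ radius from randomized smoothing (the Cohen et al.-style bound, shown here as a generic illustration rather than this paper's exact framework) is $R = \sigma\,\Phi^{-1}(p_A)$ for top-class probability $p_A > 1/2$:

```python
from statistics import NormalDist

def l2_certified_radius(p_a, sigma):
    """Certified l2 radius under Gaussian smoothing: sigma * Phi^{-1}(p_a).

    Valid only when the smoothed top-class probability p_a exceeds 1/2.
    """
    assert p_a > 0.5, "no certificate when the top class is not a majority"
    return sigma * NormalDist().inv_cdf(p_a)

r = l2_certified_radius(0.9, 0.5)
print(round(r, 4))  # 0.6408
```

Larger noise `sigma` or higher confidence `p_a` both enlarge the certified radius, which is the trade-off such assessments quantify.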

Post: Device Placement with Cross-Entropy Minimization and Proximal Policy Optimization

no code implementations NeurIPS 2018 Yuanxiang Gao, Li Chen, Baochun Li

It is critical to place operations in a neural network on these devices in an optimal way, so that the training process can complete within the shortest amount of time.

Spotlight: Optimizing Device Placement for Training Deep Neural Networks

no code implementations ICML 2018 Yuanxiang Gao, Li Chen, Baochun Li

Training deep neural networks (DNNs) requires an increasing amount of computation resources, and it becomes typical to use a mixture of GPU and CPU devices.

reinforcement-learning Reinforcement Learning (RL)

Semi-Dynamic Load Balancing: Efficient Distributed Learning in Non-Dedicated Environments

no code implementations7 Jun 2018 Chen Chen, Qizhen Weng, Wei Wang, Baochun Li, Bo Li

Efficient model training requires eliminating such stragglers, yet for modern ML workloads, existing load balancing strategies are inefficient and even infeasible.
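One simple way to picture load balancing against stragglers is speed-proportional data assignment: give faster workers proportionally more batches. A toy sketch (worker names and speeds are illustrative; this is not the paper's semi-dynamic algorithm):

```python
# Measured relative speeds of three non-dedicated workers (illustrative).
speeds = {"w1": 4.0, "w2": 1.0, "w3": 3.0}
total_batches = 80

def balance(speeds, total):
    """Assign batches to each worker in proportion to its measured speed."""
    s = sum(speeds.values())
    return {w: round(total * v / s) for w, v in speeds.items()}

print(balance(speeds, total_batches))  # {'w1': 40, 'w2': 10, 'w3': 30}
```

In a non-dedicated cluster, speeds drift over time, so such an assignment must be re-derived periodically, which is where purely static balancing falls short.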
