1 code implementation • ACL 2022 • Lan Zhang, Wray Buntine, Ehsan Shareghi
Injecting desired geometric properties into text representations has attracted considerable attention.
no code implementations • 30 Aug 2024 • Shen Li, Liuyi Yao, Lan Zhang, Yaliang Li
Aligned LLMs are designed to be secure, capable of recognizing and refusing to answer malicious questions.
no code implementations • 20 Aug 2024 • Zhilong Wang, Haizhou Wang, Nanqing Luo, Lan Zhang, Xiaoyan Sun, Yebo Cao, Peng Liu
By inserting the malicious query into the carrier article, the assembled attack payload can successfully jailbreak an LLM.
no code implementations • 15 Jul 2024 • Yao Sun, Lan Zhang, Linke Guo, Jian Li, Dusit Niyato, Yuguang Fang
Semantic communication (SemCom) has emerged as a transformative paradigm, emphasizing the precise exchange of meaningful information over traditional bit-level transmission.
1 code implementation • 15 May 2024 • Kejia Zhang, Lan Zhang, Haiwei Pan, Baolong Yu
Compared to diffusion models, consistency models can reduce sampling to a single step, not only achieving similar generative quality but also significantly speeding up training and inference.
no code implementations • 6 May 2024 • Saket S. Chaturvedi, Lan Zhang, Wenbin Zhang, Pan He, Xiaoyong Yuan
To tackle this issue, we propose an innovative 2D-oriented backdoor attack against LiDAR-camera fusion methods for 3D object detection, named BadFusion, for preserving trigger effectiveness throughout the entire fusion process.
no code implementations • 29 Apr 2024 • Tonglong Wei, Youfang Lin, Yan Lin, Shengnan Guo, Lan Zhang, Huaiyu Wan
Recovering intermediate missing GPS points in a sparse trajectory, while adhering to the constraints of the road network, could offer deep insights into users' moving behaviors in intelligent transportation systems.
no code implementations • 22 Feb 2024 • Shen Li, Liuyi Yao, Jinyang Gao, Lan Zhang, Yaliang Li
To support various applications, a prevalent and efficient approach for business owners is leveraging their valuable datasets to fine-tune a pre-trained LLM through the API provided by LLM owners or cloud servers.
1 code implementation • 14 Nov 2023 • Mu Yuan, Lan Zhang, Xiang-Yang Li
Security of model parameters and user data is critical for Transformer-based services, such as ChatGPT.
1 code implementation • 2 Nov 2023 • Marco Valentino, Jordan Meadows, Lan Zhang, André Freitas
To this end, we introduce different multi-operational representation paradigms, modelling mathematical operations as explicit geometric transformations.
no code implementations • 26 Oct 2023 • Runze Cheng, Yao Sun, Dusit Niyato, Lan Zhang, Lei Zhang, Muhammad Ali Imran
Generative AI applications have been recently catering to a vast user base by creating diverse and high-quality AI-generated content (AIGC).
1 code implementation • journal 2023 • Mu Yuan, Lan Zhang, Xuanke You, Xiang-Yang Li
The resource efficiency of video analytics workloads is critical for large-scale deployments on edge nodes and cloud clusters.
no code implementations • 24 Jul 2023 • Zhilong Wang, Lan Zhang, Chen Cao, Nanqing Luo, Xinzhi Luo, Peng Liu
However, one significant difference between programming languages and natural languages is that a programmer has the flexibility to assign any names to variables, methods, and functions in the program, whereas a natural language writer does not.
Ranked #3 on Code Generation on MBPP
no code implementations • 20 Jul 2023 • Shiwei Ding, Lan Zhang, Miao Pan, Xiaoyong Yuan
Collaborative inference has been a promising solution to enable resource-constrained edge devices to perform inference using state-of-the-art deep neural networks (DNNs).
no code implementations • 10 Jul 2023 • Gaurav Bagwe, Xiaoyong Yuan, Miao Pan, Lan Zhang
Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients.
no code implementations • 13 Jun 2023 • Shaoang Li, Lan Zhang, Junhao Wang, Xiang-Yang Li
We establish the tight worst-case regret lower bound of $\Omega\left((TB)^{\alpha} K^{1-\alpha}\right)$ with $\alpha = 2^{B} / (2^{B+1}-1)$ for any algorithm with a time horizon $T$, number of arms $K$, and number of passes $B$.
no code implementations • 21 Feb 2023 • Anran Li, Hongyi Peng, Lan Zhang, Jiahui Huang, Qing Guo, Han Yu, Yang Liu
Vertical Federated Learning (VFL) enables multiple data owners, each holding a different subset of features about largely overlapping sets of data samples, to jointly train a useful global model.
no code implementations • 20 Jan 2023 • Lan Zhang, Chen Cao, Zhilong Wang, Peng Liu
Bidirectional Encoder Representations from Transformers (BERT) was proposed in natural language processing (NLP) and shows promising results.
no code implementations • ICCV 2023 • Rui Chen, Qiyu Wan, Pavana Prakash, Lan Zhang, Xu Yuan, Yanmin Gong, Xin Fu, Miao Pan
However, practical deployment of FL over mobile devices is very challenging because (i) conventional FL incurs huge training latency for mobile devices due to interleaved local computing and communications of model updates, (ii) there are heterogeneous training data across mobile devices, and (iii) mobile devices have hardware heterogeneity in terms of computing and communication capabilities.
no code implementations • 5 Dec 2022 • Hong Huang, Lan Zhang, Chaoyue Sun, Ruogu Fang, Xiaoyong Yuan, Dapeng Wu
To address these challenges, we propose FedTiny, a distributed pruning framework for federated learning that generates specialized tiny models for memory- and computing-constrained devices.
3 code implementations • 28 Sep 2022 • Mu Yuan, Lan Zhang, Fengxiang He, Xueting Tong, Miao-Hui Song, Zhengyuan Xu, Xiang-Yang Li
Previous efforts have tailored effective solutions for many applications, but left two essential questions unanswered: (1) theoretical filterability of an inference workload to guide the application of input filtering techniques, thereby avoiding the trial-and-error cost for resource-constrained mobile applications; (2) robust discriminability of feature embedding to allow input filtering to be widely effective for diverse inference tasks and input content.
3 code implementations • 28 Sep 2022 • Mu Yuan, Lan Zhang, Zimu Zheng, Yi-Nan Zhang, Xiang-Yang Li
The cost efficiency of model inference is critical to real-world machine learning (ML) applications, especially for delay-sensitive tasks and resource-limited devices.
no code implementations • 21 Jul 2022 • Madhureeta Das, Xianhao Chen, Xiaoyong Yuan, Lan Zhang
Further, the proposed FSSDA can be effectively generalized to multi-source domain adaptation scenarios.
no code implementations • 21 Jul 2022 • Gaurav Bagwe, Jian Li, Xiaoyong Yuan, Lan Zhang
Moreover, to improve data efficiency and provide better generalization performance, we train the policy model with augmented data (e.g., noisy BSM and noisy surveillance images).
1 code implementation • 25 Jun 2022 • Tongtian Zhu, Fengxiang He, Lan Zhang, Zhengyang Niu, Mingli Song, DaCheng Tao
Our theory indicates that the generalizability of D-SGD is positively correlated with the spectral gap, and can explain why consensus control in the initial training phase can ensure better generalization.
no code implementations • 9 May 2022 • Juntao Tan, Lan Zhang, Yang Liu, Anran Li, Ye Wu
To deal with this, we then propose three protection mechanisms, namely an additive noise mechanism, a multiplicative noise mechanism, and a hybrid mechanism that leverages local differential privacy and homomorphic encryption techniques, to prevent the attack and improve the robustness of vertical logistic regression.
no code implementations • 22 Apr 2022 • Saket S. Chaturvedi, Lan Zhang, Xiaoyong Yuan
Specifically, GLA integrates early-stage fusion via a local attention network and late-stage fusion via a global attention network to handle both local and global information; it automatically allocates higher weights at the late-stage fusion to the modality with better detection features, adapting to the specific weather condition.
1 code implementation • 7 Feb 2022 • Xiaoyong Yuan, Lan Zhang
We first explore the impact of neural network pruning on prediction divergence, where the pruning process disproportionately affects the pruned model's behavior for members and non-members.
1 code implementation • 14 Oct 2021 • Lan Zhang, Wray Buntine, Ehsan Shareghi
Deep generative models have been widely used in several areas of NLP, and various techniques have been proposed to augment them or address their training challenges.
no code implementations • 8 Sep 2021 • Lan Zhang, Dapeng Wu, Xiaoyong Yuan
To achieve knowledge transfer across these heterogeneous on-device models, a zero-shot distillation approach is designed without any prerequisites on private on-device data, unlike prior research that relies on a public dataset or a pre-trained data generator.
no code implementations • 30 Aug 2021 • Zhishen Nie, Ying Lin, Sp Ren, Lan Zhang
Adversarial training has become the primary method to defend against adversarial samples.
no code implementations • 18 Jun 2021 • Wensheng Xia, Ying Li, Lan Zhang, Zhonghai Wu, Xiaoyong Yuan
To address these challenges, we propose a novel vertical federated learning framework named Cascade Vertical Federated Learning (CVFL) to fully utilize all horizontally partitioned labels to train neural networks with privacy-preservation.
1 code implementation • ACL (RepL4NLP) 2021 • Lan Zhang, Victor Prokhorov, Ehsan Shareghi
To highlight the challenges of achieving representation disentanglement for the text domain in an unsupervised setting, in this paper we select a representative set of successfully applied models from the image domain.
no code implementations • 18 Mar 2021 • Xianhao Chen, Guangyu Zhu, Lan Zhang, Yuguang Fang, Linke Guo, Xinguang Chen
As a result, age-stratified modeling for COVID-19 dynamics is the key to study how to reduce hospitalizations and mortality from COVID-19.
no code implementations • 7 Jan 2021 • Jing Li, Xiang-Xiang Xue, Chao Liu, Bo Zhang, Hans-Walter Rix, Jeffrey L. Carlin, Chengqun Yang, Rene A. Mendez, Jing Zhong, Hao Tian, Lan Zhang, Yan Xu, Yaqian Wu, Gang Zhao, Ruixiang Chang
Their location in [$\alpha$/M] vs. [M/H] space is more metal poor than that of typical thin disk stars, with [$\alpha$/M] lower than the thick disk.
Astrophysics of Galaxies
no code implementations • 21 Sep 2020 • Xiaoyong Yuan, Leah Ding, Lan Zhang, Xiaolin Li, Dapeng Wu
The experimental results reveal the severity of ES Attack: i) ES Attack successfully steals the victim model without data hurdles, and ES Attack even outperforms most existing model stealing attacks using auxiliary data in terms of model accuracy; ii) most countermeasures are ineffective in defending ES Attack; iii) ES Attack facilitates further attacks relying on the stolen model.
1 code implementation • 11 Sep 2020 • Lan Zhang, Peng Liu, Yoon-Ho Choi, Ping Chen
As an increasing number of deep-learning-based malware scanners have been proposed, the existing evasion techniques, including code obfuscation and polymorphic malware, are found to be less effective.
no code implementations • 9 Aug 2020 • Jian Li, Lan Zhang, Kaiping Xue, Yuguang Fang
Specifically, to guarantee the worst-case achievable secrecy rate among multiple legitimate users, we formulate a max-min problem that can be solved by an alternating optimization method to decouple it into multiple sub-problems.
no code implementations • 22 Feb 2020 • Zhenyu Liu, Dongyu Wang, Lan Zhang, Bin Hu
Depression is a common mental disorder worldwide that causes a range of serious outcomes.
no code implementations • 20 Feb 2020 • Hanshu Cai, Yiwen Gao, Shuting Sun, Na Li, Fuze Tian, Han Xiao, Jianxiu Li, Zhengwu Yang, Xiaowei Li, Qinglin Zhao, Zhenyu Liu, Zhijun Yao, Minqiang Yang, Hong Peng, Jing Zhu, Xiaowei Zhang, Guoping Gao, Fang Zheng, Rui Li, Zhihua Guo, Rong Ma, Jing Yang, Lan Zhang, Xiping Hu, Yumin Li, Bin Hu
The EEG dataset includes not only data collected using a traditional 128-electrode elastic cap, but also data from a novel wearable 3-electrode EEG collector for pervasive applications.
no code implementations • 8 Feb 2020 • Mu Yuan, Lan Zhang, Xiang-Yang Li, Hui Xiong
With limited computing resources and stringent delay, given a data stream and a collection of applicable resource-hungry deep-learning models, we design a novel approach to adaptively schedule a subset of these models to execute on each data item, aiming to maximize the value of the model output (e.g., the number of high-confidence labels).
no code implementations • 12 Dec 2019 • Yoon-Ho Choi, Peng Liu, Zitong Shang, Haizhou Wang, Zhilong Wang, Lan Zhang, Junwei Zhou, Qingtian Zou
Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interests in the computer security community.
Cryptography and Security
no code implementations • 3 Apr 2014 • Cláudia Nalon, Lan Zhang, Clare Dixon, Ullrich Hustadt
We present a prototype tool for automated reasoning for Coalition Logic, a non-normal modal logic that can be used for reasoning about cooperative agency.