1 code implementation • 4 Mar 2024 • Haolin Deng, Chang Wang, Xin Li, Dezhang Yuan, Junlang Zhan, Tianhua Zhou, Jin Ma, Jun Gao, Ruifeng Xu
Enhancing the attribution in large language models (LLMs) is a crucial task.
2 code implementations • 26 Sep 2023 • Haihao Shen, Naveen Mellempudi, Xin He, Qun Gao, Chang Wang, Mengni Wang
Recent advances in deep learning, such as LLMs and diffusion models, have created a need for improved quantization methods that can meet the computational demands of these modern architectures while maintaining accuracy.
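This snippet only motivates the work. As a point of reference, the sketch below shows the basic round-to-nearest symmetric INT8 scheme that post-training quantization methods build on; it is a generic illustration, not the FP8 recipe studied in the paper, and the tensor shapes are arbitrary.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x is approximated by scale * q."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)   # toy weight tensor
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

Modern schemes (per-channel scales, FP8 formats, calibration on real activations) refine this basic idea to keep accuracy loss small at lower precision.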
no code implementations • 25 Nov 2022 • Rui Ai, Zhaohua Chen, Xiaotie Deng, Yuqi Pan, Chang Wang, Mingwei Yang
To the best of our knowledge, this is the first $\widetilde O(1)$ regret result for the CBwK problem, regardless of the information feedback model.
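For readers unfamiliar with the notation, the following is the standard regret convention in contextual bandits with knapsacks (CBwK); the benchmark used in the paper may be stated differently, so treat this as a generic definition. Here $\widetilde O(\cdot)$ suppresses polylogarithmic factors, so an $\widetilde O(1)$ bound is independent of the horizon $T$ up to such factors.

```latex
\[
  \mathrm{Reg}(T) \;=\; \mathrm{OPT}(T) \;-\; \sum_{t=1}^{T} r_t,
  \qquad
  \mathrm{Reg}(T) = \widetilde{O}(1)
  \;\Longleftrightarrow\;
  \mathrm{Reg}(T) \le C \cdot \operatorname{polylog}(T)
  \ \text{for some constant } C.
\]
```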
2 code implementations • 31 Oct 2022 • Shira Guskin, Moshe Wasserblat, Chang Wang, Haihao Shen
Our quantized length-adaptive MiniLM model (QuaLA-MiniLM) is trained only once, dynamically fits any inference scenario, and achieves an accuracy-efficiency trade-off superior to other efficient approaches at any computational budget on the SQuAD1.1 dataset (up to an 8.8x speedup with <1% accuracy loss).
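The quantization component of such a pipeline can be approximated with stock PyTorch, as in the hedged sketch below. This is not the QuaLA-MiniLM recipe, which additionally involves length-adaptive training and distillation, and the checkpoint name is an assumption used only for illustration.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "microsoft/MiniLM-L12-H384-uncased"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Replace Linear layers with dynamically quantized INT8 equivalents.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Who created SQuAD?", "SQuAD was created at Stanford.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = quantized(**inputs)
print(outputs.start_logits.shape, outputs.end_logits.shape)
```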
no code implementations • 11 Jul 2022 • Zhaohua Chen, Chang Wang, Qian Wang, Yuqi Pan, Zhuming Shi, Zheng Cai, Yukun Ren, Zhihua Zhu, Xiaotie Deng
Among various budget control methods, throttling has emerged as a popular choice, managing an advertiser's total expenditure by selecting only a subset of auctions to participate in.
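As a rough illustration of what throttling means in this setting (not the algorithm analyzed in the paper), a pacing-based rule might participate in an auction only while realized spend stays below the budget target for the elapsed fraction of the campaign:

```python
# Illustrative throttling rule; thresholds and pacing target are assumptions.
def should_participate(spend_so_far: float, budget: float,
                       elapsed: float, horizon: float) -> bool:
    target_spend = budget * (elapsed / horizon)   # uniform pacing target
    return spend_so_far <= target_spend

# Example: budget 100 over 1000 rounds; at round 400 with 35 spent,
# the advertiser keeps bidding (35 <= 40).
print(should_participate(35.0, 100.0, 400, 1000))  # True
```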
no code implementations • 29 May 2022 • Rui Ai, Chang Wang, Chenchen Li, Jinshan Zhang, Wenhan Huang, Xiaotie Deng
Recently the online advertising market has exhibited a gradual shift from second-price auctions to first-price auctions.
no code implementations • 20 Jan 2021 • Chao Yan, Xiaojia Xiang, Chang Wang, Zhen Lan
Developing the flocking behavior for a dynamic squad of fixed-wing UAVs is still a challenge due to kinematic complexity and environmental uncertainty.
no code implementations • 6 Sep 2020 • Chang Wang, Jian Liang, Mingkai Huang, Bing Bai, Kun Bai, Hao Li
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL) to demonstrate that it is possible to jointly learn a generalized linear model (GLM) from vertically partitioned data with only a negligible cost w.r.t.
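To make the setting concrete, the sketch below shows a generic Gaussian-mechanism step for a logistic GLM over a vertical feature split between two parties. It is not the HDP-VFL protocol; the clipping threshold, noise scale, and update rule are assumptions used purely to illustrate where DP noise can enter a vertically partitioned training step.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_a, d_b = 128, 5, 3
X_a = rng.normal(size=(n, d_a))      # party A's features
X_b = rng.normal(size=(n, d_b))      # party B's features
y = rng.integers(0, 2, size=n)
w_a, w_b = np.zeros(d_a), np.zeros(d_b)

clip, sigma, lr = 1.0, 0.5, 0.1      # assumed DP and training parameters
z = X_a @ w_a + X_b @ w_b            # joint linear predictor
p = 1.0 / (1.0 + np.exp(-z))         # logistic link
residual = np.clip(p - y, -clip, clip)          # bound sensitivity
residual += rng.normal(scale=sigma * clip, size=n)  # Gaussian noise
w_a -= lr * X_a.T @ residual / n     # each party updates its own block
w_b -= lr * X_b.T @ residual / n
```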
no code implementations • 25 Aug 2020 • Mingkai Huang, Hao Li, Bing Bai, Chang Wang, Kun Bai, Fei Wang
Federated Learning (FL) is a newly developed privacy-preserving machine learning paradigm that bridges data repositories without compromising data security and privacy.
no code implementations • 26 May 2020 • Hao Chen, Chang Wang, Jian Huang, Jianxing Gong
Moreover, by taking advantage of the condition representation and matching mechanism of XCS, the heuristic policies and the opponent model can provide guidance in situations with similar feature representations.
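The matching mechanism referred to here is the standard XCS convention in which classifier conditions use a wildcard symbol, so one condition covers a set of similar states. A minimal illustration (the condition strings are made up):

```python
# XCS-style ternary condition matching: '#' is a "don't care" wildcard,
# so states sharing the specified bits are matched by the same condition.
def matches(condition: str, state: str) -> bool:
    return all(c == '#' or c == s for c, s in zip(condition, state))

print(matches("1#0#", "1101"))  # True: only the 1st and 3rd bits are constrained
print(matches("1#0#", "1011"))  # False: the 3rd bit must be 0
```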
2 code implementations • 19 May 2018 • Haiwen Huang, Chang Wang, Bin Dong
NosAdam can be regarded as a fix to the non-convergence issue of Adam, offered as an alternative to the recent work of [Reddi et al., 2018].
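The core idea is to weight the second-moment estimate toward past gradients ("nostalgia"), e.g. via hyper-harmonic weights. The sketch below follows that structure, but the constants and weighting schedule are illustrative; the paper states the exact conditions the weights must satisfy.

```python
import numpy as np

def nosadam_step(x, grad, m, v, B, t, lr=0.01, beta1=0.9, gamma=0.1, eps=1e-8):
    b_t = t ** (-gamma)                 # "nostalgic" weight for step t (assumed schedule)
    B_new = B + b_t                     # running sum B_t = sum_{k<=t} b_k
    m = beta1 * m + (1 - beta1) * grad  # first moment, as in Adam
    v = (B / B_new) * v + (b_t / B_new) * grad ** 2  # past-weighted second moment
    x = x - lr * m / (np.sqrt(v) + eps)
    return x, m, v, B_new

x, m, v, B = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2), 0.0
for t in range(1, 101):
    grad = 2 * x                        # gradient of f(x) = ||x||^2
    x, m, v, B = nosadam_step(x, grad, m, v, B, t)
print(x)  # moves toward the minimizer at the origin
```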
no code implementations • 1 Jun 2015 • Chang Wang, Liangliang Cao, Bo-Wen Zhou
In this paper, we present a novel approach for medical synonym extraction.
no code implementations • 6 Dec 2014 • Liangliang Cao, Chang Wang
Synonym extraction is an important task in natural language processing and often used as a submodule in query expansion, question answering and other applications.