1 code implementation • 26 Feb 2024 • Bowen Zhao, Zander Brumbaugh, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith
We then develop several methods, from prompting to finetuning, to align LMs to use their most recent knowledge when answering questions, and investigate various factors in this alignment.
1 code implementation • 22 Jan 2024 • Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao
Compared to baselines, our experiments show that APT maintains up to 98% task performance when pruning RoBERTa and T5 models with 40% of parameters remaining, while keeping 86.4% of LLaMA models' performance with 70% of parameters remaining.
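The parameter budgets above (e.g., 40% of parameters remaining) can be illustrated with a generic magnitude-pruning baseline. This is only a sketch of the general idea, not APT's adaptive pruning-and-tuning method; the function name and interface are illustrative.

```python
import numpy as np

def magnitude_prune(weight, keep_ratio):
    """Zero out the smallest-magnitude entries, keeping roughly
    `keep_ratio` of the parameters (an unstructured pruning baseline)."""
    k = int(round(weight.size * keep_ratio))
    # threshold = k-th largest absolute value
    thresh = np.sort(np.abs(weight), axis=None)[::-1][k - 1]
    mask = np.abs(weight) >= thresh
    return weight * mask

weight = np.array([[1.0, 2.0], [3.0, 4.0], [-5.0, 0.5]])
pruned = magnitude_prune(weight, 0.5)  # keep 3 of 6 entries
```

The surviving entries are the three with the largest magnitudes (3, 4, and -5); the rest are zeroed.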
no code implementations • 13 Dec 2023 • Bowen Zhao, Changkai Ji, Yuejie Zhang, Wen He, Yingwen Wang, Qing Wang, Rui Feng, Xiaobo Zhang
With the Generative Pre-trained Transformer 3.5 (GPT-3.5) exhibiting remarkable reasoning and comprehension abilities in Natural Language Processing (NLP), most Question Answering (QA) research has primarily centered around general QA tasks based on GPT, neglecting the specific challenges posed by Complex Table QA.
no code implementations • 9 Oct 2023 • Ruiyang Liu, Jinxu Xiang, Bowen Zhao, Ran Zhang, Jingyi Yu, Changxi Zheng
To tackle the problem of efficiently editing neural implicit fields, we introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field designated for each tetrahedron within the explicit mesh.
no code implementations • 12 Apr 2023 • Feng-Feng Wei, Wei-neng Chen, Xiao-Qi Guo, Bowen Zhao, Sang-Woon Jeon, Jun Zhang
Inspired by this, this paper introduces crowdsourcing into evolutionary computation (EC) and proposes a crowdsourcing-based evolutionary computation (CEC) paradigm for distributed optimization.
no code implementations • 22 Mar 2023 • Bowen Zhao, Wei-neng Chen, Xiaoguo Li, Ximeng Liu, Qingqi Pei, Jun Zhang
To this end, in this paper, we discuss three typical optimization paradigms (i.e., centralized optimization, distributed optimization, and data-driven optimization) to characterize optimization modes of evolutionary computation and propose BOOM to sort out privacy concerns in evolutionary computation.
no code implementations • 18 Mar 2023 • Jiexin Ding, Bowen Zhao, Yuqi Huang, Yuntao Wang, Yuanchun Shi
Automatic unknown word detection techniques can enable new applications for assisting English as a Second Language (ESL) learners, thus improving their reading experiences.
no code implementations • 23 Feb 2023 • Xiaoguo Li, Bowen Zhao, Guomin Yang, Tao Xiang, Jian Weng, Robert H. Deng
To the best of our knowledge, this article is the first survey to review TEE-based secure computation protocols and the comprehensive comparison can serve as a guideline for selecting suitable protocols for deployment in practice.
no code implementations • 22 Feb 2023 • Bowen Zhao, Chen Chen, Qian-Wei Wang, Anfeng He, Shu-Tao Xia
For challenge B, we point out that the gradient contribution statistics can be a reliable indicator to inspect whether the optimization is dominated by bias-aligned samples.
no code implementations • 30 Jan 2023 • Bowen Zhao, Chen Chen, Shu-Tao Xia
However, we find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
1 code implementation • 21 Oct 2022 • Bowen Zhao, Jiuding Sun, Bin Xu, Xingyu Lu, Yuchen Li, Jifan Yu, Minghui Liu, Tingjian Zhang, Qiuyang Chen, Hanming Li, Lei Hou, Juanzi Li
To tackle these issues, we propose EDUKG, a heterogeneous sustainable K-12 Educational Knowledge Graph.
no code implementations • 20 Oct 2022 • Qian-Wei Wang, Bowen Zhao, Mingyan Zhu, Tianxiang Li, Zimo Liu, Shu-Tao Xia
Partial label learning (PLL) learns from training examples each associated with multiple candidate labels, among which only one is valid.
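The PLL setup described above can be sketched concretely: each training example carries a set of candidate labels, and a common average-based baseline initializes uniform weights over the candidates before disambiguating them during training. The names below are illustrative, not the paper's API.

```python
import numpy as np

num_classes = 4
# Each example's candidate-label set; exactly one label is the
# (unknown) ground truth. A singleton set is fully supervised.
candidates = [
    {0, 2},   # true label is either 0 or 2
    {1, 3},
    {2},      # fully supervised example
]

def uniform_label_weights(candidate_sets, num_classes):
    """Initialize per-example label weights uniformly over candidates,
    as in average-based PLL disambiguation."""
    W = np.zeros((len(candidate_sets), num_classes))
    for i, s in enumerate(candidate_sets):
        for c in s:
            W[i, c] = 1.0 / len(s)
    return W

W = uniform_label_weights(candidates, num_classes)
```

Each row of `W` is a distribution over labels, concentrated on that example's candidates; training then sharpens these weights toward the true label.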
1 code implementation • 16 Oct 2022 • Yuyuan Zeng, Bowen Zhao, Shanzhao Qiu, Tao Dai, Shu-Tao Xia
Most existing methods mainly focus on extracting global features from tampered images, while neglecting the relationships of local features between tampered and authentic regions within a single tampered image.
no code implementations • 14 Jul 2022 • Bowen Zhao, Huanlai Xing, Xinhan Wang, Fuhong Song, Zhiwen Xiao
Attention-based models have been widely used in many areas, such as computer vision and natural language processing.
no code implementations • 27 May 2022 • Bowen Zhao, Wei-neng Chen, Feng-Feng Wei, Ximeng Liu, Qingqi Pei, Jun Zhang
Specifically, PEGA enables users to outsource COPs to a cloud server that holds a competitive GA and approximates the optimal solution in a privacy-preserving manner.
no code implementations • 20 May 2022 • Xinhan Wang, Huanlai Xing, Fuhong Song, Shouxi Luo, Penglin Dai, Bowen Zhao
Mobile devices (MDs) can offload computation-intensive applications, which can be represented by SFCs, fully or partially to MEC servers for remote execution.
no code implementations • 24 Feb 2022 • Fuhong Song, Huanlai Xing, Xinhan Wang, Shouxi Luo, Penglin Dai, Zhiwen Xiao, Bowen Zhao
This paper adapts evolutionary multi-objective RL (EMORL), a multi-policy multi-objective RL algorithm, to the TCTO problem.
no code implementations • 30 Dec 2021 • Huanlai Xing, Zhiwen Xiao, Rong Qu, Zonghai Zhu, Bowen Zhao
For each connected user, the weights of its student model's hidden layers are periodically uploaded to the EFDLS server.
1 code implementation • 25 Nov 2021 • Bowen Zhao, Chen Chen, Qian-Wei Wang, Anfeng He, Shu-Tao Xia
For challenge B, we point out that the gradient contribution statistics can be a reliable indicator to inspect whether the optimization is dominated by bias-aligned samples.
no code implementations • 7 Jun 2021 • Bowen Zhao, Chen Chen, Qi Ju, Shutao Xia
Training on class-imbalanced data usually results in biased models that tend to classify samples into the majority classes, a common and notorious problem.
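A standard baseline for the imbalance problem described above is to reweight the loss by inverse class frequency, so minority-class errors count more. This sketch shows only that generic baseline, not the paper's method; the function name is illustrative.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-class loss weights inversely proportional to class frequency,
    normalized to mean 1 (a common baseline for imbalanced training)."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    w = counts.sum() / np.maximum(counts, 1.0)  # rare class -> large weight
    return w / w.mean()

labels = np.array([0] * 90 + [1] * 10)   # 9:1 class imbalance
w = inverse_frequency_weights(labels, 2)
# the minority class receives a 9x larger weight than the majority class
```

These weights would typically be passed to a weighted cross-entropy loss so gradient contributions are balanced across classes.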
no code implementations • 20 Feb 2021 • Bowen Zhao, Ximeng Liu, Wei-neng Chen
Specifically, in order to protect privacy, participants locally process sensing data via federated learning and only upload encrypted training models.
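The idea of uploading only protected model updates, as above, is often realized with secure aggregation. The abstract does not specify the scheme, so the following is a generic pairwise additive-masking sketch (not this paper's protocol): each pair of clients shares a random mask that one adds and the other subtracts, so the server recovers only the sum of updates, never an individual one.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_updates(updates):
    """Toy pairwise additive masking: client i adds mask m[i][j] for j > i
    and subtracts m[j][i] for j < i, so all masks cancel in the sum."""
    n = len(updates)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        m = u.copy()
        for j in range(n):
            if i < j:
                m += masks[(i, j)]
            elif j < i:
                m -= masks[(j, i)]
        masked.append(m)
    return masked

updates = [np.ones(3) * k for k in range(1, 4)]  # each client's local update
server_sum = sum(mask_updates(updates))          # server only sees masked values
```

Because the pairwise masks cancel, `server_sum` equals the sum of the raw updates even though no single raw update was revealed.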
no code implementations • 28 Dec 2020 • Bowen Zhao, Chen Chen, Xi Xiao, Shutao Xia
Object detectors are typically learned on fully-annotated training data with fixed predefined categories.
1 code implementation • CVPR 2020 • Bowen Zhao, Xi Xiao, Guojun Gan, Bin Zhang, Shu-Tao Xia
In this paper, we demonstrate it can indeed help the model to output more discriminative results within old classes.
Ranked #2 on Incremental Learning on ImageNet100 - 10 steps (# M Params metric)
no code implementations • 13 Apr 2019 • Bowen Zhao, Xi Xiao, Wanpeng Zhang, Bin Zhang, Shu-Tao Xia
There is a probabilistic version of PCA, known as Probabilistic PCA (PPCA).
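PPCA, mentioned above, models each observation as x = Wz + mu + eps with z ~ N(0, I) and isotropic noise eps ~ N(0, sigma^2 I), and admits a closed-form maximum-likelihood fit via the eigendecomposition of the sample covariance (Tipping and Bishop). A minimal sketch of that standard solution (not this paper's variant):

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form ML fit of PPCA with q latent dimensions:
    x = W z + mu + eps,  z ~ N(0, I),  eps ~ N(0, sigma^2 I)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    S = Xc.T @ Xc / len(X)                  # sample covariance
    vals, vecs = np.linalg.eigh(S)          # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # sort descending
    sigma2 = vals[q:].mean()                # noise = mean discarded variance
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, mu, sigma2
```

On data whose variance is dominated by one direction, the fit recovers that direction in `W` and the residual variance in `sigma2`.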