no code implementations • 18 Feb 2025 • Bingning Wang, Haizhou Zhao, Huozhi Zhou, Liang Song, Mingyu Xu, Wei Cheng, Xiangrong Zeng, Yupeng Zhang, Yuqi Huo, Zecheng Wang, Zhengyun Zhao, Da Pan, Fei Kou, Fei Li, Fuzhong Chen, Guosheng Dong, Han Liu, Hongda Zhang, Jin He, Jinjie Yang, Kangxi Wu, Kegeng Wu, Lei Su, Linlin Niu, Linzhuang Sun, Mang Wang, Pengcheng Fan, Qianli Shen, Rihui Xin, Shunya Dang, Songchi Zhou, WeiPeng Chen, Wenjing Luo, Xin Chen, Xin Men, Xionghai Lin, Xuezhen Dong, Yan Zhang, Yifei Duan, Yuyan Zhou, Zhi Ma, Zhiying Wu
The current generation of large language models (LLMs) is typically designed for broad, general-purpose applications, while domain-specific LLMs, especially in vertical fields like medicine, remain relatively scarce.
no code implementations • 30 Oct 2024 • Shuzhen Li, Yuxin Chen, Xuesong Chen, Ruiyang Gao, Yupeng Zhang, Chao Yu, Yunfei Li, Ziyi Ye, Weijun Huang, Hongliang Yi, Yue Leng, Yi Wu
However, reliable BCG-based sleep staging is challenging due to the limited sleep monitoring data available for BCG.
1 code implementation • 14 Aug 2024 • Yongcheng Li, Lingcong Cai, Ying Lu, Cheng Lin, Yupeng Zhang, Jingyan Jiang, Genan Dai, BoWen Zhang, Jingzhou Cao, Xiangzhong Zhang, Xiaomao Fan
To address this issue, we propose a novel framework of domain-invariant representation learning (DoRL) via segment anything model (SAM) for blood cell classification.
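As a rough illustration of the general idea of domain-invariant representation learning (not the paper's DoRL/SAM pipeline), a DANN-style gradient reversal layer can push a feature extractor toward features that fool a domain discriminator while still supporting blood-cell classification; all class names and dimensions below are hypothetical.

```python
# Generic sketch of domain-invariant feature learning via gradient reversal
# (DANN-style). This is NOT the authors' DoRL/SAM method, only an illustration.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate gradients so the backbone learns domain-invariant features.
        return -ctx.lambd * grad_output, None

class DomainInvariantClassifier(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int, n_domains: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, n_classes)   # blood-cell classes
        self.dom_head = nn.Linear(256, n_domains)   # which dataset/domain

    def forward(self, x, lambd: float = 1.0):
        feats = self.backbone(x)
        class_logits = self.cls_head(feats)
        domain_logits = self.dom_head(GradReverse.apply(feats, lambd))
        return class_logits, domain_logits
```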
1 code implementation • 13 Aug 2024 • Yongcheng Li, Lingcong Cai, Ying Lu, Yupeng Zhang, Jingyan Jiang, Genan Dai, BoWen Zhang, Jingzhou Cao, Xiangzhong Zhang, Xiaomao Fan
Accurate classification of blood cells plays a vital role in hematological analysis as it aids physicians in diagnosing various medical conditions.
1 code implementation • 7 Aug 2024 • Shihao Tu, Yupeng Zhang, Jing Zhang, Zhendong Fu, Yin Zhang, Yang Yang
PowerPM consists of a temporal encoder and a hierarchical encoder.
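A minimal sketch of what a "temporal encoder plus hierarchical encoder" stack could look like, assuming windowed time-series input; this is not the authors' implementation, and names such as `TemporalEncoder`, `HierarchicalEncoder`, and `PowerPMSketch`, as well as all dimensions, are illustrative.

```python
# Hypothetical sketch: a temporal encoder embeds each time window, and a
# hierarchical encoder contextualizes the window embeddings over a longer horizon.
import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    """Encodes one window of time steps into a single vector."""
    def __init__(self, d_in: int, d_model: int = 128, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):                       # x: (batch, window_len, d_in)
        h = self.encoder(self.proj(x))          # (batch, window_len, d_model)
        return h.mean(dim=1)                    # pool over time

class HierarchicalEncoder(nn.Module):
    """Aggregates window-level embeddings across windows."""
    def __init__(self, d_model: int = 128, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, window_embs):             # (batch, n_windows, d_model)
        return self.encoder(window_embs)

class PowerPMSketch(nn.Module):
    def __init__(self, d_in: int):
        super().__init__()
        self.temporal = TemporalEncoder(d_in)
        self.hierarchical = HierarchicalEncoder()

    def forward(self, x):                       # x: (batch, n_windows, window_len, d_in)
        b, n, t, d = x.shape
        window_embs = self.temporal(x.reshape(b * n, t, d)).reshape(b, n, -1)
        return self.hierarchical(window_embs)   # contextualized window representations
```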
no code implementations • 18 Jun 2024 • Jie Chen, Yupeng Zhang, Bingning Wang, Wayne Xin Zhao, Ji-Rong Wen, WeiPeng Chen
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
no code implementations • 17 Jun 2024 • Han Liu, Yupeng Zhang, Bingning Wang, WeiPeng Chen, Xiaolin Hu
Deep Neural Networks (DNNs) excel in various domains but face challenges in providing accurate uncertainty estimates, which are crucial for high-stakes applications.
2 code implementations • 19 Sep 2023 • Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, WeiPeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, Zhiying Wu
Large language models (LLMs) have demonstrated remarkable performance on a variety of natural language tasks based on just a few examples of natural language instructions, reducing the need for extensive feature engineering.
1 code implementation • 28 Aug 2023 • Guanting Dong, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, Weiran Xu
Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information such as entities and relations in KBs.
Ranked #3 on Knowledge Base Question Answering on WebQuestionsSP
no code implementations • 27 Feb 2023 • Baturalp Buyukates, Chaoyang He, Shanshan Han, Zhiyong Fang, Yupeng Zhang, Jieyi Long, Ali Farahanchi, Salman Avestimehr
Our goal is to design a data marketplace for such decentralized collaborative/federated learning applications that simultaneously provides i) proof-of-contribution based reward allocation so that the trainers are compensated based on their contributions to the trained model; ii) privacy-preserving decentralized model training by avoiding any data movement from data owners; iii) robustness against malicious parties (e.g., trainers aiming to poison the model); iv) verifiability in the sense that the integrity, i.e., correctness, of all computations in the data market protocol, including contribution assessment and outlier detection, is verifiable through zero-knowledge proofs; and v) efficient and universal design.
no code implementations • 10 Dec 2022 • Weilin Fu, Zhuoran Li, Yupeng Zhang, Xingyou Zhou
Every financial crisis has caused a dual shock to the global economy.
no code implementations • 22 Oct 2022 • Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Wu, Zhoujun Li
A wide range of NLP tasks benefit from the fine-tuning of pretrained language models (PLMs).
no code implementations • 29 Jul 2020 • Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen yang, Gerard de Melo
The resulting model then serves as a teacher to induce labels for unlabeled target language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language.
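A minimal sketch of the teacher-induced pseudo-labeling step described above, assuming a classification setting; the function name, the confidence threshold, and the loader interface are illustrative, and this is not the authors' code.

```python
# Hypothetical sketch: a trained teacher predicts on unlabeled target-language
# samples, and only confident predictions are kept as pseudo-labels for the
# next round of (adversarial) training.
import torch
import torch.nn.functional as F

@torch.no_grad()
def induce_pseudo_labels(teacher, unlabeled_loader, threshold: float = 0.9):
    teacher.eval()
    kept_inputs, kept_labels = [], []
    for batch in unlabeled_loader:              # batches of target-language inputs
        probs = F.softmax(teacher(batch), dim=-1)
        conf, labels = probs.max(dim=-1)
        mask = conf >= threshold                # keep only confident predictions
        kept_inputs.append(batch[mask])
        kept_labels.append(labels[mask])
    return torch.cat(kept_inputs), torch.cat(kept_labels)
```

The resulting pseudo-labeled pairs are then mixed into subsequent training, gradually adapting the student model to the target language.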
no code implementations • 6 Apr 2018 • Xi Chen, Zonghang Li, Yupeng Zhang, Ruiming Long, Hongfang Yu, Xiaojiang Du, Mohsen Guizani
With the ever-growing diversity of devices and applications that will be connected to 5G networks, flexible and agile service orchestration with acknowledged QoE that satisfies end-users' functional and QoS requirements is necessary.