1 code implementation • 14 Aug 2024 • Yongcheng Li, Lingcong Cai, Ying Lu, Cheng Lin, Yupeng Zhang, Jingyan Jiang, Genan Dai, BoWen Zhang, Jingzhou Cao, Xiangzhong Zhang, Xiaomao Fan
To address this issue, we propose a novel domain-invariant representation learning (DoRL) framework via the Segment Anything Model (SAM) for blood cell classification.
1 code implementation • 13 Aug 2024 • Yongcheng Li, Lingcong Cai, Ying Lu, Yupeng Zhang, Jingyan Jiang, Genan Dai, BoWen Zhang, Jingzhou Cao, Xiangzhong Zhang, Xiaomao Fan
Accurate classification of blood cells plays a vital role in hematological analysis as it aids physicians in diagnosing various medical conditions.
no code implementations • 24 Mar 2024 • Liangrui Pan, Zhenyu Zhao, Ying Lu, Kewei Tang, Liyong Fu, Qingchun Liang, Shaoliang Peng
Influenced by ChatGPT, artificial intelligence (AI) has witnessed a global upsurge in large-model research and development.
no code implementations • 5 Sep 2023 • Yu Huang, Jingchuan Guo, William T Donahoo, Zhengkang Fan, Ying Lu, Wei-Han Chen, Huilin Tang, Lori Bilello, Elizabeth A Shenkman, Jiang Bian
Background: Racial and ethnic minority groups and individuals facing social disadvantages, which often stem from their social determinants of health (SDoH), bear a disproportionate burden of type 2 diabetes (T2D) and its complications.
no code implementations • 21 Jul 2023 • Ying Lu, Pei Shi, Xiao-Han Wang, Jie Hu, Shi-Ju Ran
Entanglement propagation provides a key routine to understand quantum many-body dynamics in and out of equilibrium.
no code implementations • 29 Mar 2022 • Ying Lu, Peng-Fei Zhou, Shao-Ming Fei, Shi-Ju Ran
The quantum instruction set (QIS) is defined as the quantum gates that are physically realizable by controlling the qubits in quantum hardware.
no code implementations • 14 Sep 2021 • Lili Wang, Chenghan Huang, Weicheng Ma, Ying Lu, Soroush Vosoughi
In this paper, we present a novel and flexible framework using stress majorization to transform the high-dimensional role identities in networks directly (without approximation or indirect modeling) into a low-dimensional embedding space.
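As a rough illustration of the stress-majorization idea this entry mentions (a minimal metric-MDS/SMACOF sketch in NumPy, not the paper's role-embedding pipeline; the function name and defaults are assumptions):

```python
import numpy as np

def smacof(delta, n_components=2, n_iter=100, seed=0):
    """Minimal stress majorization (SMACOF) for metric MDS.

    delta: (n, n) symmetric target dissimilarity matrix.
    Returns an embedding X that locally minimizes the stress
    sum_{i<j} (delta_ij - ||x_i - x_j||)^2 via Guttman transforms.
    """
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    X = rng.normal(size=(n, n_components))
    for _ in range(n_iter):
        # Pairwise distances of the current embedding.
        diff = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        # Guttman transform: X <- (1/n) B(X) X.
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(d > 0, delta / d, 0.0)
        B = -ratio
        B[np.arange(n), np.arange(n)] = ratio.sum(axis=1)
        X = B @ X / n
    return X
```

Each Guttman transform is guaranteed not to increase the stress, which is what makes majorization attractive for embedding fixed dissimilarities (here, they would encode role similarity) without indirect modeling.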
no code implementations • 3 Jun 2021 • Ying Lu, Yue-Min Li, Peng-Fei Zhou, Shi-Ju Ran
State preparation is of fundamental importance in quantum physics. It can be realized either by constructing a quantum circuit, i.e., a unitary that transforms the initial state into the target, or by implementing a quantum control protocol that evolves the system to the target state under a designed Hamiltonian.
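The circuit-as-unitary view can be illustrated with a toy single-qubit example (a sketch only, not the paper's protocol): a rotation about the Y axis maps |0⟩ to an equal superposition.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about Y, a standard unitary building block."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

# Target state: (|0> + |1>) / sqrt(2); Ry(pi/2) applied to |0> prepares it.
target = np.array([1.0, 1.0]) / np.sqrt(2)
prepared = ry(np.pi / 2) @ np.array([1.0, 0.0])
```

For multi-qubit targets the same idea applies, except the unitary is composed from many such elementary gates and finding the composition is the hard part the paper addresses.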
no code implementations • 3 Nov 2020 • Lili Wang, Ying Lu, Chenghan Huang, Soroush Vosoughi
However, work on network embedding in hyperbolic space has focused on microscopic node embedding.
no code implementations • 3 Apr 2019 • Zhengping Che, Guangyu Li, Tracy Li, Bo Jiang, Xuefeng Shi, Xinsheng Zhang, Ying Lu, Guobin Wu, Yan Liu, Jieping Ye
Driving datasets accelerate the development of intelligent driving and related computer vision technologies, while substantial and detailed annotations serve as fuel to boost the efficacy of such datasets for improving learning-based models.
no code implementations • 21 Feb 2018 • Lingkun Luo, Liming Chen, Ying Lu, Shiqiang Hu
Domain adaptation (DA) is a form of transfer learning that aims to learn an effective predictor on target data from source data despite the distribution mismatch between the source and target domains.
no code implementations • 17 Jan 2018 • Ying Lu, Liming Chen, Alexandre Saidi, Xianfeng GU
Correctly estimating the discrepancy between two data distributions has always been an important task in machine learning.
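One standard sample-based discrepancy estimate is the maximum mean discrepancy (MMD) with an RBF kernel; this is a generic sketch for context, not necessarily the estimator this paper studies:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y
    under an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    # MMD^2 = E[k(X,X')] + E[k(Y,Y')] - 2 E[k(X,Y)]
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Samples drawn from the same distribution yield a value near zero, while samples from shifted distributions yield a clearly larger value, which is what makes such estimates usable as training losses in domain adaptation.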
no code implementations • 28 Dec 2017 • Lingkun Luo, Liming Chen, Shiqiang Hu, Ying Lu, Xiaofang Wang
Domain adaptation (DA) aims to generalize a learning model across training and testing data despite the mismatch of their data distributions.
no code implementations • 9 Sep 2017 • Ying Lu, Liming Chen, Alexandre Saidi
By adding an Optimal Transport loss (OT loss) between source and target classifier predictions as a constraint on the source classifier, the proposed Joint Transfer Learning Network (JTLN) can effectively learn useful knowledge for target classification from source data.
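An OT loss between two discrete distributions can be approximated with entropic-regularized Sinkhorn iterations; the following is a minimal standalone sketch (function name and parameters are assumptions, not JTLN's implementation):

```python
import numpy as np

def sinkhorn_ot(a, b, C, eps=0.1, n_iter=200):
    """Entropic-regularized OT between histograms a and b with cost matrix C.

    Returns the transport cost <P, C> and the transport plan P,
    computed via Sinkhorn's fixed-point iterations.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # scale columns to match marginal b
        u = a / (K @ v)           # scale rows to match marginal a
    P = u[:, None] * K * v[None, :]
    return (P * C).sum(), P
```

In a setting like the one described here, `a` and `b` would be source and target classifier predictions and `C` a cost between classes, so minimizing the returned cost pushes the two prediction distributions together.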