3 code implementations • 10 Mar 2024 • Gang Hu, Ke Qin, Chenhan Yuan, Min Peng, Alejandro Lopez-Lira, Benyou Wang, Sophia Ananiadou, Wanlong Yu, Jimin Huang, Qianqian Xie
While the progress of Large Language Models (LLMs) has notably propelled financial analysis, their application has largely been confined to monolingual settings, leaving the potential of bilingual Chinese-English capability untapped.
no code implementations • 26 Jan 2024 • Tao He, Tongtong Wu, Dongyang Zhang, Guiduo Duan, Ke Qin, Yuan-Fang Li
In addition, extensive experiments on the two mainstream benchmark datasets, VG and Open-Image(v6), show the superiority of our proposed model over a number of competitive SGG models under both continual-learning and conventional settings.
1 code implementation • 31 May 2023 • Yi Luo, Guangchun Luo, Ke Qin, Aiguo Chen
Node classifiers deployed in industry are required to jointly reduce prediction errors, training resource consumption, and inference latency.
no code implementations • 26 Dec 2022 • Rufai Yusuf Zakari, Jim Wilson Owusu, Hailin Wang, Ke Qin, Zaharaddeen Karami Lawal, Yuezhou Dong
Artificial Intelligence (AI) and its applications have sparked extraordinary interest in recent years.
1 code implementation • 19 May 2021 • Haipeng Gao, Kun Yang, Yuxue Yang, Rufai Yusuf Zakari, Jim Wilson Owusu, Ke Qin
Knowledge graph embedding has been an active research topic for knowledge base completion (KGC), with progressive improvements from the initial TransE, TransH, and RotatE to the current state-of-the-art QuatE.
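To illustrate the translational-embedding line of work this abstract refers to, here is a minimal sketch of the TransE scoring idea (not the paper's QuatE-based model); the toy vectors are illustrative values, not trained embeddings:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE models a relation as a translation: h + r ≈ t.
    A lower L2 distance means the triple (h, r, t) is more plausible."""
    return np.linalg.norm(h + r - t)

# Toy 2-d embeddings (hypothetical, for illustration only)
h = np.array([0.2, 0.5])   # head entity
r = np.array([0.3, -0.1])  # relation as a translation vector
t = np.array([0.5, 0.4])   # tail entity

print(transe_score(h, r, t))  # prints 0.0: h + r lands exactly on t
```

Later models in the progression (TransH, RotatE, QuatE) keep this score-the-triple framing but replace the plain translation with projections, rotations in the complex plane, and quaternion products, respectively.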
Ranked #2 on Link Prediction on WN18
no code implementations • 6 Jan 2021 • Hailin Wang, Ke Qin, Rufai Yusuf Zakari, Guoming Lu, Jin Yin
Semantic relations between entities are one form of knowledge representation.
1 code implementation • 23 Sep 2018 • Jean-Paul Ainam, Ke Qin, Guisong Liu
We apply a max-filter operation to non-overlapping sub-regions of the high-level feature representation before it is element-wise multiplied with the output of the second branch.
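The max-filter-then-multiply step can be sketched as follows; this is a minimal NumPy illustration of non-overlapping max pooling followed by element-wise fusion, with hypothetical toy maps rather than the paper's actual branch outputs:

```python
import numpy as np

def max_filter(x, k):
    """Take the max over non-overlapping k x k sub-regions, then
    upsample back to the input size so the result can be
    element-wise multiplied with another feature map."""
    H, W = x.shape  # assumes H and W are divisible by k
    pooled = x.reshape(H // k, k, W // k, k).max(axis=(1, 3))
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)

feat = np.arange(16, dtype=float).reshape(4, 4)  # branch-1 features (toy)
gate = np.full((4, 4), 0.5)                      # branch-2 output (toy)
fused = max_filter(feat, 2) * gate               # element-wise fusion
```

Each 2x2 sub-region of `feat` is replaced by its maximum before the multiplication, so the fusion emphasizes the strongest local activation in every region.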
1 code implementation • 13 Sep 2018 • Jean-Paul Ainam, Ke Qin, Guisong Liu, Guangchun Luo
Finally, we assign a non-uniform label distribution to the generated samples and define a regularized loss function for training.
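A cross-entropy against a non-uniform soft label distribution, in the spirit of label smoothing, is one way to realize the idea described above; the sketch below is an illustrative stand-in for the paper's regularized loss, and the target distribution shown is hypothetical:

```python
import numpy as np

def soft_label_loss(logits, target_dist):
    """Cross-entropy against a non-uniform soft label distribution.
    target_dist must sum to 1; it replaces the usual one-hot target."""
    logp = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return -np.sum(target_dist * logp)

logits = np.array([2.0, 0.5, 0.1])
target = np.array([0.8, 0.15, 0.05])  # non-uniform soft labels (hypothetical)
loss = soft_label_loss(logits, target)
```

Spreading some target mass over the non-dominant classes regularizes training on generated samples, which is the motivation such soft-label losses are typically used for.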