no code implementations • 21 Sep 2024 • Liujianfu Wang, Yuyang Du, Jingqi Lin, Kexin Chen, Soung Chang Liew
Large language models (LLMs) are being widely researched across various disciplines, with significant recent efforts focusing on adapting LLMs to understand how communication networks operate.
no code implementations • 18 Aug 2024 • Kexin Chen, Yi Liu, Dongxia Wang, Jiaying Chen, Wenhai Wang
Additionally, we explore the relationships among models, attack strategies, and types of harmful content, as well as the correlations between the evaluation metrics, which supports the validity of our multifaceted evaluation framework.
no code implementations • 22 Jul 2024 • Feifan Zhang, Yuyang Du, Kexin Chen, Yulin Shao, Soung Chang Liew
Semantic communication is a promising technology for next-generation wireless networks.
no code implementations • 18 Jun 2024 • Kexin Chen, Kyunghyun Park, Hoi Ying Wong
In a continuous-time economy, this study formulates the Epstein-Zin (EZ) preference for the discounted dividend (or cash payouts) of stockholders as an EZ singular control utility.
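For orientation, the standard continuous-time Epstein-Zin (Duffie-Epstein) recursive utility that such formulations build on can be written as below; the notation (δ for the subjective discount rate, γ for relative risk aversion, ψ for the elasticity of intertemporal substitution) is the conventional one and is our assumption, since the paper replaces the consumption stream with a cumulative dividend (singular control) process.

```latex
V_t = \mathbb{E}_t\!\left[\int_t^T f(c_s, V_s)\,ds\right],
\qquad
f(c,v) = \frac{\delta(1-\gamma)\,v}{1-\frac{1}{\psi}}
\left[\left(\frac{c}{\left((1-\gamma)v\right)^{\frac{1}{1-\gamma}}}\right)^{1-\frac{1}{\psi}} - 1\right].
```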
1 code implementation • 26 Feb 2024 • Yuyang Du, Kexin Chen, Yue Zhan, Chang Han Low, Tao You, Mobarakol Islam, Ziyu Guo, Yueming Jin, Guangyong Chen, Pheng-Ann Heng
We further design an adaptive weight assignment approach that balances the generalization ability of the LLM and the domain expertise of the old CL model.
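The precise weighting rule is specific to the paper; as a hedged illustration of the general idea of balancing two teachers, the sketch below blends the LLM's and the old continual-learning (CL) model's predictive distributions with a per-sample weight derived from prediction entropy, which is our assumption rather than the authors' method.

```python
import torch
import torch.nn.functional as F

def adaptive_blend(llm_logits: torch.Tensor, cl_logits: torch.Tensor) -> torch.Tensor:
    """Blend two teachers' predictions with a per-sample adaptive weight.

    Illustrative only: the weight is derived from prediction entropy
    (lower entropy -> more trust), an assumption, not the paper's rule.
    """
    llm_probs = F.softmax(llm_logits, dim=-1)
    cl_probs = F.softmax(cl_logits, dim=-1)

    eps = 1e-8
    h_llm = -(llm_probs * (llm_probs + eps).log()).sum(dim=-1)   # entropy of LLM prediction
    h_cl = -(cl_probs * (cl_probs + eps).log()).sum(dim=-1)      # entropy of CL-model prediction

    # Trust the more confident (lower-entropy) teacher more heavily.
    w_llm = (h_cl / (h_llm + h_cl + eps)).unsqueeze(-1)          # per-sample weight in [0, 1]
    return w_llm * llm_probs + (1.0 - w_llm) * cl_probs
```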
no code implementations • 20 Feb 2024 • Kexin Chen, Hanqun Cao, Junyou Li, Yuyang Du, Menghao Guo, Xin Zeng, Lanqing Li, Jiezhong Qiu, Pheng Ann Heng, Guangyong Chen
The proposed approach marks a significant advancement in automating chemical literature extraction and demonstrates the potential for AI to revolutionize data management and utilization in chemistry.
no code implementations • 24 Dec 2023 • Kexin Chen, Jinping Guan, Ravi Seshadri, Varun Pattabhiraman, Youssef Medhat Aboutaleb, Ali Shamshiripour, Chen Liang, Xiaochun Zhang, Moshe Ben-Akiva
The utility includes both the benefit of the inventory gained and the costs of time, monetary expense, and maintaining safety stock.
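As a hedged illustration only (the linear form and coefficient names below are our notation, not necessarily the paper's specification), a linear-in-parameters version of such a utility for individual n choosing option i could read:

```latex
U_{in} = \beta_1\,\mathrm{InventoryGain}_{in}
       - \beta_2\,\mathrm{Time}_{in}
       - \beta_3\,\mathrm{MonetaryCost}_{in}
       - \beta_4\,\mathrm{SafetyStockDeviation}_{in}
       + \varepsilon_{in},
```

where the β's are taste parameters and ε is the usual random error term of a discrete choice model.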
no code implementations • 16 Nov 2023 • Kexin Chen, Junyou Li, Kunyi Wang, Yuyang Du, Jiahui Yu, Jiamin Lu, Lanqing Li, Jiezhong Qiu, Jianzhang Pan, Yi Huang, Qun Fang, Pheng Ann Heng, Guangyong Chen
Recent AI research points to a promising future for automated chemical reactions within the chemistry community.
5 code implementations • 1 Sep 2023 • Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, Pheng-Ann Heng
We introduce Point-Bind, a 3D multi-modality model aligning point clouds with 2D images, language, audio, and video (a minimal alignment sketch follows below).
Ranked #5 on 3D Question Answering (3D-QA) on 3D MM-Vet
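A minimal, hedged sketch of this kind of alignment, assuming a trainable point-cloud encoder pulled toward a frozen multi-modal embedding space with an InfoNCE-style objective (the function names and loss below are illustrative, not the released implementation):

```python
import torch
import torch.nn.functional as F

def alignment_loss(point_emb: torch.Tensor,
                   anchor_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pulling 3D embeddings toward paired multi-modal anchors.

    point_emb:  (B, D) embeddings from a trainable point-cloud encoder.
    anchor_emb: (B, D) embeddings from a frozen multi-modal encoder (image/text/audio).
    """
    point_emb = F.normalize(point_emb, dim=-1)
    anchor_emb = F.normalize(anchor_emb, dim=-1)

    logits = point_emb @ anchor_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(point_emb.size(0), device=point_emb.device)

    # Symmetric cross-entropy; paired items sit on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```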
no code implementations • 14 Jul 2023 • Yuyang Du, Hongyu Deng, Soung Chang Liew, Kexin Chen, Yulin Shao, He Chen
We begin by exploring LLM-assisted code refactoring, reuse, and validation, using an open-source software-defined radio (SDR) project as a case study.
no code implementations • 16 Oct 2022 • Kexin Chen, Hoi Ying Wong
This study investigates an optimal consumption-investment problem in which the unobserved stock trend is modulated by a hidden Markov chain that represents different economic regimes.
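A hedged sketch of the standard setup behind such problems (our notation, not necessarily the paper's): the stock price follows a diffusion whose drift is driven by an unobserved finite-state Markov chain, and the investor must filter the current regime from prices alone.

```latex
\frac{dS_t}{S_t} = \mu(\alpha_t)\,dt + \sigma\,dW_t,
```

where \alpha_t is a continuous-time Markov chain over economic regimes that is not directly observable.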
no code implementations • 22 Aug 2022 • Kexin Chen, Guangyong Chen, Junyou Li, Yuansheng Huang, Pheng-Ann Heng
In high-throughput experimentation (HTE) datasets, the average yield of our methodology's top 10 high-yield reactions is relatively close to the results of ideal yield selection.
no code implementations • 13 Dec 2021 • Ling Wang, Kexin Chen, Mei Choi Chiu, Hoi Ying Wong
The length of the waiting period is related to the opportunity cost, return, and risk of the expanded business.
no code implementations • 11 Jun 2021 • Zhong Ji, Kexin Chen, Haoran Wang
Image-text matching plays a central role in bridging the semantic gap between vision and language.
no code implementations • 25 Feb 2021 • Xinyun Zou, Eric O. Scott, Alexander B. Johnson, Kexin Chen, Douglas A. Nitz, Kenneth A. De Jong, Jeffrey L. Krichmar
Animals ranging from rats to humans can demonstrate cognitive map capabilities.
1 code implementation • 10 Feb 2021 • Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar
To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) consistent across multiple domains, and then performs RL training in one source domain based on the LUSR in the second stage.
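As a rough, hedged outline of such a two-stage pipeline (the encoder architecture and the stage-1 objective below are placeholders, not the paper's exact LUSR training procedure):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw observations from any domain into a shared latent space."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Stage 1 (illustrative): fit the encoder on observations pooled from several domains
# so the latent representation is consistent across them; `objective` is a placeholder
# for whatever alignment/reconstruction loss is used.
def train_latent_representation(encoder, multi_domain_batches, optimizer, objective):
    for obs in multi_domain_batches:
        loss = objective(encoder, obs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 2 (illustrative): freeze the encoder and run standard RL in one source domain,
# feeding the policy latent states instead of raw observations.
def act(policy: nn.Module, encoder: Encoder, obs: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        latent = encoder(obs)
    return policy(latent)
```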