no code implementations • 17 Nov 2024 • Xi Fang, Jiankun Wang, Xiaochen Cai, Shangqian Chen, Shuwen Yang, Lin Yao, Linfeng Zhang, Guolin Ke
A significant portion of key information is embedded in molecular structure figures, complicating large-scale literature searches and limiting the application of large language models in fields such as biology, chemistry, and pharmaceuticals.
no code implementations • 8 Oct 2024 • Yi Jiang, Qingyang Shen, Shuzhong Lai, Shunyu Qi, Qian Zheng, Lin Yao, Yueming Wang, Gang Pan
Autism spectrum disorder (ASD) is a pervasive developmental disorder that significantly impacts the daily functioning and social participation of individuals.
no code implementations • 6 Sep 2024 • Shang Xiang, Lin Yao, Zhen Wang, Qifan Yu, Wentan Liu, Wentao Guo, Guolin Ke
The field of computer-aided synthesis planning (CASP) has seen rapid advancements in recent years, achieving significant progress across various algorithmic benchmarks.
1 code implementation • 15 Mar 2024 • Hengxing Cai, Xiaochen Cai, Shuwen Yang, Jiankun Wang, Lin Yao, Zhifeng Gao, Junhan Chang, Sihang Li, Mingjun Xu, Changxin Wang, Hongshuai Wang, Yongge Li, Mujie Lin, Yaqi Li, Yuqi Yin, Linfeng Zhang, Guolin Ke
Scientific literature often includes a wide range of multimodal elements, such as tables, charts, and molecular structures, which are hard for text-focused LLMs to understand and analyze.
1 code implementation • 4 Mar 2024 • Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, Shuwen Yang, Jiankun Wang, Mingjun Xu, Jin Huang, Xi Fang, Jiaxi Zhuang, Yuqi Yin, Yaqi Li, Changhong Chen, Zheng Cheng, Zifeng Zhao, Linfeng Zhang, Guolin Ke
Recent breakthroughs in Large Language Models (LLMs) have revolutionized scientific literature analysis.
no code implementations • 8 Jan 2024 • Qingsi Lai, Lin Yao, Zhifeng Gao, Siyuan Liu, Hongshuai Wang, Shuqi Lu, Di He, LiWei Wang, Cheng Wang, Guolin Ke
XtalNet represents a significant advance in crystal structure prediction (CSP), enabling the prediction of complex structures from PXRD data without the need for external databases or manual intervention.
1 code implementation • 27 Sep 2023 • Lin Yao, Wentao Guo, Zhen Wang, Shang Xiang, Wentan Liu, Guolin Ke
Single-step retrosynthesis (SSR) in organic chemistry is increasingly benefiting from deep learning (DL) techniques in computer-aided synthesis design.
Ranked #1 on Single-step retrosynthesis on USPTO-50k
no code implementations • 25 Aug 2023 • Hanwen Wang, Yu Qi, Lin Yao, Yueming Wang, Dario Farina, Gang Pan
Then a human-machine joint learning framework is proposed: 1) for the human side, we model the learning process in a sequential trial-and-error scenario and propose a novel "copy/new" feedback paradigm to help shape the signal generation of the subject toward the optimal distribution; 2) for the machine side, we propose a novel adaptive learning algorithm to learn an optimal signal distribution along with the subject's learning process.
no code implementations • 9 Aug 2023 • WeiJie Chen, Yuhang Wang, Lin Yao
In these methods, only a subset of the input dataset is needed to train neural networks for the estimation of poses and conformations.
no code implementations • 17 May 2023 • Guiyu Zhao, Bo Qiu, A-Li Luo, Xiaoyu Guo, Lin Yao, Kun Wang, Yuanbo Liu
The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky.
no code implementations • 13 Feb 2023 • Lin Yao, Ruihan Xu, Zhifeng Gao, Guolin Ke, Yuhang Wang
The central problem in cryo-electron microscopy (cryo-EM) is to recover the 3D structure from noisy 2D projection images which requires estimating the missing projection angles (poses).
no code implementations • 12 Feb 2023 • Shuqi Lu, Lin Yao, Xi Chen, Hang Zheng, Di He, Guolin Ke
Extensive experiment results on pocket-based molecular generation demonstrate that VD-Gen can generate novel 3D molecules to fill the target pocket cavity with high binding affinities, significantly outperforming previous baselines.
1 code implementation • 8 May 2022 • Chunyu Xie, Heng Cai, Jincheng Li, Fanjing Kong, Xiaoyu Wu, Jianfei Song, Henrique Morimitsu, Lin Yao, Dexin Wang, Xiangzheng Zhang, Dawei Leng, Baochang Zhang, Xiangyang Ji, Yafeng Deng
In this work, we build a large-scale high-quality Chinese Cross-Modal Benchmark named CCMB for the research community, which contains the currently largest public pre-training dataset Zero and five human-annotated fine-tuning datasets for downstream tasks.
Ranked #3 on Image Retrieval on Flickr30k-CN
no code implementations • 22 Apr 2022 • Lin Yao, Jianfei Song, Ruizhuo Xu, Yingfang Yang, Zijian Chen, Yafeng Deng
There are two main approaches to SLU tasks: (1) the two-stage method, which uses a speech model to transcribe speech to text, then uses a language model to produce the results of downstream tasks; and (2) the one-stage method, which simply fine-tunes a pre-trained speech model to fit the downstream tasks.
Automatic Speech Recognition (ASR) +6
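The two SLU strategies described above can be contrasted in a minimal sketch. The "models" below are toy stand-ins (simple lookups), not real speech or language models; the clip ids, intent labels, and function names are all hypothetical, chosen only to illustrate the pipeline shapes.

```python
def asr_model(audio: str) -> str:
    """Toy stand-in for a speech-to-text model: maps an audio id to a transcript."""
    transcripts = {"clip_01": "turn on the kitchen light"}
    return transcripts.get(audio, "")

def language_model_intent(text: str) -> str:
    """Toy stand-in for a text-based downstream model (intent classifier)."""
    if "light" in text:
        return "smart_home.light_control"
    return "unknown"

def two_stage_slu(audio: str) -> str:
    """Two-stage method: stage 1 transcribes speech, stage 2 classifies the text."""
    return language_model_intent(asr_model(audio))

def one_stage_slu(audio: str) -> str:
    """Toy stand-in for the one-stage method: a fine-tuned speech model that maps
    audio directly to a downstream label, with no intermediate transcript."""
    direct_labels = {"clip_01": "smart_home.light_control"}
    return direct_labels.get(audio, "unknown")

print(two_stage_slu("clip_01"))  # smart_home.light_control
print(one_stage_slu("clip_01"))  # smart_home.light_control
```

The structural difference is that the two-stage pipeline exposes an intermediate transcript (useful for debugging, but a source of cascading ASR errors), while the one-stage pipeline optimizes the speech model end-to-end for the downstream task.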
no code implementations • Journal of Neural Engineering 2022 • Lin Yao, Bingzhao Zhu, Mahsa Shoaran
In this work, we introduce the use of Riemannian-space features and temporal dynamics of electrocorticography (ECoG) signal combined with modern machine learning (ML) tools to improve the motor decoding accuracy at the level of individual fingers.
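A common way to build Riemannian-space features from multichannel neural signals, as the abstract above alludes to, is to compute each epoch's spatial covariance matrix and map it into a flat vector via a matrix logarithm (the log-Euclidean tangent-space approximation). The sketch below is a minimal, generic illustration of that idea using NumPy/SciPy on synthetic data; the channel count, epoch length, and regularization constant are assumptions, and the paper's actual feature pipeline may differ.

```python
import numpy as np
from scipy.linalg import logm

def covariance(epoch: np.ndarray) -> np.ndarray:
    """Spatial covariance of one epoch with shape (channels, samples)."""
    x = epoch - epoch.mean(axis=1, keepdims=True)
    return x @ x.T / (x.shape[1] - 1)

def riemannian_feature(epoch: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Map the SPD covariance matrix to a flat log-Euclidean feature vector:
    the upper triangle of log(C), suitable as input to a standard ML classifier."""
    c = covariance(epoch) + eps * np.eye(epoch.shape[0])  # regularize to keep C SPD
    log_c = logm(c).real
    iu = np.triu_indices(log_c.shape[0])
    return log_c[iu]

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 256))  # 8 channels, 256 samples (synthetic)
feat = riemannian_feature(epoch)
print(feat.shape)  # (36,): 8*9/2 upper-triangular entries
```

Because the matrix logarithm flattens the curved SPD manifold into a vector space, these features can be fed directly to ordinary classifiers, which is what makes Riemannian pipelines attractive for motor decoding.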