Search Results for author: Lin Yao

Found 12 papers, 2 papers with code

Uni-SMART: Universal Science Multimodal Analysis and Research Transformer

no code implementations15 Mar 2024 Hengxing Cai, Xiaochen Cai, Shuwen Yang, Jiankun Wang, Lin Yao, Zhifeng Gao, Junhan Chang, Sihang Li, Mingjun Xu, Changxin Wang, Hongshuai Wang, Yongge Li, Mujie Lin, Yaqi Li, Yuqi Yin, Linfeng Zhang, Guolin Ke

Scientific literature often includes a wide range of multimodal elements, such as molecular structures, tables, and charts, which text-focused LLMs find hard to understand and analyze.

SciAssess: Benchmarking LLM Proficiency in Scientific Literature Analysis

no code implementations4 Mar 2024 Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, Shuwen Yang, Jiankun Wang, Yuqi Yin, Yaqi Li, Linfeng Zhang, Guolin Ke

Recent breakthroughs in Large Language Models (LLMs) have revolutionized natural language understanding and generation, igniting a surge of interest in leveraging these technologies in the field of scientific literature analysis.

Benchmarking · Memorization +1

End-to-End Crystal Structure Prediction from Powder X-Ray Diffraction

no code implementations8 Jan 2024 Qingsi Lai, Lin Yao, Zhifeng Gao, Siyuan Liu, Hongshuai Wang, Shuqi Lu, Di He, LiWei Wang, Cheng Wang, Guolin Ke

XtalNet represents a significant advance in crystal structure prediction (CSP), enabling the prediction of complex structures from PXRD data without the need for external databases or manual intervention.

Contrastive Learning · Retrieval

Node-Aligned Graph-to-Graph (NAG2G): Elevating Template-Free Deep Learning Approaches in Single-Step Retrosynthesis

1 code implementation27 Sep 2023 Lin Yao, Wentao Guo, Zhen Wang, Shang Xiang, Wentan Liu, Guolin Ke

Single-step retrosynthesis (SSR) in organic chemistry is increasingly benefiting from deep learning (DL) techniques in computer-aided synthesis design.

Benchmarking · Graph Generation +2

A Human-Machine Joint Learning Framework to Boost Endogenous BCI Training

no code implementations25 Aug 2023 Hanwen Wang, Yu Qi, Lin Yao, Yueming Wang, Dario Farina, Gang Pan

Then a human-machine joint learning framework is proposed: 1) on the human side, we model the learning process as a sequential trial-and-error scenario and propose a novel "copy/new" feedback paradigm that helps shape the subject's signal generation toward the optimal distribution; 2) on the machine side, we propose a novel adaptive learning algorithm that learns an optimal signal distribution alongside the subject's learning process.
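As a rough illustration of the machine-side idea only (not the paper's algorithm), an adaptive decoder can keep a running estimate of each class's feature distribution and nudge it toward every new trial; adaptive_update, decode, and the learning rate lr below are hypothetical names and choices.

    import numpy as np

    def adaptive_update(class_means, features, label, lr=0.1):
        # Illustration of an adaptive decoder: blend the running per-class feature
        # mean toward the newest trial so the decoder tracks the subject's evolving
        # signal distribution during closed-loop training.
        class_means[label] = (1 - lr) * class_means[label] + lr * features
        return class_means

    def decode(class_means, features):
        # Classify a trial by its nearest class mean (minimum Euclidean distance).
        return min(class_means, key=lambda c: np.linalg.norm(features - class_means[c]))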

EEG · Motor Imagery

Improved Cryo-EM Pose Estimation and 3D Classification through Latent-Space Disentanglement

no code implementations9 Aug 2023 WeiJie Chen, Yuhang Wang, Lin Yao

In these methods, only a subset of the input dataset is needed to train neural networks for the estimation of poses and conformations.

3D Classification · 3D Reconstruction +3

Boosted ab initio Cryo-EM 3D Reconstruction with ACE-EM

no code implementations13 Feb 2023 Lin Yao, Ruihan Xu, Zhifeng Gao, Guolin Ke, Yuhang Wang

The central problem in cryo-electron microscopy (cryo-EM) is to recover the 3D structure from noisy 2D projection images, which requires estimating the missing projection angles (poses).
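To make the formulation concrete (a generic sketch of the projection/pose-search problem, not the ACE-EM method), the forward model rotates the 3D density map by a candidate pose and integrates along the viewing axis, and a naive estimator scores candidate angles against the noisy image; project, estimate_pose, and the coarse angular grid are illustrative choices.

    import numpy as np
    from scipy import ndimage

    def project(volume, phi, theta, psi):
        # Simplified cryo-EM forward model (no CTF or noise): rotate the 3D density
        # map by Euler angles (degrees) and integrate along one axis to get a 2D image.
        rot = ndimage.rotate(volume, phi, axes=(0, 1), reshape=False, order=1)
        rot = ndimage.rotate(rot, theta, axes=(0, 2), reshape=False, order=1)
        rot = ndimage.rotate(rot, psi, axes=(0, 1), reshape=False, order=1)
        return rot.sum(axis=2)

    def estimate_pose(volume, noisy_image, step=30):
        # Naive pose estimation: exhaustively score a coarse grid of candidate angles
        # by correlation with the noisy projection and keep the best-matching pose.
        best_score, best_pose = -np.inf, None
        for phi in np.arange(0, 360, step):
            for theta in np.arange(0, 180, step):
                proj = project(volume, phi, theta, 0.0)
                score = np.corrcoef(proj.ravel(), noisy_image.ravel())[0, 1]
                if score > best_score:
                    best_score, best_pose = score, (phi, theta, 0.0)
        return best_pose, best_score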

3D Reconstruction

3D Molecular Generation via Virtual Dynamics

no code implementations12 Feb 2023 Shuqi Lu, Lin Yao, Xi Chen, Hang Zheng, Di He, Guolin Ke

Extensive experiment results on pocket-based molecular generation demonstrate that VD-Gen can generate novel 3D molecules to fill the target pocket cavity with high binding affinities, significantly outperforming previous baselines.

Drug Discovery

CCMB: A Large-scale Chinese Cross-modal Benchmark

1 code implementation8 May 2022 Chunyu Xie, Heng Cai, Jincheng Li, Fanjing Kong, Xiaoyu Wu, Jianfei Song, Henrique Morimitsu, Lin Yao, Dexin Wang, Xiangzheng Zhang, Dawei Leng, Baochang Zhang, Xiangyang Ji, Yafeng Deng

In this work, we build a large-scale, high-quality Chinese Cross-Modal Benchmark named CCMB for the research community, which contains Zero, currently the largest public pre-training dataset, and five human-annotated fine-tuning datasets for downstream tasks.

Image Classification · Image Retrieval +7

WaBERT: A Low-resource End-to-end Model for Spoken Language Understanding and Speech-to-BERT Alignment

no code implementations22 Apr 2022 Lin Yao, Jianfei Song, Ruizhuo Xu, Yingfang Yang, Zijian Chen, Yafeng Deng

Basically, there are two main methods for SLU tasks: (1) the two-stage method, which uses a speech model to transcribe speech to text and then a language model to produce the results of downstream tasks; (2) the one-stage method, which simply fine-tunes a pre-trained speech model to fit the downstream tasks.
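For intuition, here is a minimal sketch of the two-stage route built from Hugging Face pipelines (an assumption-laden example, not WaBERT itself, which is one-stage: the wav2vec2 checkpoint, the audio file name, and the "my-intent-classifier" model are placeholders).

    from transformers import pipeline

    # Stage 1: speech -> text with a pretrained ASR model (example checkpoint).
    asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
    text = asr("utterance.wav")["text"]

    # Stage 2: text -> downstream SLU prediction with a separate language model;
    # "my-intent-classifier" stands in for a task-specific fine-tuned classifier.
    nlu = pipeline("text-classification", model="my-intent-classifier")
    print(nlu(text))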

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +5

Fast and accurate decoding of finger movements from ECoG through Riemannian features and modern machine learning techniques

no code implementations Journal of Neural Engineering 2022 Lin Yao, Bingzhao Zhu, Mahsa Shoaran

In this work, we introduce the use of Riemannian-space features and temporal dynamics of electrocorticography (ECoG) signal combined with modern machine learning (ML) tools to improve the motor decoding accuracy at the level of individual fingers.
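A minimal sketch of this style of decoding pipeline, assuming the pyriemann and scikit-learn libraries are available (the data shapes, covariance estimator, and classifier below are illustrative placeholders, not the paper's exact configuration):

    import numpy as np
    from pyriemann.estimation import Covariances
    from pyriemann.tangentspace import TangentSpace
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # X: ECoG epochs of shape (n_trials, n_channels, n_samples); y: finger labels (0-4).
    X = np.random.randn(200, 32, 500)
    y = np.random.randint(0, 5, size=200)

    clf = make_pipeline(
        Covariances(estimator="oas"),       # per-trial spatial covariance (SPD matrix)
        TangentSpace(metric="riemann"),     # map SPD matrices to a Euclidean tangent space
        LogisticRegression(max_iter=1000),  # any modern ML classifier can sit here
    )
    clf.fit(X, y)
    print(clf.score(X, y))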

Brain Computer Interface
