Search Results for author: Mutian He

Found 11 papers, 8 papers with code

DEL-Ranking: Ranking-Correction Denoising Framework for Elucidating Molecular Affinities in DNA-Encoded Libraries

no code implementations • 19 Oct 2024 • Hanqun Cao, Mutian He, Ning Ma, Chang-Yu Hsieh, Chunbin Gu, Pheng-Ann Heng

DNA-encoded library (DEL) screening has revolutionized the detection of protein-ligand interactions through read counts, enabling rapid exploration of vast chemical spaces.

Denoising • Zero-shot Generalization

Joint Fine-tuning and Conversion of Pretrained Speech and Language Models towards Linear Complexity

1 code implementation • 9 Oct 2024 • Mutian He, Philip N. Garner

In a series of empirical studies on language processing, language modeling, and speech processing, we show that CALD can effectively recover the result of the original model, and that the guiding strategy contributes to the result.

Language Modeling • Language Modelling +1

Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries

no code implementations • 7 Sep 2024 • Chunbin Gu, Mutian He, Hanqun Cao, Guangyong Chen, Chang-Yu Hsieh, Pheng Ann Heng

To mitigate these issues, we propose a Multimodal Pretraining DEL-Fusion model (MPDF) that enhances encoder capabilities through pretraining and integrates compound features across various scales.

Denoising • Drug Discovery

The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation

1 code implementation • 16 May 2023 • Mutian He, Philip N. Garner

Motivated particularly by the task of cross-lingual SLU, we demonstrate that the task of speech translation (ST) is a good means of pretraining speech models for end-to-end SLU on both intra- and cross-lingual scenarios.

Abstractive Text Summarization • Continual Learning +7

Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization

1 code implementation • 3 Jun 2022 • Mutian He, Tianqing Fang, Weiqi Wang, Yangqiu Song

Conceptualization, or viewing entities and situations as instances of abstract concepts in mind and making inferences based on that, is a vital component in human intelligence for commonsense reasoning.

Knowledge Graphs

Neural Lexicon Reader: Reduce Pronunciation Errors in End-to-end TTS by Leveraging External Textual Knowledge

1 code implementation • 19 Oct 2021 • Mutian He, Jingzhou Yang, Lei He, Frank K. Soong

End-to-end TTS requires a large amount of paired speech/text data to cover all necessary knowledge, particularly how to pronounce different words in diverse contexts, so that a neural model may learn such knowledge.

Multilingual Byte2Speech Models for Scalable Low-resource Speech Synthesis

2 code implementations • 5 Mar 2021 • Mutian He, Jingzhou Yang, Lei He, Frank K. Soong

To scale neural speech synthesis to various real-world languages, we present a multilingual end-to-end framework that maps byte inputs to spectrograms, thus allowing arbitrary input scripts.

Speech Synthesis
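The byte-level input scheme in the entry above can be illustrated with a short sketch: text in any script is tokenized into its UTF-8 bytes, giving a fixed vocabulary of at most 256 symbols regardless of language (the `to_byte_ids` helper is a hypothetical illustration, not the paper's code):

```python
def to_byte_ids(text: str) -> list[int]:
    # Encode the text as UTF-8 and use the raw byte values (0-255) as
    # token ids, so every script shares one fixed 256-symbol vocabulary.
    return list(text.encode("utf-8"))

print(to_byte_ids("hi"))    # [104, 105] -- ASCII: one byte per character
print(to_byte_ids("你好"))  # [228, 189, 160, 229, 165, 189] -- CJK: three bytes per character
```

Because the vocabulary never grows, a single model can accept arbitrary input scripts, at the cost of longer input sequences for non-Latin text.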

On the Role of Conceptualization in Commonsense Knowledge Graph Construction

1 code implementation • 6 Mar 2020 • Mutian He, Yangqiu Song, Kun Xu, Dong Yu

Commonsense knowledge graphs (CKGs) like ATOMIC and ASER are substantially different from conventional KGs, as they consist of a much larger number of nodes formed by loosely structured text. While this enables them to handle highly diverse natural-language queries related to commonsense, it also leads to unique challenges for automatic KG construction methods.

Diversity • graph construction +2

Neural Subgraph Isomorphism Counting

1 code implementation • 25 Dec 2019 • Xin Liu, Haojie Pan, Mutian He, Yangqiu Song, Xin Jiang, Lifeng Shang

In this paper, we study a new graph learning problem: learning to count subgraph isomorphisms.

Domain Adaptation • Graph Learning +4
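The counting problem in the entry above can be made concrete with a brute-force baseline (an illustrative sketch, not the paper's neural method): enumerate every injective mapping of pattern nodes into graph nodes and count those that preserve all pattern edges.

```python
from itertools import permutations

def count_subgraph_isomorphisms(pattern_edges, graph_edges, n_pattern, n_graph):
    # Brute force: try every injective mapping of pattern nodes to graph
    # nodes and count those under which every pattern edge lands on a
    # graph edge. Edges are treated as undirected via frozensets.
    graph = {frozenset(e) for e in graph_edges}
    return sum(
        all(frozenset((m[u], m[v])) in graph for u, v in pattern_edges)
        for m in permutations(range(n_graph), n_pattern)
    )

# A triangle mapped into K4 (the complete graph on 4 nodes):
# 4 * 3 * 2 = 24 isomorphisms, since every node pair is adjacent.
triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_subgraph_isomorphisms(triangle, k4, 3, 4))  # 24
```

This brute-force count is exponential in the pattern size, which is precisely why the paper frames counting as a learning problem to be approximated by a neural model.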

Robust Sequence-to-Sequence Acoustic Modeling with Stepwise Monotonic Attention for Neural TTS

1 code implementation • 3 Jun 2019 • Mutian He, Yan Deng, Lei He

In this paper, we propose a novel stepwise monotonic attention method in sequence-to-sequence acoustic modeling to improve the robustness on out-of-domain inputs.

Hard Attention
