Search Results for author: Chang Su

Found 32 papers, 6 papers with code

PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs

1 code implementation · 15 Jun 2023 · Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu

In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry.

Benchmarking
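
As a concrete illustration of the loss reweighting mentioned in the abstract above, here is a minimal sketch of the weighted PINN objective whose coefficients such strategies tune; the model, sample points, and toy 1D Poisson problem are illustrative assumptions, not code from PINNacle.

```python
# Minimal sketch (not PINNacle code): a weighted PINN loss on a toy
# 1D Poisson problem u''(x) = f(x) with Dirichlet boundary data.
import torch

def pinn_loss(model, x_int, x_bnd, u_bnd, w_pde=1.0, w_bc=1.0):
    x = x_int.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -torch.ones_like(x)                          # toy source term f(x) = -1
    loss_pde = ((d2u - f) ** 2).mean()               # interior PDE residual
    loss_bc = ((model(x_bnd) - u_bnd) ** 2).mean()   # boundary misfit
    return w_pde * loss_pde + w_bc * loss_bc         # reweighted sum
```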

DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training

1 code implementation · 6 Mar 2024 · Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu

Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings.

Denoising
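
A hedged sketch of what an auto-regressive denoising pre-training step can look like (our reading of the title, not the DPOT reference implementation): corrupt the current PDE state with noise and train the operator to predict the next state, which helps stabilize long auto-regressive rollouts.

```python
import torch
import torch.nn.functional as F

def pretrain_step(operator, u_t, u_next, optimizer, noise_std=0.01):
    # Denoising corruption: perturb the input state before prediction.
    u_noisy = u_t + noise_std * torch.randn_like(u_t)
    pred = operator(u_noisy)            # auto-regressive next-step prediction
    loss = F.mse_loss(pred, u_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```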

A Multi-View Ensemble Classification Model for Clinically Actionable Genetic Mutations

2 code implementations · 26 Jun 2018 · Xi Sheryl Zhang, Dandi Chen, Yongjun Zhu, Chao Che, Chang Su, Sendong Zhao, Xu Min, Fei Wang

This paper presents details of our winning solutions to task IV of the NIPS 2017 Competition Track, entitled Classifying Clinically Actionable Genetic Mutations.

BIG-bench Machine Learning General Classification

Empirical Bayes PCA in high dimensions

1 code implementation · 21 Dec 2020 · Xinyi Zhong, Chang Su, Zhou Fan

When the dimension of data is comparable to or larger than the number of data samples, Principal Components Analysis (PCA) may exhibit problematic high-dimensional noise.

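The high-dimensional noise the abstract refers to is easy to demonstrate: when p is comparable to n, even pure-noise data produces large sample eigenvalues (the Marchenko-Pastur bulk reaches (1 + sqrt(p/n))^2), so naive PCA overstates signal. A toy illustration of the phenomenon, not the paper's empirical Bayes algorithm:

```python
import numpy as np

n, p = 500, 500                          # sample size comparable to dimension
X = np.random.randn(n, p)                # pure noise: true covariance is I
eigs = np.linalg.eigvalsh(X.T @ X / n)   # sample covariance spectrum
print("largest sample eigenvalue:", eigs[-1])            # ~ 4.0, far above 1
print("Marchenko-Pastur edge:", (1 + np.sqrt(p/n))**2)   # (1 + 1)^2 = 4
```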

Probing Simile Knowledge from Pre-trained Language Models

1 code implementation · ACL 2022 · WeiJie Chen, Yongzhu Chang, Rongsheng Zhang, Jiashu Pu, Guandan Chen, Le Zhang, Yadong Xi, Yijiang Chen, Chang Su

In this paper, we probe simile knowledge from PLMs to solve the SI and SG tasks in the unified framework of simile triple completion for the first time.

Language Modelling Position +1
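
A minimal probe in the spirit of simile triple completion (the template and model below are illustrative assumptions, not the authors' exact setup): ask a masked language model to fill the vehicle slot of a simile.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")  # any masked LM works
for cand in fill("The man is as brave as a [MASK].", top_k=5):
    print(cand["token_str"], round(cand["score"], 3))    # candidate vehicles
```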

Evaluation of Rarity of Fingerprints in Forensics

no code implementations · NeurIPS 2010 · Chang Su, Sargur Srihari

The probabilities of random correspondence for several latent fingerprints are evaluated for varying numbers of minutiae.

Gaussian Processes
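
To make the quantity concrete, here is a toy probability-of-random-correspondence calculation under a crude independence assumption; the paper's actual model of minutia location and orientation is far richer (hence the Gaussian Processes tag), and all numbers below are placeholders.

```python
from math import comb

def prc(n_template, n_query, p_match, k):
    """P(at least k of n_query random minutiae match a template with
    n_template minutiae), assuming independent per-minutia matches."""
    p = 1 - (1 - p_match) ** n_template   # chance one minutia matches anywhere
    return sum(comb(n_query, i) * p**i * (1 - p)**(n_query - i)
               for i in range(k, n_query + 1))

print(prc(n_template=36, n_query=12, p_match=0.002, k=8))  # tiny probability
```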

GRAPHENE: A Precise Biomedical Literature Retrieval Engine with Graph Augmented Deep Learning and External Knowledge Empowerment

no code implementations · 2 Nov 2019 · Sendong Zhao, Chang Su, Andrea Sboner, Fei Wang

GRAPHENE consists of three main modules: 1) graph-augmented document representation learning; 2) query expansion and representation learning; and 3) learning to rank biomedical articles.

Learning-To-Rank Representation Learning +1
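
Module 3 is a learning-to-rank objective; a generic pairwise margin formulation (a sketch, not GRAPHENE's exact loss) captures the idea that relevant articles should score higher than irrelevant ones for the same query.

```python
import torch

def pairwise_rank_loss(score_pos, score_neg, margin=1.0):
    # score_pos / score_neg: model scores for a relevant and an irrelevant
    # article paired with the same query; hinge on the score gap.
    return torch.clamp(margin - (score_pos - score_neg), min=0).mean()
```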

Federated Learning for Healthcare Informatics

no code implementations · 13 Nov 2019 · Jie Xu, Benjamin S. Glicksberg, Chang Su, Peter Walker, Jiang Bian, Fei Wang

With the rapid development of computer software and hardware technologies, more and more healthcare data are becoming readily available from clinical institutions, patients, insurance companies and pharmaceutical industries, among others.

Federated Learning

Robust Multi-Modal Policies for Industrial Assembly via Reinforcement Learning and Demonstrations: A Large-Scale Study

no code implementations · 21 Mar 2021 · Jianlan Luo, Oleg Sushkov, Rugile Pevceviciute, Wenzhao Lian, Chang Su, Mel Vecerik, Ning Ye, Stefan Schaal, Jon Scholz

In this paper we define criteria for industry-oriented DRL and, according to these criteria, perform a thorough comparison of one family of learning approaches, DRL from demonstration, against a professional industrial integrator on the recently established NIST assembly benchmark.

Make the Blind Translator See The World: A Novel Transfer Learning Solution for Multimodal Machine Translation

no code implementations · MTSummit 2021 · Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang

Overfitting on the limited labelled training data of multimodal machine translation (MMT) is a critical issue when building MMT systems on large-scale pretrained networks.

Multimodal Machine Translation NMT +2

Joint-training on Symbiosis Networks for Deep Neural Machine Translation models

no code implementations · 22 Dec 2021 · Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

Deep encoders have been proven effective in improving neural machine translation (NMT) systems, but translation quality plateaus once the number of encoder layers exceeds 18.

Machine Translation NMT +1

Self-Distillation Mixup Training for Non-autoregressive Neural Machine Translation

no code implementations · 22 Dec 2021 · Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang

An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data with the pre-trained model itself, and finally re-trains the model on the combination of raw and distilled data.

Knowledge Distillation Machine Translation +1
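
The abstract above spells out the three SDM stages; schematically, the pipeline looks like the following, where train and translate are hypothetical helpers standing in for a full NMT training and decoding stack.

```python
def sdm_training(raw_pairs, init_model, train, translate):
    model = train(init_model, raw_pairs)           # 1) pre-train on raw data
    distilled = [(src, translate(model, src))      # 2) self-distill: re-label
                 for src, _ in raw_pairs]          #    sources with own output
    return train(model, raw_pairs + distilled)     # 3) re-train on the mix
```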

The Relationship between Digital RMB and Digital Economy in China

no code implementations · 28 May 2022 · Chang Su, Wenbo Lyu, Yueting Liu

By comparing historical patterns of currency development, this paper points out the inevitability of digital currency and examines the relationship between digital currency and the digital economy.

Memory Efficient Temporal & Visual Graph Model for Unsupervised Video Domain Adaptation

no code implementations · 13 Aug 2022 · Xinyue Hu, Lin Gu, Liangchen Liu, Ruijiang Li, Chang Su, Tatsuya Harada, Yingying Zhu

Existing video domain adaptation (DA) methods need to store all temporal combinations of video frames or pair the source and target videos, which is memory-intensive and does not scale to long videos.

Domain Adaptation Graph Attention

PingAnTech at SMM4H task1: Multiple pre-trained model approaches for Adverse Drug Reactions

no code implementations · SMM4H (COLING) 2022 · Xi Liu, Han Zhou, Chang Su

For task 1a, the system achieved an F1 score of 0.68; for task 1b, an Overlapping F1 score of 0.65 and a Strict F1 score of 0.49.

Language Modelling

Application of Deep Learning on Single-Cell RNA-sequencing Data Analysis: A Review

no code implementations · 11 Oct 2022 · Matthew Brendel, Chang Su, Zilong Bai, Hao Zhang, Olivier Elemento, Fei Wang

Single-cell RNA-sequencing (scRNA-seq) has become a routinely used technique to quantify the gene expression profile of thousands of single cells simultaneously.

MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale Training of Physics-informed Neural Networks

no code implementations · 5 Jun 2023 · Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu

Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss.
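
One way to read "parameter-wise scale-invariant" (a hedged sketch of the idea in the title, not the authors' released optimizer): keep separate Adam-style second-moment estimates for the PDE-loss and boundary-loss gradients, so each term is normalized to a comparable magnitude before the update.

```python
import torch

def multiadam_step(params, loss_pde, loss_bc, state,
                   lr=1e-3, beta2=0.999, eps=1e-8):
    # Separate gradients per loss term (first moments omitted for brevity).
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True)
    g_bc = torch.autograd.grad(loss_bc, params)
    for p, gp, gb in zip(params, g_pde, g_bc):
        s = state.setdefault(p, {"v_pde": torch.zeros_like(p),
                                 "v_bc": torch.zeros_like(p)})
        s["v_pde"].mul_(beta2).addcmul_(gp, gp, value=1 - beta2)  # per-term
        s["v_bc"].mul_(beta2).addcmul_(gb, gb, value=1 - beta2)   # 2nd moments
        step = gp / (s["v_pde"].sqrt() + eps) + gb / (s["v_bc"].sqrt() + eps)
        p.data.add_(step, alpha=-lr)             # scale-invariant update
```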

A Multitask Training Approach to Enhance Whisper with Contextual Biasing and Open-Vocabulary Keyword Spotting

no code implementations · 18 Sep 2023 · Yuang Li, Yinglu Li, Min Zhang, Chang Su, Mengxin Ren, Xiaosong Qiao, Xiaofeng Zhao, Mengyao Piao, Jiawei Yu, Xinglin Lv, Miaomiao Ma, Yanqing Zhao, Hao Yang

End-to-end automatic speech recognition (ASR) systems often struggle to recognize rare named entities, such as personal names, organizations, and terminology not frequently encountered in the training data.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Using Large Language Model for End-to-End Chinese ASR and NER

no code implementations · 21 Jan 2024 · Yuang Li, Jiawei Yu, Yanqing Zhao, Min Zhang, Mengxin Ren, Xiaofeng Zhao, Xiaosong Qiao, Chang Su, Miaomiao Ma, Hao Yang

In this work, we connect the Whisper encoder with ChatGLM3 and provide in-depth comparisons of these two approaches using Chinese automatic speech recognition (ASR) and named entity recognition (NER) tasks.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4
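
Schematically, connecting a speech encoder to an LLM usually means projecting encoder states into the LLM's embedding space as a prefix; the sketch below assumes illustrative dimensions (1280 for Whisper-large, 4096 for ChatGLM3) and is not the released code.

```python
import torch.nn as nn

class SpeechToLLMAdapter(nn.Module):
    def __init__(self, enc_dim=1280, llm_dim=4096, stride=4):
        super().__init__()
        self.pool = nn.AvgPool1d(stride)          # shorten the audio sequence
        self.proj = nn.Linear(enc_dim, llm_dim)   # map into LLM embedding space

    def forward(self, enc_out):                   # (batch, frames, enc_dim)
        x = self.pool(enc_out.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)                       # prefix embeddings for the LLM
```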

Preconditioning for Physics-Informed Neural Networks

no code implementations · 1 Feb 2024 · Songming Liu, Chang Su, Jiachen Yao, Zhongkai Hao, Hang Su, Youjia Wu, Jun Zhu

Physics-informed neural networks (PINNs) have shown promise in solving various partial differential equations (PDEs).

From Handcrafted Features to LLMs: A Brief Survey for Machine Translation Quality Estimation

no code implementations · 21 Mar 2024 · Haofei Zhao, Yilun Liu, Shimin Tao, Weibin Meng, Yimeng Chen, Xiang Geng, Chang Su, Min Zhang, Hao Yang

Machine Translation Quality Estimation (MTQE) is the task of estimating the quality of machine-translated text in real time without the need for reference translations, which is of great importance for the development of MT.

Machine Translation Sentence
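
A common reference-free QE recipe from the pretrained-LM era the survey covers (a generic sketch, not any specific system from the paper): encode the source/translation pair with a multilingual encoder and regress a quality score from the pooled state.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceQE(nn.Module):
    def __init__(self, name="xlm-roberta-base"):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(name)
        self.enc = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.enc.config.hidden_size, 1)  # quality score

    def forward(self, src, mt):
        batch = self.tok(src, mt, return_tensors="pt",
                         padding=True, truncation=True)
        h = self.enc(**batch).last_hidden_state[:, 0]   # pooled first token
        return self.head(h).squeeze(-1)                 # higher = better
```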
