Search Results for author: Tianyu Chen

Found 12 papers, 1 paper with code

Pseudo-Label Guided Unsupervised Domain Adaptation of Contextual Embeddings

no code implementations • EACL (AdaptNLP) 2021 • Tianyu Chen, Shaohan Huang, Furu Wei, JianXin Li

In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples.

Language Modelling • Masked Language Modeling +3
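The setting above can be illustrated with a minimal pseudo-labeling loop (a hedged sketch, not the paper's actual adaptation procedure for contextual embeddings; the nearest-centroid "model" and the margin-based confidence threshold are stand-in assumptions):

```python
# Illustrative pseudo-label guided domain adaptation:
# 1) fit a model on labeled source data,
# 2) predict on unlabeled target data,
# 3) keep only high-confidence predictions as pseudo-labels,
# 4) refit on source data plus pseudo-labeled target data.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n, dim = len(points), len(points[0])
    return [sum(p[d] for p in points) / n for d in range(dim)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(xs, ys):
    """Nearest-centroid classifier: one centroid per class."""
    by_label = {}
    for x, y in zip(xs, ys):
        by_label.setdefault(y, []).append(x)
    return {y: centroid(pts) for y, pts in by_label.items()}

def predict_with_margin(model, x):
    """Return (label, margin); the margin gauges confidence."""
    dists = sorted((distance(x, c), y) for y, c in model.items())
    (d1, label), (d2, _) = dists[0], dists[1]
    return label, d2 - d1

def adapt(source_x, source_y, target_x, min_margin=1.0):
    model = fit(source_x, source_y)
    pseudo_x, pseudo_y = [], []
    for x in target_x:
        label, margin = predict_with_margin(model, x)
        if margin >= min_margin:  # keep confident pseudo-labels only
            pseudo_x.append(x)
            pseudo_y.append(label)
    # refit on source data plus confident pseudo-labeled target data
    return fit(source_x + pseudo_x, source_y + pseudo_y)

src_x = [[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]]
src_y = [0, 0, 1, 1]
tgt_x = [[0.5, 0.4], [2.8, 3.2], [1.6, 1.5]]  # last point is ambiguous
model = adapt(src_x, src_y, tgt_x)
print(sorted(model))
```

The ambiguous target point near the decision boundary is filtered out by the margin threshold, which is the usual safeguard against reinforcing the model's own mistakes.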

EFSA: Towards Event-Level Financial Sentiment Analysis

no code implementations • 8 Apr 2024 • Tianyu Chen, Yiming Zhang, Guoxin Yu, Dapeng Zhang, Li Zeng, Qing He, Xiang Ao

In this paper, we extend financial sentiment analysis (FSA) to event-level since events usually serve as the subject of the sentiment in financial text.

Building Flexible Machine Learning Models for Scientific Computing at Scale

no code implementations • 25 Feb 2024 • Tianyu Chen, Haoyi Zhou, Ying Li, Hao Wang, Chonghan Gao, Shanghang Zhang, JianXin Li

Foundation models have revolutionized knowledge acquisition across domains, and our study introduces OmniArch, a paradigm-shifting approach designed for building foundation models in multi-physics scientific computing.

Zero-Shot Learning

PhoGAD: Graph-based Anomaly Behavior Detection with Persistent Homology Optimization

no code implementations • 19 Jan 2024 • Ziqi Yuan, Haoyi Zhou, Tianyu Chen, JianXin Li

The analysis of persistent homology demonstrates its effectiveness in capturing the topological structure formed by normal edge features.

Anomaly Detection

iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models

1 code implementation • NeurIPS 2023 • Tianyu Chen, Kevin Bello, Bryon Aragam, Pradeep Ravikumar

Structural causal models (SCMs) are widely used in various disciplines to represent causal relationships among variables in complex systems.
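A mechanism shift in a nonlinear additive noise model can be made concrete with a toy two-variable SCM (an illustrative sketch only; the variables, mechanisms, and environments here are invented, not iSCAN's actual benchmarks):

```python
import random

# Toy SCM with nonlinear additive noise: X1 -> X2.
# The causal mechanism of X2 shifts between environments e1 and e2,
# while the mechanism of the root X1 stays fixed.

def sample(env, rng):
    x1 = rng.gauss(0, 1)          # root variable, shared mechanism
    noise = rng.gauss(0, 0.1)     # additive noise
    if env == "e1":
        x2 = x1 ** 2 + noise      # original mechanism
    else:
        x2 = -(x1 ** 2) + noise   # shifted mechanism in e2
    return x1, x2

rng = random.Random(0)
e1 = [sample("e1", rng) for _ in range(2000)]
e2 = [sample("e2", rng) for _ in range(2000)]

# The shift is visible in the marginal of X2 across environments,
# even though X1's distribution is unchanged:
mean_x2_e1 = sum(x2 for _, x2 in e1) / len(e1)
mean_x2_e2 = sum(x2 for _, x2 in e2) / len(e2)
print(round(mean_x2_e1, 2), round(mean_x2_e2, 2))
```

Identifying *which* variable's mechanism shifted, without recovering the full causal graph, is the problem the paper targets; this sketch only shows what such a shift looks like in sampled data.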

Learning Music Sequence Representation from Text Supervision

no code implementations • 31 May 2023 • Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li

Music representation learning is notoriously difficult because of the complex human-related concepts contained in its sequences of numerical signals.

Contrastive Learning • Representation Learning
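Text-supervised representation learning of this kind is commonly trained with a contrastive objective over paired embeddings. A minimal CLIP-style InfoNCE sketch follows (an assumption-laden illustration, not the paper's actual model; the toy 2-d embeddings and temperature are made up):

```python
import math

# InfoNCE over music/text embedding pairs: matched pairs sit on the
# diagonal of the similarity matrix and should score highest.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def info_nce(music, text, temperature=0.1):
    """Average cross-entropy of each music clip against all captions."""
    music = [norm(m) for m in music]
    text = [norm(t) for t in text]
    loss = 0.0
    for i, m in enumerate(music):
        logits = [dot(m, t) / temperature for t in text]
        mx = max(logits)
        log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
        loss += log_z - logits[i]  # -log softmax at the matched caption
    return loss / len(music)

aligned = info_nce([[1, 0], [0, 1]], [[1, 0], [0, 1]])
shuffled = info_nce([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(aligned < shuffled)
```

Correctly aligned pairs give a lower loss than shuffled ones, which is the signal that pulls matched music and text embeddings together during training.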

A Method for Training-free Person Image Picture Generation

no code implementations • 16 May 2023 • Tianyu Chen

To solve this problem, this paper proposes a Character Image Feature Encoder model that lets the user simply provide a picture of a character, so that the character in the generated image matches expectations.

Advancing underwater acoustic target recognition via adaptive data pruning and smoothness-inducing regularization

no code implementations • 24 Apr 2023 • Yuan Xie, Tianyu Chen, Ji Xu

Underwater acoustic recognition of ship-radiated signals has high practical value because it can identify non-line-of-sight targets.

MoEC: Mixture of Expert Clusters

no code implementations • 19 Jul 2022 • Yuan Xie, Shaohan Huang, Tianyu Chen, Furu Wei

Sparse Mixture of Experts (MoE) has received great interest due to its promising scaling capability with affordable computational overhead.

Machine Translation • Natural Language Understanding
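The sparse scaling idea behind MoE can be sketched with a top-1 routing layer (a hedged illustration of generic sparse routing, not MoEC's expert-cluster design; the gate weights and scaled-identity "experts" are placeholders for learned FFNs):

```python
import math

# Sparse top-1 MoE routing: a gate scores every expert per token,
# but only the single best-scoring expert actually runs.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def route(token, gate_weights):
    """Gate scores = token . W_g for each expert; pick the top-1."""
    logits = [sum(t * w for t, w in zip(token, col)) for col in gate_weights]
    probs = softmax(logits)
    return max(range(len(probs)), key=probs.__getitem__), probs

def expert(idx, token):
    # Each "expert" is a trivial scaled identity standing in for an FFN.
    return [(idx + 1) * t for t in token]

def moe_layer(token, gate_weights):
    idx, probs = route(token, gate_weights)
    out = expert(idx, token)
    # Scale by the gate probability so the routing decision
    # stays connected to the gate during training.
    return [probs[idx] * o for o in out]

gate = [[1.0, 0.0], [0.0, 1.0]]  # two experts, 2-d tokens
print(moe_layer([2.0, 0.0], gate))
```

Because only one expert's parameters are touched per token, adding experts grows capacity while per-token compute stays roughly constant, which is the "affordable overhead" the abstract refers to.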

Task-Specific Expert Pruning for Sparse Mixture-of-Experts

no code implementations • 1 Jun 2022 • Tianyu Chen, Shaohan Huang, Yuan Xie, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei

The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity.

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption

no code implementations • Findings (ACL) 2022 • Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, JianXin Li, Furu Wei

As more and more pre-trained language models adopt on-cloud deployment, privacy concerns grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts).

Privacy Preserving

Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models

no code implementations • 29 Sep 2021 • Tianyu Chen, Haoyi Zhou, He Mingrui, JianXin Li

Pre-trained language models (e.g., BERT, GPT-3) have revolutionized NLP research, and fine-tuning has become the indispensable step of downstream adaptation.

Backdoor Attack • text-classification +1
