Search Results for author: Tsung-Hui Chang

Found 40 papers, 18 papers with code

Exploring Word Segmentation and Medical Concept Recognition for Chinese Medical Texts

1 code implementation • NAACL (BioNLP) 2021 • Yang Liu, Yuanhe Tian, Tsung-Hui Chang, Song Wu, Xiang Wan, Yan Song

Chinese word segmentation (CWS) and medical concept recognition are two fundamental tasks for processing Chinese electronic medical records (EMRs), and they play important roles in downstream tasks for understanding Chinese EMRs.

Chinese Word Segmentation • Model Selection +1

A Label-Aware Autoregressive Framework for Cross-Domain NER

1 code implementation • Findings (NAACL) 2022 • Jinpeng Hu, He Zhao, Dan Guo, Xiang Wan, Tsung-Hui Chang

In doing so, the label information contained in the embedding vectors can be effectively transferred to the target domain, and the Bi-LSTM can further model label relationships across domains via a pre-train-then-fine-tune setup.

Cross-Domain Named Entity Recognition • named-entity-recognition +2

Optimal Joint Fronthaul Compression and Beamforming Design for Networked ISAC Systems

no code implementations • 15 Aug 2024 • Kexin Zhang, Yanqing Xu, Ruisi He, Chao Shen, Tsung-Hui Chang

The primary objective is to minimize the total transmit power while meeting the signal-to-interference-plus-noise ratio (SINR) requirements for communication and sensing under fronthaul capacity constraints, resulting in a joint fronthaul compression and beamforming design (J-FCBD) problem.

Inference-Time Alignment of Diffusion Models with Direct Noise Optimization

no code implementations • 29 May 2024 • Zhiwei Tang, Jiangweizhi Peng, Jiasheng Tang, Mingyi Hong, Fan Wang, Tsung-Hui Chang

In this work, we focus on the alignment problem of diffusion models with a continuous reward function, which represents specific objectives for downstream tasks, such as increasing darkness or improving the aesthetics of images.

Accelerating Parallel Sampling of Diffusion Models

1 code implementation • 15 Feb 2024 • Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang

Our experiments demonstrate that ParaTAA can decrease the inference steps required by common sequential sampling algorithms such as DDIM and DDPM by a factor of 4$\sim$14.

Image Generation

FedLion: Faster Adaptive Federated Optimization with Fewer Communication

1 code implementation • 15 Feb 2024 • Zhiwei Tang, Tsung-Hui Chang

In Federated Learning (FL), a framework for training machine learning models across distributed data, well-known algorithms like FedAvg tend to have slow convergence rates, resulting in high communication costs during training. A minimal FedAvg-round sketch follows below.

Federated Learning
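
A minimal sketch of the FedAvg round mentioned above, assuming a toy least-squares objective and illustrative hyperparameters; this is the slow baseline the abstract criticizes, not FedLion's adaptive update:

import numpy as np

def client_update(w, X, y, lr=0.1, local_steps=5):
    # Local SGD on a least-squares objective (illustrative choice).
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    # clients: list of (X, y) data shards; the server averages the local models.
    local_models = [client_update(w_global.copy(), X, y) for X, y in clients]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(8)]
w = np.zeros(4)
for _ in range(20):
    w = fedavg_round(w, clients)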

Decentralized Equalization for Massive MIMO Systems With Colored Noise Samples

no code implementations • 22 May 2023 • Xiaotong Zhao, Mian Li, Bo Wang, Enbin Song, Tsung-Hui Chang, Qingjiang Shi

However, current detection methods tailored to DBP only consider ideal white Gaussian noise scenarios, while in practice, the noise is often colored due to interference from neighboring cells.

Dimensionality Reduction

Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles

1 code implementation • 7 Mar 2023 • Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang

In this study, we delve into an emerging optimization challenge involving a black-box objective function that can only be gauged via a ranking oracle, a situation frequently encountered in real-world scenarios, especially when the function is evaluated by human judges. A toy rank-based zeroth-order step is sketched below.

Image Generation • reinforcement-learning +2
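
A toy sketch of one rank-based zeroth-order step in the setting above: perturb the current point, ask the oracle only for the ranking of the candidates, and move toward the top-ranked one. The simulated oracle, candidate count, and step size are assumptions, not the paper's algorithm:

import numpy as np

def ranking_oracle(candidates, f):
    # Stand-in oracle: returns indices sorted by a hidden objective value;
    # in practice this ranking would come from a human judge.
    return np.argsort([f(x) for x in candidates])

def rank_zo_step(x, f, sigma=0.1, num_candidates=8, lr=0.5):
    candidates = [x + sigma * np.random.randn(*x.shape) for _ in range(num_candidates)]
    best = candidates[ranking_oracle(candidates, f)[0]]
    return x + lr * (best - x)  # move toward the best-ranked candidate

f = lambda x: np.sum(x ** 2)  # toy objective, minimized through rankings only
x = np.ones(5)
for _ in range(100):
    x = rank_zo_step(x, f)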

A Physics-based and Data-driven Approach for Localized Statistical Channel Modeling

no code implementations • 4 Mar 2023 • Shutao Zhang, Xinzhi Ning, Xi Zheng, Qingjiang Shi, Tsung-Hui Chang, Zhi-Quan Luo

Localized channel modeling is crucial for offline performance optimization of 5G cellular networks, but the existing channel models are for general scenarios and do not capture local geographical structures.

$z$-SignFedAvg: A Unified Stochastic Sign-based Compression for Federated Learning

no code implementations • 6 Feb 2023 • Zhiwei Tang, Yanmeng Wang, Tsung-Hui Chang

In this paper, we propose a novel noisy perturbation scheme with a general symmetric noise distribution for sign-based compression, which not only allows one to flexibly control the tradeoff between gradient bias and convergence performance, but also provides a unified viewpoint on existing stochastic sign-based methods. A minimal sketch of the noisy-sign idea follows below.

Federated Learning • Privacy Preserving
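
A hedged sketch of the core compression idea above: perturb the gradient with symmetric noise before taking signs, so that the noise scale controls the bias/variance tradeoff. The Gaussian choice and its scale are illustrative, not the paper's prescribed distribution:

import numpy as np

def noisy_sign(grad, noise_scale=1.0, rng=None):
    # Symmetric perturbation applied before the 1-bit sign quantizer.
    rng = rng or np.random.default_rng()
    z = rng.normal(scale=noise_scale, size=grad.shape)
    return np.sign(grad + z)

# noise_scale = 0 recovers plain sign compression; a larger scale reduces the
# bias of the expected message at the cost of higher variance.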

Why Batch Normalization Damage Federated Learning on Non-IID Data?

1 code implementation • 8 Jan 2023 • Yanmeng Wang, Qingjiang Shi, Tsung-Hui Chang

In view of this, we develop a new FL algorithm that is tailored to BN, called FedTAN, which is capable of achieving robust FL performance under a variety of data distributions via iterative layer-wise parameter aggregation.

Federated Learning

Large-Scale Bandwidth and Power Optimization for Multi-Modal Edge Intelligence Autonomous Driving

no code implementations • 18 Oct 2022 • Xinrao Li, Tong Zhang, Shuai Wang, Guangxu Zhu, Rui Wang, Tsung-Hui Chang

However, wireless channels between the edge server and the autonomous vehicles are time-varying due to the high mobility of vehicles.

Autonomous Driving

Improving Radiology Summarization with Radiograph and Anatomy Prompts

no code implementations • 15 Oct 2022 • Jinpeng Hu, Zhihong Chen, Yang Liu, Xiang Wan, Tsung-Hui Chang

The impression is crucial for referring physicians to grasp key information, since it is distilled from the findings and the radiologists' reasoning.

Anatomy • Contrastive Learning +1

Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training

1 code implementation • 15 Sep 2022 • Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang

In addition, we conduct further analyses to verify the effectiveness of different components of our approach and various pre-training settings.

Self-Supervised Learning

Rethinking Bayesian Learning for Data Analysis: The Art of Prior and Inference in Sparsity-Aware Modeling

no code implementations • 28 May 2022 • Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang

However, a comeback of Bayesian methods is taking place that sheds new light on the design of deep neural networks, establishes firm links with Bayesian models, and inspires new paths for unsupervised learning, such as Bayesian tensor decomposition.

Gaussian Processes • Tensor Decomposition +1

Federated Stochastic Primal-dual Learning with Differential Privacy

no code implementations • 26 Apr 2022 • Yiwei Li, Shuai Wang, Tsung-Hui Chang, Chong-Yung Chi

Specifically, we show that, by guaranteeing $(\epsilon, \delta)$-DP for each client per communication round, the proposed algorithm guarantees $(\mathcal{O}(q\epsilon \sqrt{p T}), \delta)$-DP after $T$ communication rounds while maintaining an $\mathcal{O}(1/\sqrt{pTQ})$ convergence rate for a convex and non-smooth learning problem, where $Q$ is the number of local SGD steps, $p$ is the client sampling probability, $q=\max_{i} q_i/\sqrt{1-q_i}$ and $q_i$ is the data sampling probability of each client under PCP. A sketch of the per-round privatization step follows below.

Federated Learning
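
A sketch of a client-side per-round privatization step via the Gaussian mechanism (clip the update, then add calibrated noise). The clip norm and noise multiplier are illustrative assumptions, and calibrating them to a target $(\epsilon, \delta)$ is not shown:

import numpy as np

def privatize_update(delta_w, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Clip to bound the sensitivity, then add Gaussian noise.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(delta_w)
    clipped = delta_w * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=delta_w.shape)
    return clipped + noise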

Graph Enhanced Contrastive Learning for Radiology Findings Summarization

1 code implementation • ACL 2022 • Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, Tsung-Hui Chang

To address the limitation, we propose a unified framework for exploiting both extra knowledge and the original findings in an integrated way so that the critical information (i.e., key words and their relations) can be extracted in an appropriate way to facilitate impression generation.

Contrastive Learning

Word Graph Guided Summarization for Radiology Findings

1 code implementation • Findings (ACL) 2021 • Jinpeng Hu, Jianling Li, Zhihong Chen, Yaling Shen, Yan Song, Xiang Wan, Tsung-Hui Chang

In this paper, we propose a novel method for automatic impression generation, where a word graph is constructed from the findings to record the critical words and their relations, then a Word Graph guided Summarization model (WGSum) is designed to generate impressions with the help of the word graph. A toy word-graph construction is sketched below.

Text Summarization
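
A toy word-graph construction in the spirit of the description above: nodes are content words from the findings and edges link words that co-occur within a small window. The filtering rule and window size are assumptions; WGSum's actual graph builder and summarizer are not shown:

from collections import defaultdict

def build_word_graph(findings, window=3, min_len=4):
    # Crude content-word filter: keep words with at least min_len characters.
    tokens = [w for w in findings.lower().split() if len(w) >= min_len]
    graph = defaultdict(set)
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + window]:  # words within the co-occurrence window
            if u != w:
                graph[w].add(u)
                graph[u].add(w)
    return graph

g = build_word_graph("heart size is normal lungs are clear no pleural effusion")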

Federated Semi-Supervised Learning with Class Distribution Mismatch

no code implementations • 29 Oct 2021 • Zhiguo Wang, Xintong Wang, Ruoyu Sun, Tsung-Hui Chang

Similar to what is encountered in federated supervised learning, the class distributions of labeled/unlabeled data can be non-i.i.d.

Federated Learning

Low-rank Matrix Recovery With Unknown Correspondence

no code implementations • 15 Oct 2021 • Zhiwei Tang, Tsung-Hui Chang, Xiaojing Ye, Hongyuan Zha

We study a matrix recovery problem with unknown correspondence: given the observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown permutation matrix, we aim to recover the underlying matrix $M=[A, B]$.
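
A small sketch generating the observation model from the abstract, $M_o=[A,\tilde P B]$, where an unknown permutation scrambles the rows of $B$; the dimensions and rank are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
U = rng.normal(size=(n, r))                # shared row space makes M low-rank
A = U @ rng.normal(size=(r, 10))
B = U @ rng.normal(size=(r, 10))
P = np.eye(n)[rng.permutation(n)]          # the unknown permutation matrix
M_o = np.hstack([A, P @ B])                # what the recovery algorithm observes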

Quantized Federated Learning under Transmission Delay and Outage Constraints

no code implementations • 17 Jun 2021 • Yanmeng Wang, Yanqing Xu, Qingjiang Shi, Tsung-Hui Chang

Federated learning (FL) has been recognized as a viable distributed learning paradigm that trains a machine learning model collaboratively with massive mobile devices at the wireless edge while protecting user privacy.

Federated Learning • Quantization

An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization

1 code implementation • 12 May 2021 • Lunchen Xie, Jiaqi Liu, Songtao Lu, Tsung-Hui Chang, Qingjiang Shi

XGBoost is one of the most widely used machine learning models in the industry due to its superior learning accuracy and efficiency.

Distributed Optimization

Decentralized Non-Convex Learning with Linearly Coupled Constraints

no code implementations • 9 Mar 2021 • Jiawei Zhang, Songyang Ge, Tsung-Hui Chang, Zhi-Quan Luo

Motivated by the need for decentralized learning, this paper aims at designing a distributed algorithm for solving nonconvex problems with general linear constraints over a multi-agent network.

Optimization and Control • Systems and Control

Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment

4 code implementations • 16 Nov 2020 • Haoran Sun, Wenqiang Pu, Minghe Zhu, Xiao Fu, Tsung-Hui Chang, Mingyi Hong

We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting knowledge learned from the previous episodes.

Continual Learning • Fairness

Distributed Stochastic Consensus Optimization with Momentum for Nonconvex Nonsmooth Problems

no code implementations • 10 Nov 2020 • Zhiguo Wang, Jiawei Zhang, Tsung-Hui Chang, Jian Li, Zhi-Quan Luo

While many distributed optimization algorithms have been proposed for solving smooth or convex problems over networks, few of them can handle non-convex and non-smooth problems.

Distributed Optimization

Learning to Beamform in Heterogeneous Massive MIMO Networks

no code implementations • 8 Nov 2020 • Minghe Zhu, Tsung-Hui Chang, Mingyi Hong

It is well-known that the problem of finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is challenging because of its non-convexity, and conventional optimization based algorithms suffer from high computational costs.

Generating Radiology Reports via Memory-driven Transformer

2 code implementations • EMNLP 2020 • Zhihong Chen, Yan Song, Tsung-Hui Chang, Xiang Wan

In particular, to the best of our knowledge, this is the first work reporting generation results on MIMIC-CXR.

Decoder • Text Generation

Federated Matrix Factorization: Algorithm Design and Application to Data Clustering

no code implementations • 12 Feb 2020 • Shuai Wang, Tsung-Hui Chang

Recent demands on data privacy have called for federated learning (FL) as a new distributed learning paradigm in massive and heterogeneous networks.

Clustering • Federated Learning

Distributed Learning in the Non-Convex World: From Batch to Streaming Data, and Beyond

no code implementations • 14 Jan 2020 • Tsung-Hui Chang, Mingyi Hong, Hoi-To Wai, Xinwei Zhang, Songtao Lu

In particular, we provide a selective review of the recent techniques developed for optimizing non-convex models (i.e., problem classes), processing batch and streaming data (i.e., data types), over networks in a distributed manner (i.e., communication and computation paradigms).

Clustering by Orthogonal NMF Model and Non-Convex Penalty Optimization

1 code implementation • 3 Jun 2019 • Shuai Wang, Tsung-Hui Chang, Ying Cui, Jong-Shi Pang

We then apply a non-convex penalty (NCP) approach to add them to the objective as penalty terms, leading to a problem that is efficiently solvable. A toy penalized-NMF sketch follows below.

Clustering
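
A toy sketch of orthogonality-penalized NMF: the ONMF constraint $HH^T=I$ is moved into the objective as a penalty term, and plain projected gradient steps are used. The quartic penalty and step size are illustrative; the paper's NCP formulation and solver are not reproduced:

import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(30, 20)))      # nonnegative data matrix
k, lam, lr = 4, 10.0, 1e-3
W = np.abs(rng.normal(size=(30, k)))
H = np.abs(rng.normal(size=(k, 20)))
for _ in range(500):
    R = W @ H - X
    gW = R @ H.T
    gH = W.T @ R + lam * (H @ H.T - np.eye(k)) @ H  # gradient of the penalty
    W = np.maximum(W - lr * gW, 0.0)                # projection keeps nonnegativity
    H = np.maximum(H - lr * gH, 0.0)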

Stochastic Proximal Gradient Consensus Over Random Networks

no code implementations • 28 Nov 2015 • Mingyi Hong, Tsung-Hui Chang

We consider solving a convex, possibly stochastic optimization problem over a randomly time-varying multi-agent network.

Optimization and Control • Information Theory

Asynchronous Distributed ADMM for Large-Scale Optimization - Part II: Linear Convergence Analysis and Numerical Performance

no code implementations • 9 Sep 2015 • Tsung-Hui Chang, Wei-Cheng Liao, Mingyi Hong, Xiangfeng Wang

Unfortunately, a direct synchronous implementation of such an algorithm does not scale well with the problem size, as the algorithm speed is limited by the slowest computing nodes.

Asynchronous Distributed ADMM for Large-Scale Optimization - Part I: Algorithm and Convergence Analysis

no code implementations • 9 Sep 2015 • Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao, Xiangfeng Wang

By formulating the learning problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. A minimal consensus-ADMM sketch follows below.

Distributed Optimization
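
A minimal consensus-ADMM sketch over a star topology for $\min_x \sum_i f_i(x)$, with quadratic $f_i$ so each worker update has a closed form; the objectives and penalty parameter are illustrative assumptions, and the asynchrony analyzed in these papers is not modeled:

import numpy as np

rng = np.random.default_rng(0)
N, d, rho = 5, 3, 1.0
A = [rng.normal(size=(10, d)) for _ in range(N)]   # f_i(x) = 0.5 * ||A_i x - b_i||^2
b = [rng.normal(size=10) for _ in range(N)]

x = [np.zeros(d) for _ in range(N)]
lam = [np.zeros(d) for _ in range(N)]
z = np.zeros(d)
for _ in range(100):
    # Workers, in parallel: argmin_x f_i(x) + lam_i^T (x - z) + (rho/2) ||x - z||^2
    x = [np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                         A[i].T @ b[i] + rho * z - lam[i]) for i in range(N)]
    z = np.mean([x[i] + lam[i] / rho for i in range(N)], axis=0)  # master averages
    lam = [lam[i] + rho * (x[i] - z) for i in range(N)]           # dual update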
