1 code implementation • NAACL (BioNLP) 2021 • Yang Liu, Yuanhe Tian, Tsung-Hui Chang, Song Wu, Xiang Wan, Yan Song
Chinese word segmentation (CWS) and medical concept recognition are two fundamental tasks to process Chinese electronic medical records (EMRs) and play important roles in downstream tasks for understanding Chinese EMRs.
1 code implementation • Findings (NAACL) 2022 • Yang Liu, Jinpeng Hu, Xiang Wan, Tsung-Hui Chang
Few-shot relation extraction refers to fast adaptation to novel relation classes from only a few samples, achieved through training on the known relation classes.
1 code implementation • Findings (NAACL) 2022 • Jinpeng Hu, He Zhao, Dan Guo, Xiang Wan, Tsung-Hui Chang
In doing so, label information contained in the embedding vectors can be effectively transferred to the target domain, and the Bi-LSTM can further model the label relationships among different domains through a pretrain-then-finetune setting.
Cross-Domain Named Entity Recognition
no code implementations • 15 Aug 2024 • Kexin Zhang, Yanqing Xu, Ruisi He, Chao Shen, Tsung-Hui Chang
The primary objective is to minimize the total transmit power while meeting the signal-to-interference-plus-noise ratio (SINR) requirements for communication and sensing under fronthaul capacity constraints, resulting in a joint fronthaul compression and beamforming design (J-FCBD) problem.
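For orientation, this problem class can be written schematically as below; the notation (beamformers $\mathbf{w}_k$, compression noise covariance $\mathbf{Q}$, SINR targets $\gamma_k$, fronthaul capacity $C_{\max}$) is illustrative rather than the paper's exact formulation:

$$
\begin{aligned}
\min_{\{\mathbf{w}_k\},\,\mathbf{Q}}\quad & \sum_{k}\|\mathbf{w}_k\|^2 \\
\text{s.t.}\quad & \mathrm{SINR}_k\big(\{\mathbf{w}_k\},\mathbf{Q}\big)\ge \gamma_k \quad \text{for all communication and sensing links},\\
& C_{\mathrm{fronthaul}}(\mathbf{Q})\le C_{\max}.
\end{aligned}
$$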
no code implementations • 29 May 2024 • Zhiwei Tang, Jiangweizhi Peng, Jiasheng Tang, Mingyi Hong, Fan Wang, Tsung-Hui Chang
In this work, we focus on the alignment problem of diffusion models with a continuous reward function, which represents specific objectives for downstream tasks, such as increasing darkness or improving the aesthetics of images.
1 code implementation • 15 Feb 2024 • Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang
Our experiments demonstrate that ParaTAA can decrease the inference steps required by common sequential sampling algorithms such as DDIM and DDPM by a factor of 4$\sim$14.
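The core mechanism behind such parallel-in-time accelerations can be illustrated on a toy recursion: instead of running $T$ dependent steps one by one, the whole trajectory is treated as the unknown of a fixed-point equation and refined by parallel sweeps. The contractive map `f` below is hypothetical and stands in for the denoiser update of a real sampler.

```python
# Toy sketch of the parallel fixed-point idea: the sequential recursion
# x_{t+1} = x_t + dt * f(x_t) is solved for the whole trajectory at once
# by Jacobi-style sweeps, each of which updates every step in parallel.
import numpy as np

def f(x):
    return -x  # toy drift chosen so the fixed-point iteration contracts fast

T, dt = 50, 0.5
x0 = np.ones(4)

# Sequential baseline: T dependent steps, one after another.
x_seq = x0.copy()
for _ in range(T):
    x_seq = x_seq + dt * f(x_seq)

# Parallel view: update all T trajectory points simultaneously each sweep.
X = np.tile(x0, (T, 1))
for sweep in range(1, T + 1):
    prev = np.vstack([x0, X[:-1]])   # each point pulls from its predecessor
    X_new = prev + dt * f(prev)
    if np.max(np.abs(X_new - X)) < 1e-6:
        break
    X = X_new

print(f"{sweep} parallel sweeps vs {T} sequential steps")
print(np.max(np.abs(X_new[-1] - x_seq)))  # final states agree closely
```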
1 code implementation • 15 Feb 2024 • Zhiwei Tang, Tsung-Hui Chang
In Federated Learning (FL), a framework to train machine learning models across distributed data, well-known algorithms like FedAvg tend to have slow convergence rates, resulting in high communication costs during training.
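For reference, a minimal sketch of one FedAvg round is given below (the model, loaders, and hyperparameters are hypothetical placeholders); its client drift under heterogeneous data is what causes the slow convergence this work targets.

```python
# Minimal FedAvg round: each client runs a few local SGD steps from the
# current global model, then the server averages the resulting weights.
import copy
import torch

def local_sgd(model, data_loader, lr=0.01, steps=5):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), data_loader):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    states = [local_sgd(copy.deepcopy(global_model), dl) for dl in client_loaders]
    # Server step: coordinate-wise average of client parameters.
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Toy usage with synthetic per-client data.
model = torch.nn.Linear(4, 1)
loaders = [[(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(5)]
           for _ in range(3)]
model = fedavg_round(model, loaders)
```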
no code implementations • 22 May 2023 • Xiaotong Zhao, Mian Li, Bo Wang, Enbin Song, Tsung-Hui Chang, Qingjiang Shi
However, current detection methods tailored to DBP only consider ideal white Gaussian noise scenarios, while in practice, the noise is often colored due to interference from neighboring cells.
1 code implementation • 7 Mar 2023 • Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang
In this study, we delve into an emerging optimization challenge involving a black-box objective function that can only be gauged via a ranking oracle, a situation frequently encountered in real-world scenarios, especially when the function is evaluated by human judges.
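One generic way to exploit such ranking feedback, sketched under the assumption of a batch-ranking oracle and not necessarily matching the paper's algorithm, is rank-weighted recombination of random perturbations, as in evolution strategies:

```python
# Rank-based search step: sample perturbations, have the oracle rank them,
# and move toward the better-ranked directions. Only the ranking is used;
# no function values or gradients are ever observed.
import numpy as np

def rank_based_step(x, ranking_oracle, n=8, sigma=0.1, lr=0.5):
    dirs = np.random.randn(n, x.size)
    candidates = x + sigma * dirs
    order = ranking_oracle(candidates)            # indices, best first
    weights = np.log(n + 0.5) - np.log(np.arange(1, n + 1))
    weights /= weights.sum()                      # larger weight for better ranks
    step = (weights[:, None] * dirs[order]).sum(0)
    return x + lr * sigma * step

# Toy oracle: ranks candidates by a hidden quadratic loss (lower is better).
oracle = lambda C: np.argsort(((C - 3.0) ** 2).sum(1))
x = np.zeros(5)
for _ in range(500):
    x = rank_based_step(x, oracle)
print(x)  # approaches the hidden optimum at 3.0
```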
no code implementations • 4 Mar 2023 • Shutao Zhang, Xinzhi Ning, Xi Zheng, Qingjiang Shi, Tsung-Hui Chang, Zhi-Quan Luo
Localized channel modeling is crucial for offline performance optimization of 5G cellular networks, but the existing channel models are for general scenarios and do not capture local geographical structures.
no code implementations • 6 Feb 2023 • Zhiwei Tang, Yanmeng Wang, Tsung-Hui Chang
In this paper, we propose a novel noisy perturbation scheme with a general symmetric noise distribution for sign-based compression, which not only allows one to flexibly control the tradeoff between gradient bias and convergence performance, but also provides a unified viewpoint to existing stochastic sign-based methods.
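A minimal sketch of the mechanism, using a Gaussian as one possible symmetric noise distribution: dithering the gradient before taking its sign turns the hard sign bias into a tunable one.

```python
# Sign compression with symmetric noise perturbation: with zero noise the
# compressor sends sign(g) (1 bit, heavily biased toward the sign pattern);
# as the noise scale grows, the expected compressed value becomes nearly
# proportional to g (less bias), at the cost of higher variance.
import numpy as np

def noisy_sign(grad, noise_scale, rng):
    return np.sign(grad + noise_scale * rng.standard_normal(grad.shape))

rng = np.random.default_rng(0)
g = np.array([0.05, -2.0, 0.3])
for s in (0.0, 0.5, 2.0):
    est = np.mean([noisy_sign(g, s, rng) for _ in range(20000)], axis=0)
    print(s, est)  # average drifts from sign(g) toward a rescaled g
```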
1 code implementation • 8 Jan 2023 • Yanmeng Wang, Qingjiang Shi, Tsung-Hui Chang
In view of this, we develop a new FL algorithm that is tailored to BN, called FedTAN, which is capable of achieving robust FL performance under a variety of data distributions via iterative layer-wise parameter aggregation.
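As a simplified illustration of why BN demands special treatment (not FedTAN's actual iterative layer-wise procedure, which involves extra per-layer communication), the sketch below aggregates BN's data-dependent running statistics separately from the trainable weights:

```python
# BN-aware aggregation sketch: BN running statistics reflect each client's
# local data distribution, so they are combined by sample-count weighting
# rather than plain parameter averaging. This is a simplification of the
# layer-wise treatment developed in FedTAN.
import torch

def aggregate_with_bn(states, client_sizes):
    total = sum(client_sizes)
    out = {}
    for key in states[0]:
        if "running_mean" in key or "running_var" in key:
            # weight data-dependent BN statistics by client sample counts
            out[key] = sum(n / total * s[key] for n, s in zip(client_sizes, states))
        elif "num_batches_tracked" in key:
            out[key] = states[0][key]            # integer counter, not averaged
        else:
            out[key] = torch.stack([s[key] for s in states]).mean(0)
    return out

m = torch.nn.BatchNorm1d(4)
states = [m.state_dict(), m.state_dict()]
print(aggregate_with_bn(states, [100, 50]).keys())
```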
no code implementations • 3 Dec 2022 • Shuai Wang, Yanqing Xu, Zhiguo Wang, Tsung-Hui Chang, Tony Q. S. Quek, Defeng Sun
In this paper, we first reveal that the federated ADMM is essentially a client-variance-reduced algorithm.
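To make the connection concrete, recall the standard consensus form of federated ADMM (a generic sketch, not necessarily the exact variant analyzed): for clients $i = 1, \dots, N$ minimizing $\sum_i f_i(x)$,

$$
\begin{aligned}
x_i^{k+1} &= \arg\min_{x}\; f_i(x) + \langle \lambda_i^{k}, x - z^{k} \rangle + \tfrac{\rho}{2}\|x - z^{k}\|^2, \\
z^{k+1} &= \frac{1}{N}\sum_{i=1}^{N}\Big(x_i^{k+1} + \tfrac{1}{\rho}\lambda_i^{k}\Big), \\
\lambda_i^{k+1} &= \lambda_i^{k} + \rho\,\big(x_i^{k+1} - z^{k+1}\big).
\end{aligned}
$$

The dual variable $\lambda_i$ acts like a per-client control variate that corrects local drift, which is the sense in which the method behaves as a client-variance-reduced algorithm.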
no code implementations • 18 Oct 2022 • Xinrao Li, Tong Zhang, Shuai Wang, Guangxu Zhu, Rui Wang, Tsung-Hui Chang
However, the wireless channels between the edge server and the autonomous vehicles are time-varying due to the high mobility of the vehicles.
no code implementations • 15 Oct 2022 • Jinpeng Hu, Zhihong Chen, Yang Liu, Xiang Wan, Tsung-Hui Chang
The impression is crucial for referring physicians to grasp key information, since it distills the findings and the reasoning of radiologists.
1 code implementation • 15 Sep 2022 • Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang
In addition, we conduct further analysis to verify the effectiveness of different components of our approach and of various pre-training settings.
no code implementations • 28 May 2022 • Lei Cheng, Feng Yin, Sergios Theodoridis, Sotirios Chatzis, Tsung-Hui Chang
However, a comeback of Bayesian methods is taking place, shedding new light on the design of deep neural networks, establishing firm links with Bayesian models, and inspiring new paths for unsupervised learning, such as Bayesian tensor decomposition.
1 code implementation • Findings (ACL) 2022 • Yang Liu, Jinpeng Hu, Xiang Wan, Tsung-Hui Chang
We argue that relation information can be introduced more explicitly and effectively into the model.
1 code implementation • NAACL 2022 • Jinpeng Hu, Yaling Shen, Yang Liu, Xiang Wan, Tsung-Hui Chang
Named entity recognition (NER) is a fundamental and important task in NLP, aiming at identifying named entities (NEs) from free text.
Ranked #1 on Named Entity Recognition (NER) on WNUT 2016
no code implementations • 26 Apr 2022 • Yiwei Li, Shuai Wang, Tsung-Hui Chang, Chong-Yung Chi
Specifically, we show that, by guaranteeing $(\epsilon, \delta)$-DP for each client per communication round, the proposed algorithm guarantees $(\mathcal{O}(q\epsilon \sqrt{p T}), \delta)$-DP after $T$ communication rounds while maintaining an $\mathcal{O}(1/\sqrt{pTQ})$ convergence rate for a convex and non-smooth learning problem, where $Q$ is the number of local SGD steps, $p$ is the client sampling probability, $q=\max_{i} q_i/\sqrt{1-q_i}$ and $q_i$ is the data sampling probability of each client under PCP.
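As a quick numeric illustration of the stated bound, with hypothetical values for $\epsilon$, $T$, $p$, and the $q_i$ (the constants hidden by the $\mathcal{O}(\cdot)$ are omitted):

```python
# Plugging illustrative values into the stated scaling: with per-round
# (eps, delta)-DP, T rounds, client sampling probability p, and
# q = max_i q_i / sqrt(1 - q_i), the total privacy loss grows like
# q * eps * sqrt(p * T).
import math

eps, T, p = 0.5, 100, 0.1
q_i = [0.01, 0.05, 0.02]                      # per-client data sampling probs
q = max(qi / math.sqrt(1 - qi) for qi in q_i)
total_eps_order = q * eps * math.sqrt(p * T)  # O(.) scaling, constants omitted
print(total_eps_order)
```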
1 code implementation • ACL 2022 • Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, Tsung-Hui Chang
To address the limitation, we propose a unified framework for exploiting both extra knowledge and the original findings in an integrated way, so that the critical information (i.e., key words and their relations) can be extracted appropriately to facilitate impression generation.
1 code implementation • Findings (ACL) 2021 • Jinpeng Hu, Jianling Li, Zhihong Chen, Yaling Shen, Yan Song, Xiang Wan, Tsung-Hui Chang
In this paper, we propose a novel method for automatic impression generation, where a word graph is constructed from the findings to record the critical words and their relations; a Word Graph guided Summarization model (WGSum) is then designed to generate impressions with the help of the word graph.
no code implementations • 29 Oct 2021 • Zhiguo Wang, Xintong Wang, Ruoyu Sun, Tsung-Hui Chang
Similar to what is encountered in federated supervised learning, the class distribution of labeled/unlabeled data could be non-i.i.d. across clients.
no code implementations • 15 Oct 2021 • Zhiwei Tang, Tsung-Hui Chang, Xiaojing Ye, Hongyuan Zha
We study a matrix recovery problem with unknown correspondence: given the observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown permutation matrix, we aim to recover the underlying matrix $M=[A, B]$.
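A toy construction of this observation model may help; the sketch below only builds a problem instance and shows that the hidden permutation generically destroys the low-rank structure of $M$:

```python
# Construct M_o = [A, P~ B]: a low-rank M = [A, B] is split and the rows of
# B are secretly permuted. Recovery must jointly estimate the permutation
# and the matrix; this sketch only sets up the problem instance.
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, 8))  # rank-2 truth
A, B = M[:, :4], M[:, 4:]
perm = rng.permutation(n)
M_o = np.hstack([A, B[perm]])  # observed matrix with scrambled correspondence

# Sanity check: the permutation inflates the rank (2 -> 4 here).
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(M_o))
```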
no code implementations • 17 Jun 2021 • Yanmeng Wang, Yanqing Xu, Qingjiang Shi, Tsung-Hui Chang
Federated learning (FL) has been recognized as a viable distributed learning paradigm that collaboratively trains a machine learning model with massive mobile devices at the wireless edge while protecting user privacy.
1 code implementation • 12 May 2021 • Lunchen Xie, Jiaqi Liu, Songtao Lu, Tsung-Hui Chang, Qingjiang Shi
XGBoost is one of the most widely used machine learning models in the industry due to its superior learning accuracy and efficiency.
1 code implementation • 3 May 2021 • Haoran Sun, Wenqiang Pu, Xiao Fu, Tsung-Hui Chang, Mingyi Hong
However, it is often challenging for these approaches to learn in a dynamic environment.
1 code implementation • 27 Apr 2021 • Wenwen Min, Taosheng Xu, Xiang Wan, Tsung-Hui Chang
Non-negative matrix factorization (NMF) is a powerful tool for dimensionality reduction and clustering.
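For context, a minimal sketch of classical NMF with Lee-Seung multiplicative updates, the basic tool on which such methods build (the paper's variant adds structure not shown here):

```python
# Classical NMF: factor a non-negative X into non-negative W, H by
# multiplicative updates, which keep the factors non-negative and
# monotonically decrease ||X - WH||_F^2.
import numpy as np

def nmf(X, r, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((20, 15)))
W, H = nmf(X, r=3)
print(np.linalg.norm(X - W @ H))  # residual after factorization
```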
no code implementations • 9 Mar 2021 • Jiawei Zhang, Songyang Ge, Tsung-Hui Chang, Zhi-Quan Luo
Motivated by the need for decentralized learning, this paper aims at designing a distributed algorithm for solving nonconvex problems with general linear constraints over a multi-agent network.
Optimization and Control • Systems and Control
4 code implementations • 16 Nov 2020 • Haoran Sun, Wenqiang Pu, Minghe Zhu, Xiao Fu, Tsung-Hui Chang, Mingyi Hong
We propose to build the notion of continual learning (CL) into the modeling process of learning wireless systems, so that the learning model can incrementally adapt to the new episodes, without forgetting the knowledge learned from the previous episodes.
no code implementations • 10 Nov 2020 • Zhiguo Wang, Jiawei Zhang, Tsung-Hui Chang, Jian Li, Zhi-Quan Luo
While many distributed optimization algorithms have been proposed for solving smooth or convex problems over networks, few of them can handle non-convex and non-smooth problems.
no code implementations • 8 Nov 2020 • Minghe Zhu, Tsung-Hui Chang, Mingyi Hong
It is well-known that the problem of finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is challenging because of its non-convexity, and conventional optimization based algorithms suffer from high computational costs.
2 code implementations • EMNLP 2020 • Zhihong Chen, Yan Song, Tsung-Hui Chang, Xiang Wan
In particular, to the best of our knowledge, this is the first work to report generation results on MIMIC-CXR.
no code implementations • 12 Feb 2020 • Shuai Wang, Tsung-Hui Chang
Recent demands on data privacy have called for federated learning (FL) as a new distributed learning paradigm in massive and heterogeneous networks.
no code implementations • 11 Feb 2020 • Junjie Sheng, Xiangfeng Wang, Bo Jin, Junchi Yan, Wenhao Li, Tsung-Hui Chang, Jun Wang, Hongyuan Zha
This work explores the large-scale multi-agent communication mechanism under a multi-agent reinforcement learning (MARL) setting.
no code implementations • 14 Jan 2020 • Tsung-Hui Chang, Mingyi Hong, Hoi-To Wai, Xinwei Zhang, Songtao Lu
In particular, we provide a selective review of the recent techniques developed for optimizing non-convex models (i.e., problem classes), processing batch and streaming data (i.e., data types), over networks in a distributed manner (i.e., communication and computation paradigms).
1 code implementation • 3 Jun 2019 • Shuai Wang, Tsung-Hui Chang, Ying Cui, Jong-Shi Pang
We then apply a non-convex penalty (NCP) approach to add them to the objective as penalty terms, leading to a problem that is efficiently solvable.
no code implementations • 28 Nov 2015 • Mingyi Hong, Tsung-Hui Chang
We consider solving a convex, possibly stochastic optimization problem over a randomly time-varying multi-agent network.
Optimization and Control • Information Theory
no code implementations • 9 Sep 2015 • Tsung-Hui Chang, Wei-Cheng Liao, Mingyi Hong, Xiangfeng Wang
Unfortunately, a direct synchronous implementation of such an algorithm does not scale well with the problem size, as the algorithm's speed is limited by the slowest computing nodes.
no code implementations • 9 Sep 2015 • Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao, Xiangfeng Wang
By formulating the learning problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology.
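A minimal sketch of consensus ADMM over a star topology, using a distributed least-squares problem as a stand-in for the learning problem (the names and the quadratic loss are illustrative):

```python
# Consensus ADMM on a star network: each worker i holds (A_i, b_i) and
# solves its local subproblem in parallel; the master averages the local
# copies into the consensus variable z and broadcasts it back.
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=100):
    d, N = As[0].shape[1], len(As)
    x = [np.zeros(d) for _ in range(N)]    # local copies at the workers
    lam = [np.zeros(d) for _ in range(N)]  # dual variables
    z = np.zeros(d)                        # consensus variable at the master
    for _ in range(iters):
        for i in range(N):                 # fully parallel across workers
            x[i] = np.linalg.solve(As[i].T @ As[i] + rho * np.eye(d),
                                   As[i].T @ bs[i] + rho * z - lam[i])
        z = np.mean([x[i] + lam[i] / rho for i in range(N)], axis=0)
        for i in range(N):
            lam[i] += rho * (x[i] - z)     # dual ascent on the consensus gap
    return z

rng = np.random.default_rng(0)
x_true = rng.standard_normal(3)
As = [rng.standard_normal((10, 3)) for _ in range(4)]
bs = [A @ x_true for A in As]
print(consensus_admm(As, bs))  # converges to x_true
```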