no code implementations • EANCS 2021 • Yi Huang, Junlan Feng, Xiaoting Wu, Xiaoyu Du
Our findings are: the performance variance of generative DSTs is not only due to the model structure itself, but can also be attributed to the distribution of cross-domain values.
no code implementations • 26 Jan 2023 • Runze Lei, Pinghui Wang, Junzhou Zhao, Lin Lan, Jing Tao, Chao Deng, Junlan Feng, Xidian Wang, Xiaohong Guan
In this work, we propose a novel FL framework for graph data, FedCog, to efficiently handle coupled graphs, a kind of distributed graph data that widely exists in a variety of real-world applications such as mobile carriers' communication networks and banks' transaction networks.
1 code implementation • 17 Oct 2022 • Hong Liu, Yucheng Cai, Zhijian Ou, Yi Huang, Junlan Feng
Second, an important ingredient of a US is that the user goal can be effectively incorporated and tracked; but how to flexibly integrate goal state tracking and develop an end-to-end trainable US for multiple domains has remained a challenge.
no code implementations • 13 Oct 2022 • Hong Liu, Zhijian Ou, Yi Huang, Junlan Feng
Recently, there has been progress in supervised fine-tuning of pretrained GPT-2 to build end-to-end task-oriented dialog (TOD) systems.
1 code implementation • 27 Sep 2022 • Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang, Junlan Feng
Recently, there has emerged a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games.
1 code implementation • COLING 2022 • Yutao Mou, Keqing He, Yanan Wu, Pei Wang, Jingang Wang, Wei Wu, Yi Huang, Junlan Feng, Weiran Xu
Traditional intent classification models are based on a pre-defined intent set and only recognize limited in-domain (IND) intent classes.
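A common baseline for moving beyond a fixed in-domain intent set is to flag low-confidence utterances as out-of-domain. The sketch below uses the max-softmax-probability heuristic with a hypothetical threshold; it illustrates the general idea, not this paper's specific method.

```python
import numpy as np

def detect_ood(logits, threshold=0.7):
    """Max-softmax-probability OOD heuristic (a common baseline, not the
    paper's exact method): if no in-domain (IND) intent is confident
    enough, flag the utterance as out-of-domain (OOD)."""
    z = np.asarray(logits, dtype=float)
    exp = np.exp(z - z.max())          # subtract max for numerical stability
    probs = exp / exp.sum()
    best = int(probs.argmax())
    return ("OOD", None) if probs[best] < threshold else ("IND", best)

print(detect_ood([4.0, 0.1, 0.2]))   # peaked distribution: in-domain intent 0
print(detect_ood([1.0, 0.9, 1.1]))   # flat distribution: out-of-domain
```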
no code implementations • COLING 2022 • Guanting Dong, Daichi Guo, LiWen Wang, Xuefeng Li, Zechen Wang, Chen Zeng, Keqing He, Jinzheng Zhao, Hao Lei, Xinyue Cui, Yi Huang, Junlan Feng, Weiran Xu
Most existing slot filling models tend to memorize inherent patterns of entities and corresponding contexts from training data.
1 code implementation • SIGDIAL (ACL) 2022 • Yucheng Cai, Hong Liu, Zhijian Ou, Yi Huang, Junlan Feng
In this paper, we propose to apply JSA to semi-supervised learning of the latent state TOD models, which is referred to as JSA-TOD.
1 code implementation • 6 Jul 2022 • Zhijian Ou, Junlan Feng, Juanzi Li, Yakun Li, Hong Liu, Hao Peng, Yi Huang, Jiangjiang Zhao
A challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems, Co-located with EMNLP2022 SereTOD Workshop.
no code implementations • 26 Jun 2022 • Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity.
no code implementations • 16 Jun 2022 • Yingying Gao, Junlan Feng, Tianrui Wang, Chao Deng, Shilei Zhang
Analysis shows that our proposed approach brings better uniformity to the trained model and noticeably enlarges the CTC spikes.
1 code implementation • 6 Jun 2022 • Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, the generation performance is largely restricted by the amount of labeled data in downstream tasks, particularly in data-to-text generation tasks.
1 code implementation • 25 Apr 2022 • Shuo Zhang, Junzhou Zhao, Pinghui Wang, Yu Li, Yi Huang, Junlan Feng
Multi-action dialog policy (MADP), which generates multiple atomic dialog actions per turn, has been widely applied in task-oriented dialog systems to provide expressive and efficient system responses.
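Generating multiple atomic dialog actions per turn is naturally framed as multi-label prediction: each action gets an independent sigmoid decision rather than a single softmax choice. The minimal sketch below assumes a hypothetical action inventory and policy-network scores; it shows the decoding step only.

```python
import numpy as np

def predict_actions(logits, threshold=0.5):
    """Multi-label decoding: each atomic dialog action is decided
    independently with a sigmoid, so several actions can fire per turn."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return [i for i, p in enumerate(probs) if p >= threshold]

# Hypothetical action inventory and logits, for illustration only.
ACTIONS = ["inform_price", "request_area", "offer_booking", "bye"]
logits = [2.0, -1.5, 0.8, -3.0]      # scores from some policy network

chosen = predict_actions(logits)
print([ACTIONS[i] for i in chosen])  # sigmoid(2.0) and sigmoid(0.8) pass 0.5
```

A single-softmax policy could emit only one of these actions per turn; the sigmoid-per-action formulation is what makes the response both expressive and efficient.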
no code implementations • 19 Apr 2022 • Zhuoran Li, Xing Wang, Ling Pan, Lin Zhu, Zhendong Wang, Junlan Feng, Chao Deng, Longbo Huang
A2C-GS consists of three novel components, including a verifier to validate the correctness of a generated network topology, a graph neural network (GNN) to efficiently approximate topology rating, and a DRL actor layer to conduct a topology search.
1 code implementation • 13 Apr 2022 • Hong Liu, Yucheng Cai, Zhijian Ou, Yi Huang, Junlan Feng
Recently, Transformer-based pretrained language models (PLMs), such as GPT-2 and T5, have been leveraged to build generative task-oriented dialog (TOD) systems.
no code implementations • 1 Apr 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Junlan Feng, Shilei Zhang
Joint training of a speech enhancement (SE) model and an automatic speech recognition (ASR) model is a common solution for robust ASR in noisy environments.
no code implementations • 25 Feb 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Yanan Chen, Junlan Feng, Shilei Zhang
Therefore, we previously proposed a harmonic gated compensation network (HGCN) to predict the full harmonic locations based on the unmasked harmonics and process the result of a coarse enhancement module to recover the masked harmonics.
1 code implementation • 30 Jan 2022 • Tianrui Wang, Weibin Zhu, Yingying Gao, Junlan Feng, Shilei Zhang
Mask processing in the time-frequency (T-F) domain through the neural network has been one of the mainstreams for single-channel speech enhancement.
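The T-F masking pipeline mentioned above follows a standard shape: transform the noisy waveform to a spectrogram, scale each time-frequency bin by a predicted mask, and invert. The sketch below uses a minimal hand-rolled STFT and an identity mask purely to illustrate the data flow; the mask itself would come from a neural network.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    # Minimal STFT: frame the signal, apply a Hann window, FFT each frame.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=-1)

def apply_mask(noisy_spec, mask):
    # Mask-based enhancement keeps the noisy phase and scales the
    # magnitude bin-by-bin; the mask is what the network predicts.
    return mask * noisy_spec

x = np.random.default_rng(0).standard_normal(1024)
spec = stft(x)                        # shape: (frames, n_fft // 2 + 1)
ideal = apply_mask(spec, np.ones_like(spec.real))
assert np.allclose(ideal, spec)       # an all-ones mask changes nothing
```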
no code implementations • 1 Nov 2021 • Xing Wang, Juan Zhao, Lin Zhu, Xu Zhou, Zhao Li, Junlan Feng, Chao Deng, Yong Zhang
AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture various Receptive Fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge with multi-step forecasting.
1 code implementation • 9 Sep 2021 • Hong Liu, Yucheng Cai, Zhenru Lin, Zhijian Ou, Yi Huang, Junlan Feng
In this paper, we propose Variational Latent-State GPT model (VLS-GPT), which is the first to combine the strengths of the two approaches.
no code implementations • 1 Jan 2021 • Xiaolei Hua, Su Wang, Lin Zhu, Dong Zhou, Junlan Feng, Yiting Wang, Chao Deng, Shuo Wang, Mingtao Mei
However, due to the complex correlations and diverse temporal patterns of large-scale multivariate time series, building a general unsupervised anomaly detection model with a high F1-score and good timeliness remains a challenging task.
no code implementations • 1 Jan 2021 • Xing Wang, Lin Zhu, Juan Zhao, Zhou Xu, Zhao Li, Junlan Feng, Chao Deng
Spatial-temporal data forecasting is of great importance for industries such as telecom network operation and transportation management.
1 code implementation • 15 Dec 2020 • Shuo Zhang, Junzhou Zhao, Pinghui Wang, Nuo Xu, Yang Yang, Yiting Liu, Yi Huang, Junlan Feng
This will result in the issue of contract inconsistencies, which may severely impair the legal validity of the contract.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Yi Huang, Junlan Feng, Shuo Ma, Xiaoyu Du, Xiaoting Wu
In this paper, we propose a meta-learning based semi-supervised explicit dialogue state tracker (SEDST) for neural dialogue generation, denoted as MEDST.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Fanyu Meng, Junlan Feng, Danping Yin, Si Chen, Min Hu
Syntactic information is essential for both sentiment analysis (SA) and aspect-based sentiment analysis (ABSA).
1 code implementation • EMNLP 2020 • Yichi Zhang, Zhijian Ou, Huixin Wang, Junlan Feng
In this paper we aim at alleviating the reliance on belief state labels in building end-to-end dialog systems, by leveraging unlabeled dialog data towards semi-supervised learning.
Ranked #2 on End-To-End Dialogue Modelling on MULTIWOZ 2.1
no code implementations • ACL 2020 • Yi Huang, Junlan Feng, Min Hu, Xiaoting Wu, Xiaoyu Du, Shuo Ma
The state-of-the-art accuracy for DST is below 50% for a multi-domain dialogue task.
no code implementations • 4 Nov 2018 • Yinpei Dai, Yichi Zhang, Zhijian Ou, Yanmeng Wang, Junlan Feng
Second, the one-hot encoding of slot labels ignores the semantic meanings and relations for slots, which are implicit in their natural language descriptions.
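The point about one-hot slot labels can be made concrete: in one-hot space every pair of distinct slots is equally unrelated, whereas embedding the natural-language slot descriptions recovers relations between similar slots. The toy word vectors below are hypothetical and exist only to illustrate the contrast.

```python
import numpy as np

# Toy 2-D word vectors (hypothetical, for illustration only).
WORD_VECS = {
    "departure": np.array([1.0, 0.0]),
    "arrival":   np.array([0.9, 0.1]),
    "city":      np.array([0.0, 1.0]),
    "price":     np.array([-1.0, 0.2]),
}

def embed(description):
    # Represent a slot by the average word vector of its description.
    return np.mean([WORD_VECS[w] for w in description.split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

one_hot = np.eye(3)                    # three slots as one-hot labels
assert cosine(one_hot[0], one_hot[1]) == 0.0   # all pairs equally unrelated

sim_related = cosine(embed("departure city"), embed("arrival city"))
sim_unrelated = cosine(embed("departure city"), embed("price"))
assert sim_related > sim_unrelated     # descriptions expose slot relations
```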
no code implementations • 4 Nov 2018 • Kai Hu, Zhijian Ou, Min Hu, Junlan Feng
Conditional random fields (CRFs) have been shown to be one of the most successful approaches to sequence labeling.
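Decoding a linear-chain CRF means finding the label sequence that maximizes the sum of emission and transition scores, which the Viterbi algorithm does in O(T·K²). A minimal sketch with toy scores:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF.
    emissions:   (T, K) per-position label scores
    transitions: (K, K) score of moving from label i to label j
    Returns the highest-scoring label sequence."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending at t-1 in i, then stepping to j.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: the transition matrix penalizes staying in label 1,
# so the decoder defers switching to label 1 until the last step.
em = np.array([[2.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
tr = np.array([[0.0, 0.0], [0.0, -5.0]])
print(viterbi(em, tr))                  # -> [0, 0, 1]
```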