1 code implementation • 29 Aug 2023 • Longbin Ji, Pengfei Wei, Yi Ren, Jinglin Liu, Chen Zhang, Xiang Yin
Co-speech gesture generation is crucial for automatic digital avatar animation.
no code implementations • 30 Jul 2023 • Peng Tang, Zhiqiang Xu, Pengfei Wei, Xiaobin Hu, Peilin Zhao, Xin Cao, Chunlai Zhou, Tobias Lasser
To further alleviate the contingent effect of recursive stacking, i.e., ringing artifacts, we add identity shortcuts between atrous convolutions to simulate residual deconvolutions.
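The abstract is terse, so the following is only a rough, hypothetical sketch of the idea it names: an identity shortcut wrapped around an atrous (dilated) convolution, so the block learns a residual correction rather than a full transform. The 1-D convolution, kernel, and function names here are illustrative, not the paper's implementation.

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated (atrous) convolution."""
    k = len(kernel)
    span = (k - 1) * dilation          # receptive-field width minus one
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])

def residual_atrous_block(x, kernel, dilation):
    # Identity shortcut: the output is x plus a learned residual,
    # which damps artifacts from recursively stacked convolutions.
    return x + atrous_conv1d(x, kernel, dilation)

signal = np.array([0., 0., 1., 0., 0., 0.])
out = residual_atrous_block(signal, kernel=np.array([0.25, 0.5, 0.25]), dilation=2)
```

With an all-zero kernel the block reduces to the identity map, which is the property the shortcut is meant to guarantee.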
no code implementations • 14 Jul 2023 • Ziyue Jiang, Jinglin Liu, Yi Ren, Jinzheng He, Zhenhui Ye, Shengpeng Ji, Qian Yang, Chen Zhang, Pengfei Wei, Chunfeng Wang, Xiang Yin, Zejun Ma, Zhou Zhao
However, the prompting mechanisms of zero-shot TTS still face challenges in the following aspects: 1) previous zero-shot TTS works are typically trained with single-sentence prompts, which significantly restricts their performance when relatively sufficient data is available at inference time.
no code implementations • 14 Mar 2023 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
In this paper, we consider an offline-to-online setting where the agent is first learned from the offline dataset and then trained online, and we propose a framework called Adaptive Policy Learning to effectively take advantage of both offline and online data.
1 code implementation • ICCV 2023 • Zhi Li, Pengfei Wei, Xiang Yin, Zejun Ma, Alex C. Kot
In our method, human pose and garment keypoints are extracted from source images and constructed as graphs to predict the garment keypoints at the target pose.
1 code implementation • 10 Nov 2022 • Mo Wang, Kexin Lou, Zeming Liu, Pengfei Wei, Quanying Liu
In this paper, we propose a general framework called multi-objective optimization via evolutionary algorithms (MOVEA) to address the non-convex optimization problem in designing TES strategies without predefined direction.
1 code implementation • KDD 2022 • Xinghua Qu, Yew-Soon Ong, Abhishek Gupta, Pengfei Wei, Zhu Sun, Zejun Ma
Given such an issue, we denote the \emph{frame importance} as its contribution to the expected reward on a particular frame, and hypothesize that adapting such frame importance could benefit the performance of the distilled student policy.
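As a sketch of what frame-importance weighting could look like in policy distillation: weight a per-frame teacher-to-student divergence by each frame's importance, so important frames dominate the distillation loss. The KL-based loss and function names are assumptions for illustration, not the paper's actual objective.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def weighted_distill_loss(teacher_logits, student_logits, frame_importance):
    """Per-frame KL(teacher || student), weighted by frame importance."""
    p = softmax(teacher_logits)                   # teacher policy per frame
    q = softmax(student_logits)                   # student policy per frame
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    w = frame_importance / frame_importance.sum() # normalize importances
    return float((w * kl).sum())

# 4 frames, 3 actions; the third frame is weighted most heavily
t = np.array([[2., 0., 0.], [0., 2., 0.], [0., 0., 2.], [1., 1., 0.]])
loss = weighted_distill_loss(t, t.copy(), np.array([0.1, 0.2, 1.0, 0.3]))
```

When the student matches the teacher exactly, the loss is zero regardless of the importance weights.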
1 code implementation • NeurIPS 2023 • Pengfei Wei, Lingdong Kong, Xinghua Qu, Yi Ren, Zhiqiang Xu, Jing Jiang, Xiang Yin
Specifically, we consider the generation of cross-domain videos from two sets of latent factors, one encoding the static information and another encoding the dynamic information.
no code implementations • 9 Mar 2022 • Yizhou Lu, Mingkun Huang, Xinghua Qu, Pengfei Wei, Zejun Ma
It makes room for language-specific modeling by pruning out unimportant parameters for each language, without requiring any manually designed language-specific component.
1 code implementation • 24 Nov 2021 • Zhining Liu, Pengfei Wei, Zhepei Wei, Boyang Yu, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
Class-imbalance is a common problem in machine learning practice.
no code implementations • 29 Sep 2021 • Han Zheng, Jing Jiang, Pengfei Wei, Guodong Long, Xuan Song, Chengqi Zhang
URPL adds an uncertainty regularization term to the policy learning objective to enforce learning a more stable policy in the offline setting.
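One common way to realize such a regularizer, shown here purely as a hypothetical sketch (the snippet does not state URPL's exact form), is to penalize actions on which a critic ensemble disagrees, using the ensemble standard deviation as an uncertainty proxy:

```python
import numpy as np

def uncertainty_regularized_objective(q_ensemble, beta):
    """Mean ensemble Q minus a scaled uncertainty penalty.

    q_ensemble: (n_critics, n_actions) Q-value estimates.
    The std across critics proxies epistemic uncertainty; beta scales the
    regularizer that steers the policy toward stable, low-variance actions.
    """
    q_mean = q_ensemble.mean(axis=0)
    q_std = q_ensemble.std(axis=0)
    return q_mean - beta * q_std

qs = np.array([[1.0, 3.0],
               [1.0, 0.0]])  # action 1: higher mean, but high disagreement
scores = uncertainty_regularized_objective(qs, beta=2.0)
best = int(scores.argmax())
```

With the penalty active the policy prefers the stable action 0; with beta = 0 it would greedily pick action 1.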
no code implementations • 29 Sep 2021 • Xinghua Qu, Pengfei Wei, Mingyong Gao, Zhu Sun, Yew-Soon Ong, Zejun Ma
Adversarial examples in automatic speech recognition (ASR) sound natural to humans yet are capable of fooling well-trained ASR models into transcribing incorrectly.
no code implementations • 29 Sep 2021 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
Specifically, we explicitly consider the difference between the online and offline data and apply an adaptive update scheme accordingly, i.e., a pessimistic update strategy for the offline dataset and a greedy, non-pessimistic update scheme for the online dataset.
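A minimal sketch of such an adaptive scheme, assuming an ensemble of critics (the function name and the min/mean choice of bootstrap are illustrative assumptions, not the paper's code): offline transitions bootstrap pessimistically from the ensemble minimum, while online transitions use a non-pessimistic ensemble mean.

```python
import numpy as np

def adaptive_td_target(rewards, next_q_ensemble, gamma, source):
    """One-step TD target that adapts to where the batch came from.

    Offline transitions get a pessimistic bootstrap (min over the critic
    ensemble); online transitions get a greedy, non-pessimistic one (mean).
    """
    if source == "offline":
        bootstrap = next_q_ensemble.min(axis=0)   # pessimistic
    else:
        bootstrap = next_q_ensemble.mean(axis=0)  # greedy / no pessimism
    return rewards + gamma * bootstrap

r = np.array([1.0, 0.0])
nq = np.array([[2.0, 4.0],
               [0.0, 4.0]])  # two critics' next-state value estimates
offline_target = adaptive_td_target(r, nq, 0.9, "offline")
online_target = adaptive_td_target(r, nq, 0.9, "online")
```

The online target is never smaller than the offline one for the same batch, since the ensemble mean upper-bounds the ensemble minimum.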
no code implementations • 14 Jun 2021 • Ruichu Cai, Fengzhu Wu, Zijian Li, Pengfei Wei, Lingling Yi, Kun Zhang
Based on this assumption, we propose a disentanglement-based unsupervised domain adaptation method for the graph-structured data, which applies variational graph auto-encoders to recover these latent variables and disentangles them via three supervised learning modules.
no code implementations • 31 May 2021 • Thanh Vinh Vo, Pengfei Wei, Trong Nghia Hoang, Tze-Yun Leong
The proposed method can infer causal effects in the target population without prior knowledge of data discrepancy between the additional data sources and the target.
no code implementations • 9 Feb 2021 • Pengfei Wei, Bi Zeng, Wenxiong Liao
In this paper, we propose a new joint model with a wheel-graph attention network (Wheel-GAT) which is able to model interrelated connections directly for intent detection and slot filling.
2 code implementations • 22 Dec 2020 • Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, Zhifeng Hao
Different from previous efforts on the entangled feature space, we aim to extract the domain invariant semantic information in the latent disentangled semantic representation (DSR) of the data.
no code implementations • 27 Nov 2020 • Pengfei Wei, Xinghua Qu, Yew-Soon Ong, Zejun Ma
Existing studies usually assume that the learned new feature representation is \emph{domain-invariant}, and thus train a transfer model $\mathcal{M}$ on the source domain.
no code implementations • NeurIPS 2020 • Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang
Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws.
2 code implementations • NeurIPS 2020 • Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang
This makes MESA generally applicable to most existing learning models, and the meta-sampler can be efficiently applied to new tasks.
no code implementations • 8 Aug 2020 • Xinyi Xu, Tiancheng Huang, Pengfei Wei, Akshay Narayan, Tze-Yun Leong
This work is inspired by recent advances in hierarchical reinforcement learning (HRL) (Barto and Mahadevan 2003; Hengst 2010), and improvements in learning efficiency from heuristic-based subgoal selection, experience replay (Lin 1993; Andrychowicz et al. 2017), and task-based curriculum learning (Bengio et al. 2009; Zaremba and Sutskever 2014).
no code implementations • 6 May 2020 • Pengfei Wei, Yiping Ke, Xinghua Qu, Tze-Yun Leong
Specifically, we propose to use low-dimensional manifolds to represent subdomains, and to align the local data distribution discrepancy across domains within each manifold.
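To make "aligning local discrepancies subdomain by subdomain" concrete, here is a hypothetical sketch: group features into subdomains (class-wise grouping and a linear mean-matching discrepancy are simplifying assumptions of this sketch, not the paper's method) and sum the per-subdomain discrepancies.

```python
import numpy as np

def linear_mmd(xs, xt):
    """Squared distance between feature means: a simple discrepancy measure."""
    return float(((xs.mean(axis=0) - xt.mean(axis=0)) ** 2).sum())

def subdomain_discrepancy(src_feats, src_labels, tgt_feats, tgt_labels):
    """Sum of per-subdomain discrepancies (here, one subdomain per class).

    Aligning distributions locally, subdomain by subdomain, rather than
    globally over the whole domain, is the idea the abstract describes.
    """
    total = 0.0
    for c in np.unique(src_labels):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_labels == c]
        if len(s) and len(t):
            total += linear_mmd(s, t)
    return total

X = np.array([[0., 0.], [2., 2.], [10., 0.]])
y = np.array([0, 0, 1])
d_same = subdomain_discrepancy(X, y, X, y)   # identical domains
```

Identical source and target features give zero discrepancy; any per-subdomain shift makes it strictly positive, which is what an alignment loss would minimize.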
no code implementations • 24 Apr 2020 • Thanh Vinh Vo, Pengfei Wei, Wicher Bergsma, Tze-Yun Leong
This work extends causal inference with stochastic confounders.
no code implementations • 10 Nov 2019 • Xinghua Qu, Zhu Sun, Yew-Soon Ong, Abhishek Gupta, Pengfei Wei
Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples.
no code implementations • ICML 2017 • Pengfei Wei, Ramon Sagarna, Yiping Ke, Yew-Soon Ong, Chi-Keong Goh
A key challenge in multi-source transfer learning is to capture the diverse inter-domain similarities.