no code implementations • NAACL (DeeLIO) 2021 • Junjie Wu, Hao Zhou
Dialog topic management and background knowledge selection are essential factors for the success of knowledge-grounded open-domain conversations.
1 code implementation • ACL 2022 • Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI
Recently, parallel text generation has received widespread attention due to its success in generation efficiency.
1 code implementation • EMNLP 2021 • Hao Zhou, Minlie Huang, Yong liu, Wei Chen, Xiaoyan Zhu
Generating informative and appropriate responses is challenging but important for building human-like dialogue systems.
no code implementations • ICML 2020 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
Interpretability is important in text generation for guiding the generation with interpretable attributes.
no code implementations • NAACL 2022 • Hao Zhou, Gongshen Liu, Kewei Tu
Many natural language processing tasks involve text spans and thus high-quality span representations are needed to enhance neural approaches to these tasks.
no code implementations • 7 Sep 2023 • Chujie Zheng, Hao Zhou, Fandong Meng, Jie zhou, Minlie Huang
Multi-choice questions (MCQs) serve as a common yet important task format in the research of large language models (LLMs).
no code implementations • 29 Aug 2023 • Yun Liao, Yide Di, Hao Zhou, Kaijun Zhu, Mingyu Lu, Yijia Zhang, Qing Duan, Junhui Liu
Local feature matching remains a challenging task, primarily due to difficulties in matching sparse keypoints and low-texture regions.
no code implementations • ICCV 2023 • Huijie Yao, Wengang Zhou, Hao Feng, Hezhen Hu, Hao Zhou, Houqiang Li
Technically, IP-SLT consists of feature extraction, prototype initialization, and iterative prototype refinement.
Ranked #5 on Sign Language Translation on CSL-Daily
no code implementations • 15 Aug 2023 • Can Jiang, Xiong Liang, Yu-Cheng Zhou, Yong Tian, Shengli Xu, Jia-Rui Lin, Zhiliang Ma, Shiji Yang, Hao Zhou
This requirement is a prerequisite for obtaining a building permit during the conceptual design of a residential project.
1 code implementation • 29 Jul 2023 • Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie zhou, Xu sun
As large language models (LLMs) generate texts with increasing fluency and realism, there is a growing need to identify the source of texts to prevent the abuse of LLMs.
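For context, here is a minimal sketch of one well-known watermark family (hash-seeded "green list" token biasing with a frequency-based detector, in the style of Kirchenbauer et al.). It is illustrative background only, not necessarily the codable scheme this paper proposes; the vocabulary size and token ids are made up.

import hashlib
import random

def green_list(prev_token, vocab_size, frac=0.5):
    # Seed a PRNG from the previous token so the "green" subset of the
    # vocabulary is reproducible at detection time.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(frac * vocab_size)])

def green_fraction(tokens, vocab_size=1000):
    # Detector: the fraction of tokens falling in the green list induced by
    # their predecessor; watermarked text should score well above `frac`.
    hits = sum(t in green_list(p, vocab_size) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

print(green_fraction([3, 14, 159, 26, 535, 897, 932]))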
no code implementations • 12 Jul 2023 • Qiying Yu, Yudi Zhang, Yuyan Ni, Shikun Feng, Yanyan Lan, Hao Zhou, Jingjing Liu
Self-supervised molecular representation learning is critical for molecule-based tasks such as AI-assisted drug discovery.
no code implementations • 6 Jul 2023 • Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao, Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia, Tobias Weyand, Luke Friedman, Mikhail Sirotenko, Huisheng Wang, Florian Schroff, Hartwig Adam, Ming-Hsuan Yang, Ting Liu, Boqing Gong
We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task.
1 code implementation • 3 Jul 2023 • Min Li, Hao Zhou, Qun Liu, Yabin Shao, GuoYing Wang
It uses granular balls to simulate the spatial distribution characteristics of datasets, and informed entropy is utilized to further optimize the granular-ball space.
no code implementations • 24 May 2023 • Jiahuan Li, Hao Zhou, ShuJian Huang, Shanbo Cheng, Jiajun Chen
Secondly, we find that LLMs' ability to carry out translation instructions relies on the understanding of translation instructions and the alignment among different languages.
no code implementations • 23 May 2023 • Lean Wang, Lei LI, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie zhou, Xu sun
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.
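As a concrete picture of the ICL setup, the sketch below assembles a demonstrations-plus-query prompt for a frozen model; the exact prompt format is an illustrative assumption, not this paper's template.

def icl_prompt(demos, query):
    # Concatenate demonstration input/output pairs before the query; the
    # frozen model infers the task purely from this in-context evidence.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    return "\n\n".join(lines + [f"Input: {query}\nOutput:"])

print(icl_prompt([("great movie", "positive"), ("dull plot", "negative")],
                 "a joyless mess"))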
no code implementations • 14 May 2023 • Mina Razghandi, Hao Zhou, Melike Erol-Kantarci, Damla Turgut
In this paper, we propose a novel variational auto-encoder-generative adversarial network (VAE-GAN) technique for generating time-series data on energy consumption in smart homes.
no code implementations • 8 May 2023 • Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie zhou, Xu sun
To settle this issue, we propose the Fine-purifying approach, which utilizes the diffusion theory to study the dynamic process of fine-tuning for finding potentially poisonous dimensions.
no code implementations • 7 May 2023 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Lei LI, Junchi Yan, Hao Zhou
Entity relation extraction consists of two sub-tasks: entity recognition and relation extraction.
1 code implementation • 5 May 2023 • Bo Qiang, Yuxuan Song, Minkai Xu, Jingjing Gong, Bowen Gao, Hao Zhou, WeiYing Ma, Yanyan Lan
Generating desirable molecular structures in 3D is a fundamental problem for drug discovery.
no code implementations • 4 May 2023 • Jiaxin Wen, Hao Zhou, Minlie Huang
Large-scale open-domain dialogue data crawled from public social media has greatly improved the performance of dialogue models.
no code implementations • 26 Apr 2023 • Hao Zhou, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Steve Furr, Melike Erol-Kantarci
In this work, we jointly consider sleep and transmission power control for reconfigurable intelligent surface (RIS)-aided energy-efficient heterogeneous networks (HetNets).
1 code implementation • 27 Mar 2023 • Yiqun Wang, Yuning Shen, Shi Chen, Lihao Wang, Fei Ye, Hao Zhou
In this work, we propose a Harmonic Molecular Representation learning (HMR) framework, which represents a molecule using the Laplace-Beltrami eigenfunctions of its molecular surface.
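A discrete analogue of the idea: eigenvectors of a graph Laplacian approximate Laplace-Beltrami eigenfunctions on a discretized surface, giving a multi-resolution basis for functions on that surface. The tiny ring graph below is a toy stand-in for a molecular surface mesh, not HMR's actual discretization.

import numpy as np

n = 12
A = np.zeros((n, n))
for i in range(n):
    # Ring adjacency as a tiny stand-in for a surface mesh graph.
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian
evals, evecs = np.linalg.eigh(L)    # frequencies, lowest first
print(np.round(evals[:4], 3))       # eigenvalue 0 corresponds to the constant mode
print(np.round(evecs[:, 1], 2))     # first non-trivial "harmonic" on the surface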
no code implementations • 25 Mar 2023 • Hao Zhou, Melike Erol-Kantarci, Yuanwei Liu, H. Vincent Poor
Model-based, heuristic, and ML approaches are compared in terms of stability, robustness, optimality and so on, providing a systematic understanding of these techniques.
1 code implementation • 12 Mar 2023 • Hao Zhou, Chongyang Zhang, Yanjun Chen, Chuanping Hu
In this study, we reformulate this task as a one-vs-many optimization problem under the condition of single positive labels.
1 code implementation • journal 2023 • Junxiao Xue, Hao Zhou, Huawei Song, Bin Wu, Lei Shi
Researchers have proposed many methods to defend against these attacks, but existing methods focus only on speech features.
Ranked #1 on Voice Anti-spoofing on ASVspoof 2019 - PA
no code implementations • 1 Feb 2023 • Ycaro Dantas, Pedro Enrique Iturria-Rivera, Hao Zhou, Majid Bavand, Medhat Elsayed, Raimundas Gaigalas, Melike Erol-Kantarci
Compared to the ESB and fixed transmission power strategy, the proposed approach achieves more than twice the average EE in the scenarios under test and is closer to the maximum theoretical EE.
1 code implementation • 28 Jan 2023 • Danqing Wang, Fei Ye, Hao Zhou
Both general protein and antibody-specific pre-trained language models facilitate antibody prediction tasks.
1 code implementation • 26 Jan 2023 • Xiaohu Huang, Hao Zhou, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Jingdong Wang, Xinggang Wang, Wenyu Liu, Bin Feng
In this paper, we propose a graph contrastive learning framework for skeleton-based action recognition (SkeletonGCL) to explore the global context across all sequences.
Ranked #6 on Skeleton Based Action Recognition on NTU RGB+D
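For background, here is the generic InfoNCE-style objective that contrastive frameworks of this kind build on, sketched with NumPy. This is a textbook contrastive loss, not SkeletonGCL's exact objective; the embeddings are random stand-ins.

import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    # Cross-entropy of the anchor-positive similarity against all candidates.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()              # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
a = rng.normal(size=64)
print(info_nce(a, a + 0.1 * rng.normal(size=64),
               [rng.normal(size=64) for _ in range(8)]))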
no code implementations • 25 Jan 2023 • Wenkai Yang, Deli Chen, Hao Zhou, Fandong Meng, Jie zhou, Xu sun
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively in a data privacy-preserving manner.
no code implementations • 7 Jan 2023 • Hao Zhou, Long Kong, Medhat Elsayed, Majid Bavand, Raimundas Gaigalas, Steve Furr, Melike Erol-Kantarci
Reconfigurable intelligent surface (RIS) is emerging as a promising technology to boost the energy efficiency (EE) of beyond-5G and 6G networks.
no code implementations • 20 Dec 2022 • Lihua Qian, Mingxuan Wang, Yang Liu, Hao Zhou
Autoregressive models can achieve high generation quality, but the sequential decoding scheme causes slow decoding speed.
1 code implementation • 28 Nov 2022 • Danqing Wang, Zeyu Wen, Fei Ye, Lei LI, Hao Zhou
By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.
no code implementations • 28 Nov 2022 • Hao Zhou, Shaoming Li, Guibin Jiang, Jiaqi Zheng, Dong Wang
Our key idea is to introduce a decision factor that bridges ML and OR, so that the solution can be obtained directly in OR by performing only sorting or comparison operations on the decision factor.
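A hedged sketch of that idea on a toy budgeted-selection problem: an ML model scores each candidate, and the OR step reduces to a sort over the decision factor. The names and the greedy rule are illustrative assumptions, not the paper's formulation.

import numpy as np

def allocate_by_decision_factor(factor, cost, budget):
    # Greedy OR step: rank items by predicted value per unit cost and take
    # them in order until the budget is exhausted.
    order = np.argsort(-factor / cost)
    chosen, spent = np.zeros(len(cost), dtype=bool), 0.0
    for i in order:
        if spent + cost[i] <= budget:
            chosen[i] = True
            spent += cost[i]
    return chosen

rng = np.random.default_rng(0)
factor, cost = rng.random(10), rng.random(10) + 0.1
print(allocate_by_decision_factor(factor, cost, budget=1.5))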
1 code implementation • 18 Oct 2022 • Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie zhou, Rui Jiang
Even though the large-scale language models have achieved excellent performances, they suffer from various adversarial attacks.
1 code implementation • 13 Oct 2022 • Hao Zhou, Man Lan, Yuanbin Wu, Yuefeng Chen, Meirong Ma
Due to the absence of connectives, implicit discourse relation recognition (IDRR) is still a challenging and crucial task in discourse analysis.
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.
no code implementations • 30 Sep 2022 • Rahul Duggal, Shengyun Peng, Hao Zhou, Duen Horng Chau
In this paper, we propose a new and complementary direction for improving performance on long tailed datasets - optimizing the backbone architecture through neural architecture search (NAS).
no code implementations • 27 Sep 2022 • Rahul Duggal, Hao Zhou, Shuo Yang, Jun Fang, Yuanjun Xiong, Wei Xia
With the shift towards on-device deep learning, ensuring a consistent behavior of an AI service across diverse compute platforms becomes tremendously important.
2 code implementations • 30 Aug 2022 • Prince Grover, Julia Xu, Justin Tittelfitz, Anqi Cheng, Zheng Li, Jakub Zablocki, Jianbo Liu, Hao Zhou
Standardized datasets and benchmarks have spurred innovations in computer vision, natural language processing, multi-modal and tabular settings.
no code implementations • 16 Aug 2022 • Ryuichi Takanobu, Hao Zhou, Yankai Lin, Peng Li, Jie zhou, Minlie Huang
Modeling these subtasks is consistent with the human agent's behavior patterns.
no code implementations • 3 Aug 2022 • Yujie Yao, Hao Zhou, Melike Erol-Kantarci
Then we propose a UK-medoids-based method for user clustering with location uncertainty, and the clustering results are consequently used for beam management.
no code implementations • 13 Jun 2022 • Fei Huang, Tianhua Tao, Hao Zhou, Lei LI, Minlie Huang
Non-autoregressive Transformer (NAT) is a family of text generation models, which aims to reduce decoding latency by predicting whole sentences in parallel.
1 code implementation • 16 May 2022 • Fei Huang, Hao Zhou, Yang Liu, Hang Li, Minlie Huang
Non-autoregressive Transformers (NATs) significantly reduce the decoding latency by generating all tokens in parallel.
1 code implementation • ICLR 2022 • Huiyun Yang, Huadong Chen, Hao Zhou, Lei LI
Based on large-scale pre-trained multilingual representations, recent cross-lingual transfer methods have achieved impressive transfer performances.
no code implementations • 23 Apr 2022 • Yujie Yao, Hao Zhou, Melike Erol-Kantarci
In this paper, we propose a UK-means-based clustering and deep reinforcement learning-based resource allocation algorithm (UK-DRL) for radio resource allocation and beam management in 5G mmWave networks.
1 code implementation • 20 Apr 2022 • Hao Zhou, Yixin Chen, David Troendle, Byunghyun Jang
Our model takes advantage of a well-designed Gabor filter bank to analyze fabric texture.
1 code implementation • ACL 2022 • Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei LI
How do masked language models (MLMs) such as BERT learn contextual representations?
1 code implementation • ACL 2022 • Pei Ke, Hao Zhou, Yankai Lin, Peng Li, Jie zhou, Xiaoyan Zhu, Minlie Huang
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.
no code implementations • Findings (ACL) 2022 • Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei LI, Yanghua Xiao, Hao Zhou
Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR).
no code implementations • 19 Jan 2022 • Mina Razghandi, Hao Zhou, Melike Erol-Kantarci, Damla Turgut
To this end, in this paper we propose a Variational AutoEncoder Generative Adversarial Network (VAE-GAN) as a smart grid data generative model, capable of learning various data distributions and generating plausible samples from the same distribution without any prior analysis of the data before training. To evaluate the model, we compare the Kullback-Leibler (KL) divergence, maximum mean discrepancy (MMD), and Wasserstein distance between the real data distribution and the synthetic data (electrical load and PV production) distributions generated by the proposed model and by a vanilla GAN.
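A minimal version of this evaluation protocol, assuming 1-D samples and standard choices (a histogram-based KL estimate, an RBF-kernel MMD, SciPy's 1-D Wasserstein distance); the Gaussian "real" and "synthetic" samples are stand-ins, and the bin count and kernel bandwidth are not the paper's settings.

import numpy as np
from scipy.stats import entropy, wasserstein_distance

def mmd_rbf(x, y, gamma=1.0):
    # Squared maximum mean discrepancy with an RBF kernel on 1-D samples.
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

real = np.random.default_rng(0).normal(0.0, 1.0, 1000)    # stand-in for real load data
synth = np.random.default_rng(1).normal(0.1, 1.1, 1000)   # stand-in for generated data

bins = np.histogram_bin_edges(np.concatenate([real, synth]), bins=50)
p, _ = np.histogram(real, bins=bins, density=True)
q, _ = np.histogram(synth, bins=bins, density=True)
print("KL:", entropy(p + 1e-9, q + 1e-9))   # histogram-based KL divergence
print("MMD:", mmd_rbf(real, synth))
print("Wasserstein:", wasserstein_distance(real, synth))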
1 code implementation • 10 Dec 2021 • Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei LI
We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off.
no code implementations • 22 Nov 2021 • Hao Zhou, Atakan Aral, Ivona Brandic, Melike Erol-Kantarci
Microgrids (MGs) are important players for the future transactive energy systems where a number of intelligent Internet of Things (IoT) devices interact for energy management in the smart grid.
1 code implementation • EMNLP 2021 • Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, Lei LI
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
Ranked #21 on Relation Extraction on DocRED
no code implementations • 8 Nov 2021 • Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei LI
In recent years, larger and deeper models are springing up and continuously pushing state-of-the-art (SOTA) results across various fields like natural language processing (NLP) and computer vision (CV).
no code implementations • 28 Oct 2021 • Hao Zhou, Dongchun Ren, Xu Yang, Mingyu Fan, Hai Huang
First, as the prediction horizon grows, the error at each time step increases significantly, making the final displacement error impossible to ignore.
no code implementations • 21 Oct 2021 • Danqing Wang, Jiaze Chen, Xianze Wu, Hao Zhou, Lei LI
In this paper, we present a large-scale Chinese news summarization dataset CNewSum, which consists of 304,307 documents and human-written summaries for the news feed.
1 code implementation • Findings (ACL) 2022 • Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior works.
2 code implementations • 14 Oct 2021 • Chenyang Huang, Hao Zhou, Osmar R. Zaïane, Lili Mou, Lei LI
How do we perform efficient inference while retaining high translation quality?
no code implementations • 1 Oct 2021 • Xianggen Liu, Pengyong Li, Fandong Meng, Hao Zhou, Huasong Zhong, Jie zhou, Lili Mou, Sen Song
The key idea is to integrate powerful neural networks into metaheuristics (e.g., simulated annealing, SA) to restrict the search space in discrete optimization.
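A runnable sketch of the metaheuristic backbone: simulated annealing over bit strings, where the propose step is exactly the hook a learned model would replace to restrict the search space. The objective and the random proposal here are toy stand-ins, not the paper's components.

import math
import random

def propose(s):
    # Stand-in proposal: flip one random bit. A learned model would instead
    # concentrate proposals on promising regions of the search space.
    i = random.randrange(len(s))
    return s[:i] + [s[i] ^ 1] + s[i + 1:]

def simulated_annealing(init, score, propose, steps=2000, t0=1.0):
    x = best = init
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6        # linear cooling schedule
        cand = propose(x)
        delta = score(cand) - score(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand                               # accept improvements, sometimes worse moves
        if score(x) > score(best):
            best = x
    return best

random.seed(0)
score = lambda s: sum(s)                           # toy objective: maximize ones
print(simulated_annealing([0] * 20, score, propose))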
no code implementations • 29 Sep 2021 • Danqing Wang, Zeyu Wen, Lei LI, Hao Zhou
By sampling in the latent secondary structure space, we can generate peptides with ideal amino acids and secondary structures at the same time.
no code implementations • 29 Sep 2021 • Xinbo Zhang, Changzhi Sun, Yue Zhang, Lei LI, Hao Zhou
Logical reasoning over natural text is an important capability towards human level intelligence.
no code implementations • 29 Sep 2021 • Yuwei Yang, Siqi Ouyang, Meihua Dang, Mingyue Zheng, Lei LI, Hao Zhou
Deep learning models have been widely used in automatic drug design.
no code implementations • ICLR 2022 • Zhenqiao Song, Hao Zhou, Lihua Qian, Jingjing Xu, Shanbo Cheng, Mingxuan Wang, Lei LI
Multilingual machine translation aims to develop a single model for multiple language directions.
no code implementations • 25 Sep 2021 • Mina Razghandi, Hao Zhou, Melike Erol-Kantarci, Damla Turgut
A smart home energy management system (HEMS) can contribute towards reducing the energy costs of customers; however, HEMS suffers from uncertainty in both energy generation and consumption patterns.
no code implementations • WMT (EMNLP) 2021 • Lihua Qian, Yi Zhou, Zaixiang Zheng, Yaoming Zhu, Zehui Lin, Jiangtao Feng, Shanbo Cheng, Lei LI, Mingxuan Wang, Hao Zhou
This paper describes the Volctrans' submission to the WMT21 news translation shared task for German->English translation.
no code implementations • 16 Sep 2021 • Hao Zhou, Melike Erol-Kantarci, Vincent Poor
In this paper, we propose a deep transfer reinforcement learning (DTRL) scheme for joint radio and cache resource allocation to serve 5G RAN slicing.
no code implementations • 1 Sep 2021 • Junxiao Xue, Hao Zhou, Yabo Wang
This method combines feature extraction, a densely connected convolutional neural network with squeeze-and-excitation blocks (SE-DenseNet), a multi-scale residual neural network with squeeze-and-excitation blocks (SE-Res2Net), and feature fusion strategies.
1 code implementation • Findings (NAACL) 2022 • Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei LI
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation.
no code implementations • 9 Aug 2021 • Hao Zhou, Anye Zhou, Tienan Li, Danjue Chen, Srinivas Peeta, Jorge Laval
This paper demonstrates that the acceleration/deceleration limits in ACC systems can make a string-stable ACC amplify speed perturbations in natural driving.
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.
no code implementations • 20 Jul 2021 • Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, Lei LI
However, it has been observed that a converged heavy teacher model is strongly constrained for learning a compact student network and could make the optimization subject to poor local optima.
1 code implementation • ACL 2021 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei LI, Junchi Yan
Entities and relations are represented by squares and rectangles in the table.
1 code implementation • Findings (ACL) 2021 • Changzhi Sun, Xinbo Zhang, Jiangjie Chen, Chun Gan, Yuanbin Wu, Jiaze Chen, Hao Zhou, Lei LI
In this paper, we propose PRobr, a novel approach for joint answer prediction and proof generation.
no code implementations • 26 Jun 2021 • Mina Razghandi, Hao Zhou, Melike Erol-Kantarci, Damla Turgut
Appliance-level load forecasting plays a critical role in residential energy management, besides having significant importance for ancillary services performed by the utilities.
no code implementations • CVPR 2021 • Hao Zhou, Wengang Zhou, Weizhen Qi, Junfu Pu, Houqiang Li
Finally, the synthetic parallel data serves as a strong supplement for the end-to-end training of the encoder-decoder SLT framework.
Ranked #4 on Sign Language Translation on CSL-Daily
no code implementations • CVPR 2021 • Rahul Duggal, Hao Zhou, Shuo Yang, Yuanjun Xiong, Wei Xia, Zhuowen Tu, Stefano Soatto
Existing systems use the same embedding model to compute representations (embeddings) for the query and gallery images.
no code implementations • 12 May 2021 • Tienan Li, Danjue Chen, Hao Zhou, Yuanchang Xie, Jorge Laval
Experimental measurements on commercial adaptive cruise control (ACC) vehicles are becoming increasingly available from around the world, providing an unprecedented opportunity to study the traffic flow characteristics that arise from this technology.
1 code implementation • NeurIPS 2021 • Zaixiang Zheng, Hao Zhou, ShuJian Huang, Jiajun Chen, Jingjing Xu, Lei LI
Thus REDER enables reversible machine translation by simply flipping the input and output ends.
no code implementations • 15 Apr 2021 • Hao Zhou, Anye Zhou, Tienan Li, Danjue Chen, Srinivas Peeta, Jorge Laval
Current commercial adaptive cruise control (ACC) systems consist of an upper-level planner controller that decides the optimal trajectory that should be followed, and a low-level controller in charge of sending the gas/brake signals to the mechanical system to actually move the vehicle.
1 code implementation • EACL 2021 • Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei LI, Junchi Yan
Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework.
no code implementations • CVPR 2021 • Hao Zhou, Chongyang Zhang, Yan Luo, Yanjun Chen, Chuanping Hu
Meanwhile, the modified feature, associated with style-like words (adjectives, adverbs, etc.), captures subjective information and thus yields personalized predictions. De-bias: we propose a de-bias mechanism to generate diverse predictions, aiming to alleviate the bias caused by single-style annotations in the presence of label uncertainty.
no code implementations • 18 Mar 2021 • Yan Luo, Chongyang Zhang, Muming Zhao, Hao Zhou, Jun Sun
Consequently, we address the weakness of IoU by introducing one geometric sensitive search algorithm as a new assignment and regression metric.
1 code implementation • ICLR 2021 • Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, Lei LI
Searching for novel molecules with desired chemical properties is crucial in drug discovery.
no code implementations • 6 Mar 2021 • Hao Zhou, Melike Erol-Kantarci
Microgrid (MG) energy management is an important part of MG operation.
no code implementations • 6 Mar 2021 • Hao Zhou, Melike Erol-Kantarci
The EMS of an MG could be rather complicated when renewable energy resources (RER), energy storage system (ESS) and demand side management (DSM) need to be orchestrated.
no code implementations • 27 Jan 2021 • Zhenqiao Song, Jiaze Chen, Hao Zhou, Lei LI
Our proposed model is simple yet effective: by using bidword as the bridge between search query and advertisement, the generation of search query, advertisement and bidword can be jointly learned in the triangular training framework.
no code implementations • 1 Jan 2021 • Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, Lei LI
In this paper, we find an exciting relation between an information-theoretic feature and the performance of NLP tasks such as machine translation with a given vocabulary.
no code implementations • 1 Jan 2021 • Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei LI
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-ups, they still fall behind their autoregressive counterparts in prediction accuracy.
no code implementations • 1 Jan 2021 • Xunpeng Huang, Vicky Jiaqi Zhang, Hao Zhou, Lei LI
Adaptive gradient methods have been shown to outperform SGD in many tasks of training neural networks.
1 code implementation • ACL 2021 • Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, Lei LI
The choice of token vocabulary affects the performance of machine translation.
1 code implementation • 25 Dec 2020 • Jun Yu, Hao Zhou, Yibing Zhan, DaCheng Tao
Essentially, DGCPN addresses the inaccurate similarity problem by exploring and exploiting the data's intrinsic relationships in a graph.
1 code implementation • 25 Dec 2020 • Jiangjie Chen, Qiaoben Bao, Changzhi Sun, Xinbo Zhang, Jiaze Chen, Hao Zhou, Yanghua Xiao, Lei LI
The final claim verification is based on all latent variables.
no code implementations • CVPR 2020 • Yan Luo, Chongyang Zhang, Muming Zhao, Hao Zhou, Jun Sun
i) We generate a bird's-eye-view map, which is naturally free from occlusion issues, and scan all points on it to look for suitable locations for each pedestrian instance.
1 code implementation • 5 Dec 2020 • Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei LI
Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).
5 code implementations • 1 Dec 2020 • Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun
However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available.
3 code implementations • EMNLP 2020 • Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, Lei LI
Pre-trained contextual representations like BERT have achieved great success in natural language processing.
Ranked #12 on Semantic Textual Similarity on STS16
1 code implementation • Findings (EMNLP) 2021 • Zhiyu Chen, Honglei Liu, Hu Xu, Seungwhan Moon, Hao Zhou, Bing Liu
As there is no clean mapping for a user's free form utterance to an ontology, we first model the user preferences as estimated distributions over the system ontology and map the users' utterances to such distributions.
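A toy rendering of that mapping, with word overlap standing in for the learned utterance encoder; the scoring rule, temperature, and ontology values are illustrative assumptions.

import math

def utterance_to_distribution(utterance, ontology_values, temp=0.5):
    # Score each ontology value by word overlap, then softmax-normalize so
    # the result is a proper distribution over the ontology.
    words = set(utterance.lower().split())
    scores = [len(words & set(v.lower().split())) for v in ontology_values]
    exps = [math.exp(s / temp) for s in scores]
    z = sum(exps)
    return {v: round(e / z, 3) for v, e in zip(ontology_values, exps)}

print(utterance_to_distribution(
    "something cheap in the north of town",
    ["cheap price", "expensive price", "north area", "south area"]))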
1 code implementation • Findings (ACL) 2022 • Zewei Sun, Mingxuan Wang, Hao Zhou, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Lei LI
This paper does not aim at introducing a novel model for document-level neural machine translation.
1 code implementation • EMNLP 2020 • Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei LI
We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs?
Ranked #3 on Machine Translation on WMT2014 English-French (using extra training data)
no code implementations • WMT (EMNLP) 2020 • Fandong Meng, Jianhao Yan, Yijin Liu, Yuan Gao, Xianfeng Zeng, Qinsong Zeng, Peng Li, Ming Chen, Jie zhou, Sifan Liu, Hao Zhou
We participate in the WMT 2020 shared news translation task on Chinese-to-English translation.
1 code implementation • 21 Sep 2020 • Qianqian Dong, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI
Can we build a system to fully utilize signals in a parallel ST corpus?
1 code implementation • 21 Sep 2020 • Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI
The key idea is to generate source transcript and target translation text with a single decoder.
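Schematically, the single-decoder target can be laid out as transcript, separator, translation in one sequence, so one decoder handles both sub-tasks. The separator token below is an assumption for illustration, not the paper's actual symbol.

def joint_target(transcript_tokens, translation_tokens, sep="<2trans>"):
    # One decoder, one output sequence: transcript first, then a separator,
    # then the translation; training losses cover both segments.
    return transcript_tokens + [sep] + translation_tokens

print(joint_target(["wie", "geht", "es"], ["how", "is", "it"]))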
2 code implementations • 21 Aug 2020 • Jorge Laval, Hao Zhou
Notably, we found that no control (i.e., random policy) can be an effective control strategy for a surprisingly large family of networks.
no code implementations • ACL 2021 • Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Wei-Nan Zhang, Yong Yu, Lei LI
With GLM, we develop Glancing Transformer (GLAT) for machine translation.
Ranked #67 on Machine Translation on WMT2014 English-German
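A hedged sketch of the glancing step that gives GLAT its name: reveal a number of reference tokens proportional to how wrong a first fully-parallel pass was, and keep the rest masked as decoder inputs. The ratio and masking details are simplified assumptions.

import numpy as np

def glancing_inputs(reference, first_pass, mask_id=0, ratio=0.5, rng=None):
    # Reveal reference tokens in proportion to the first-pass error;
    # everything else stays masked.
    rng = rng or np.random.default_rng()
    reference = np.asarray(reference)
    n_wrong = int((reference != np.asarray(first_pass)).sum())
    inputs = np.full_like(reference, mask_id)
    reveal = rng.choice(len(reference), size=int(ratio * n_wrong), replace=False)
    inputs[reveal] = reference[reveal]
    return inputs

print(glancing_inputs([5, 3, 7, 2, 9], [5, 1, 7, 4, 8],
                      rng=np.random.default_rng(0)))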
no code implementations • ACL 2019 • Huangzhao Zhang, Hao Zhou, Ning Miao, Lei LI
Efficiently building an adversarial attacker for natural language processing (NLP) tasks is a real challenge.
1 code implementation • ACL 2020 • Ning Miao, Yuxuan Song, Hao Zhou, Lei LI
It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data.
no code implementations • 12 Jul 2020 • Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei LI
Auto-regressive sequence generative models trained by Maximum Likelihood Estimation suffer the exposure bias problem in practical finite sample scenarios.
no code implementations • ACL 2020 • Runxin Xu, Jun Cao, Mingxuan Wang, Jiaze Chen, Hao Zhou, Ying Zeng, Yu-Ping Wang, Li Chen, Xiang Yin, Xijin Zhang, Songcheng Jiang, Yuxuan Wang, Lei LI
This paper proposes the building of Xiaomingbot, an intelligent, multilingual and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading and avatar animation.
1 code implementation • 12 Jun 2020 • Xunpeng Huang, Runxin Xu, Hao Zhou, Zhe Wang, Zhengyang Liu, Lei LI
Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence.
no code implementations • 12 Jun 2020 • Xunpeng Huang, Hao Zhou, Runxin Xu, Zhe Wang, Lei LI
Adaptive gradient methods have attracted much attention in the machine learning community due to their high efficiency.
1 code implementation • CVPR 2020 • Koutilya PNVR, Hao Zhou, David Jacobs
Ideally, this results in images from two domains that present shared information to the primary network.
Ranked #2 on Monocular Depth Estimation on Make3D
no code implementations • 20 May 2020 • Yuanfei Luo, Hao Zhou, Wei-Wei Tu, Yuqiang Chen, Wenyuan Dai, Qiang Yang
As a result, the intra-field information and the non-linear interactions between those operations (e.g., neural network and factorization machines) are ignored.
no code implementations • ICLR 2020 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lei LI, Xin-yu Dai, Jia-Jun Chen
Training neural machine translation models (NMT) requires a large amount of parallel corpus, which is scarce for many language pairs.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Wei-Nan Zhang, Yong Yu, Lei LI
We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently.
1 code implementation • ACL 2020 • Hao Zhou, Chujie Zheng, Kaili Huang, Minlie Huang, Xiaoyan Zhu
The research of knowledge-driven conversational systems is largely limited due to the lack of dialog data which consist of multi-turn conversations on multiple topics and with knowledge annotations.
1 code implementation • 3 Apr 2020 • Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu
In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
no code implementations • 27 Feb 2020 • Seyedramin Rasoulinezhad, Sean Fox, Hao Zhou, Lingli Wang, David Boland, Philip H. W. Leong
Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount.
no code implementations • 8 Feb 2020 • Hao Zhou, Wengang Zhou, Yun Zhou, Houqiang Li
Our STMC network consists of a spatial multi-cue (SMC) module and a temporal multi-cue (TMC) module.
1 code implementation • ICLR 2020 • Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, Lei LI
We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables.
no code implementations • 25 Nov 2019 • Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shu-Jian Huang, Jia-Jun Chen, Lei LI
Non-autoregressive models are promising on various text generation tasks.
no code implementations • 25 Nov 2019 • Qingyang Wu, Lei LI, Hao Zhou, Ying Zeng, Zhou Yu
We propose to automate this headline editing process through neural network models to provide more immediate writing support for these social media news writers.
1 code implementation • 2 Nov 2019 • Hao Zhou, Chongyang Zhang, Chuanping Hu
Visual relationship detection, as a challenging task used to find and distinguish the interactions between object pairs in one image, has received much attention recently.
1 code implementation • NeurIPS 2019 • Ning Miao, Hao Zhou, Chengqi Zhao, Wenxian Shi, Lei LI
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase.
no code implementations • 2 Oct 2019 • Hao Zhou, Jorge Laval, Anye Zhou, Yu Wang, Wenchao Wu, Zhu Qing, Srinivas Peeta
Some suggestions towards congestion mitigation for future mMP studies are proposed: i) enrich data collection to facilitate the congestion learning, ii) incorporate non-imitation learning methods to combine traffic efficiency into a safety-oriented technical route, and iii) integrate domain knowledge from the traditional car following (CF) theory to improve the string stability of mMP.
no code implementations • ICCV 2019 • Hao Zhou, Xiang Yu, David W. Jacobs
In this work, we propose a Global-Local Spherical Harmonics (GLoSH) lighting model to improve the lighting component, and jointly predict reflectance and surface normals.
1 code implementation • ICCV 2019 • Hao Zhou, Sunil Hadap, Kalyan Sunkavalli, David W. Jacobs
In this work, we apply a physically-based portrait relighting method to generate a large scale, high quality, "in the wild" portrait relighting dataset (DPR).
Ranked #1 on Single-Image Portrait Relighting on Multi-PIE (using extra training data)
1 code implementation • WS 2019 • Yao Fu, Hao Zhou, Jiaze Chen, Lei LI
We apply this framework to existing datasets and models and show that: (1) the pivot words are strong features for the classification of sentence attributes; (2) to change the attribute of a sentence, many datasets only require changing certain pivot words; (3) consequently, many transfer models only perform lexical-level modification, while leaving higher-level sentence structures unchanged.
no code implementations • 25 Sep 2019 • Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, ShuJian Huang, Jiajun Chen, Lei LI
However, position modeling of output words is an essential problem in non-autoregressive text generation.
no code implementations • ACL 2020 • Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie zhou, Sen Song
Unsupervised paraphrase generation is a promising and important research topic in natural language processing.
2 code implementations • 15 Aug 2019 • Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Wei-Nan Zhang, Lei LI
Our experiments in machine translation show CTNMT gains of up to 3 BLEU score on the WMT14 English-German language pair, which even surpasses the previous state-of-the-art pre-training aided NMT by 1.4 BLEU score.
no code implementations • 7 Aug 2019 • Jorge A. Laval, Hao Zhou
We find that: (i) a policy trained with supervised learning with only two examples outperforms LQF, (ii) random search is able to generate near-optimal policies, (iii) the prevailing average network occupancy during training is the major determinant of the effectiveness of DRL policies.
no code implementations • 8 Jul 2019 • Rongxiang Weng, Hao Zhou, Shu-Jian Huang, Lei LI, Yifan Xia, Jia-Jun Chen
Experiments in both ideal and real interactive translation settings demonstrate that our proposed method enhances machine translation results significantly while requiring fewer revision instructions from humans compared to previous methods.
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei LI, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
1 code implementation • 16 Jun 2019 • Wenxian Shi, Hao Zhou, Ning Miao, Lei LI
To enhance the controllability and interpretability, one can replace the Gaussian prior with a mixture of Gaussian distributions (GM-VAE), whose mixture components could be related to hidden semantic aspects of data.
no code implementations • ACL 2019 • Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Jun Xie, Xu sun
Non-autoregressive translation models (NAT) have achieved impressive inference speedup.
1 code implementation • ACL 2019 • Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei LI, Wei-Nan Zhang, Yong Yu
However, many difficult questions require multiple supporting evidence from scattered text among two or more documents.
Ranked #36 on Question Answering on HotpotQA
no code implementations • 29 Apr 2019 • Yuanfei Luo, Mengshuo Wang, Hao Zhou, Quanming Yao, Wei-Wei Tu, Yuqiang Chen, Qiang Yang, Wenyuan Dai
Feature crossing captures interactions among categorical features and is useful to enhance learning from tabular data in real-world businesses.
no code implementations • 27 Feb 2019 • Hao Zhou, Minlie Huang, Yishun Mao, Changlei Zhu, Peng Shu, Xiaoyan Zhu
Second, the inefficient ad impression issue: a large proportion of search queries, which are unpopular yet relevant to many ad keywords, have no ads presented on their search result pages.
1 code implementation • 14 Nov 2018 • Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei LI
In real-world applications of natural language generation, there are often constraints on the target sentences in addition to fluency and naturalness requirements.
no code implementations • 30 Oct 2018 • Hao Zhou, Ke Chen
Speech emotion recognition plays an important role in building more intelligent and human-like agents.
no code implementations • 30 Oct 2018 • Yuqi Yu, Hanbing Yan, Hongchao Guan, Hao Zhou
In the Internet age, cyber-attacks of increasingly complex types occur frequently.
1 code implementation • EMNLP 2018 • Haoyue Shi, Hao Zhou, Jiaze Chen, Lei LI
To study the effectiveness of different tree structures, we replace the parsing trees with trivial trees (i.e., binary balanced tree, left-branching tree and right-branching tree) in the encoders.
Ranked #9 on Sentiment Analysis on Amazon Review Full
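The trivial trees are easy to make concrete; a small sketch with nested tuples standing in for tree nodes (a balanced binarization would split the span in half recursively instead).

from functools import reduce

def left_branching(tokens):
    # ((((w1 w2) w3) w4) ...): combine left to right.
    return reduce(lambda acc, t: (acc, t), tokens)

def right_branching(tokens):
    # (w1 (w2 (w3 ...))): combine right to left.
    return reduce(lambda acc, t: (t, acc), reversed(tokens))

sent = "the cat sat on the mat".split()
print(left_branching(sent))
print(right_branching(sent))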
1 code implementation • NAACL 2019 • Hareesh Bahuleyan, Lili Mou, Hao Zhou, Olga Vechtomova
The variational autoencoder (VAE) imposes a probabilistic distribution (typically Gaussian) on the latent space and penalizes the Kullback--Leibler (KL) divergence between the posterior and prior.
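For reference, when the posterior is a diagonal Gaussian $q(\mathbf{z}|\mathbf{x})=\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\sigma}^2 I)$ and the prior is $\mathcal{N}(\mathbf{0}, I)$, this penalty has the standard closed form $D_{\rm KL}(q\,\|\,p) = \frac{1}{2}\sum_j \left(\mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1\right)$, which is what VAE implementations compute in practice.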
no code implementations • CVPR 2018 • Hao Zhou, Jin Sun, Yaser Yacoob, David W. Jacobs
We propose to train a deep Convolutional Neural Network (CNN) to regress lighting parameters from a single face image.
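For orientation, the 9-parameter (2nd-order) spherical harmonics lighting that such models typically regress can be evaluated directly: shading at a surface normal is a dot product of the SH basis with the lighting coefficients. The basis below is the standard real SH basis; the example light vector is an arbitrary illustration, not a regressed output.

import numpy as np

def sh_basis(n):
    # Real spherical harmonics basis up to 2nd order, evaluated at a
    # unit surface normal (x, y, z).
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

light = np.zeros(9)
light[0], light[2] = 0.8, 0.5         # ambient term plus light from above
normal = np.array([0.0, 0.0, 1.0])    # normal facing the light
print(sh_basis(normal) @ light)       # scalar shading at this normal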
4 code implementations • NeurIPS 2018 • Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei LI, Yitan Li
Missing values are ubiquitous in time series data.
Tasks: General Classification, Multivariate Time Series Forecasting, +5 more
1 code implementation • 10 Apr 2018 • Lin Qiu, Hao Zhou, Yanru Qu, Wei-Nan Zhang, Suoheng Li, Shu Rong, Dongyu Ru, Lihua Qian, Kewei Tu, Yong Yu
Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts.
no code implementations • 6 Dec 2017 • Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
1 code implementation • TACL 2018 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu
The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.
no code implementations • LREC 2018 • Zi-Yi Dou, Hao Zhou, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
However, there are certain limitations in Scheduled Sampling and we propose two dynamic oracle-based methods to improve it.
no code implementations • 16 Sep 2017 • Tom Young, Erik Cambria, Iti Chaturvedi, Minlie Huang, Hao Zhou, Subham Biswas
Building dialog agents that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence.
no code implementations • 12 Sep 2017 • Hui Ding, Hao Zhou, Shaohua Kevin Zhou, Rama Chellappa
First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes.
no code implementations • EMNLP 2017 • Hao Zhou, Zhenting Yu, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Neural parsers have benefited from automatically labeled data via dependency-context word embeddings.
1 code implementation • ACL 2017 • Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen
In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities in the same time-scale of RNN.
6 code implementations • 4 Apr 2017 • Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, Bing Liu
Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents.
no code implementations • 2 Feb 2017 • Soumyadip Sengupta, Hao Zhou, Walter Forkel, Ronen Basri, Tom Goldstein, David W. Jacobs
We introduce a new, integrated approach to uncalibrated photometric stereo.
no code implementations • COLING 2016 • Hao Zhou, Minlie Huang, Xiaoyan Zhu
Most traditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language.
no code implementations • NeurIPS 2016 • Hao Zhou, Vamsi K. Ithapu, Sathya Narayanan Ravi, Vikas Singh, Grace Wahba, Sterling C. Johnson
Consider samples from two different data sources $\{\mathbf{x_s^i}\} \sim P_{\rm source}$ and $\{\mathbf{x_t^i}\} \sim P_{\rm target}$.
no code implementations • LREC 2016 • Hao Zhou, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies.