no code implementations • 21 Nov 2023 • Ke Xu, Yuanjie Zhu, Weizhi Zhang, Philip S. Yu
This inspired us to address the computational limitations of GCN-based models by designing a simple and efficient NODE-based model that can skip some GCN layers to reach the final state, thus avoiding the need to create many layers.
no code implementations • 21 Oct 2023 • Luping Xiang, Ke Xu, Jie Hu, Christos Masouros, Kun Yang
This paper proposes a novel non-orthogonal multiple access (NOMA)-assisted orthogonal time-frequency space (OTFS)-integrated sensing and communication (ISAC) network, which uses unmanned aerial vehicles (UAVs) as air base stations to support multiple users.
no code implementations • 21 Oct 2023 • Luping Xiang, Ke Xu, Jie Hu, Kun Yang
In this paper, we propose a green beamforming design for the integrated sensing and communication (ISAC) system, using beam-matching error to assess radar performance.
no code implementations • 10 Oct 2023 • Yang Wang, Bo Dong, Ke Xu, Haiyin Piao, Yufei Ding, BaoCai Yin, Xin Yang
Hence, different inputs require different amounts of time to converge to an adversarial sample.
no code implementations • 10 Oct 2023 • Ke Xu, Jiangtao Wang, Hongyuan Zhu, Dingchang Zheng
We attribute this issue to the inappropriate alignment criteria, which disrupt the semantic distance consistency between the feature space and the input space.
1 code implementation • 1 Oct 2023 • Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Man Zhang, Zhaoxiang Zhang, Wanli Ouyang, Ke Xu, Wenhu Chen, Jie Fu, Junran Peng
The advent of Large Language Models (LLMs) has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters.
no code implementations • 17 Sep 2023 • Hongcheng Guo, Jian Yang, Jiaheng Liu, Liqun Yang, Linzheng Chai, Jiaqi Bai, Junran Peng, Xiaorong Hu, Chao Chen, Dongfeng Zhang, Xu Shi, Tieqiao Zheng, Liangfan Zheng, Bo Zhang, Ke Xu, Zhoujun Li
However, there is a lack of specialized LLMs for IT operations.
1 code implementation • 11 Sep 2023 • Qingxiu Dong, Li Dong, Ke Xu, Guangyan Zhou, Yaru Hao, Zhifang Sui, Furu Wei
In this work, we use large language models (LLMs) to augment and accelerate research on the P versus NP problem, one of the most important open problems in theoretical computer science and mathematics.
1 code implementation • ICCV 2023 • Fang Liu, Yuhao Liu, Yuqiu Kong, Ke Xu, Lihe Zhang, BaoCai Yin, Gerhard Hancke, Rynson Lau
Hence, we propose a novel weakly-supervised RIS framework to formulate the target localization problem as a classification process to differentiate between positive and negative text expressions.
1 code implementation • ICCV 2023 • Weiran Gou, Ziyao Yi, Yan Xiang, Shaoqing Li, Zibin Liu, Dehui Kong, Ke Xu
To effectively connect the results by asymmetrical branches, a Quadratic Connection Unit (QCU) is proposed.
1 code implementation • ICCV 2023 • Ke Xu, Lei Han, Ye Tian, Shangshang Yang, Xingyi Zhang
In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which aims to train a robust weight-sharing quantization supernet.
1 code implementation • ICCV 2023 • Haoyuan Wang, Xiaogang Xu, Ke Xu, Rynson WH. Lau
Neural Radiance Field (NeRF) is a promising approach for synthesizing novel views, given a set of images and the corresponding camera poses of a scene.
no code implementations • 19 Jul 2023 • Ke Xu, Jiangtao Wang, Hongyuan Zhu, Dingchang Zheng
Therefore, considerable efforts have been made to address the challenge of insufficient data in deep learning by leveraging SSL algorithms.
no code implementations • 27 Jun 2023 • Ke Xu, He Chen, Chenshu Wu
Considering the limited number of paths in physical environments, we formulate the multipath delay estimation as a sparse recovery problem.
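The sparse-recovery formulation above can be illustrated with a small sketch: build a dictionary of candidate-delay atoms over the subcarrier frequencies and solve with Orthogonal Matching Pursuit. This is an illustrative toy, not the paper's algorithm; the subcarrier count, spacing, delay grid, and solver choice are all assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ≈ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the dictionary atom most correlated with the residual.
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit by least squares on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# Toy frequency-domain channel: 64 subcarriers, two multipath components.
n_sub, df = 64, 312.5e3                       # subcarriers and spacing (Hz)
f = np.arange(n_sub) * df
grid = np.arange(16) * 25e-9                  # candidate delays (s)
A = np.exp(-2j * np.pi * np.outer(f, grid))   # dictionary of delay atoms
true = np.zeros(16, dtype=complex)
true[2], true[6] = 1.0, 0.5                   # paths at 50 ns and 150 ns
x_hat = omp(A, A @ true, k=2)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # → [2 6]
```

With few paths relative to the grid size, the greedy solver recovers both delays and their complex gains exactly in this noiseless toy.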
no code implementations • 27 Jun 2023 • Ke Xu, Rui Zhang, He Chen
This paper considers a radio-frequency (RF)-based simultaneous localization and source-seeking (SLASS) problem in multi-robot systems, where multiple robots jointly localize themselves and an RF source using distance-only measurements extracted from RF signals and then control themselves to approach the source.
1 code implementation • 24 May 2023 • He Zhu, Chong Zhang, JunJie Huang, Junran Wu, Ke Xu
Hierarchical text classification (HTC) is a challenging subtask of multi-label classification as the labels form a complex hierarchical structure.
no code implementations • 22 May 2023 • Zekun Wang, Ge Zhang, Kexin Yang, Ning Shi, Wangchunshu Zhou, Shaochun Hao, Guangzheng Xiong, Yizhi Li, Mong Yuan Sim, Xiuying Chen, Qingqing Zhu, Zhenzhu Yang, Adam Nik, Qi Liu, Chenghua Lin, Shi Wang, Ruibo Liu, Wenhu Chen, Ke Xu, Dayiheng Liu, Yike Guo, Jie Fu
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with the ultimate goals of artificial intelligence.
1 code implementation • 8 May 2023 • Junran Wu, Xueyuan Chen, Bowen Shi, Shangzhe Li, Ke Xu
In contrastive learning, the choice of "view" controls the information that the representation captures and influences the performance of the model.
no code implementations • 22 Apr 2023 • Xingyu Peng, Zhenkun Zhou, Chong Zhang, Ke Xu
Public opinion is a crucial factor in shaping political decision-making.
3 code implementations • 18 Apr 2023 • Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang, Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
1 code implementation • 6 Apr 2023 • Ziwei Fan, Ke Xu, Zhang Dong, Hao Peng, Jiawei Zhang, Philip S. Yu
Moreover, we show that the inclusion of user-user and item-item correlations can improve recommendations for users with both abundant and insufficient interactions.
no code implementations • 13 Mar 2023 • Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, Peilin Zhao
To address this issue, we propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase.
no code implementations • 19 Feb 2023 • Ke Xu, Guangyan Zhou
In this paper, by constructing extremely hard examples of CSP (with large domains) and SAT (with long clauses), we prove that such examples cannot be solved without exhaustive search, which is stronger than P $\neq$ NP.
1 code implementation • 17 Feb 2023 • Qiying Yu, Yang Liu, Yimu Wang, Ke Xu, Jingjing Liu
In this work, we propose Contrastive Representation Ensemble and Aggregation for Multimodal FL (CreamFL), a multimodal federated learning framework that enables training larger server models from clients with heterogeneous model architectures and data modalities, while only communicating knowledge on a public dataset.
no code implementations • 3 Feb 2023 • Jie Hu, Ke Xu, Luping Xiang, Kun Yang
Integrated data and energy transfer (IDET) is an advanced technology for enabling energy sustainability for massively deployed low-power electronic devices.
no code implementations • 30 Jan 2023 • Xiaoyang Zheng, Zilong Wang, Ke Xu, Sen Li, Tao Zhuang, Qingwen Liu, Xiaoyi Zeng
Given a user query, the retrieval phase returns a subset of candidate products for the following ranking phase.
no code implementations • 20 Jan 2023 • Chong Zhang, Zhenkun Zhou, Xingyu Peng, Ke Xu
Subsequently, we propose a bipartite graph neural network model, DoubleH, which aims to better utilize homogeneous and heterogeneous information in user stance detection tasks.
no code implementations • 9 Jan 2023 • Yuhao Liu, Qing Guo, Lan Fu, Zhanghan Ke, Ke Xu, Wei Feng, Ivor W. Tsang, Rynson W. H. Lau
Extensive experiments on three shadow removal benchmarks demonstrate that our method outperforms existing shadow removal methods, and our StructNet can be integrated with existing methods to boost their performances further.
no code implementations • ICCV 2023 • Ke Xu, Gerhard Petrus Hancke, Rynson W.H. Lau
In this paper, we propose a novel neural approach to harmonize the image colors in a camera-independent color space, in which color values are proportional to the scene radiance.
1 code implementation • ICCV 2023 • Jiayu Sun, Ke Xu, Youwei Pang, Lihe Zhang, Huchuan Lu, Gerhard Hancke, Rynson Lau
In this paper, we propose a novel method to detect shadows from raw images.
no code implementations • ICCV 2023 • Han Fang, Jiyi Zhang, Yupeng Qiu, Ke Xu, Chengfang Fang, Ee-Chien Chang
In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model which the adversarial examples are generated from.
no code implementations • 1 Dec 2022 • Tianyu Xia, Shuheng Shen, Su Yao, Xinyi Fu, Ke Xu, Xiaolong Xu, Xing Fu
As one way to implement privacy-preserving AI, differentially private learning is a framework that enables AI models to use differential privacy (DP).
no code implementations • 30 Nov 2022 • Xuekui Zhang, Yuying Huang, Ke Xu, Li Xing
Full electronic automation in stock exchanges has recently become popular, generating high-frequency intraday data and motivating the development of near real-time price forecasting methods.
no code implementations • 17 Nov 2022 • Jiaheng Liu, Tong He, Honghui Yang, Rui Su, Jiayi Tian, Junran Wu, Hongcheng Guo, Ke Xu, Wanli Ouyang
Previous top-performing methods for 3D instance segmentation often rely on inter-task dependencies and tend to lack robustness.
1 code implementation • 7 Nov 2022 • Andrey Ignatov, Radu Timofte, Shuai Liu, Chaoyu Feng, Furui Bai, Xiaotao Wang, Lei Lei, Ziyao Yi, Yan Xiang, Zibin Liu, Shaoqing Li, Keming Shi, Dehui Kong, Ke Xu, Minsu Kwon, Yaqi Wu, Jiesi Zheng, Zhihao Fan, Xun Wu, Feng Zhang, Albert No, Minhyeok Cho, Zewen Chen, Xiaze Zhang, Ran Li, Juan Wang, Zhiming Wang, Marcos V. Conde, Ui-Jin Choi, Georgy Perevozchikov, Egor Ershov, Zheng Hui, Mengchuan Dong, Xin Lou, Wei Zhou, Cong Pang, Haina Qin, Mingxuan Cai
The role of mobile cameras increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing.
2 code implementations • ECCV 2022 • Haoyuan Wang, Ke Xu, Rynson W.H. Lau
Existing image enhancement methods are typically designed to address either the over- or under-exposure problem in the input image.
Ranked #1 on Image Enhancement on Exposure-Errors
no code implementations • 19 Oct 2022 • Chengqian Gao, Ke Xu, Liu Liu, Deheng Ye, Peilin Zhao, Zhiqiang Xu
A promising paradigm for offline reinforcement learning (RL) is to constrain the learned policy to stay close to the dataset behaviors, known as policy constraint offline RL.
no code implementations • 8 Oct 2022 • Jiahui Chen, Yi Zhao, Qi Li, Xuewei Feng, Ke Xu
Deep learning (DL) methods have been widely applied to anomaly-based network intrusion detection system (NIDS) to detect malicious traffic.
1 code implementation • 30 Aug 2022 • Zhifeng Xie, Sen Wang, Ke Xu, Zhizhong Zhang, Xin Tan, Yuan Xie, Lizhuang Ma
Based on this, we propose to exploit the image frequency distributions for night-time scene parsing.
no code implementations • 11 Aug 2022 • Ke Xu, Jianqiao Wangni, Yifan Zhang, Deheng Ye, Jiaxiang Wu, Peilin Zhao
Therefore, a threshold quantization strategy with a relatively small error is adopted in QCMD adagrad and QRDA adagrad to improve the signal-to-noise ratio and preserve the sparsity of the model.
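The threshold-quantization idea above can be sketched in a few lines: quantize the weights, but send entries below a threshold to exactly zero, so quantization noise does not destroy the model's sparsity. This is a minimal illustration of the general idea; the bit width, threshold, and scaling scheme are assumptions, not the QCMD/QRDA adagrad specifics.

```python
import numpy as np

def threshold_quantize(w, n_bits=8, threshold=0.05):
    """Quantize weights, but map sub-threshold entries to exactly zero.

    Zeroing small entries preserves sparsity and avoids spending
    quantization levels on near-zero noise (illustrative sketch).
    """
    w = np.asarray(w, dtype=float)
    mask = np.abs(w) >= threshold            # survivors keep a quantized value
    scale = np.max(np.abs(w)) or 1.0         # symmetric per-tensor scale
    levels = 2 ** (n_bits - 1) - 1
    q = np.round(w / scale * levels) / levels * scale
    return np.where(mask, q, 0.0)

w = np.array([0.9, -0.02, 0.0, 0.4, -0.03])
out = threshold_quantize(w)
print(out)  # small entries become exactly 0; large ones are quantized
```

Without the threshold, rounding would leave tiny nonzero values everywhere and the quantized model would lose its sparsity.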
no code implementations • 5 Jul 2022 • Ke Xu, Yao Xiao, Zhaoheng Zheng, Kaijie Cai, Ram Nevatia
Despite the diversity in attack patterns, adversarial patches tend to be highly textured and different in appearance from natural images.
2 code implementations • 4 Jul 2022 • Zhanghan Ke, Chunyi Sun, Lei Zhu, Ke Xu, Rynson W. H. Lau
Unlike prior methods that are based on black-box autoencoders, Harmonizer contains a neural network for filter argument prediction and several white-box filters (based on the predicted arguments) for image harmonization.
Ranked #5 on Image Harmonization on iHarmony4
1 code implementation • 26 Jun 2022 • Junran Wu, Xueyuan Chen, Ke Xu, Shangzhe Li
In addition to SEP, we further design two classification models, SEP-G and SEP-N for graph classification and node classification, respectively.
1 code implementation • 6 Jun 2022 • Junran Wu, Shangzhe Li, Jianhao Li, YiCheng Pan, Ke Xu
Inspired by structural entropy on graphs, we transform the data sample from graphs to coding trees, which is a simpler but essential structure for graph data.
no code implementations • 16 May 2022 • Shibo Feng, Chunyan Miao, Ke Xu, Jiaxiang Wu, Pengcheng Wu, Yang Zhang, Peilin Zhao
Probabilistic prediction of multivariate time series is a notoriously challenging but practical task.
no code implementations • 12 Apr 2022 • Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, Ke Xu
In this work, we observe that mutual relation knowledge between samples is also important to improve the discriminative ability of the learned representation of the student model, and propose an effective face recognition distillation method called CoupleFace by additionally introducing the Mutual Relation Distillation (MRD) into existing distillation framework.
1 code implementation • CVPR 2022 • Xin Tian, Ke Xu, Xin Yang, Lin Du, BaoCai Yin, Rynson W. H. Lau
We observe that spatial attention works concurrently with object-based attention in the human visual recognition system.
1 code implementation • CVPR 2022 • Zhengyang Feng, Shaohua Guo, Xin Tan, Ke Xu, Min Wang, Lizhuang Ma
This paper presents a novel parametric curve-based method for lane detection in RGB images.
Ranked #2 on Lane Detection on LLAMAS
1 code implementation • 13 Jan 2022 • Xingbo Wang, Furui Cheng, Yong Wang, Ke Xu, Jiang Long, Hong Lu, Huamin Qu
Natural language interfaces (NLIs) provide users with a convenient way to interactively analyze data through natural language queries.
1 code implementation • CVPR 2022 • Shaohua Guo, Liang Liu, Zhenye Gan, Yabiao Wang, Wuhao Zhang, Chengjie Wang, Guannan Jiang, Wei zhang, Ran Yi, Lizhuang Ma, Ke Xu
The huge burden of computation and memory are two obstacles in ultra-high resolution image segmentation.
no code implementations • 5 Dec 2021 • Guoquan Xu, Hezhi Cao, Yifan Zhang, Yanxin Ma, Jianwei Wan, Ke Xu
Transformer plays an increasingly important role in various computer vision areas and remarkable achievements have also been made in point cloud analysis.
no code implementations • 5 Dec 2021 • Guoquan Xu, Hezhi Cao, Yifan Zhang, Jianwei Wan, Ke Xu, Yanxin Ma
Attention mechanisms play an increasingly important role in point cloud analysis, and channel attention is one of the hotspots.
no code implementations • 3 Dec 2021 • Zheng Dong, Ke Xu, Ziheng Duan, Hujun Bao, Weiwei Xu, Rynson W. H. Lau
Our key idea is to exploit the complementary properties of depth denoising and 3D reconstruction, for learning a two-scale PIFu representation to reconstruct high-frequency facial details and consistent bodies separately.
no code implementations • 30 Nov 2021 • Jiyi Zhang, Han Fang, Wesley Joon-Wie Tann, Ke Xu, Chengfang Fang, Ee-Chien Chang
We point out that by distributing different copies of the model to different buyers, we can mitigate the attack such that adversarial samples found on one copy would not work on another copy.
no code implementations • 23 Nov 2021 • Xingkai Zheng, Xirui Li, Ke Xu, Xinghao Jiang, Tanfeng Sun
Most existing gait identification methods extract features from gait videos and identify a probe sample by a query in the gallery.
no code implementations • 19 Nov 2021 • Xin Tian, Ke Xu, Xin Yang, BaoCai Yin, Rynson W. H. Lau
However, it is non-trivial to use only class labels to learn instance-aware saliency information, as salient instances with high semantic affinities may not be easily separated by the labels.
no code implementations • 15 Oct 2021 • Chengqian Gao, Ke Xu, Kuangqi Zhou, Lanqing Li, Xueqian Wang, Bo Yuan, Peilin Zhao
To alleviate the action distribution shift problem in extracting RL policy from static trajectories, we propose Value Penalized Q-learning (VPQ), an uncertainty-based offline RL algorithm.
2 code implementations • COLING 2022 • Chong Zhang, He Zhu, Xingyu Peng, Junran Wu, Ke Xu
Inspired by the structural entropy, we construct the coding tree of the graph by minimizing the structural entropy and propose HINT, which aims to make full use of the hierarchical information contained in the text for the task of text classification.
1 code implementation • EMNLP 2021 • Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, Furu Wei
Recent studies on compression of pretrained language models (e.g., BERT) usually use preserved accuracy as the metric for evaluation.
1 code implementation • 5 Sep 2021 • Junran Wu, Jianhao Li, YiCheng Pan, Ke Xu
We then present an implementation of the scheme in a tree kernel and a convolutional network to perform graph classification.
no code implementations • 20 Aug 2021 • Guoquan Xu, Hezhi Cao, Yifan Zhang, Jianwei Wan, Ke Xu, Yanxin Ma
To handle this problem, a feature representation learning method, named Dual-Neighborhood Deep Fusion Network (DNDFN), is proposed to serve as an improved point cloud encoder for the task of non-idealized point cloud classification.
1 code implementation • 28 Jun 2021 • Chuanpu Fu, Qi Li, Meng Shen, Ke Xu
To this end, we propose Whisper, a realtime ML based malicious traffic detection system that achieves both high accuracy and high throughput by utilizing frequency domain features.
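The frequency-domain feature idea can be sketched as follows: slice a per-flow packet sequence into windows and summarize each flow by averaged log-magnitude FFT spectra, which expose periodic patterns (e.g. beaconing) cheaply. This is a hedged illustration of the general technique, not Whisper's actual pipeline; the window size, input signal, and pooling are assumptions.

```python
import numpy as np

def frequency_features(pkt_lengths, win=16):
    """Turn a per-flow packet-length sequence into frequency-domain features.

    Windowed FFT magnitudes capture periodicity that is robust to
    per-packet jitter; averaging log-spectra gives a fixed-size feature.
    """
    x = np.asarray(pkt_lengths, dtype=float)
    n = len(x) // win * win                      # drop the ragged tail
    frames = x[:n].reshape(-1, win)
    spec = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per window
    return np.log1p(spec).mean(axis=0)           # averaged log-spectrum

# A periodic "beaconing" flow vs. random benign traffic.
beacon = np.tile([1500, 60, 60, 60], 64)
rng = np.random.default_rng(0)
benign = rng.integers(60, 1500, size=256)
print(frequency_features(beacon).shape)  # (9,) — rfft of a 16-sample window
```

A downstream classifier (or clustering step) would then operate on these fixed-size feature vectors instead of raw per-packet sequences.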
no code implementations • 27 Jun 2021 • Guangmeng Zhou, Ke Xu, Qi Li, Yang Liu, Yi Zhao
In a highly heterogeneous environment, AdaptCL achieves a training speedup of 6.2x with a slight loss of accuracy.
no code implementations • Findings (ACL) 2021 • Yaru Hao, Li Dong, Hangbo Bao, Ke Xu, Furu Wei
Moreover, we propose to use a focal loss for the generator in order to relieve oversampling of correct tokens as replacements.
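The focal-loss mechanism referenced above can be sketched directly: the (1 - p_t)^γ factor shrinks the loss on tokens the generator already predicts confidently, so training weight shifts to the harder tokens. A minimal NumPy sketch of the standard focal loss, not the paper's exact generator objective; shapes and names are illustrative.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Focal loss: down-weight easy (high-probability) tokens.

    probs:   (n_tokens, vocab) predicted distributions
    targets: (n_tokens,) correct token ids
    gamma=0 recovers plain cross-entropy.
    """
    p_t = probs[np.arange(len(targets)), targets]   # prob. of correct token
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

probs = np.array([[0.9, 0.1],    # easy token: correct class at 0.9
                  [0.6, 0.4]])   # harder token: correct class at 0.6
targets = np.array([0, 0])
print(focal_loss(probs, targets) < focal_loss(probs, targets, gamma=0.0))
# → True: focal loss is strictly below cross-entropy on confident tokens
```

Because easy tokens are discounted, a generator trained this way oversamples correct tokens as replacements less often, which is exactly the motivation stated above.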
2 code implementations • 2021 IEEE 37th International Conference on Data Engineering 2021 • Yansheng Wang, Yongxin Tong, Dingyuan Shi, Ke Xu
Traditional learning-to-rank (LTR) models are usually trained in a centralized approach based upon a large amount of data.
1 code implementation • 4 Jun 2021 • Junran Wu, Ke Xu, Xueyuan Chen, Shangzhe Li, Jichang Zhao
Then, structural information, referring to associations among temporal points and the node weights, is extracted from the mapped graphs to resolve the problems regarding long-range dependencies and the chaotic property.
1 code implementation • ICLR 2022 • Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.
1 code implementation • ACL 2021 • JunJie Huang, Duyu Tang, Linjun Shou, Ming Gong, Ke Xu, Daxin Jiang, Ming Zhou, Nan Duan
Finding code given a natural language query is beneficial to the productivity of software developers.
1 code implementation • 21 Apr 2021 • Cho-Ying Wu, Ke Xu, Chin-Cheng Hsu, Ulrich Neumann
This work analyzes whether 3D face models can be learned from only the speech inputs of speakers.
1 code implementation • NAACL 2021 • Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, Furu Wei
Cant is important for understanding advertising, comedies and dog-whistle politics.
no code implementations • 6 Jan 2021 • Peng Yue, Jingping Xiao, Ke Xu, Yiyu Lu, Dewei Peng
Hitherto, separation and transition problems have not been described accurately in mathematical terms, leading to design errors and prediction problems in fluid machine engineering.
Fluid Dynamics
1 code implementation • EMNLP 2021 • Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei
In this paper, we generalize text infilling (e.g., masked language models) by proposing Sequence Span Rewriting (SSR) as a self-supervised sequence-to-sequence (seq2seq) pre-training objective.
no code implementations • ICCV 2021 • Lei Zhu, Ke Xu, Zhanghan Ke, Rynson W.H. Lau
These two phenomena reveal that deep shadow detectors heavily depend on the intensity cue, which we refer to as intensity bias.
1 code implementation • Findings (ACL) 2021 • Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, Yunbo Cao
Pre-trained Transformer-based neural language models, such as BERT, have achieved remarkable results on varieties of NLP tasks.
1 code implementation • ICCV 2021 • Zheng Dong, Ke Xu, Yin Yang, Hujun Bao, Weiwei Xu, Rynson W. H. Lau
It is beneficial to strong reflection detection and substantially improves the quality of reflection removal results.
no code implementations • 10 Dec 2020 • Tiancheng Huang, Ke Xu, Donglin Wang
Domain adaptation using graph-structured networks learns label-discriminative and network-invariant node embeddings by sharing graph parameters.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Yaru Hao, Li Dong, Furu Wei, Ke Xu
The recently introduced pre-trained language model BERT advances the state-of-the-art on many NLP tasks through the fine-tuning approach, but few studies investigate how the fine-tuning process improves the model performance on downstream tasks.
no code implementations • 27 Nov 2020 • Meng Shen, Hao Yu, Liehuang Zhu, Ke Xu, Qi Li, Xiaojiang Du
Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems.
no code implementations • 5 Nov 2020 • Jun Liu, Ke Xu, Guangyan Zhou
The second moment method has always been an effective tool to lower bound the satisfiability threshold of many random constraint satisfaction problems.
no code implementations • 29 Sep 2020 • Xin Tian, Ke Xu, Xin Yang, Bao-Cai Yin, Rynson W. H. Lau
Inspired by this insight, we propose to use class and subitizing labels as weak supervision for the SID problem.
1 code implementation • 20 Jul 2020 • Brian Barr, Ke Xu, Claudio Silva, Enrico Bertini, Robert Reilly, C. Bayan Bruss, Jason D. Wittenbach
In data science, there is a long history of using synthetic data for method development, feature selection and feature engineering.
2 code implementations • IJCAI 2020 • Ke Xu, Yifan Zhang, Deheng Ye, Peilin Zhao, Mingkui Tan
One of the key issues is how to represent the non-stationary price series of assets in a portfolio, which is important for portfolio decisions.
1 code implementation • NeurIPS 2020 • Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, Furu Wei
In this paper, we propose Patience-based Early Exit, a straightforward yet effective inference method that can be used as a plug-and-play technique to simultaneously improve the efficiency and robustness of a pretrained language model (PLM).
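The patience mechanism can be sketched in a few lines: attach a classifier to each layer and stop as soon as the prediction has stayed the same for a fixed number of consecutive layers. This is a sketch of the idea only, assuming precomputed per-layer predictions; it is not the authors' implementation.

```python
def patience_early_exit(layer_predictions, patience=2):
    """Exit at the first layer whose prediction has been stable for
    `patience` consecutive internal classifiers.

    layer_predictions: class predictions from the classifier attached
    to each layer, in depth order. Returns (prediction, exit_depth).
    """
    streak, last = 0, None
    for depth, pred in enumerate(layer_predictions, start=1):
        streak = streak + 1 if pred == last else 1
        last = pred
        if streak >= patience:
            return pred, depth          # early exit: skip remaining layers
    return last, len(layer_predictions)

preds = ["cat", "dog", "dog", "dog", "dog", "dog"]
print(patience_early_exit(preds, patience=3))  # → ('dog', 4)
```

On easy inputs the prediction stabilizes early and most layers are skipped, which is where both the efficiency and the robustness (fewer layers to overfit on) come from.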
no code implementations • CVPR 2020 • Ke Xu, Xin Yang, Baocai Yin, Rynson W.H. Lau
While concurrently enhancing a low-light image and removing its noise is ill-posed, we observe that noise exhibits different levels of contrast in different frequency layers, and it is much easier to detect noise in the low-frequency layer than in the high-frequency one.
no code implementations • 27 May 2020 • Ke Xu, Anton S. Tremsin, Jiaqi Li, Daniela M. Ushizima, Catherine A. Davy, Amine Bouterf, Ying Tsun Su, Milena Marroccoli, Anna Maria Mauro, Massimo Osanna, Antonio Telesca, Paulo J. M. Monteiro
In the present work, samples were drilled from the "Hospitium" in Pompeii and were analyzed by synchrotron microtomography (uCT) and neutron radiography to study how the microstructure, including the presence of induced cracks, affects their water adsorption.
Applied Physics • Materials Science • Geophysics
1 code implementation • ACL 2020 • Zhongli Li, Wenhui Wang, Li Dong, Furu Wei, Ke Xu
Our approach outperforms previous unsupervised approaches by a large margin and is competitive with early supervised models.
Ranked #188 on Question Answering on SQuAD1.1
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou
In this paper, we introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism, which is a key component of transformer, a state-of-the-art model for various NLP tasks.
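The structured-dropout idea can be sketched as follows: instead of masking individual activations, entire attention heads are zeroed during training and the surviving heads are rescaled. A hedged sketch of the DropHead idea in NumPy; the tensor layout and rescaling are assumptions, not the paper's exact implementation.

```python
import numpy as np

def drop_head(attn_out, p=0.3, rng=None, training=True):
    """Structured dropout over attention heads.

    attn_out: (batch, heads, seq, dim) per-head attention outputs.
    Whole heads are dropped with probability p, and kept heads are
    rescaled by 1/(1-p), so the expected output is unchanged.
    """
    if not training or p == 0.0:
        return attn_out
    if rng is None:
        rng = np.random.default_rng()
    b, h = attn_out.shape[:2]
    keep = rng.random((b, h, 1, 1)) >= p        # one mask entry per head
    return attn_out * keep / (1.0 - p)

x = np.ones((2, 8, 4, 16))
y = drop_head(x, p=0.5, rng=np.random.default_rng(0))
# each head is either entirely zero or uniformly rescaled by 1/(1-p)
print(sorted(set(np.unique(y).tolist())))
```

Because the mask is shared across a head's whole output, the regularization pressure falls on the multi-head structure itself rather than on individual neurons.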
2 code implementations • 23 Apr 2020 • Yaru Hao, Li Dong, Furu Wei, Ke Xu
The great success of Transformer-based models benefits from the powerful multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input.
no code implementations • 15 Mar 2020 • Xin Tan, Ke Xu, Ying Cao, Yiheng Zhang, Lizhuang Ma, Rynson W. H. Lau
Although huge progress has been made on scene analysis in recent years, most existing works assume the input images to be in day-time with good lighting conditions.
no code implementations • 12 Feb 2020 • Wangchunshu Zhou, Ke Xu
While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a small amount of human preference annotation to better imitate human judgment.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Wangchunshu Zhou, Tao Ge, Ke Xu
PBD copies the corresponding representation of source tokens to the decoder as pseudo future context to enable the decoder to attend to its bi-directional context.
no code implementations • ICLR 2020 • Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou
Conventional Generative Adversarial Networks (GANs) for text generation tend to have issues of reward sparsity and mode collapse that affect the quality and diversity of generated samples.
1 code implementation • 9 Nov 2019 • Ke Xu, Kaiyu Guan, Jian Peng, Yunan Luo, Sibo Wang
The average accuracy is 93.56%, compared with 85.36% from CFMask.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, Ming Zhou
The poor translation model resembles the ESL (English as a second language) learner and tends to generate translations of low quality in terms of fluency and grammatical correctness, while the good translation model generally generates fluent and grammatically correct translations.
3 code implementations • 20 Oct 2019 • Mengqi Zhang, Shu Wu, Meng Gao, Xin Jiang, Ke Xu, Liang Wang
The other is Dot-Product Attention mechanism, which draws on the Transformer net to explicitly model the effect of historical sessions on the current session.
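The dot-product attention mechanism mentioned above is the generic Transformer form, which can be sketched directly: weight the historical-session representations by their scaled similarity to the current-session query. Variable names and dimensions are illustrative, not the model's actual configuration.

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """Scaled dot-product attention over a set of history items.

    query:  (d,) current-session representation
    keys:   (n, d) one key per historical session
    values: (n, d) one value per historical session
    Returns the attention-weighted summary of the values.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)      # similarity per history item
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over history
    return weights @ values

# Current-session query attends over 3 historical-session embeddings.
rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((3, 8))
V = rng.standard_normal((3, 8))
out = dot_product_attention(q, K, V)
print(out.shape)  # → (8,)
```

The softmax weights make the effect of each historical session on the current session explicit and inspectable, which is the point the entry highlights.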
no code implementations • 12 Oct 2019 • Peter Du, Zhe Huang, Tianqi Liu, Ke Xu, Qichao Gao, Hussein Sibai, Katherine Driggs-Campbell, Sayan Mitra
As autonomous systems begin to operate amongst humans, methods for safe interaction must be investigated.
Robotics • Multiagent Systems • Signal Processing
1 code implementation • ICCV 2019 • Xin Yang, Haiyang Mei, Ke Xu, Xiaopeng Wei, Bao-Cai Yin, Rynson W. H. Lau
To the best of our knowledge, this is the first work to address the mirror segmentation problem with a computational approach.
no code implementations • 23 Aug 2019 • Xin Yang, Haiyang Mei, Jiqing Zhang, Ke Xu, Bao-Cai Yin, Qiang Zhang, Xiaopeng Wei
Recently, single-image super-resolution has made great progress owing to the development of deep convolutional neural networks (CNNs).
no code implementations • IJCNLP 2019 • Yaru Hao, Li Dong, Furu Wei, Ke Xu
Language model pre-training, such as BERT, has achieved remarkable results in many NLP tasks.
no code implementations • 14 Jul 2019 • Ke Xu, Martin D. Gould, Sam D. Howison
We study the multi-level order-flow imbalance (MLOFI), which is a vector quantity that measures the net flow of buy and sell orders at different price levels in a limit order book (LOB).
1 code implementation • ACL 2019 • Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou
Our approach first applies dropout to the target word's embedding for partially masking the word, allowing BERT to take balanced consideration of the target word's semantics and contexts for proposing substitute candidates, and then validates the candidates based on their substitution's influence on the global contextualized representation of the sentence.
no code implementations • ICLR 2019 • Ke Xu, Xiao-Yun Wang, Qun Jia, Jianjing An, Dong Wang
Therefore, accumulating the saliency of the filter over the entire data set can provide more accurate guidance for pruning.
no code implementations • ICLR 2019 • Qi Tan, Pingzhong Tang, Ke Xu, Weiran Shen, Song Zuo
Generative neural networks map a standard, simple distribution to a complex high-dimensional distribution that represents the real-world data set.
2 code implementations • CVPR 2019 • Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, Rynson Lau
Second, to better cover the stochastic distribution of real rain streaks, we propose a novel SPatial Attentive Network (SPANet) to remove rain streaks in a local-to-global manner.
Ranked #3 on Single Image Deraining on RainCityscapes
no code implementations • NeurIPS 2018 • Xin Yang, Ke Xu, Shaozhe Chen, Shengfeng He, Baocai Yin, Rynson Lau
Our aim is to discover the most informative sequence of regions for user input in order to produce a good alpha matte with minimum labeling efforts.
3 code implementations • 17 Nov 2018 • Bryan A. Plummer, Kevin J. Shih, Yichen Li, Ke Xu, Svetlana Lazebnik, Stan Sclaroff, Kate Saenko
Most existing work that grounds natural language phrases in images starts with the assumption that the phrase in question is relevant to the image.
no code implementations • CVPR 2018 • Xin Yang, Ke Xu, Yibing Song, Qiang Zhang, Xiaopeng Wei, Rynson Lau
Given an input LDR image, we first reconstruct the missing details in the HDR domain.
no code implementations • 31 May 2017 • Junping Zhou, Huanyao Sun, Feifei Ma, Jian Gao, Ke Xu, Minghao Yin
We introduce a diversified top-k partial MaxSAT problem, a combination of partial MaxSAT problem and enumeration problem.
no code implementations • EACL 2017 • Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, Ke Xu
This paper presents an attention-enhanced attribute-to-sequence model to generate product reviews for given attribute information, such as user, product, and rating.
no code implementations • NeurIPS 2016 • Xinran He, Ke Xu, David Kempe, Yan Liu
We establish both proper and improper PAC learnability of influence functions under randomly missing observations.
no code implementations • 17 Dec 2014 • Yuan Zuo, Jichang Zhao, Ke Xu
The short text has been the prevalent format for information on the Internet in recent decades, especially with the development of online social media, whose millions of users generate a vast number of short messages every day.
no code implementations • CL 2015 • Li Dong, Furu Wei, Shujie Liu, Ming Zhou, Ke Xu
Unlike previous works that employ syntactic parsing results for sentiment analysis, we develop a statistical parser to directly analyze the sentiment structure of a sentence.