no code implementations • WMT (EMNLP) 2021 • Longyue Wang, Mu Li, Fangxu Liu, Shuming Shi, Zhaopeng Tu, Xing Wang, Shuangzhi Wu, Jiali Zeng, Wen Zhang
Building on our success in the last WMT, we continued to employ advanced techniques such as large-batch training, data selection, and data filtering.
1 code implementation • COLING 2022 • Zhongjian Miao, Xiang Li, Liyan Kang, Wen Zhang, Chulun Zhou, Yidong Chen, Bin Wang, Min Zhang, Jinsong Su
Most existing methods for robust neural machine translation (NMT) construct adversarial examples by injecting noise into authentic examples and exploit the two types of examples indiscriminately.
1 code implementation • COLING 2022 • Zezhong Xu, Peng Ye, Hui Chen, Meng Zhao, Huajun Chen, Wen Zhang
Based on this idea, we propose a transformer-based rule mining approach, Ruleformer.
no code implementations • IWSLT (ACL) 2022 • Bao Guo, Mengge Liu, Wen Zhang, Hexuan Chen, Chang Mu, Xiang Li, Jianwei Cui, Bin Wang, Yuhang Guo
Our system is built based on the Transformer model with novel techniques borrowed from our recent research work.
Tasks: Automatic Speech Recognition (ASR), +5 more
1 code implementation • 3 Mar 2023 • Wen Zhang, Yushan Zhu, Mingyang Chen, Yuxia Geng, Yufeng Huang, Yajing Xu, Wenting Song, Huajun Chen
Through experiments, we justify that the pretrained KGTransformer could be used off the shelf as a general and effective KRF module across KG-related tasks.
no code implementations • 2 Mar 2023 • Mengge Liu, Wen Zhang, Xiang Li, Jian Luan, Bin Wang, Yuhang Guo, Shuoying Chen
Simultaneous machine translation (SimulMT) models start translation before the end of the source sentence, making the translation monotonically aligned with the source sentence.
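One common way to keep the translation monotonically aligned with the incoming source is a wait-k policy, where the decoder stays k tokens behind the source stream. The sketch below illustrates only that scheduling logic and is not necessarily this paper's SimulMT method; `read_source_token` and `translate_step` are hypothetical callbacks.

```python
def wait_k_decode(read_source_token, translate_step, k=3, max_len=100):
    """Minimal wait-k scheduling sketch. `read_source_token()` returns the
    next source token or None when the source is finished;
    `translate_step(prefix, target)` returns the next target token (string)."""
    source_prefix, target = [], []
    source_done = False
    while len(target) < max_len:
        # READ until the visible source is k tokens ahead of the output.
        while not source_done and len(source_prefix) < len(target) + k:
            tok = read_source_token()
            if tok is None:
                source_done = True
            else:
                source_prefix.append(tok)
        # WRITE one target token conditioned on the visible source prefix.
        next_tok = translate_step(source_prefix, target)
        if next_tok == "</s>":
            break
        target.append(next_tok)
    return target
```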
no code implementations • 3 Feb 2023 • Mingyang Chen, Wen Zhang, Yuxia Geng, Zezhong Xu, Jeff Z. Pan, Huajun Chen
In this paper, we use a set of general terminologies to unify these methods and refer to them as Knowledge Extrapolation.
1 code implementation • 3 Feb 2023 • Mingyang Chen, Wen Zhang, Zhen Yao, Yushan Zhu, Yang Gao, Jeff Z. Pan, Huajun Chen
In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a small set of entities and refer to them as reserved entities.
1 code implementation • 3 Jan 2023 • Zhen Yao, Wen Zhang, Mingyang Chen, Yufeng Huang, Yi Yang, Huajun Chen
In AnKGE, we train an analogy function for each level of analogical inference, taking the original element embedding from a well-trained KGE model as input and outputting the analogical object embedding.
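To make the idea above concrete, here is a minimal sketch of such an analogy function; the architecture (a small feed-forward network over frozen KGE embeddings) is an assumption for illustration only, not the actual AnKGE design.

```python
import torch
import torch.nn as nn

class AnalogyFunction(nn.Module):
    """Hypothetical sketch: maps a frozen KGE element embedding to an
    'analogical object' embedding at one analogy level."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, element_emb):          # (batch, dim) from a trained KGE
        return self.net(element_emb)         # analogical object embedding

# Usage: one function per analogy level; KGE embeddings stay fixed.
f_entity = AnalogyFunction(dim=200)
analog_emb = f_entity(torch.randn(8, 200))
```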
no code implementations • 29 Dec 2022 • Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng Huang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with relevant images attached.
1 code implementation • 20 Oct 2022 • Zhuo Chen, Wen Zhang, Yufeng Huang, Mingyang Chen, Yuxia Geng, Hongtao Yu, Zhen Bi, Yichi Zhang, Zhen Yao, Wenting Song, Xinliang Wu, Yi Yang, Mingyi Chen, Zhaoyang Lian, YingYing Li, Lei Cheng, Huajun Chen
In this work, we share our experience on tele-knowledge pre-training for fault analysis, a crucial task in telecommunication applications that requires a wide range of knowledge normally found in both machine log data and product documents.
1 code implementation • 8 Oct 2022 • Yuxia Geng, Jiaoyan Chen, Jeff Z. Pan, Mingyang Chen, Song Jiang, Wen Zhang, Huajun Chen
Subgraph reasoning with message passing is a promising and popular solution.
no code implementations • 21 Sep 2022 • Sahand Sabour, Wen Zhang, Xiyao Xiao, Yuwei Zhang, Yinhe Zheng, Jiaxin Wen, Jialu Zhao, Minlie Huang
In this study, we analyze the effectiveness of Emohaa in reducing symptoms of mental distress.
no code implementations • 19 Sep 2022 • Zezhong Xu, Wen Zhang, Peng Ye, Hui Chen, Huajun Chen
In this work, we propose a Neural and Symbolic Entangled framework (ENeSy) for complex query answering, which enables the neural and symbolic reasoning to enhance each other to alleviate the cascading error and KG incompleteness.
no code implementations • 15 Sep 2022 • Yichi Zhang, Wen Zhang
Twins negative sampling is suited to multimodal scenarios and can align the different embeddings of an entity.
no code implementations • 19 Aug 2022 • Yufeng Huang, Zhuo Chen, Wen Zhang, Jiaoyan Chen, Jeff Z. Pan, Zhen Yao, Yujie Xie, Huajun Chen
In typical multi-modal data containing text and images, previous approaches do not make full use of the fine-grained semantics of the image, especially in conjunction with the semantics of the text, and do not fully model the relationship between fine-grained image information and the target, which leads to insufficient use of the image and an inadequate ability to identify fine-grained aspects and opinions.
1 code implementation • 26 Jul 2022 • Zhuo Chen, Yufeng Huang, Jiaoyan Chen, Yuxia Geng, Yin Fang, Jeff Pan, Ningyu Zhang, Wen Zhang
Visual question answering (VQA) often requires an understanding of visual concepts and language semantics, which relies on external knowledge.
Ranked #11 on Visual Question Answering (VQA) on OK-VQA
1 code implementation • 4 Jul 2022 • Zhuo Chen, Yufeng Huang, Jiaoyan Chen, Yuxia Geng, Wen Zhang, Yin Fang, Jeff Z. Pan, Huajun Chen
Specifically, we (1) developed a cross-modal semantic grounding network to investigate the model's capability to disentangle semantic attributes from images; (2) applied an attribute-level contrastive learning strategy to further enhance the model's discrimination of fine-grained visual characteristics against attribute co-occurrence and imbalance; and (3) proposed a multi-task learning policy for balancing multiple model objectives.
Ranked #1 on Zero-Shot Learning on CUB-200-2011
1 code implementation • 8 Jun 2022 • Yuxia Geng, Jiaoyan Chen, Wen Zhang, Yajing Xu, Zhuo Chen, Jeff Z. Pan, Yufeng Huang, Feiyu Xiong, Huajun Chen
In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by ontology properties to capture and utilize more fine-grained class relationships in different aspects.
1 code implementation • 10 May 2022 • Mingyang Chen, Wen Zhang, Zhen Yao, Xiangnan Chen, Mengxiao Ding, Fei Huang, Huajun Chen
We study the knowledge extrapolation problem to embed new components (i.e., entities and relations) that come with emerging knowledge graphs (KGs) in the federated setting.
1 code implementation • 22 Mar 2022 • Zhaoyang Chu, Shichao Liu, Wen Zhang
The identification of drug-target binding affinity (DTA) has attracted increasing attention in the drug discovery process because it offers a more specific interpretation than binary interaction prediction.
no code implementations • 2 Mar 2022 • Wen Zhang, Chi-Man Wong, Ganqiang Ye, Bo Wen, Hongting Zhou, Wei zhang, Huajun Chen
On the one hand, it can provide item knowledge services in a uniform way, via service vectors, for embedding-based and item-knowledge-related task models without those models accessing the triple data.
1 code implementation • 25 Feb 2022 • Wen Zhang, Xiangnan Chen, Zhen Yao, Mingyang Chen, Yushan Zhu, Hongtao Yu, Yufeng Huang, Zezhong Xu, Yajing Xu, Ningyu Zhang, Zonggang Yuan, Feiyu Xiong, Huajun Chen
NeuralKG is an open-source Python-based library for diverse representation learning of knowledge graphs.
no code implementations • 15 Feb 2022 • Wen Zhang, Jiaoyan Chen, Juan Li, Zezhong Xu, Jeff Z. Pan, Huajun Chen
Knowledge graph (KG) reasoning is becoming increasingly popular in both academia and industry.
no code implementations • 7 Feb 2022 • Wen Zhang, Wenlu Wang, Mehdi Sookhak, Chen Pan
Thanks to their sustainable power supply and environmentally friendly features, self-powered IoT devices have been increasingly employed in various fields, such as providing observation data in remote areas, especially rural areas or post-disaster scenarios.
1 code implementation • 10 Jan 2022 • Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei LI, Xiaozhuan Liang, Yunzhi Yao, Shumin Deng, Peng Wang, Wen Zhang, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Feiyu Xiong, Fei Huang, Guozhou Zheng, Huajun Chen
We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population.
Tasks: Attribute Extraction, Cross-Domain Named Entity Recognition, +4 more
no code implementations • CVPR 2022 • Xingxing Zou, Kaicheng Pang, Wen Zhang, Waikeung Wong
To date, this is the first work to address an AI model's aesthetic ability with a detailed characterization based on professional fashion domain knowledge.
no code implementations • 18 Dec 2021 • Jiaoyan Chen, Yuxia Geng, Zhuo Chen, Jeff Z. Pan, Yuan He, Wen Zhang, Ian Horrocks, Huajun Chen
Machine learning, especially deep neural networks, has achieved great success, but many models rely on a large number of labeled samples for supervision.
no code implementations • 16 Dec 2021 • Hongzhun Wang, Feng Huang, Wen Zhang
More importantly, HampDTI identifies the important meta-paths for DTI prediction, which could explain how drugs connect with targets in HNs.
no code implementations • 16 Dec 2021 • Wen Zhang, Shumin Deng, Mingyang Chen, Liang Wang, Qiang Chen, Feiyu Xiong, Xiangwen Liu, Huajun Chen
We first identify three important desiderata for e-commerce KG systems: 1) attentive reasoning, reasoning over a few target relations of greater concern rather than all of them; 2) explanation, providing explanations for a prediction to help both users and business operators understand why it is made; 3) transferable rules, generating reusable rules to accelerate the deployment of a KG to new systems.
no code implementations • 8 Dec 2021 • Ganqiang Ye, Wen Zhang, Zhen Bi, Chi Man Wong, Chen Hui, Huajun Chen
Representation learning models for Knowledge Graphs (KG) have proven to be effective in encoding structural information and performing reasoning over KGs.
1 code implementation • 1 Dec 2021 • Yin Fang, Qiang Zhang, Haihong Yang, Xiang Zhuang, Shumin Deng, Wen Zhang, Ming Qin, Zhuo Chen, Xiaohui Fan, Huajun Chen
To address these issues, we construct a Chemical Element Knowledge Graph (KG) to summarize microscopic associations between elements and propose a novel Knowledge-enhanced Contrastive Learning (KCL) framework for molecular representation learning.
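Contrastive frameworks of this kind typically optimize an InfoNCE-style objective that pulls two views of the same molecule together (e.g., the raw graph and a knowledge-augmented view) and pushes different molecules apart. The sketch below shows that generic loss, not KCL's exact knowledge-enhanced objective.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE/NT-Xent sketch: z1[i] and z2[i] are two views
    (e.g., raw vs. knowledge-augmented) of the same molecule."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (batch, batch) similarities
    labels = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```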
1 code implementation • 27 Oct 2021 • Mingyang Chen, Wen Zhang, Yushan Zhu, Hongting Zhou, Zonggang Yuan, Changliang Xu, Huajun Chen
In this paper, to achieve inductive knowledge graph embedding, we propose a model MorsE, which does not learn embeddings for entities but learns transferable meta-knowledge that can be used to produce entity embeddings.
no code implementations • 29 Sep 2021 • Wen Zhang, Mingyang Chen, Zezhong Xu, Yushan Zhu, Huajun Chen
KGExplainer is a multi-hop reasoner learning latent rules for link prediction and is encouraged to behave similarly to KGEs during prediction through knowledge distillation.
1 code implementation • 20 Aug 2021 • Yushan Zhu, Huaixiao Tou, Wen Zhang, Ganqiang Ye, Hui Chen, Ningyu Zhang, Huajun Chen
In this paper, we address multi-modal pretraining of product data in the field of E-commerce.
no code implementations • 2 May 2021 • Wen Zhang, Chi-Man Wong, Ganqiang Ye, Bo Wen, Wei zhang, Huajun Chen
We built a billion-scale e-commerce product knowledge graph as a backbone for online shopping platforms, supporting various item knowledge services such as item recommendation.
no code implementations • 30 Apr 2021 • Chi-Man Wong, Fan Feng, Wen Zhang, Chi-Man Vong, Hui Chen, Yichi Zhang, Peng He, Huan Chen, Kun Zhao, Huajun Chen
We first construct a billion-scale conversation knowledge graph (CKG) from information about users, items and conversations, and then pretrain the CKG by introducing a knowledge graph embedding method and a graph convolution network to encode semantic and structural information, respectively. To make the CTR prediction model aware of the current state of users and the relationship between dialogues and items, we introduce user-state and dialogue-interaction representations based on the pre-trained CKG and propose K-DCN. In K-DCN, we fuse the user-state representation, dialogue-interaction representation and other normal feature representations via a deep cross network, which ranks the candidate items to be recommended. We experimentally show that our proposal significantly outperforms baselines and demonstrate its real-world application in AliMe.
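For readers unfamiliar with deep cross networks, the defining operation is the cross layer x_{l+1} = x_0 (w^T x_l) + b + x_l, which builds explicit feature interactions of increasing order. The snippet below sketches only that layer; the actual K-DCN fuses CKG-derived user-state and dialogue-interaction vectors with normal features before such layers.

```python
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer of a Deep & Cross Network:
    x_{l+1} = x_0 * (w^T x_l) + b + x_l  (explicit feature crossing)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0, xl):
        return x0 * (xl @ self.w).unsqueeze(-1) + self.b + xl

# Sketch: x0 would concatenate user-state, dialogue-interaction and other features.
x0 = torch.randn(4, 64)
x = x0
for layer in [CrossLayer(64), CrossLayer(64)]:
    x = layer(x0, x)
```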
no code implementations • 31 Dec 2020 • Shuai Liu, Xinran Xu, Zhihao Yang, Xiaohan Zhao, Wen Zhang
The computational experiments show that EPIHC outperforms existing state-of-the-art EPI prediction methods on the benchmark datasets and chromosome-split datasets, and the study reveals that the communicative learning module can bring in explicit information about EPIs, which is ignored by the CNN.
no code implementations • 25 Nov 2020 • Deheng Ye, Guibin Chen, Peilin Zhao, Fuhao Qiu, Bo Yuan, Wen Zhang, Sheng Chen, Mingfei Sun, Xiaoqian Li, Siqin Li, Jing Liang, Zhenjie Lian, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang
Unlike prior attempts, we integrate the macro-strategy and the micromanagement of MOBA-game-playing into neural networks in a supervised and end-to-end manner.
no code implementations • NeurIPS 2020 • Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu
However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool; for instance, OpenAI's Dota AI limits play to a pool of only 17 heroes.
no code implementations • COLING 2020 • Juan Li, Ruoxu Wang, Ningyu Zhang, Wen Zhang, Fan Yang, Huajun Chen
To recognize unseen relations at test time, we explore the problem of zero-shot relation classification.
2 code implementations • 24 Oct 2020 • Mingyang Chen, Wen Zhang, Zonggang Yuan, Yantao Jia, Huajun Chen
Knowledge graphs (KGs) consisting of triples are always incomplete, so it is important to perform Knowledge Graph Completion (KGC) by predicting missing triples.
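Predicting a missing triple is usually cast as ranking candidate entities with a scoring function over embeddings; the classic TransE score ||h + r - t|| is shown below purely as an illustration of that setup, not as this paper's model.

```python
import numpy as np

def transe_score(h, r, t):
    """Classic TransE score: lower ||h + r - t|| (higher negated norm)
    means a more plausible triple."""
    return -np.linalg.norm(h + r - t, axis=-1)

def rank_tails(h, r, entity_embs):
    """Rank every entity as a candidate tail for the query (h, r, ?)."""
    scores = transe_score(h, r, entity_embs)      # (num_entities,)
    return np.argsort(-scores)                    # best candidates first

entity_embs = np.random.randn(1000, 100)
relation_emb = np.random.randn(100)
top10 = rank_tails(entity_embs[0], relation_emb, entity_embs)[:10]
```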
no code implementations • 13 Sep 2020 • Yushan Zhu, Wen Zhang, Mingyang Chen, Hui Chen, Xu Cheng, Wei zhang, Huajun Chen
In DualDE, we propose a soft label evaluation mechanism to adaptively assign different soft label and hard label weights to different triples, and a two-stage distillation approach to improve the student's acceptance of the teacher.
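A hedged sketch of the general idea: per-triple weights alpha and beta mix a soft (teacher-matching) term with a hard (ground-truth) term. The weighting and loss forms shown here are illustrative, not DualDE's actual evaluation mechanism.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_scores, teacher_scores, labels, alpha, beta):
    """Per-triple weighted mix of a soft loss (match the teacher KGE) and a
    hard loss (fit the observed triples). Illustrative only -- not DualDE's
    exact formulation."""
    soft = F.mse_loss(student_scores, teacher_scores, reduction="none")
    hard = F.binary_cross_entropy_with_logits(
        student_scores, labels, reduction="none")
    return (alpha * soft + beta * hard).mean()

n = 16
loss = distill_loss(torch.randn(n), torch.randn(n),
                    torch.randint(0, 2, (n,)).float(),
                    alpha=torch.rand(n), beta=torch.rand(n))
```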
1 code implementation • 2 Sep 2020 • Wen Zhang, Lingfei Deng, Lei Zhang, Dongrui Wu
Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate the learning in a target domain.
no code implementations • 19 Jul 2020 • Wen Zhang, Liang Zhan, Paul Thompson, Yalin Wang
The higher-order network mappings from brain structural networks to functional networks are learned in the node domain.
1 code implementation • 1 May 2020 • Junyou Li, Gong Cheng, Qingxia Liu, Wen Zhang, Evgeny Kharlamov, Kalpa Gunaratna, Huajun Chen
In a large-scale knowledge graph (KG), an entity is often described by a large number of triple-structured facts.
1 code implementation • 24 Feb 2020 • Liang Mi, Tianshu Yu, Jose Bento, Wen Zhang, Baoxin Li, Yalin Wang
We propose to compute Wasserstein barycenters (WBs) by solving for Monge maps with variational principle.
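For orientation, the snippet below computes a fixed-support Wasserstein barycenter with the standard entropic iterative Bregman projection (Sinkhorn-style) scheme; this is a common baseline and deliberately not the variational Monge-map approach proposed in this paper.

```python
import numpy as np

def sinkhorn_barycenter(dists, C, reg=0.05, weights=None, iters=200):
    """Fixed-support entropic Wasserstein barycenter via iterative Bregman
    projections. `dists` is (m, n): m histograms over n shared support
    points; C is the (n, n) ground-cost matrix."""
    m, n = dists.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    K = np.exp(-C / reg)
    u = np.ones((m, n))
    for _ in range(iters):
        v = dists / (u @ K)                    # enforce the data marginals b_i
        Kv = v @ K.T                           # rows are K v_i
        bary = np.exp((w[:, None] * np.log(u * Kv)).sum(axis=0))
        u = bary[None, :] / Kv                 # enforce the shared barycenter marginal
    return bary

# Toy usage: barycenter of two 1-D histograms on a shared grid.
x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
d = np.stack([np.exp(-(x - 0.2) ** 2 / 0.01), np.exp(-(x - 0.8) ** 2 / 0.01)])
d = d / d.sum(axis=1, keepdims=True)
b = sinkhorn_barycenter(d, C)
```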
1 code implementation • 1 Dec 2019 • Wen Zhang, Dongrui Wu
Many existing domain adaptation approaches are based on the joint MMD, which is computed as the (weighted) sum of the marginal distribution discrepancy and the conditional distribution discrepancy; however, a more natural metric may be their joint probability distribution discrepancy.
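The basic building block behind all of these discrepancies is the (squared) kernel MMD between two samples: the marginal term compares features only, while conditional and joint variants restrict or augment the comparison with label information. The sketch below computes a plain Gaussian-kernel MMD between source and target features (the building block only, not the paper's full joint-probability discrepancy).

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(Xs, Xt, sigma=1.0):
    """Squared kernel MMD between source and target samples (biased
    estimator) -- the marginal-discrepancy building block."""
    Kss = gaussian_kernel(Xs, Xs, sigma)
    Ktt = gaussian_kernel(Xt, Xt, sigma)
    Kst = gaussian_kernel(Xs, Xt, sigma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

print(mmd2(np.random.randn(100, 10), np.random.randn(100, 10) + 0.5))
```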
1 code implementation • 30 Nov 2019 • Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, Dong Yu
Neural machine translation models usually adopt the teacher forcing strategy for training, which requires the predicted sequence to match the ground truth word by word and forces the probability of each prediction to approach a 0-1 distribution.
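Concretely, teacher forcing feeds the gold prefix at every decoding step and trains each position toward the single reference token with cross-entropy, which is what pushes each predictive distribution toward a 0-1 (one-hot) target. A minimal PyTorch sketch of that loss, with a hypothetical `decoder` interface:

```python
import torch
import torch.nn.functional as F

def teacher_forcing_loss(decoder, src_memory, tgt_tokens, pad_id=0):
    """Feed the gold prefix tgt_tokens[:, :-1] and train every position to
    predict the next gold token tgt_tokens[:, 1:]. `decoder` is assumed to
    return per-position vocabulary logits (hypothetical interface)."""
    logits = decoder(src_memory, tgt_tokens[:, :-1])   # (B, T-1, vocab)
    gold = tgt_tokens[:, 1:]                           # (B, T-1)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           gold.reshape(-1), ignore_index=pad_id)
```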
1 code implementation • 13 Nov 2019 • Feng Huang, Xiang Yue, Zhankun Xiong, Zhouxin Yu, Wen Zhang
To this end, we innovatively represent miRNA-disease-type triplets as a tensor and introduce Tensor Decomposition methods to solve the prediction task.
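As an illustration of the tensor view (the exact decomposition used in the paper may differ), a rank-R CP/PARAFAC model scores a (miRNA, disease, type) triplet as the inner product of three factor rows:

```python
import numpy as np

rank, n_mirna, n_disease, n_type = 16, 500, 300, 4
rng = np.random.default_rng(0)
U = rng.standard_normal((n_mirna, rank))    # miRNA factors
V = rng.standard_normal((n_disease, rank))  # disease factors
W = rng.standard_normal((n_type, rank))     # association-type factors

def cp_score(i, j, k):
    """CP score of triplet (miRNA i, disease j, type k):
    sum_r U[i, r] * V[j, r] * W[k, r]."""
    return float((U[i] * V[j] * W[k]).sum())

print(cp_score(0, 1, 2))
```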
no code implementations • 5 Nov 2019 • Yong Shan, Yang Feng, Jinchao Zhang, Fandong Meng, Wen Zhang
Generally, Neural Machine Translation models generate target words in a left-to-right (L2R) manner and fail to exploit any future (right-side) semantic information, which usually produces an unbalanced translation.
1 code implementation • 1 Nov 2019 • Guangyan Zhang, Ziru Liu, Jichen Dai, Zilan Yu, Shuai Liu, Wen Zhang
However, most of the existing methods are designed for lncRNAs in animal systems, and only a few focus on plant lncRNA identification.
no code implementations • 21 Oct 2019 • Wen Zhang, Dengji Zhao, Han-Yu Chen
Redistribution mechanisms have been proposed for more efficient resource allocation but not for profit.
1 code implementation • 14 Oct 2019 • Wen Zhang, Dongrui Wu
Experiments on four EEG datasets from two different BCI paradigms demonstrated that MEKT outperformed several state-of-the-art transfer learning approaches, and DTE can reduce more than half of the computational cost when the number of source subjects is large, with little sacrifice of classification accuracy.
no code implementations • 13 Sep 2019 • Wen Zhang, Yalin Wang
Our model is a two-stage deep network which contains a coarse parcellation network with a U-shape structure and a refinement network to fine-tune the coarse results.
1 code implementation • IJCNLP 2019 • Mingyang Chen, Wen Zhang, Wei zhang, Qiang Chen, Huajun Chen
Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples.
4 code implementations • 12 Jun 2019 • Xiang Yue, Zhen Wang, Jingong Huang, Srinivasan Parthasarathy, Soheil Moosavinasab, Yungui Huang, Simon M. Lin, Wen Zhang, Ping Zhang, Huan Sun
Our experimental results demonstrate that the recent graph embedding methods achieve promising results and deserve more attention in the future biomedical graph analysis.
no code implementations • ACL 2019 • Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu
Neural Machine Translation (NMT) generates target words sequentially in the way of predicting the next word conditioned on the context words.
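At inference time, that conditioning is realized by feeding back the model's own previous predictions; a minimal greedy decoding loop (with a hypothetical `step` callable returning next-token logits) looks like this:

```python
import torch

def greedy_decode(step, bos_id, eos_id, max_len=100):
    """Generate target words one at a time, each conditioned on the words
    already produced. `step(prefix)` is a hypothetical callable returning
    next-token logits (shape: vocab) for the current target prefix."""
    prefix = [bos_id]
    for _ in range(max_len):
        logits = step(torch.tensor(prefix))
        next_id = int(logits.argmax())
        if next_id == eos_id:
            break
        prefix.append(next_id)
    return prefix[1:]                         # drop the BOS token
```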
no code implementations • 5 Jun 2019 • Di Wang, Qi Wu, Wen Zhang
This paper takes a deep learning approach to understand consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchase.
no code implementations • 14 May 2019 • Tianyi Zhang, Dengji Zhao, Wen Zhang, Xuming He
We consider a fixed-price mechanism design setting where a seller sells one item via a social network, but the seller can only directly communicate with her neighbours initially.
no code implementations • 14 May 2019 • Wen Zhang, Yao Zhang, Dengji Zhao
We consider a requester who acquires a set of data (e.g., images) that is not owned by one party.
no code implementations • 21 Mar 2019 • Wen Zhang, Bibek Paudel, Liang Wang, Jiaoyan Chen, Hai Zhu, Wei zhang, Abraham Bernstein, Huajun Chen
We also evaluate the efficiency of rule learning and quality of rules from IterE compared with AMIE+, showing that IterE is capable of generating high quality rules more efficiently.
no code implementations • 12 Mar 2019 • Wen Zhang, Bibek Paudel, Wei zhang, Abraham Bernstein, Huajun Chen
Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications.
no code implementations • 6 Mar 2019 • Wen Zhang, Kai Shu, Huan Liu, Yalin Wang
In particular, we provide a principled approach to jointly capture local and global information in the user-user social graph and propose the framework {\m}, which jointly learns user representations for user identity linkage.
no code implementations • 2 Jan 2019 • Xingjian Du, Mengyao Zhu, Xuan Shi, Xinpeng Zhang, Wen Zhang, Jingdong Chen
Experiments comparing our CSM-based end-to-end model with other methods are conducted to confirm that the CSM accelerates model training and yields significant improvements in speech quality.
1 code implementation • 2 Dec 2018 • Liang Mi, Wen Zhang, Yalin Wang
We propose to align distributional data from the perspective of Wasserstein means.
no code implementations • EMNLP 2018 • Guanying Wang, Wen Zhang, Ruoxu Wang, Yalin Zhou, Xi Chen, Wei zhang, Hai Zhu, Huajun Chen
This paper proposes a label-free distant supervision method, which makes no use of the relation labels under this inadequate assumption, but only uses the prior knowledge derived from the KG to supervise the learning of the classifier directly and softly.
no code implementations • EMNLP 2018 • Wen Zhang, Liang Huang, Yang Feng, Lei Shen, Qun Liu
Although neural machine translation has achieved promising results, it suffers from slow translation speed.
2 code implementations • ECCV 2018 • Liang Mi, Wen Zhang, Xianfeng GU, Yalin Wang
We propose a new clustering method based on optimal transportation.
no code implementations • COLING 2018 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu
Although neural machine translation with the encoder-decoder framework has achieved great success recently, it still suffers from the drawbacks of forgetting distant information, an inherent disadvantage of the recurrent neural network structure, and of disregarding the relationships between source words during the encoding step.
no code implementations • ICCV 2017 • Liang Mi, Wen Zhang, Junwei Zhang, Yonghui Fan, Dhruman Goradia, Kewei Chen, Eric M. Reiman, Xianfeng GU, Yalin Wang
We compute the OT from each image to a template and measure the Wasserstein distance between them.
no code implementations • 12 Sep 2017 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu
Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information that is often useful, and the encoder operates only over words without considering word relationships.
no code implementations • 6 Sep 2017 • Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu
Although sequence-to-sequence neural machine translation (NMT) models have achieved state-of-the-art performance in recent years, it is a widespread concern that recurrent neural network (RNN) units struggle to capture long-distance state information, meaning an RNN can hardly find features with long-term dependencies as the sequence becomes longer.
no code implementations • 22 Sep 2016 • Yueming Sun, Ye Yang, He Zhang, Wen Zhang, Qing Wang
[Conclusions]: The approach of using an ontology can effectively and efficiently support the conduct of systematic literature reviews.
no code implementations • 15 Aug 2016 • Elena Garces, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, Diego Gutierrez
We present a method to automatically decompose a light field into its intrinsic shading and albedo components.
no code implementations • CVPR 2016 • Jie Shi, Wen Zhang, Yalin Wang
Experimental results demonstrate that our method may be used as an effective shape index, which outperforms some other standard shape measures in our AD versus healthy control classification study.