no code implementations • 24 May 2023 • Jin Xu, Emilien Dupont, Kaspar Märtens, Tom Rainforth, Yee Whye Teh
We introduce Markov Neural Processes (MNPs), a new class of Stochastic Processes (SPs) which are constructed by stacking sequences of neural parameterised Markov transition operators in function space.
no code implementations • 10 Apr 2023 • Jin Xu, Yangning Li, Xiangjin Xie, Yinghui Li, Niu Hu, Haitao Zheng, Yong Jiang
To improve the exploitation of the structural information, we propose a novel entity alignment framework called Weakly-Optimal Graph Contrastive Learning (WOGCL), which is refined on three dimensions: (i) Model.
no code implementations • 4 Apr 2023 • Yihua Ma, Zhifeng Yuan, Yu Xin, Jiang Hua, Guanghui Yu, Jin Xu, Liujun Hu
OTFDM uses a 2D dot-product channel model to cope with doubly selective channels.
no code implementations • 20 Feb 2023 • Xiang Sun, Jin Xu, Junjie Zhou
When the contest technology in each battle is of Tullock form, a surprising neutrality result holds within the class of semi-symmetric conflict network structures: both the aggregate actions and the equilibrium payoffs under the two regimes are the same.
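For reference, the Tullock contest success function mentioned above takes the standard form (generic notation, not necessarily the paper's):

```latex
% Player i's probability of winning a battle, given efforts x_i, x_j
% and discriminatory power r (the lottery contest is the case r = 1):
P_i(x_i, x_j) = \frac{x_i^r}{x_i^r + x_j^r}, \qquad r > 0.
```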
1 code implementation • 17 Feb 2023 • Jin Xu, Gary Geng, Nhan D. Nguyen, Carmen Perena-Cortes, Claire Samuels, Herbert M. Sauro
A unique feature of the tool is the extensive Python plugin API, where third-party developers can include new functionality.
no code implementations • 8 Dec 2022 • Jianhao Yan, Jin Xu, Fandong Meng, Jie zhou, Yue Zhang
In this work, we show that the issue arises from the inconsistency of label smoothing between the token-level and sequence-level distributions.
no code implementations • 14 Nov 2022 • Enqiang Zhu, Xianhang Luo, Chanjuan Liu, Xiaolong Shi, Jin Xu
Although DNA computing has been exploited to solve various intractable computational problems, such as the Hamiltonian path problem, SAT problem, and graph coloring problem, there has been little discussion of designing universal DNA computing-based models, which can solve a class of problems.
1 code implementation • 26 Oct 2022 • Huayang Li, Deng Cai, Jin Xu, Taro Watanabe
The combination of $n$-gram and neural LMs not only allows the neural part to focus on the deeper understanding of language but also provides a flexible way to customize an LM by switching the underlying $n$-gram model without changing the neural model.
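The combination described above can be illustrated with a minimal sketch (not the paper's implementation; the function name, probabilities, and interpolation scheme are hypothetical): linearly interpolating an $n$-gram distribution with a neural one, so the $n$-gram component can be swapped without retraining the neural part.

```python
def interpolate_lm(p_ngram: dict, p_neural: dict, lam: float = 0.5) -> dict:
    """Mix two next-token distributions: p = lam * p_ngram + (1 - lam) * p_neural."""
    vocab = set(p_ngram) | set(p_neural)
    return {tok: lam * p_ngram.get(tok, 0.0) + (1 - lam) * p_neural.get(tok, 0.0)
            for tok in vocab}

# Swapping p_ngram for a different n-gram model changes the mixture
# without touching the neural component.
p_ngram = {"cat": 0.7, "dog": 0.3}
p_neural = {"cat": 0.4, "dog": 0.6}
mixed = interpolate_lm(p_ngram, p_neural, lam=0.5)
```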
2 code implementations • 6 Jun 2022 • Jin Xu, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, Jian Li
While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (e.g., greedy search).
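The looping behaviour can be illustrated with a toy sketch (the transition table is a hypothetical stand-in for a model's argmax predictions): once the argmax transition graph contains a cycle, greedy search repeats it indefinitely.

```python
def toy_argmax_next(token: str) -> str:
    # Hypothetical argmax transitions of a toy model; "b" -> "c" -> "b" is a cycle.
    table = {"<s>": "a", "a": "b", "b": "c", "c": "b"}
    return table[token]

def greedy_decode(start: str = "<s>", max_len: int = 10) -> list:
    """Maximization-based decoding: always follow the single most likely next token."""
    out, tok = [], start
    for _ in range(max_len):
        tok = toy_argmax_next(tok)
        out.append(tok)
    return out

seq = greedy_decode()  # enters the b-c loop after the first step
```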
no code implementations • 21 May 2022 • Jin Xu, Weiqi Wang, Zheming Gao, Haochen Luo, Qian Wu
We then propose a delay prediction model based on non-homogeneous Markov chains.
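The key ingredient named above, a non-homogeneous Markov chain, can be sketched as follows (the transition matrices are hypothetical numbers, not the paper's fitted model): unlike the homogeneous case, the transition matrix depends on the time step, so the state distribution is propagated through a different matrix at each step.

```python
import numpy as np

# Two-state chain (e.g., "on time" / "delayed") with step-dependent transitions.
P = [
    np.array([[0.9, 0.1], [0.5, 0.5]]),  # transition matrix at step 0
    np.array([[0.6, 0.4], [0.2, 0.8]]),  # transition matrix at step 1
]

dist = np.array([1.0, 0.0])  # start in state 0 with certainty
for P_t in P:
    dist = dist @ P_t  # propagate through the step-specific matrix

# dist is now the predicted state distribution after two steps.
```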
no code implementations • 28 Apr 2022 • Jin Xu, Chi Hong, Jiyue Huang, Lydia Y. Chen, Jérémie Decouchant
Recent reconstruction attacks apply a gradient inversion optimization on the gradient update of a single minibatch to reconstruct the private data used by clients during training.
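The intuition behind such attacks can be shown with a deliberately simplified sketch (not a real attack implementation; real attacks run an iterative optimization over dummy inputs): for a single sample through a linear model with squared loss, the reported gradient g = (w·x - y)·x already determines the private input up to at most two closed-form candidates.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # shared model weights (known to the server)
x_private = rng.normal(size=3)  # the client's private input
y = 1.0                         # private label (assumed known here for simplicity)

residual = w @ x_private - y
g = residual * x_private        # the gradient update the client reports

# Server side: w.g = r*(r + y) with r = w.x - y, so r solves r^2 + y*r - w.g = 0.
roots = np.roots([1.0, y, -(w @ g)])
candidates = [g / r for r in roots if abs(r) > 1e-12]
# One of the (at most two) candidates equals the private input exactly.
```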
1 code implementation • 26 Apr 2022 • Jin Xu, Jessie Jiang, Herbert M. Sauro
One of the extensions for SBML is the SBML Layout and Render package.
no code implementations • Findings (ACL) 2022 • Ruoxi Xu, Hongyu Lin, Meng Liao, Xianpei Han, Jin Xu, Wei Tan, Yingfei Sun, Le Sun
Events are considered as the fundamental building blocks of the world.
no code implementations • 15 Mar 2022 • Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu
In this paper, we propose a new \textbf{scene-wise} paradigm for procedural text understanding, which jointly tracks states of all entities in a scene-by-scene manner.
no code implementations • 26 Feb 2022 • Ciaran Welsh, Jin Xu, Lucian Smith, Matthias König, Kiri Choi, Herbert M. Sauro
Motivation: This paper presents libRoadRunner 2.0, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using the Systems Biology Markup Language (SBML).
no code implementations • 17 Feb 2022 • Yao Yao, Junyi Shen, Jin Xu, Bin Zhong, Li Xiao
Based on FixMatch, where a pseudo label is generated from a weakly-augmented sample to teach the prediction on a strong augmentation of the same input sample, CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
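The labelling rule described above can be sketched in a few lines (the thresholds and function name are hypothetical, not the paper's): from the prediction on a weakly-augmented sample, the argmax becomes a pseudo label when its confidence is high, and the argmin becomes a complementary label ("this class it is NOT") when its probability is low.

```python
import numpy as np

def make_labels(probs, pos_thresh=0.9, neg_thresh=0.05):
    """Return (pseudo_label, complementary_label); either may be None."""
    probs = np.asarray(probs, dtype=float)
    pseudo = int(probs.argmax()) if probs.max() >= pos_thresh else None
    complementary = int(probs.argmin()) if probs.min() <= neg_thresh else None
    return pseudo, complementary

# Confident prediction: both a positive and a negative teaching signal.
pseudo, comp = make_labels([0.95, 0.03, 0.02])
```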
1 code implementation • 7 Feb 2022 • Jinpeng Wang, Bin Chen, Dongliang Liao, Ziyun Zeng, Gongfu Li, Shu-Tao Xia, Jin Xu
By performing Asymmetric-Quantized Contrastive Learning (AQ-CL) across views, HCQ aligns texts and videos at coarse-grained and multiple fine-grained levels.
no code implementations • NeurIPS 2021 • Jiawei Chen, Xu Tan, Yichong Leng, Jin Xu, Guihua Wen, Tao Qin, Tie-Yan Liu
Experiments on the LJSpeech dataset demonstrate that Speech-T 1) is more robust than attention-based autoregressive TTS models due to its inherent monotonic alignments between text and speech; 2) naturally supports streaming TTS with good voice quality; and 3) enjoys the benefit of jointly modeling TTS and ASR in a single network.
Automatic Speech Recognition (ASR)
+1
1 code implementation • 25 Nov 2021 • Jin Xu, Mingjian Chen, Jianqiang Huang, Xingyuan Tang, Ke Hu, Jian Li, Jia Cheng, Jun Lei
Graph Neural Networks (GNNs) have become increasingly popular and achieved impressive results in many graph-based applications.
1 code implementation • Findings (EMNLP) 2021 • Yichong Leng, Xu Tan, Rui Wang, Linchen Zhu, Jin Xu, Wenjie Liu, Linquan Liu, Tao Qin, Xiang-Yang Li, Edward Lin, Tie-Yan Liu
Although multiple candidates are generated by an ASR system through beam search, current error correction approaches can only correct one sentence at a time, failing to leverage the voting effect from multiple candidates to better detect and correct error tokens.
Automatic Speech Recognition (ASR)
+1
no code implementations • NeurIPS Workshop AI4Scien 2021 • Kehang Han, Steven Kearnes, Jin Xu, Wen Torng, JW Feng
DNA-Encoded Library (DEL) data, often comprising millions of data points, enable large deep learning models to make real contributions to the drug discovery process (e.g., hit finding).
no code implementations • 29 Aug 2021 • Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, Jian Li
In this paper, we investigate the interference issue by sampling different child models and calculating the gradient similarity of shared operators, and observe: 1) the interference on a shared operator between two child models is positively correlated with the number of different operators; 2) the interference is smaller when the inputs and outputs of the shared operator are more similar.
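The measurement mentioned above, gradient similarity of a shared operator, can be sketched as follows (hypothetical gradient values; the function name is illustrative): cosine similarity between the gradients that two child models induce on a shared operator's weights, where low or negative similarity indicates interference.

```python
import numpy as np

def gradient_similarity(g1, g2) -> float:
    """Cosine similarity between two flattened gradient tensors."""
    g1, g2 = np.ravel(np.asarray(g1, float)), np.ravel(np.asarray(g2, float))
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

aligned = gradient_similarity([1.0, 0.0], [2.0, 0.0])       # 1.0: updates agree
conflicting = gradient_similarity([1.0, 0.0], [-1.0, 0.0])  # -1.0: updates cancel
```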
1 code implementation • ACL 2021 • Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu
Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source.
1 code implementation • ACL 2021 • Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, Shaoyi Chen
Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.
Ranked #3 on Event Extraction on ACE2005
1 code implementation • ACL 2021 • Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu
Current event-centric knowledge graphs highly rely on explicit connectives to mine relations between events.
1 code implementation • NeurIPS 2021 • Jin Xu, Hyunjik Kim, Tom Rainforth, Yee Whye Teh
We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations.
no code implementations • 10 Jun 2021 • Liang Zeng, Jin Xu, Zijun Yao, Yanqiao Zhu, Jian Li
Extensive experiments on node classification, graph classification, and edge prediction demonstrate the effectiveness of AKE-GNN.
no code implementations • 30 May 2021 • Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, Tie-Yan Liu
The technical challenge of NAS-BERT is that training a big supernet on the pre-training task is extremely costly.
1 code implementation • NeurIPS 2021 • Yichong Leng, Xu Tan, Linchen Zhu, Jin Xu, Renqian Luo, Linquan Liu, Tao Qin, Xiang-Yang Li, Ed Lin, Tie-Yan Liu
A straightforward solution to reduce latency, inspired by non-autoregressive (NAR) neural machine translation, is to use an NAR sequence generation model for ASR error correction, which, however, comes at the cost of significantly increased ASR error rate.
Automatic Speech Recognition (ASR)
+3
no code implementations • 25 Feb 2021 • Linghui Meng, Jin Xu, Xu Tan, Jindong Wang, Tao Qin, Bo Xu
In this paper, we propose MixSpeech, a simple yet effective data augmentation method based on mixup for automatic speech recognition (ASR).
Automatic Speech Recognition (ASR)
+2
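The core of the mixup-based augmentation described in the MixSpeech entry above can be sketched in a few lines (shapes and weights are hypothetical; this is the generic mixup recipe, not the paper's exact training code): two input features are mixed with weight lam, and the training loss becomes the same weighted mix of the two per-utterance losses.

```python
import numpy as np

def mixup(x1, x2, loss1, loss2, lam=0.3):
    """Mix two inputs (e.g., spectrogram features) and their losses with weight lam."""
    x_mixed = lam * np.asarray(x1, float) + (1 - lam) * np.asarray(x2, float)
    loss = lam * loss1 + (1 - lam) * loss2
    return x_mixed, loss

x_mixed, loss = mixup([1.0, 2.0], [3.0, 4.0], loss1=0.5, loss2=1.0, lam=0.5)
```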
no code implementations • 1 Jan 2021 • Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Li Jian, Tao Qin, Tie-Yan Liu
NAS-BERT trains a big supernet on a carefully designed search space containing various architectures and outputs multiple compressed models with adaptive sizes and latency.
no code implementations • WMT (EMNLP) 2020 • Jin Xu, Yinuo Guo, Junfeng Hu
Copying mechanisms have been commonly used in neural paraphrasing networks and other text generation tasks, preserving some important words from the input sequence in the output sequence.
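A copying mechanism in the pointer-generator style can be sketched as follows (an illustration of the general technique, not this paper's model; all names and numbers are hypothetical): the output distribution mixes a vocabulary distribution with an attention distribution over source tokens, so source words can be copied directly.

```python
import numpy as np

def copy_mix(p_vocab, attention, src_to_vocab, p_gen):
    """p(w) = p_gen * p_vocab(w) + (1 - p_gen) * attention mass on source positions holding w."""
    out = p_gen * np.asarray(p_vocab, dtype=float)
    for pos, vocab_id in enumerate(src_to_vocab):
        out[vocab_id] += (1 - p_gen) * attention[pos]
    return out

# Toy example: vocabulary of size 3; a 2-token source mapping to vocab ids [2, 0].
p = copy_mix(p_vocab=[0.5, 0.3, 0.2], attention=[0.9, 0.1],
             src_to_vocab=[2, 0], p_gen=0.8)
```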
no code implementations • 27 Aug 2020 • Shuo Yu, Feng Xia, Jin Xu, Zhikui Chen, Ivan Lee
In order to assess the efficiency of the proposed framework, four popular network representation algorithms are modified and examined.
no code implementations • 13 Aug 2020 • Yiru Wang, Shen Huang, Gongfu Li, Qiang Deng, Dongliang Liao, Pengda Si, Yujiu Yang, Jin Xu
The automatic quality assessment of self-media online articles is an urgent new issue, of great value to online recommendation and search.
no code implementations • 9 Aug 2020 • Jin Xu, Shuo Yu, Ke Sun, Jing Ren, Ivan Lee, Shirui Pan, Feng Xia
Therefore, in graph learning tasks of social networks, the identification and utilization of multivariate relationship information are more important.
no code implementations • 9 Aug 2020 • Jin Xu, Xu Tan, Yi Ren, Tao Qin, Jian Li, Sheng Zhao, Tie-Yan Liu
However, there are more than 6,000 languages in the world, and most of them lack speech training data, which poses significant challenges when building TTS and ASR systems for extremely low-resource languages.
Automatic Speech Recognition (ASR)
+3
2 code implementations • ECCV 2020 • Sheng Jin, Lumin Xu, Jin Xu, Can Wang, Wentao Liu, Chen Qian, Wanli Ouyang, Ping Luo
This paper investigates the task of 2D human whole-body pose estimation, which aims to localize dense landmarks on the entire human body including face, hands, body, and feet.
Ranked #6 on 2D Human Pose Estimation on COCO-WholeBody
no code implementations • 10 Jun 2020 • Shray Bansal, Jin Xu, Ayanna Howard, Charles Isbell
We showed that using a Bayesian approach to infer the equilibrium enables the robot to complete the task with less than half the number of collisions while also reducing the task execution time as compared to the best baseline.
1 code implementation • 8 Jun 2020 • Mingjian Chen, Xu Tan, Yi Ren, Jin Xu, Hao Sun, Sheng Zhao, Tao Qin, Tie-Yan Liu
Transformer-based text to speech (TTS) models (e.g., Transformer TTS~\cite{li2019neural}, FastSpeech~\cite{ren2019fastspeech}) have shown the advantages of training and inference efficiency over RNN-based models (e.g., Tacotron~\cite{shen2018natural}) due to their parallel computation in training and/or inference.
1 code implementation • 28 Dec 2019 • Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, Jing Gao
In order to tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, i.e., WeFEND, which can leverage users' reports as weak supervision to enlarge the amount of training data for fake news detection.
1 code implementation • ICML 2020 • Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh
We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one.
no code implementations • 11 Nov 2019 • Junyi Shen, Hankz Hankui Zhuo, Jin Xu, Bin Zhong, Sinno Jialin Pan
However, based on our experiments, a policy learned by VINs still fails to generalize well to domains whose action and feature spaces are not identical to those of the domain where it was trained.
1 code implementation • 30 Nov 2018 • Mingyuan Ma, Sen Na, Hongyu Wang, Congzhou Chen, Jin Xu
First, we build an interaction behavior graph for multi-level and multi-category data.
1 code implementation • 10 Aug 2018 • Jin Xu, Junghyo Jo
To answer this fundamental question, we examined whether the affinity-based discrimination of peptide sequences is learnable and generalizable by artificial neural networks (ANNs) that process the digital information of receptors and peptides.
Cell Behavior • Quantitative Methods
no code implementations • 15 Jun 2018 • Jin Xu, Yee Whye Teh
We develop a method for user-controllable semantic image inpainting: Given an arbitrary set of observed pixels, the unobserved pixels can be imputed in a user-controllable range of possibilities, each of which is semantically coherent and locally consistent with the observed pixels.
1 code implementation • 13 Dec 2017 • Jin Xu, Junghyo Jo
This finite set of receptors must inevitably be cross-reactive to multiple pathogens while retaining high specificity to different pathogens.
Cell Behavior
no code implementations • 19 Sep 2014 • Jin Xu, Haibo He, Hong Man
The classification accuracy and reconstruction error are used to evaluate the proposed dictionary learning method.