no code implementations • 13 Apr 2022 • Xinyi Yu, Xiaowei Wang, Mingyang Zhang, Jintao Rong, Linlin Ou
Therefore, in this work we design a search space that covers almost all re-parameterization operations.
no code implementations • 21 Feb 2021 • Yixuan Liu, Hu Wang, Xiaowei Wang, Xiaoyue Sun, Liuyue Jiang, Minhui Xue
Therefore, a purify-trained classifier is designed to obtain the distribution and generate the calibrated rewards.
no code implementations • 10 Feb 2021 • Yunfei Chu, Xiaowei Wang, Jianxin Ma, Kunyang Jia, Jingren Zhou, Hongxia Yang
To bridge this gap, we propose an Inductive GRanger cAusal modeling (InGRA) framework for inductive Granger causality learning and common causal structure detection on multivariate time series, which exploits the shared commonalities underlying the different individuals.
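The InGRA model itself is not specified here; as background for what "Granger causal" means in this setting, the following is a minimal pairwise Granger-causality check, a sketch under assumed names (`lagged`, `granger_rss`) and not the paper's method: x "Granger-causes" y if adding past values of x improves prediction of y beyond y's own past.

```python
import numpy as np

def lagged(series, lags):
    # Stack lagged copies: row t holds series[t-1], ..., series[t-lags].
    return np.column_stack([series[lags - k:-k] for k in range(1, lags + 1)])

def granger_rss(x, y, lags=2):
    # Compare residual sum of squares of two least-squares models for y:
    # one using only y's own lags, one also using x's lags.
    Y = y[lags:]
    own = lagged(y, lags)
    full = np.hstack([own, lagged(x, lags)])
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    return rss(own), rss(full)  # a large drop suggests x Granger-causes y
```

Since the full model nests the restricted one, its residual is never worse; a formal test (e.g. an F-test) would decide whether the drop is significant.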
no code implementations • 27 Jan 2021 • Xiaowei Wang
In the second section of this paper, we provide some estimates of the upper and lower bound of the value $J_{3}$, which involves the generalized Beukers integral and is related to $\zeta(5)$.
Number Theory 11J72, 11J82, 11M06
no code implementations • ICLR 2020 • Baichuan Yuan, Xiaowei Wang, Jianxin Ma, Chang Zhou, Andrea L. Bertozzi, Hongxia Yang
To bridge this gap, we introduce a declustering-based hidden variable model that leads to an efficient inference procedure via a variational autoencoder (VAE).
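The declustering model is not detailed here; as a reminder of the VAE machinery the entry refers to, here is a minimal numpy sketch of the two pieces every VAE inference procedure relies on: the reparameterization trick and the closed-form KL term of the objective. All names are assumptions for this sketch, not the paper's code.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps so the sample stays differentiable in mu, sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)), the regularizer in the VAE objective.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
```

With `mu = 0` and `logvar = 0` the approximate posterior equals the prior, so the KL term vanishes.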
no code implementations • 22 Feb 2020 • Qian Zhang, Wei Feng, Liang Wan, Fei-Peng Tian, Xiaowei Wang, Ping Tan
Moreover, we theoretically prove that our ALR approach is invariant to the ambiguity of normal and lighting decomposition.
no code implementations • 25 Sep 2019 • Yunfei Chu, Xiaowei Wang, Chunyan Feng, Jianxin Ma, Jingren Zhou, Hongxia Yang
Granger causal structure reconstruction is an emerging topic that can uncover causal relationships behind multivariate time series data.
1 code implementation • 2 Jun 2019 • Zhengxiao Du, Xiaowei Wang, Hongxia Yang, Jingren Zhou, Jie Tang
Our approach is based on the insight that having a good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks.
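The "generic initialization plus adaptation strategy" insight is the core of MAML-style meta-learning; a toy first-order sketch on 1-D linear-regression tasks `y = a * x` illustrates the inner/outer loop structure. The names (`inner_lr`, `outer_lr`, `maml_step`) and the first-order simplification are assumptions of this sketch, not the paper's actual algorithm.

```python
import numpy as np

def loss_grad(theta, a, xs):
    # Gradient of mean squared error between theta * x and the task target a * x.
    return np.mean(2 * (theta * xs - a * xs) * xs)

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.01):
    # One meta-update: adapt to each task with a gradient step (inner loop),
    # then move the shared initialization using the post-adaptation gradients.
    xs = np.linspace(-1, 1, 20)
    meta_grad = 0.0
    for a in tasks:
        adapted = theta - inner_lr * loss_grad(theta, a, xs)  # inner adaptation
        meta_grad += loss_grad(adapted, a, xs)                # first-order outer grad
    return theta - outer_lr * meta_grad / len(tasks)
```

Repeating `maml_step` drives `theta` toward an initialization from which each task is reachable in one inner gradient step.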
no code implementations • 9 May 2018 • Charles Eckert, Xiaowei Wang, Jingcheng Wang, Arun Subramaniyan, Ravi Iyer, Dennis Sylvester, David Blaauw, Reetuparna Das
This paper presents the Neural Cache architecture, which re-purposes cache structures to transform them into massively parallel compute units capable of running inferences for Deep Neural Networks.