no code implementations • 10 Feb 2023 • Weichao Zhao, Hezhen Hu, Wengang Zhou, Jiaxin Shi, Houqiang Li
In this work, we leverage the success of BERT pre-training and model domain-specific statistics to strengthen the sign language recognition (SLR) model.
no code implementations • 7 Dec 2022 • Zhongwei Wan, Yichun Yin, Wei zhang, Jiaxin Shi, Lifeng Shang, Guangyong Chen, Xin Jiang, Qun Liu
Recently, domain-specific PLMs have been proposed to boost the task performance of specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs with domain-specific corpora.
no code implementations • 17 Nov 2022 • Jiaxin Shi, Lester Mackey
We provide the first finite-particle convergence rate for Stein variational gradient descent (SVGD).
1 code implementation • 23 Oct 2022 • Zhijie Deng, Jiaxin Shi, Hao Zhang, Peng Cui, Cewu Lu, Jun Zhu
In this paper, we introduce a scalable method for learning structured, adaptive-length deep representations.
1 code implementation • 24 May 2022 • Lunyiu Nie, Shulin Cao, Jiaxin Shi, Jiuding Sun, Qi Tian, Lei Hou, Juanzi Li, Jidong Zhai
Owing to the huge semantic gap between natural and formal languages, neural semantic parsing is typically bottlenecked by the complexity of handling both input semantics and output syntax.
no code implementations • 24 May 2022 • Feilong Chen, Xiuyi Chen, Jiaxin Shi, Duzhen Zhang, Jianlong Chang, Qi Tian
It also improves AR by about +4.9 on COCO and +3.8 on Flickr30K over LightingDot, and achieves performance comparable to the state-of-the-art (SOTA) fusion-based model METER.
1 code implementation • 30 Apr 2022 • Zhijie Deng, Jiaxin Shi, Jun Zhu
Learning the principal eigenfunctions of an integral operator defined by a kernel and a data distribution is at the core of many machine learning problems.
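For intuition, the classical Nyström method gives a sample-based handle on such eigenfunctions: eigenvectors of the kernel matrix over the data approximate the operator's eigenfunctions at the data points and can be extended to new points through the kernel. The sketch below illustrates only this classical baseline, not the method proposed in the paper; the RBF kernel and its lengthscale are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def nystrom_eigenfunctions(X, X_new, k=3):
    """Approximate the top-k eigenfunctions of the kernel integral operator
    induced by the data distribution, evaluated at X_new."""
    n = X.shape[0]
    K = rbf_kernel(X, X)
    evals, evecs = np.linalg.eigh(K)                 # ascending order
    evals, evecs = evals[::-1][:k], evecs[:, ::-1][:, :k]
    # Operator eigenvalues are approximately evals / n; the Nystrom extension
    # evaluates each eigenfunction at new points through the kernel.
    K_new = rbf_kernel(X_new, X)
    phi = np.sqrt(n) * K_new @ evecs / evals          # shape (m, k)
    return evals / n, phi

X = np.random.randn(200, 2)
op_evals, phi = nystrom_eigenfunctions(X, np.random.randn(5, 2))
print(op_evals, phi.shape)
```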
1 code implementation • 19 Feb 2022 • Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis K. Titsias, Lester Mackey
Gradient estimation -- approximating the gradient of an expectation with respect to the parameters of a distribution -- is central to the solution of many machine learning problems.
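For context, the basic score-function (REINFORCE) identity behind this problem, grad_theta E_{p(x;theta)}[f(x)] = E_{p(x;theta)}[f(x) grad_theta log p(x;theta)], can be checked numerically. The minimal sketch below uses a Gaussian with a learnable mean and an arbitrary test function; it is a generic illustration, not the estimator developed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.5, 1.0           # mean parameter and fixed std of p(x; theta)
f = lambda x: x ** 2              # arbitrary test function

# Score-function (REINFORCE) estimate of d/dtheta E[f(x)], x ~ N(theta, sigma^2)
x = rng.normal(theta, sigma, size=100_000)
score = (x - theta) / sigma ** 2             # d/dtheta log N(x; theta, sigma^2)
grad_est = np.mean(f(x) * score)

# Closed form: E[x^2] = theta^2 + sigma^2, so the true gradient is 2 * theta
print(grad_est, 2 * theta)
```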
no code implementations • 28 Jan 2022 • Boda Lin, Zijun Yao, Jiaxin Shi, Shulin Cao, Binghao Tang, Si Li, Yong Luo, Juanzi Li, Lei Hou
To remedy these drawbacks, we propose DPSG, which achieves universal and schema-free Dependency Parsing (DP) via Sequence Generation (SG), using only a pre-trained language model (PLM) without any auxiliary structures or parsing algorithms.
1 code implementation • AABI Symposium 2022 • Michalis K. Titsias, Jiaxin Shi
We introduce a variance reduction technique for score function estimators that makes use of double control variates.
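The paper's double-control-variate construction is not reproduced here, but the underlying idea, subtracting a correlated zero-mean term from a score-function estimator so the mean is preserved while the variance shrinks, can be illustrated with a simple constant baseline. Everything in the sketch below (target, test function, baseline choice) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 1.5, 1.0
f = lambda x: x ** 2

x = rng.normal(theta, sigma, size=(1000, 64))      # 1000 estimates, batch size 64
score = (x - theta) / sigma ** 2

plain = (f(x) * score).mean(axis=1)
# Control variate: the score has zero mean, so subtracting a baseline b keeps
# the estimator (essentially) unbiased while cancelling part of the variance.
# A leave-one-out or held-out baseline would keep it exactly unbiased.
b = f(x).mean()                                     # simple constant baseline
with_cv = ((f(x) - b) * score).mean(axis=1)

print(plain.var(), with_cv.var())                   # variance drops with the baseline
```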
1 code implementation • ACL 2022 • Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, Jinghui Xiao
In this paper, we propose the approach of program transfer, which aims to leverage the valuable program annotations on the rich-resourced KBs as external supervision signals to aid program induction for the low-resourced KBs that lack program annotations.
1 code implementation • ACL 2021 • Fangwei Zhu, Shangqing Tu, Jiaxin Shi, Juanzi Li, Lei Hou, Tong Cui
Wikipedia abstract generation aims to distill a Wikipedia abstract from web sources and has achieved significant success by adopting multi-document summarization techniques.
1 code implementation • ICLR 2022 • Jiaxin Shi, Chang Liu, Lester Mackey
We introduce a new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries.
2 code implementations • 10 Jun 2021 • Shengyang Sun, Jiaxin Shi, Andrew Gordon Wilson, Roger Grosse
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
1 code implementation • EMNLP 2021 • Jiaxin Shi, Shulin Cao, Lei Hou, Juanzi Li, Hanwang Zhang
Multi-hop Question Answering (QA) is a challenging task because it requires precise reasoning with entity relations at every step towards the answer.
no code implementations • AABI Symposium 2021 • Shengyang Sun, Jiaxin Shi, Roger Baker Grosse
Equivalences between infinite neural networks and Gaussian processes have been established for explaining the functional prior and training dynamics of deep learning models.
1 code implementation • ACL 2022 • Shulin Cao, Jiaxin Shi, Liangming Pan, Lunyiu Nie, Yutong Xiang, Lei Hou, Juanzi Li, Bin He, Hanwang Zhang
To this end, we introduce KQA Pro, a dataset for Complex KBQA including ~120K diverse natural language questions.
1 code implementation • ICML 2020 • Yuhao Zhou, Jiaxin Shi, Jun Zhu
Estimating the score, i.e., the gradient of the log-density function, from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models that involve flexible yet intractable densities.
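As a toy illustration of the task itself (not the nonparametric estimators studied in the paper): given only samples, one crude plug-in approach fits a parametric density and differentiates its log, as sketched below with an assumed Gaussian fit.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(loc=2.0, scale=0.5, size=5000)   # "unknown" data distribution

# A crude plug-in score estimator: fit a Gaussian, then use its analytic score
mu, std = samples.mean(), samples.std()
score_hat = lambda x: -(x - mu) / std ** 2             # d/dx log N(x; mu, std^2)

x = np.linspace(0.0, 4.0, 5)
print(score_hat(x))                                    # compare with -(x - 2.0) / 0.25
```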
6 code implementations • CVPR 2020 • Kaihua Tang, Yulei Niu, Jianqiang Huang, Jiaxin Shi, Hanwang Zhang
Today's scene graph generation (SGG) task is still far from practical, mainly due to the severe training bias, e.g., collapsing diverse "human walk on / sit on / lay on beach" into "human on beach".
Ranked #1 on Scene Graph Generation on Visual Genome
1 code implementation • IJCNLP 2019 • Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, Tat-Seng Chua
Specifically, as for the knowledge embedding model, we utilize TransE to implicitly complete two KGs towards consistency and learn relational constraints between entities.
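For reference, TransE represents a triple (h, r, t) as a translation h + r ≈ t in embedding space and is typically trained with a margin ranking loss against corrupted triples. Below is a minimal sketch with randomly initialized embeddings; the dimensions and margin are arbitrary and the training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_entities, n_relations = 50, 1000, 20
E = rng.normal(size=(n_entities, dim))     # entity embeddings
R = rng.normal(size=(n_relations, dim))    # relation embeddings

def transe_score(h, r, t, norm=1):
    # TransE: a true triple should satisfy h + r ~ t, so a small distance means plausible
    return -np.linalg.norm(E[h] + R[r] - E[t], ord=norm)

# Margin ranking loss for one positive triple against a corrupted (negative) triple
def margin_loss(pos, neg, gamma=1.0):
    return max(0.0, gamma - transe_score(*pos) + transe_score(*neg))

print(margin_loss((0, 1, 2), (0, 1, 3)))
```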
1 code implementation • AABI Symposium 2019 • Jiaxin Shi, Michalis K. Titsias, Andriy Mnih
We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points, which can lead to more scalable algorithms than previous methods.
2 code implementations • 27 May 2019 • Jiaxin Shi, Mohammad Emtiyaz Khan, Jun Zhu
Inference in Gaussian process (GP) models is computationally challenging for large data, and often difficult to approximate with a small number of inducing points.
6 code implementations • 17 May 2019 • Yang Song, Sahaj Garg, Jiaxin Shi, Stefano Ermon
However, it has so far been limited to simple, shallow models or low-dimensional data, due to the difficulty of computing the Hessian of log-density functions.
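The sliced idea is to avoid the full Hessian by contracting the score's Jacobian with random directions. The PyTorch-style sketch below shows such an objective under the assumption of a user-supplied `score_net` mapping inputs to model scores; it follows the standard sliced score matching form rather than any particular released implementation.

```python
import torch

def sliced_score_objective(score_net, x, n_slices=1):
    """Hessian-free surrogate: contract the score's Jacobian with random
    directions v instead of forming the full Hessian of the log-density."""
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                   # (batch, dim) model score
    total = 0.0
    for _ in range(n_slices):
        v = torch.randn_like(x)
        sv = (s * v).sum()                             # sum of v^T s(x) over the batch
        grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]
        total = total + ((grad_sv * v).sum(-1)         # v^T (ds/dx) v per sample
                         + 0.5 * (s * v).sum(-1) ** 2).mean()
    return total / n_slices

# Example: a tiny score network on 2-D data (illustrative only)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
loss = sliced_score_objective(net, torch.randn(128, 2))
loss.backward()
```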
2 code implementations • ICLR 2019 • Shengyang Sun, Guodong Zhang, Jiaxin Shi, Roger Grosse
We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e., distributions over functions.
2 code implementations • CVPR 2019 • Jiaxin Shi, Hanwang Zhang, Juanzi Li
We aim to dismantle the prevalent black-box neural architectures used in complex visual reasoning tasks into the proposed eXplainable and eXplicit Neural Modules (XNMs). XNMs advance beyond existing neural module networks by using scene graphs (objects as nodes, pairwise relationships as edges) for explainable and explicit reasoning with structured knowledge.
Ranked #10 on Visual Question Answering on CLEVR
1 code implementation • 6 Nov 2018 • Jiaxin Shi, Chen Liang, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang
We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization.
2 code implementations • 6 Nov 2018 • Jiaxin Shi, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang
Sentence embedding is an effective feature representation for most deep learning-based NLP tasks.
1 code implementation • NeurIPS 2018 • Yucen Luo, Tian Tian, Jiaxin Shi, Jun Zhu, Bo Zhang
We propose a new approach that includes a deep generative model (DGM) to characterize low-level features of the data, and a statistical relational model for noisy pairwise annotations on its subset.
3 code implementations • ICML 2018 • Jiaxin Shi, Shengyang Sun, Jun Zhu
Recently there has been increasing interest in learning and inference with implicit distributions (i.e., distributions without tractable densities).
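Concretely, an implicit distribution is one that is easy to sample from, for example by pushing noise through a neural network, but whose density cannot be evaluated. The two-line sketch below (with an arbitrary generator architecture) illustrates why standard density-based objectives do not apply.

```python
import torch

# An "implicit" distribution: easy to sample, but its density has no closed form
generator = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
samples = generator(torch.randn(1024, 8))   # draw samples by pushing noise forward
# log p(samples) is intractable here, which is what makes inference hard
```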
no code implementations • ICML 2018 • Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, Bo Zhang
Stein variational gradient descent (SVGD) is a recently proposed particle-based Bayesian inference method, which has attracted a lot of interest due to its remarkable approximation ability and particle efficiency compared to traditional variational inference and Markov Chain Monte Carlo methods.
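For reference, one SVGD iteration moves every particle along a kernel-smoothed gradient of the log target plus a repulsive term that keeps the particles spread out. The NumPy sketch below uses an RBF kernel and a standard-normal target purely as illustrative choices.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step=0.1, h=1.0):
    """One SVGD update with an RBF kernel:
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]    # x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))               # k(x_i, x_j)
    grads = grad_log_p(particles)                            # (n, d)
    drive = K @ grads                                        # kernel-smoothed gradient
    repulse = (K[..., None] * diff).sum(axis=1) / h          # pushes particles apart
    return particles + step * (drive + repulse) / n

# Toy target: standard normal, so grad log p(x) = -x
particles = np.random.randn(100, 2) * 3 + 5
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print(particles.mean(axis=0), particles.std(axis=0))         # roughly 0 and 1
```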
no code implementations • IJCNLP 2017 • Yixin Cao, Jiaxin Shi, Juanzi Li, Zhiyuan Liu, Chengjiang Li
To enhance the expressive ability of distributional word representation models, many researchers induce word senses through clustering and learn multiple embedding vectors for each word, namely the multi-prototype word embedding model.
1 code implementation • 18 Sep 2017 • Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou
In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning.
no code implementations • ICLR 2018 • Jiaxin Shi, Shengyang Sun, Jun Zhu
Recent progress in variational inference has paid much attention to the flexibility of variational posteriors.
no code implementations • 24 Apr 2016 • Mengchen Liu, Jiaxin Shi, Zhen Li, Chongxuan Li, Jun Zhu, Shixia Liu
Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification.
no code implementations • 3 Dec 2015 • Jiaxin Shi, Jun Zhu
We present a new perspective on neural knowledge base (KB) embeddings, from which we build a framework that can model symbolic knowledge in the KB together with its learning process.