Search Results for author: Jingyang Li

Found 13 papers, 3 papers with code

Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA

1 code implementation • 21 Dec 2023 • Chengen Lai, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, GuangNeng Hu

To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples.

Contrastive Learning, counterfactual (+3 more)
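For illustration only, here is a minimal sketch of one level of the contrastive objective described in the snippet above: an InfoNCE-style loss that pulls an anchor representation toward a factual (positive) sample and pushes it away from counterfactual (negative) samples. The function name, tensor shapes, and temperature are assumptions, not the authors' MCLE implementation.

```python
# Hypothetical sketch of a single-level contrastive loss with factual
# (positive) and counterfactual (negative) samples; shapes and names are
# illustrative assumptions, not the MCLE code.
import torch
import torch.nn.functional as F

def contrastive_level_loss(anchor, factual, counterfactuals, temperature=0.07):
    """anchor: (B, d), factual: (B, d), counterfactuals: (B, K, d)."""
    anchor = F.normalize(anchor, dim=-1)
    factual = F.normalize(factual, dim=-1)
    counterfactuals = F.normalize(counterfactuals, dim=-1)

    pos = (anchor * factual).sum(-1, keepdim=True)             # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, counterfactuals)  # (B, K)

    logits = torch.cat([pos, neg], dim=1) / temperature        # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)                 # positive at index 0
    return F.cross_entropy(logits, labels)
```

In an MCLE-style model, a loss of this form would presumably be applied at the semantic, image, and instance levels and combined with the explanation-generation objective.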

Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning

no code implementations • 26 Jun 2023 • Kuangyu Ding, Jingyang Li, Kim-Chuan Toh

Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG, the momentum-based variant of the proposed stochastic Bregman proximal gradient method, in training neural networks.
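For context, the generic (textbook) form of a stochastic Bregman proximal gradient step replaces the Euclidean proximal term with a Bregman divergence generated by a kernel h; the kernel, step size, and regularizer below are left abstract and this is not the paper's exact update or analysis.

```latex
% Generic stochastic Bregman proximal gradient step (textbook form);
% h is a kernel function, D_h its Bregman divergence, r a possibly
% nonsmooth regularizer, and g_k a stochastic gradient of the smooth
% part at x_k.
x_{k+1} \in \operatorname*{arg\,min}_{x}\;
  \langle g_k, x - x_k \rangle
  + r(x)
  + \frac{1}{\eta_k}\, D_h(x, x_k),
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .
```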

Online Tensor Learning: Computational and Statistical Trade-offs, Adaptivity and Optimal Regret

no code implementations • 6 Jun 2023 • Jian-Feng Cai, Jingyang Li, Dong Xia

Under the fixed step size regime, a fascinating trilemma concerning the convergence rate, statistical error rate, and regret is observed.

Computationally Efficient and Statistically Optimal Robust High-Dimensional Linear Regression

no code implementations • 10 May 2023 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia

The algorithm is not only computationally efficient, with linear convergence, but also statistically optimal, whether the noise is Gaussian or heavy-tailed with a finite (1 + epsilon)-th moment.

regression, Vocal Bursts Intensity Prediction
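As a rough illustration of robustness to heavy-tailed noise (not the estimator analyzed in the paper, which targets the high-dimensional setting), here is a minimal sketch of linear regression fit by gradient descent on a Huber loss; the threshold, step size, and synthetic data are illustrative assumptions.

```python
# Minimal sketch: robust linear regression via gradient descent on a Huber
# loss. Generic illustration of heavy-tailed robustness only; NOT the
# authors' algorithm. `delta`, `lr`, and the synthetic data are arbitrary.
import numpy as np

def huber_grad(residual, delta):
    # Gradient of the Huber loss w.r.t. the residual: linear near zero,
    # clipped to constant magnitude for large residuals.
    return np.clip(residual, -delta, delta)

def robust_linear_regression(X, y, delta=1.0, lr=0.1, iters=500):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        residual = X @ beta - y
        grad = X.T @ huber_grad(residual, delta) / n
        beta -= lr * grad
    return beta

# Usage on synthetic data with heavy-tailed (Student-t) noise:
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
beta_true = rng.standard_normal(5)
y = X @ beta_true + rng.standard_t(df=1.5, size=1000)
beta_hat = robust_linear_regression(X, y)
```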

Towards Generalized Open Information Extraction

no code implementations • 29 Nov 2022 • Bowen Yu, Zhenyu Zhang, Jingyang Li, Haiyang Yu, Tingwen Liu, Jian Sun, Yongbin Li, Bin Wang

Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts.

Open Information Extraction

Layout-Aware Information Extraction for Document-Grounded Dialogue: Dataset, Method and Demonstration

no code implementations • 14 Jul 2022 • Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, Yongbin Li

In this paper, we propose a Layout-aware document-level Information Extraction dataset, LIE, to facilitate the study of extracting both structural and semantic knowledge from visually rich documents (VRDs), so as to generate accurate responses in dialogue systems.

Language Modelling

Computationally Efficient and Statistically Optimal Robust Low-rank Matrix and Tensor Estimation

no code implementations • 2 Mar 2022 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia

Lastly, RsGrad is applicable to low-rank tensor estimation under heavy-tailed noise, where a statistically optimal rate is attainable with the same dual-phase convergence phenomenon, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization.

Provable Tensor-Train Format Tensor Completion by Riemannian Optimization

no code implementations • 27 Aug 2021 • Jian-Feng Cai, Jingyang Li, Dong Xia

In this paper, we provide, to the best of our knowledge, the first theoretical guarantees for the convergence of the RGrad algorithm for TT-format tensor completion, under a nearly optimal sample size condition.

Matrix Completion, Riemannian optimization
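A schematic form of the Riemannian gradient (RGrad) template for completion over a fixed-rank manifold is shown below; the notation (sampling operator, tangent-space projection, rank-r retraction) is generic and only illustrates the iteration type, not the paper's exact TT-format algorithm or step-size choice.

```latex
% Schematic RGrad iteration for completion over a fixed-rank manifold M_r:
% P_Omega is the sampling operator, P_{T_{X^t} M_r} the projection onto the
% tangent space at X^t, and Retr_r a rank-r retraction (e.g., truncated
% SVD or TT rounding). Generic template only.
X^{t+1} = \mathrm{Retr}_{r}\!\Big( X^{t} + \beta_t \,
  P_{T_{X^{t}}\mathcal{M}_r}\big( \mathcal{P}_{\Omega}(T - X^{t}) \big) \Big)
```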

Rule-Guided Compositional Representation Learning on Knowledge Graphs

1 code implementation • 20 Nov 2019 • Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, Xiaowei Zhang

Representation learning on a knowledge graph (KG) embeds the entities and relations of the KG into low-dimensional continuous vector spaces.

Knowledge Graphs, Representation Learning
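To make the embedding idea in the snippet concrete, here is a minimal sketch using a standard TransE-style translational score (head + relation ≈ tail); this is a common baseline shown only for illustration, not the rule-guided compositional model proposed in the paper, and the dimension and margin are arbitrary choices.

```python
# Minimal sketch of KG embedding with a TransE-style score; a standard
# baseline for illustration only, NOT the paper's rule-guided model.
# Embedding dimension and margin are illustrative.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # Lower score = more plausible triple (h, r, t).
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

    def loss(self, pos, neg, margin=1.0):
        # Margin ranking loss between true and corrupted triples.
        return torch.relu(margin + self.score(*pos) - self.score(*neg)).mean()
```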
