Search Results: Shiliang Pu

Found 119 papers, 47 papers with code

Read Extensively, Focus Smartly: A Cross-document Semantic Enhancement Method for Visual Documents NER

no code implementations · COLING 2022 · Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, ShiLiang Pu

To deal with this problem, this work proposes a cross-document semantic enhancement method consisting of two modules: 1) to prevent distraction from irrelevant regions in the current document, we design a learnable attention mask mechanism that adaptively filters out its redundant information.

NER
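
The "learnable attention mask" mentioned above is, at its core, a soft gate over token regions. As a rough illustration of that general idea (a minimal sketch, not the authors' implementation; the class and variable names are hypothetical):

```python
import torch
import torch.nn as nn

class LearnableAttentionMask(nn.Module):
    """Scores each token region and softly suppresses low-relevance ones.

    Generic sketch of a learnable masking layer; not the paper's released code.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # per-token relevance score

    def forward(self, token_feats: torch.Tensor) -> torch.Tensor:
        # token_feats: (batch, seq_len, hidden_dim)
        gate = torch.sigmoid(self.scorer(token_feats))  # (batch, seq_len, 1), values in [0, 1]
        return token_feats * gate                       # down-weight redundant regions


# Toy usage: 2 documents, 5 token regions, 16-dim features.
feats = torch.randn(2, 5, 16)
masked = LearnableAttentionMask(16)(feats)
print(masked.shape)  # torch.Size([2, 5, 16])
```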

Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with Latent Geometric-Consistent Learning

1 code implementation · 8 Mar 2024 · Hang Du, Xuejun Yan, Jingjing Wang, Di Xie, ShiLiang Pu

Recently, arbitrary-scale point cloud upsampling has become increasingly popular due to its efficiency and convenience in practical applications.

point cloud upsampling

Learning Expressive And Generalizable Motion Features For Face Forgery Detection

no code implementations · 8 Mar 2024 · Jingyi Zhang, Peng Zhang, Jingjing Wang, Di Xie, ShiLiang Pu

However, current sequence-based face forgery detection methods directly use general video classification networks, which discard motion information that is particularly discriminative for face manipulation detection.

Anomaly Detection · Classification +1

"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach

1 code implementation · 1 Mar 2024 · Lingyu Gu, Yongqi Du, Yuan Zhang, Di Xie, ShiLiang Pu, Robert C. Qiu, Zhenyu Liao

Modern deep neural networks (DNNs) are extremely powerful; however, this power comes at the price of greater depth and more parameters per layer, making training and inference more computationally challenging.

Model Compression · Quantization

LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

1 code implementation · 15 Dec 2023 · Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.

Language Modelling · Multi-Task Learning +1
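
The title's "MoE-style plugin" combines two widely used building blocks: low-rank adapters (LoRA) and a learned router over several experts. Below is a minimal, generic sketch of routing among LoRA adapters attached to a frozen linear layer, written as an illustration of that general pattern rather than the LoRAMoE implementation; all names are hypothetical:

```python
import torch
import torch.nn as nn

class LoRAMoELinear(nn.Module):
    """Frozen base linear layer plus a softmax router over several LoRA experts.

    Generic sketch of the LoRA + mixture-of-experts pattern, not the paper's code.
    """
    def __init__(self, d_in: int, d_out: int, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                      # keep the pretrained weights frozen
        self.router = nn.Linear(d_in, num_experts)
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(d_in, rank) * 0.01) for _ in range(num_experts)])
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, d_out)) for _ in range(num_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.router(x), dim=-1)  # (batch, num_experts)
        out = self.base(x)
        for i in range(len(self.lora_A)):
            delta = x @ self.lora_A[i] @ self.lora_B[i]  # low-rank update of expert i
            out = out + weights[..., i:i + 1] * delta    # weighted by the router
        return out


x = torch.randn(2, 32)
print(LoRAMoELinear(32, 64)(x).shape)  # torch.Size([2, 64])
```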

MProto: Multi-Prototype Network with Denoised Optimal Transport for Distantly Supervised Named Entity Recognition

1 code implementation · 12 Oct 2023 · Shuhui Wu, Yongliang Shen, Zeqi Tan, Wenqi Ren, Jietian Guo, ShiLiang Pu, Weiming Lu

Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types using only knowledge bases or gazetteers and an unlabeled corpus.

named-entity-recognition · Named Entity Recognition +1
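
To make the DS-NER setup above concrete, here is a toy distant-labeling step that projects a gazetteer onto an unlabeled sentence to produce noisy BIO tags. It only illustrates where the noisy supervision comes from; MProto's multi-prototype network and optimal-transport denoising are not shown, and the gazetteer entries are invented:

```python
# Toy distant supervision: label an unlabeled sentence by string-matching a gazetteer.
GAZETTEER = {"new york": "LOC", "barack obama": "PER"}  # invented entries

def distant_labels(tokens):
    labels = ["O"] * len(tokens)
    for span_len in (2, 1):                              # prefer longer matches
        for i in range(len(tokens) - span_len + 1):
            phrase = " ".join(tokens[i:i + span_len]).lower()
            ent_type = GAZETTEER.get(phrase)
            if ent_type and all(l == "O" for l in labels[i:i + span_len]):
                labels[i] = f"B-{ent_type}"
                labels[i + 1:i + span_len] = [f"I-{ent_type}"] * (span_len - 1)
    return labels

tokens = "Barack Obama visited New York yesterday".split()
print(list(zip(tokens, distant_labels(tokens))))
# [('Barack', 'B-PER'), ('Obama', 'I-PER'), ('visited', 'O'),
#  ('New', 'B-LOC'), ('York', 'I-LOC'), ('yesterday', 'O')]
```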

Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds

1 code implementation · 15 Jun 2023 · Haoran Deng, Yang Yang, Jiahe Li, Haoyang Cai, ShiLiang Pu, Weihao Jiang

Network embedding, a graph representation learning method that captures network topology by mapping nodes into lower-dimensional vectors, struggles to accommodate the ever-changing dynamic graphs encountered in practice.

Graph Reconstruction · Graph Representation Learning +3
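
As a concrete reading of "mapping nodes into lower-dimensional vectors", the toy snippet below learns 2-D node embeddings by reconstructing a small adjacency matrix. It is a generic illustration of network embedding only, not the paper's accelerated dynamic-embedding algorithm:

```python
import torch

# Toy network embedding: learn node vectors whose inner products reconstruct edges.
adj = torch.tensor([[0., 1., 1., 0.],
                    [1., 0., 1., 0.],
                    [1., 1., 0., 1.],
                    [0., 0., 1., 0.]])
emb = torch.randn(4, 2, requires_grad=True)              # 4 nodes, 2-D embeddings
opt = torch.optim.Adam([emb], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    recon = torch.sigmoid(emb @ emb.t())                 # predicted edge probabilities
    loss = torch.nn.functional.binary_cross_entropy(recon, adj)
    loss.backward()
    opt.step()

print(loss.item())  # decreases as the embedding captures the graph structure
```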

Single Domain Dynamic Generalization for Iris Presentation Attack Detection

no code implementations · 22 May 2023 · Yachun Li, Jingjing Wang, Yuhui Chen, Di Xie, ShiLiang Pu

To tackle the above issues, we propose a Single Domain Dynamic Generalization (SDDG) framework, which simultaneously exploits domain-invariant and domain-specific features on a per-sample basis and learns to generalize to various unseen domains with numerous natural images.

Domain Generalization · Meta-Learning

Taxonomy Completion with Probabilistic Scorer via Box Embedding

1 code implementation · 18 May 2023 · Wei Xue, Yongliang Shen, Wenqi Ren, Jietian Guo, ShiLiang Pu, Weiming Lu

Specifically, TaxBox consists of three components: (1) a graph aggregation module that leverages the structural information of the taxonomy, together with two lightweight decoders that map features to box embeddings and capture complex relationships between concepts; (2) two probabilistic scorers that correspond to the attachment and insertion operations and avoid pseudo-leaves; and (3) three learning objectives that help the model map concepts more granularly onto the box embedding space.
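
Box embeddings score relations between concepts through the geometry of axis-aligned boxes. The sketch below computes a containment-style probability from a soft intersection volume; it is a generic illustration under my own simplifications, not TaxBox's scorer, and all function names are made up:

```python
import torch
import torch.nn.functional as F

def box_volume(lower, upper, temperature=1.0):
    """Soft volume of an axis-aligned box: softplus side lengths, multiplied together."""
    side = F.softplus(upper - lower, beta=1.0 / temperature)
    return side.prod(dim=-1)

def containment_score(child_lower, child_upper, parent_lower, parent_upper):
    """Approximate P(parent | child): intersection volume over the child's own volume."""
    inter_lower = torch.maximum(child_lower, parent_lower)
    inter_upper = torch.minimum(child_upper, parent_upper)
    return box_volume(inter_lower, inter_upper) / box_volume(child_lower, child_upper)

# Toy 2-D boxes: the child box lies inside the parent box, so the score is 1.0.
child_l, child_u = torch.tensor([0.2, 0.2]), torch.tensor([0.4, 0.4])
parent_l, parent_u = torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0])
print(containment_score(child_l, child_u, parent_l, parent_u))
```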

Multi-view Adversarial Discriminator: Mine the Non-causal Factors for Object Detection in Unseen Domains

1 code implementation · CVPR 2023 · Mingjun Xu, Lingyun Qin, WeiJie Chen, ShiLiang Pu, Lei Zhang

In this work, we present an idea to remove non-causal factors from common features via multi-view adversarial training on source domains, because we observe that non-causal factors which appear insignificant in the common feature space may still be significant in other latent spaces (views) due to the multi-mode structure of the data.

Domain Generalization · object-detection +1
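
Adversarial removal of nuisance factors from features is commonly implemented with a discriminator placed behind a gradient-reversal layer. The snippet below sketches that generic building block only; it is not the paper's multi-view adversarial discriminator, and all names are hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


features = torch.randn(4, 128, requires_grad=True)       # stand-in for extracted features
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# The discriminator tries to tell domains (views) apart, while the reversed gradient
# pushes the feature extractor toward domain-confusing, hopefully causal, features.
logits = discriminator(GradReverse.apply(features, 1.0))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,)))
loss.backward()
print(features.grad.shape)  # torch.Size([4, 128])
```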