Search Results for author: Xinyin Ma

Found 12 papers, 8 papers with code

SlimSAM: 0.1% Data Makes Segment Anything Slim

2 code implementations • 8 Dec 2023 • Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang

To address this challenging trade-off, we introduce SlimSAM, a novel data-efficient SAM compression method that achieves superior performance with substantially less training data.

DeepCache: Accelerating Diffusion Models for Free

2 code implementations • 1 Dec 2023 • Xinyin Ma, Gongfan Fang, Xinchao Wang

Diffusion models have recently gained unprecedented attention in the field of image synthesis due to their remarkable generative capabilities.

Denoising • Image Generation

LLM-Pruner: On the Structural Pruning of Large Language Models

1 code implementation • NeurIPS 2023 • Xinyin Ma, Gongfan Fang, Xinchao Wang

With LLMs serving as general-purpose task solvers, we explore their compression in a task-agnostic manner, aiming to preserve the multi-task solving and language generation abilities of the original LLM.

Text Generation • Zero-Shot Learning

Structural Pruning for Diffusion Models

1 code implementation • NeurIPS 2023 • Gongfan Fang, Xinyin Ma, Xinchao Wang

Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative implications of Diffusion Probabilistic Models (DPMs).

DepGraph: Towards Any Structural Pruning

1 code implementation • CVPR 2023 • Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang

Structural pruning enables model acceleration by removing structurally-grouped parameters from neural networks.

Network Pruning • Neural Network Compression
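
The difficulty DepGraph automates is that structurally-grouped parameters are coupled: pruning one layer's output channels forces the matching input channels of every consumer layer to be pruned too. Below is a minimal PyTorch sketch of that coupling for a single conv-conv pair; it is an illustration only, not the paper's general algorithm, which resolves such dependencies automatically for arbitrary architectures.

```python
# Minimal sketch of coupled structural pruning (not the DepGraph implementation).
import torch
import torch.nn as nn

def prune_conv_pair(conv1: nn.Conv2d, conv2: nn.Conv2d, keep_idxs):
    """Keep only `keep_idxs` output channels of conv1, and the matching
    input channels of conv2 that consume them."""
    conv1.weight = nn.Parameter(conv1.weight.data[keep_idxs].clone())
    if conv1.bias is not None:
        conv1.bias = nn.Parameter(conv1.bias.data[keep_idxs].clone())
    conv1.out_channels = len(keep_idxs)

    # The coupled edit: conv2's input channels must shrink identically.
    conv2.weight = nn.Parameter(conv2.weight.data[:, keep_idxs].clone())
    conv2.in_channels = len(keep_idxs)

conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 32, 3, padding=1)
# Keep the 8 output channels of conv1 with the largest L1 norm.
norms = conv1.weight.data.abs().sum(dim=(1, 2, 3))
keep = torch.argsort(norms, descending=True)[:8].tolist()
prune_conv_pair(conv1, conv2, keep)
x = torch.randn(1, 3, 32, 32)
print(conv2(conv1(x)).shape)  # torch.Size([1, 32, 32, 32])
```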

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt

no code implementations • 16 May 2022 • Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu

Data-free knowledge distillation (DFKD) performs knowledge distillation without relying on the original training data, and has recently achieved impressive results in accelerating pre-trained language models.

Data-free Knowledge Distillation
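
For context, the generic DFKD recipe replaces the real training set with synthesized inputs on which the student mimics the teacher. The sketch below shows that loop with toy stand-in networks; it is not the paper's reinforced-prompt method, and a real DFKD system would also optimize the generator rather than leave it fixed.

```python
# Hedged sketch of a generic data-free KD loop with toy networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(16, 64), nn.Tanh())  # noise -> pseudo input

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(100):
    z = torch.randn(32, 16)
    with torch.no_grad():
        x = generator(z)        # synthetic data stands in for the real set
        t_logits = teacher(x)
    s_logits = student(x)
    # Match the teacher's output distribution (temperature omitted for brevity).
    loss = F.kl_div(F.log_softmax(s_logits, -1),
                    F.softmax(t_logits, -1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```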

MuVER: Improving First-Stage Entity Retrieval with Multi-View Entity Representations

1 code implementation • EMNLP 2021 • Xinyin Ma, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Weiming Lu

Entity retrieval, which aims at disambiguating mentions to canonical entities from massive KBs, is essential for many tasks in natural language processing.

Entity Linking • Entity Retrieval • +1
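
A hedged sketch of the first-stage retrieval setup MuVER targets: each entity is represented by several views (for example, sentences of its description), a mention is scored against every view, and the best-matching view decides the ranking. The vectors and entity names below are illustrative placeholders, not the paper's encoders.

```python
# Toy multi-view entity retrieval: max cosine similarity over entity views.
import torch
import torch.nn.functional as F

def score(mention_vec, entity_view_vecs):
    """mention_vec: (d,); entity_view_vecs: (num_views, d).
    Returns the best similarity across the entity's views."""
    sims = F.cosine_similarity(mention_vec.unsqueeze(0), entity_view_vecs, dim=-1)
    return sims.max().item()

d = 8
mention = torch.randn(d)                    # stand-in mention encoding
kb = {"Paris_city": torch.randn(3, d),      # 3 views per entity (hypothetical)
      "Paris_Hilton": torch.randn(2, d)}
best = max(kb, key=lambda e: score(mention, kb[e]))
print("retrieved:", best)
```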

Locate and Label: A Two-stage Identifier for Nested Named Entity Recognition

1 code implementation ACL 2021 Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, Weiming Lu

Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, neglect of boundary information, under-utilization of spans that partially match entities, and difficulty recognizing long entities.

Chinese Named Entity Recognition • named-entity-recognition • +3
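
To make the computational-cost point concrete, the toy function below enumerates every candidate span a span-based NER model would have to score, which grows quadratically with sentence length. This is illustrative only; the paper's two-stage identifier is designed precisely to avoid such exhaustive enumeration.

```python
# A sentence of n tokens yields O(n^2) candidate spans to classify.
def enumerate_spans(tokens, max_len=None):
    n = len(tokens)
    max_len = max_len or n
    return [(i, j) for i in range(n)
                   for j in range(i + 1, min(i + max_len, n) + 1)]

tokens = "the University of California at Berkeley".split()
spans = enumerate_spans(tokens)
print(len(tokens), "tokens ->", len(spans), "candidate spans")  # 6 -> 21
```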

A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction

1 code implementation • 25 Jan 2021 • Yongliang Shen, Xinyin Ma, Yechun Tang, Weiming Lu

A joint entity and relation extraction framework constructs a unified model that performs entity recognition and relation extraction simultaneously, exploiting the dependency between the two tasks to mitigate the error propagation problem that pipeline models suffer from.

Ranked #1 on Relation Extraction on CoNLL04 (NER Micro F1 metric)

Joint Entity and Relation Extraction • Reading Comprehension • +2
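
A minimal sketch of the joint formulation in general (not the paper's trigger-sense memory flow): one shared encoder feeds both an entity head and a relation head, so entity errors are not frozen in before relation extraction as in a pipeline. All modules and sizes below are toy choices.

```python
# Toy joint model: shared encoder, per-token entity head, per-pair relation head.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_ent=5, n_rel=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(vocab, dim),
                                     nn.LSTM(dim, dim, batch_first=True))
        self.ent_head = nn.Linear(dim, n_ent)        # per-token entity tags
        self.rel_head = nn.Linear(2 * dim, n_rel)    # per token-pair relations

    def forward(self, ids):
        h, _ = self.encoder(ids)                     # (B, T, dim)
        ents = self.ent_head(h)
        B, T, D = h.shape
        # Concatenate every (head, tail) token pair for relation scoring.
        pairs = torch.cat([h.unsqueeze(2).expand(B, T, T, D),
                           h.unsqueeze(1).expand(B, T, T, D)], dim=-1)
        rels = self.rel_head(pairs)                  # (B, T, T, n_rel)
        return ents, rels

m = JointModel()
ents, rels = m(torch.randint(0, 1000, (2, 7)))
print(ents.shape, rels.shape)  # (2, 7, 5) (2, 7, 7, 4)
```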

Multi-hop Reading Comprehension across Documents with Path-based Graph Convolutional Network

no code implementations • 11 Jun 2020 • Zeyun Tang, Yongliang Shen, Xinyin Ma, Wei Xu, Jiale Yu, Weiming Lu

Meanwhile, we propose Gated-RGCN to accumulate evidence on the path-based reasoning graph; it contains a new question-aware gating mechanism that regulates the usefulness of information propagating across documents and adds question information during reasoning.

Multi-Hop Reading Comprehension
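
A minimal sketch of what a question-aware gate of this kind can look like (module name and dimensions are illustrative, not the paper's exact formulation): the question vector decides how much aggregated cross-document evidence enters each node update.

```python
# Toy question-aware gating over aggregated neighbor messages.
import torch
import torch.nn as nn

class QuestionGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, node_h, neighbor_agg, question):
        # g in (0, 1) per feature, conditioned on the question.
        g = torch.sigmoid(self.gate(torch.cat([neighbor_agg, question], dim=-1)))
        return g * neighbor_agg + (1 - g) * node_h   # gated node update

dim = 16
gate = QuestionGate(dim)
h = torch.randn(5, dim)                    # 5 graph nodes
agg = torch.randn(5, dim)                  # message-passing aggregate
q = torch.randn(1, dim).expand(5, dim)     # shared question vector
print(gate(h, agg, q).shape)               # torch.Size([5, 16])
```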
