Search Results for author: Ming Wang

Found 28 papers, 14 papers with code

Enhancing the Context Representation in Similarity-based Word Sense Disambiguation

no code implementations EMNLP 2021 Ming Wang, Jianzhang Zhang, Yinglin Wang

In previous similarity-based WSD systems, studies have devoted much effort to learning comprehensive sense embeddings from contextual representations and knowledge sources.

Sentence Word Sense Disambiguation

A Synset Relation-enhanced Framework with a Try-again Mechanism for Word Sense Disambiguation

no code implementations EMNLP 2020 Ming Wang, Yinglin Wang

Contextual embeddings have proved overwhelmingly effective for the task of Word Sense Disambiguation (WSD) compared with other sense representation techniques.

Relation Word Sense Disambiguation

DeFine: A Decomposed and Fine-Grained Annotated Dataset for Long-form Article Generation

no code implementations10 Mar 2025 Ming Wang, Fang Wang, Minghao Hu, Li He, Haiyang Wang, Jun Zhang, Tianwei Yan, Li Li, Zhunchen Luo, Wei Luo, Xiaoying Bai, Guotong Geng

Long-form article generation (LFAG) presents challenges such as maintaining logical consistency, comprehensive topic coverage, and narrative coherence across extended articles.

Form Retrieval +1

Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis

no code implementations18 Feb 2025 Jiaqi Zhao, Ming Wang, Miao Zhang, Yuzhang Shang, Xuebo Liu, YaoWei Wang, Min Zhang, Liqiang Nie

Then, we conduct extensive experiments with the baselines within each class, covering models of various sizes (7B-70B), bitwidths, training levels (LLaMA1/2/3/3.1), architectures (Mixtral, DeepSeekMoE and Mamba) and modalities (LLaVA1.5 and VILA1.5) on a wide range of evaluation metrics. Through comparative analysis of the results, we summarize the strengths of each PTQ strategy and the model-size-bitwidth trade-offs with respect to performance.

Benchmarking Mamba +1

PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training Quantization Methods for Large Language Models

1 code implementation18 Feb 2025 Jiaqi Zhao, Miao Zhang, Ming Wang, Yuzhang Shang, Kaihao Zhang, Weili Guan, YaoWei Wang, Min Zhang

To explore the real limit of PTQ, we propose an extremely low-bit PTQ method called PTQ1.61, which enables weight quantization to 1.61-bit for the first time.
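An average bitwidth below 2 typically arises from mixed-precision assignment. The sketch below is a hypothetical illustration of that accounting only, not the PTQ1.61 algorithm: binarizing most weight columns to 1 bit while giving a "salient" fraction p of columns 2-bit quantization yields an average of 1 + p bits, so p = 0.61 gives 1.61 bits.

```python
import numpy as np

def mixed_bit_quantize(W, frac_2bit=0.61):
    """Toy mixed-precision weight quantizer (illustrative only).

    Columns with the largest L2 norm are treated as "salient" and get
    2-bit uniform quantization; the rest are binarized to 1 bit.
    The average bitwidth is 1 + frac_2bit (1.61 for frac_2bit=0.61).
    """
    n_cols = W.shape[1]
    n_salient = int(round(frac_2bit * n_cols))
    # pick the columns with the largest norms as "salient"
    salient = set(np.argsort(-np.linalg.norm(W, axis=0))[:n_salient].tolist())
    Q = np.empty_like(W, dtype=float)
    for j in range(n_cols):
        col = W[:, j].astype(float)
        if j in salient:
            # 2-bit: 4 uniform levels spanning the column's range
            lo, hi = col.min(), col.max()
            scale = (hi - lo) / 3 if hi > lo else 1.0
            Q[:, j] = np.round((col - lo) / scale) * scale + lo
        else:
            # 1-bit: sign times the mean magnitude (classic binarization)
            Q[:, j] = np.sign(col) * np.abs(col).mean()
    avg_bits = (2 * n_salient + (n_cols - n_salient)) / n_cols
    return Q, avg_bits
```

The real method necessarily differs (salience criteria, scaling, and calibration are all design choices); this only shows where a fractional bitwidth like 1.61 can come from.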

Binarization Quantization

Language Models as Continuous Self-Evolving Data Engineers

no code implementations19 Dec 2024 Peidong Wang, Ming Wang, ZhiMing Ma, Xiaocui Yang, Shi Feng, Daling Wang, Yifei Zhang

Large Language Models (LLMs) have demonstrated remarkable capabilities on various tasks, but further evolution is limited by the lack of high-quality training data.

Creating a Microstructure Latent Space with Rich Material Information for Multiphase Alloy Design

no code implementations4 Sep 2024 Xudong Ma, Yuqi Zhang, Chenchong Wang, Ming Wang, Mingxin Huang, Wei Xu

By integrating this deep learning model with a specific sampling strategy in the latent space, a novel, microstructure-centered algorithm for multiphase alloy design is developed.

Hierarchical Retrieval-Augmented Generation Model with Rethink for Multi-hop Question Answering

1 code implementation20 Aug 2024 XiaoMing Zhang, Ming Wang, Xiaocui Yang, Daling Wang, Shi Feng, Yifei Zhang

Multi-hop Question Answering (QA) necessitates complex reasoning by integrating multiple pieces of information to resolve intricate questions.

Multi-hop Question Answering Question Answering +1

FEEL: A Framework for Evaluating Emotional Support Capability with Large Language Models

1 code implementation23 Mar 2024 Huaiwen Zhang, Yu Chen, Ming Wang, Shi Feng

Emotional Support Conversation (ESC) is a typical dialogue setting that can effectively assist users in mitigating emotional pressure.

Ensemble Learning

Is Mamba Effective for Time Series Forecasting?

1 code implementation17 Mar 2024 Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Xiaocui Yang, Han Zhao, Daling Wang, Yifei Zhang

For TSF tasks, these characteristics enable Mamba to capture hidden patterns as the Transformer does while reducing computational overhead compared to the Transformer.

Computational Efficiency Mamba +2

MM-BigBench: Evaluating Multimodal Models on Multimodal Content Comprehension Tasks

2 code implementations13 Oct 2023 Xiaocui Yang, Wenfang Wu, Shi Feng, Ming Wang, Daling Wang, Yang Li, Qi Sun, Yifei Zhang, XiaoMing Fu, Soujanya Poria

Consequently, our work complements research on the performance of MLLMs in multimodal comprehension tasks, achieving a more comprehensive and holistic evaluation of MLLMs.

multimodal interaction Multimodal Reasoning

G-STO: Sequential Main Shopping Intention Detection via Graph-Regularized Stochastic Transformer

no code implementations25 Jun 2023 Yuchen Zhuang, Xin Shen, Yan Zhao, Chaosheng Dong, Ming Wang, Jin Li, Chao Zhang

The detection of the underlying shopping intentions of users based on their historical interactions is a crucial aspect for e-commerce platforms, such as Amazon, to enhance the convenience and efficiency of their customers' shopping experiences.

Sequential Recommendation

Text Is All You Need: Learning Language Representations for Sequential Recommendation

1 code implementation23 May 2023 Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, Julian McAuley

In this paper, we propose to model user preferences and item features as language representations that can be generalized to new items and datasets.

All Representation Learning +2

Binary stochasticity enabled highly efficient neuromorphic deep learning achieves better-than-software accuracy

no code implementations25 Apr 2023 Yang Li, Wei Wang, Ming Wang, Chunmeng Dou, Zhengyu Ma, Huihui Zhou, Peng Zhang, Nicola Lepri, Xumeng Zhang, Qing Luo, Xiaoxin Xu, Guanhua Yang, Feng Zhang, Ling Li, Daniele Ielmini, Ming Liu

We propose a binary stochastic learning algorithm that modifies all elementary neural network operations, by introducing (i) stochastic binarization of both the forwarding signals and the activation function derivatives, (ii) signed binarization of the backpropagating errors, and (iii) step-wised weight updates.
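The three modifications named above can be sketched in isolation. The snippet below is a minimal, hypothetical illustration of (i) stochastic binarization, (ii) signed binarization of errors, and (iii) step-wise updates, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(x):
    """(i) Stochastically binarize values in [0, 1] to {0, 1}:
    each entry becomes 1 with probability equal to its value,
    so the binarized signal is unbiased in expectation."""
    return (rng.random(x.shape) < np.clip(x, 0.0, 1.0)).astype(float)

def signed_binarize(err):
    """(ii) Keep only the sign of the backpropagating error."""
    return np.sign(err)

def step_update(w, grad_sign, step=0.01):
    """(iii) Step-wise update: move each weight by a fixed step
    against the sign of its (binarized) gradient."""
    return w - step * grad_sign
```

The unbiasedness of (i) is what lets stochastic hardware average out binarization noise over many forward passes.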

Binarization Deep Learning

Evidence-based Match-status-Aware Gait Recognition for Out-of-Gallery Gait Identification

no code implementations15 Nov 2022 Heming Du, Chen Liu, Ming Wang, Lincheng Li, Shunli Zhang, Xin Yu

We measure the uncertainty and predict the match status of the recognition results, thus determining whether the probe is an OOG query. To the best of our knowledge, our method is the first attempt to tackle OOG queries in gait recognition.
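The core idea of rejecting out-of-gallery probes can be sketched with a simple confidence threshold. This is a toy stand-in (cosine similarity plus a fixed threshold tau), not the paper's evidence-based uncertainty estimate:

```python
import numpy as np

def match_status(probe_emb, gallery_embs, tau=0.7):
    """Toy out-of-gallery (OOG) check: if the probe's best cosine
    similarity to the gallery falls below a confidence threshold tau,
    report OOG instead of forcing a nearest-gallery match."""
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    p = probe_emb / np.linalg.norm(probe_emb)
    sims = g @ p
    best = int(np.argmax(sims))
    return ("match", best) if sims[best] >= tau else ("OOG", None)
```

The design point is that an open-set recognizer needs an explicit reject option; how the confidence is computed (here a raw similarity, in the paper an evidential uncertainty) is what varies.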

Gait Identification Gait Recognition +1

Deep Baseline Network for Time Series Modeling and Anomaly Detection

no code implementations10 Sep 2022 Cheng Ge, Xi Chen, Ming Wang, Jin Wang

By using this deep network, we can easily locate the baseline position and then provide a reliable and interpretable anomaly detection result.
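Baseline-then-residual anomaly detection can be illustrated with a simple stand-in baseline. The paper learns the baseline with a deep network; here a moving average is assumed instead, purely for illustration:

```python
import numpy as np

def baseline_anomalies(series, window=5, k=3.0):
    """Toy baseline-based detector: estimate a baseline with a
    moving average, then flag points whose residual from the
    baseline exceeds k standard deviations of the residuals."""
    series = np.asarray(series, dtype=float)
    kernel = np.ones(window) / window
    baseline = np.convolve(series, kernel, mode="same")
    resid = series - baseline
    thresh = k * resid.std()
    return np.flatnonzero(np.abs(resid) > thresh)
```

The interpretability claim follows the same shape: an anomaly is simply a point far from an explicit baseline, which a learned baseline makes sharper than a moving average.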

Anomaly Detection Time Series +1

GaitGL: Learning Discriminative Global-Local Feature Representations for Gait Recognition

2 code implementations2 Aug 2022 Beibei Lin, Shunli Zhang, Ming Wang, Lincheng Li, Xin Yu

GFR extractor aims to extract contextual information, e.g., the relationship among various body parts, and the mask-based LFR extractor is presented to exploit the detailed posture changes of local regions.

Gait Recognition

GaitStrip: Gait Recognition via Effective Strip-based Feature Representations and Multi-Level Framework

1 code implementation8 Mar 2022 Ming Wang, Beibei Lin, Xianda Guo, Lincheng Li, Zheng Zhu, Jiande Sun, Shunli Zhang, Xin Yu

ECM consists of the Spatial-Temporal feature extractor (ST), the Frame-Level feature extractor (FL) and SPB, and has two obvious advantages: First, each branch focuses on a specific representation, which can be used to improve the robustness of the network.

Gait Recognition

Word Sense Disambiguation: Towards Interactive Context Exploitation from Both Word and Sense Perspectives

1 code implementation ACL 2021 Ming Wang, Yinglin Wang

Recently proposed Word Sense Disambiguation (WSD) systems have approached the estimated upper bound of the task on standard evaluation benchmarks.

Sentence Word Sense Disambiguation

Word Sense Disambiguation: A comprehensive knowledge exploitation framework

no code implementations29 Feb 2020 Yinglin Wang, Ming Wang, Hamido Fujita

Word Sense Disambiguation (WSD) has been a fundamental and ongoing issue since its introduction in the natural language processing (NLP) community.

graph construction Information Retrieval +6
