no code implementations • EMNLP 2021 • Ming Wang, Jianzhang Zhang, Yinglin Wang
In previous similarity-based WSD systems, much effort has been devoted to learning comprehensive sense embeddings from contextual representations and knowledge sources.
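The shared recipe behind these similarity-based systems can be sketched in a few lines: embed each sense's gloss once, then match a context against the sense inventory by similarity. The encoder and glosses below are generic stand-ins (a MiniLM sentence encoder, abbreviated WordNet-style glosses), not the specific setup of this paper.

```python
# Minimal sketch of similarity-based WSD: embed each sense gloss once,
# then pick the sense whose embedding is closest to the target context.
# Encoder and glosses are illustrative stand-ins, not the paper's setup.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any contextual encoder works

# Toy sense inventory for "bank" (WordNet-style glosses, abbreviated).
senses = {
    "bank.n.01": "sloping land beside a body of water",
    "bank.n.02": "a financial institution that accepts deposits",
}
sense_vecs = {s: encoder.encode(g) for s, g in senses.items()}

def disambiguate(context: str) -> str:
    """Return the sense whose gloss embedding is most similar to the context."""
    c = encoder.encode(context)
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_vecs, key=lambda s: cos(c, sense_vecs[s]))

print(disambiguate("She deposited her paycheck at the bank."))  # bank.n.02
```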
no code implementations • EMNLP 2020 • Ming Wang, Yinglin Wang
Contextual embeddings have proved overwhelmingly effective for the task of Word Sense Disambiguation (WSD) compared with other sense representation techniques.
no code implementations • 10 Mar 2025 • Ming Wang, Fang Wang, Minghao Hu, Li He, Haiyang Wang, Jun Zhang, Tianwei Yan, Li Li, Zhunchen Luo, Wei Luo, Xiaoying Bai, Guotong Geng
Long-form article generation (LFAG) presents challenges such as maintaining logical consistency, comprehensive topic coverage, and narrative coherence across extended articles.
no code implementations • 18 Feb 2025 • Jiaqi Zhao, Ming Wang, Miao Zhang, Yuzhang Shang, Xuebo Liu, YaoWei Wang, Min Zhang, Liqiang Nie
Then, we conduct extensive experiments with the baselines within each class, covering models of various sizes (7B-70B), bitwidths, training levels (LLaMA1/2/3/3.1), architectures (Mixtral, DeepSeekMoE and Mamba) and modalities (LLaVA1.5 and VILA1.5) on a wide range of evaluation metrics. Through comparative analysis of the results, we summarize the superiority of each PTQ strategy and the model-size-bitwidth trade-off with respect to performance.
1 code implementation • 18 Feb 2025 • Jiaqi Zhao, Miao Zhang, Ming Wang, Yuzhang Shang, Kaihao Zhang, Weili Guan, YaoWei Wang, Min Zhang
To explore the real limit of PTQ, we propose an extremely low-bit PTQ method called PTQ1.61, which enables weight quantization to 1.61-bit for the first time.
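The excerpt does not say how the 1.61-bit rate is achieved, but the arithmetic of sub-2-bit storage is easy to illustrate: three weight levels cost log2(3) ≈ 1.58 bits each, and side information (per-channel scales, masks) nudges the effective rate upward. The sketch below uses the classic ternary-weight threshold heuristic as a stand-in, not PTQ1.61's actual algorithm.

```python
# Hedged sketch: per-channel ternary quantization of a weight matrix.
# Three levels {-s, 0, +s} cost log2(3) ~= 1.58 bits/weight before side
# information, which is how sub-2-bit rates like 1.61 arise. The 0.75*mean
# threshold is the classic ternary-weight heuristic (an assumption here),
# not PTQ1.61's actual algorithm.
import numpy as np

def ternarize(W: np.ndarray, delta_factor: float = 0.75):
    """Quantize each output channel (row) of W to {-s, 0, +s}."""
    Q = np.zeros_like(W)
    scales = np.zeros(W.shape[0])
    for i, row in enumerate(W):
        delta = delta_factor * np.abs(row).mean()  # zero-band threshold
        mask = np.abs(row) > delta                 # weights that survive
        s = np.abs(row[mask]).mean() if mask.any() else 0.0
        Q[i] = np.sign(row) * mask * s
        scales[i] = s
    return Q, scales

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))
Q, scales = ternarize(W)
err = np.linalg.norm(W - Q) / np.linalg.norm(W)
print(f"levels per row: 3 (~{np.log2(3):.2f} bits/weight), rel. error {err:.3f}")
```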
no code implementations • 19 Dec 2024 • Peidong Wang, Ming Wang, ZhiMing Ma, Xiaocui Yang, Shi Feng, Daling Wang, Yifei Zhang
Large Language Models (LLMs) have demonstrated remarkable capabilities on various tasks, while further evolution is limited by the lack of high-quality training data.
1 code implementation • 20 Sep 2024 • Ming Wang, Yuanzhong Liu, Xiaoyu Liang, YiJie Huang, Daling Wang, Xiaocui Yang, Sijia Shen, Shi Feng, XiaoMing Zhang, Chaofeng Guan, Yifei Zhang
LLMs have demonstrated commendable performance across diverse domains.
no code implementations • 4 Sep 2024 • Xudong Ma, Yuqi Zhang, Chenchong Wang, Ming Wang, Mingxin Huang, Wei Xu
Integrating this deep learning model with a specific sampling strategy in the latent space yields a novel, microstructure-centered algorithm for multiphase alloy design.
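As a rough illustration of latent-space sampling for design, the sketch below uses a PCA projection as a stand-in for the paper's deep generative model and draws candidate designs as perturbations around a reference point; every modeling choice here is an assumption made for brevity.

```python
# Hedged sketch of latent-space sampling for design: a PCA latent space
# stands in for the paper's deep generative model (an assumption), and
# candidate alloys are drawn as perturbations around a reference point
# whose decoded descriptor looked promising.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))  # toy composition/process features

# "Encoder"/"decoder": project onto / reconstruct from top-2 components.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ Vt[:2].T

def decode(z):
    return z @ Vt[:2] + mean

z_ref = encode(X[0])  # latent code of a reference alloy
candidates = decode(z_ref + 0.1 * rng.normal(size=(16, 2)))  # local sampling
print(candidates.shape)  # 16 candidate designs to screen downstream
```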
1 code implementation • 20 Aug 2024 • XiaoMing Zhang, Ming Wang, Xiaocui Yang, Daling Wang, Shi Feng, Yifei Zhang
Multi-hop Question Answering (QA) necessitates complex reasoning by integrating multiple pieces of information to resolve intricate questions.
1 code implementation • 23 Mar 2024 • Huaiwen Zhang, Yu Chen, Ming Wang, Shi Feng
Emotional Support Conversation (ESC) is a typical dialogue that can effectively assist the user in mitigating emotional pressures.
1 code implementation • 17 Mar 2024 • Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Xiaocui Yang, Han Zhao, Daling Wang, Yifei Zhang
For TSF tasks, these characteristics enable Mamba to capture hidden patterns as the Transformer does while reducing the Transformer's computational overhead.
Ranked #60 on Time Series Forecasting on ETTh1 (336) Multivariate
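The efficiency argument rests on the state-space recurrence: a fixed-size state is carried forward in a single pass, so the cost is linear in sequence length rather than quadratic as in attention. The minimal, non-selective scan below makes that visible; Mamba's actual selective, input-dependent parameterization is far richer.

```python
# Minimal linear state-space scan (single channel). Cost is O(T) with a
# fixed-size hidden state, versus O(T^2) pairwise scores in attention.
# This omits Mamba's selective (input-dependent) parameters entirely.
import numpy as np

def ssm_scan(x, A, B, C):
    """h_t = A h_{t-1} + B x_t ; y_t = C . h_t"""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:  # one sweep over the sequence
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

T, d = 336, 4                 # e.g., an ETTh1-length forecasting window
rng = np.random.default_rng(2)
A = 0.9 * np.eye(d)           # stable state transition
B, C = rng.normal(size=d), rng.normal(size=d)
y = ssm_scan(rng.normal(size=T), A, B, C)
print(y.shape)                # (336,) produced in a single linear pass
```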
4 code implementations • 26 Feb 2024 • Ming Wang, Yuanzhong Liu, Xiaoyu Liang, Songlian Li, YiJie Huang, XiaoMing Zhang, Sijia Shen, Chaofeng Guan, Daling Wang, Shi Feng, Huaiwen Zhang, Yifei Zhang, Minghui Zheng, Chi Zhang
Experiments illustrate that LangGPT significantly enhances the performance of LLMs.
2 code implementations • 13 Oct 2023 • Xiaocui Yang, Wenfang Wu, Shi Feng, Ming Wang, Daling Wang, Yang Li, Qi Sun, Yifei Zhang, XiaoMing Fu, Soujanya Poria
Consequently, our work complements research on the performance of MLLMs in multimodal comprehension tasks, achieving a more comprehensive and holistic evaluation of MLLMs.
1 code implementation • 28 Sep 2023 • Ming Wang, Daling Wang, Wenfang Wu, Shi Feng, Yifei Zhang
The application of CEs encounters two main challenges: general user preferences and variable ML systems.
1 code implementation • 29 Jul 2023 • Ming Wang, Wenfang Wu, Chongyun Gao, Daling Wang, Shi Feng, Yifei Zhang
Large language models (LLMs) have received increasing attention.
no code implementations • 25 Jun 2023 • Yuchen Zhuang, Xin Shen, Yan Zhao, Chaosheng Dong, Ming Wang, Jin Li, Chao Zhang
Detecting users' underlying shopping intentions from their historical interactions is crucial for e-commerce platforms, such as Amazon, to make their customers' shopping experiences more convenient and efficient.
1 code implementation • 23 May 2023 • Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, Julian McAuley
In this paper, we propose to model user preferences and item features as language representations that can be generalized to new items and datasets.
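The transferability claim follows from representing both sides as text: any new item can be scored without retraining. A minimal sketch with a generic sentence encoder follows; the encoder, profile, and item texts are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: score items for a user by embedding both as text.
# Because items are just text, unseen items get scores with no retraining.
# Encoder choice and texts are illustrative, not the paper's setup.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

user_profile = "recently viewed: trail running shoes, hydration vest, GPS watch"
items = [
    "lightweight trail running jacket, waterproof",
    "cast iron skillet, 12 inch",
    "energy gels for endurance runners, 24 pack",
]
u = encoder.encode(user_profile)
I = encoder.encode(items)
scores = I @ u / (np.linalg.norm(I, axis=1) * np.linalg.norm(u))
for item, s in sorted(zip(items, scores), key=lambda t: -t[1]):
    print(f"{s:.3f}  {item}")
```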
no code implementations • 25 Apr 2023 • Yang Li, Wei Wang, Ming Wang, Chunmeng Dou, Zhengyu Ma, Huihui Zhou, Peng Zhang, Nicola Lepri, Xumeng Zhang, Qing Luo, Xiaoxin Xu, Guanhua Yang, Feng Zhang, Ling Li, Daniele Ielmini, Ming Liu
We propose a binary stochastic learning algorithm that modifies all elementary neural network operations by introducing (i) stochastic binarization of both the forward signals and the activation function derivatives, (ii) signed binarization of the backpropagated errors, and (iii) step-wise weight updates.
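The three named modifications are simple elementwise operations, sketched below on toy tensors; the surrounding network, training schedule, and hardware mapping from the paper are not reproduced.

```python
# Sketch of the three elementary operations the algorithm names, applied
# to toy tensors. How they are composed into full training is the paper's
# contribution and is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)

def stochastic_binarize(x):
    """(i) Forward signals -> {0, 1}, sampled with probability clip(x, 0, 1)."""
    p = np.clip(x, 0.0, 1.0)
    return (rng.random(x.shape) < p).astype(np.float32)

def signed_binarize(err):
    """(ii) Backpropagated errors -> {-1, 0, +1} (keep only their sign)."""
    return np.sign(err)

def stepwise_update(w, grad_sign, step=0.01):
    """(iii) Weights move by a fixed step in the gradient-sign direction."""
    return w - step * grad_sign

a = stochastic_binarize(rng.random(5))
e = signed_binarize(rng.normal(size=5))
w = stepwise_update(np.zeros(5), e)
print(a, e, w, sep="\n")
```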
no code implementations • ICCV 2023 • Ming Wang, Xianda Guo, Beibei Lin, Tian Yang, Zheng Zhu, Lincheng Li, Shunli Zhang, Xin Yu
This is the first framework on gait recognition that is designed to focus on the extraction of dynamic features.
no code implementations • 15 Nov 2022 • Heming Du, Chen Liu, Ming Wang, Lincheng Li, Shunli Zhang, Xin Yu
We measure the uncertainty and predict the match status of the recognition results, and thus determine whether the probe is an OOG query. To the best of our knowledge, our method is the first attempt to tackle OOG queries in gait recognition.
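A minimal version of the reject decision can be sketched as thresholding the best gallery similarity; the paper predicts match status from a learned uncertainty estimate, so the fixed threshold below is a simplifying assumption.

```python
# Hedged sketch: flag a probe as out-of-gallery (OOG) when its best gallery
# similarity falls below a threshold. The paper predicts match status from
# a learned uncertainty estimate; this simple distance rule is a stand-in.
import numpy as np

rng = np.random.default_rng(4)
gallery = rng.normal(size=(100, 128))  # enrolled gait embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def match(probe, threshold=0.6):
    probe = probe / np.linalg.norm(probe)
    sims = gallery @ probe
    best = int(np.argmax(sims))
    if sims[best] < threshold:  # too uncertain -> reject the match
        return None             # treat the probe as an OOG query
    return best

print(match(gallery[7] + 0.05 * rng.normal(size=128)))  # likely identity 7
print(match(rng.normal(size=128)))                      # likely None (OOG)
```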
no code implementations • 10 Sep 2022 • Cheng Ge, Xi Chen, Ming Wang, Jin Wang
By using this deep network, we can easily locate the baseline position and then provide reliable and interpretable anomaly detection results.
2 code implementations • 2 Aug 2022 • Beibei Lin, Shunli Zhang, Ming Wang, Lincheng Li, Xin Yu
The GFR extractor aims to extract contextual information, e.g., the relationships among various body parts, and the mask-based LFR extractor is presented to exploit the detailed posture changes of local regions.
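The global/local split can be illustrated with plain pooling: one branch pools the whole feature map for context, the other pools under binary part masks for local detail. Shapes and masks below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of the global/local split: one branch pools the entire
# feature map (global, contextual), the other pools under binary part
# masks (local detail). Shapes and masks are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
feat = rng.normal(size=(32, 16, 11))       # C x H x W feature map

# Global branch: context over the full silhouette.
gfr = feat.mean(axis=(1, 2))               # (32,)

# Mask-based local branch: one descriptor per body-part mask.
masks = np.zeros((4, 16, 11), dtype=bool)  # 4 horizontal part masks
for k in range(4):
    masks[k, 4 * k : 4 * (k + 1), :] = True
lfr = np.stack([feat[:, m].mean(axis=1) for m in masks])  # (4, 32)

fused = np.concatenate([gfr, lfr.ravel()])  # combined representation
print(gfr.shape, lfr.shape, fused.shape)
```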
1 code implementation • 30 Apr 2022 • Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei LI, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP).
1 code implementation • 8 Mar 2022 • Ming Wang, Beibei Lin, Xianda Guo, Lincheng Li, Zheng Zhu, Jiande Sun, Shunli Zhang, Xin Yu
ECM consists of the Spatial-Temporal feature extractor (ST), the Frame-Level feature extractor (FL) and SPB, and has two obvious advantages: First, each branch focuses on a specific representation, which can be used to improve the robustness of the network.
no code implementations • 24 Nov 2021 • Riya Tyagi, Tanish Tyagi, Ming Wang, Lujin Zhang
Parkinson's disease (PD) is debilitating, progressive, and clinically marked by motor symptoms.
1 code implementation • ACL 2021 • Ming Wang, Yinglin Wang
Recently proposed Word Sense Disambiguation (WSD) systems have approached the estimated upper bound of the task on standard evaluation benchmarks.
no code implementations • 29 Feb 2020 • Yinglin Wang, Ming Wang, Hamido Fujita
Word Sense Disambiguation (WSD) has been a fundamental and ongoing issue since its introduction to the natural language processing (NLP) community.
Ranked #1 on Word Sense Disambiguation on Knowledge-based:
no code implementations • SEMEVAL 2017 • Ming Wang, Biao Chu, Qingxun Liu, Xiaobing Zhou
Sentiment analysis is one of the central issues in Natural Language Processing and has become increasingly important in many fields.