no code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth
Empirical analyses show that, despite the challenging nature of generative tasks, we achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.
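A footprint compression ratio like the one above comes from storing weights at low bit-width instead of float32. The following is a minimal illustrative sketch (not the paper's actual method) of uniform symmetric weight quantization and the resulting storage ratio; the function names and 2-bit setting are assumptions for illustration.

```python
import numpy as np

def quantize(w, n_bits=2):
    # Uniform symmetric quantization: map the max absolute weight
    # to the largest representable integer level.
    levels = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from integers + one scale.
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize(w, n_bits=2)
# Storage per weight drops from 32 bits to 2 bits:
print(32 / 2)  # 16.0x reduction from the weight tensors alone
```

Reported ratios such as 16.5x also account for shared embeddings and other per-tensor overheads, which is why they differ slightly from the raw bit-width ratio.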
no code implementations • 24 Feb 2022 • Zhize Wu, Huanyi Li, XiaoFeng Wang, Zijun Wu, Le Zou, Lixiang Xu, Ming Tan
Household garbage images typically feature complex backgrounds, variable illumination, diverse angles, and changeable shapes, all of which make garbage image classification difficult.
no code implementations • EMNLP (spnlp) 2020 • Ke Tran, Ming Tan
Finally, we use an auxiliary parser (AP) to filter the generated utterances.
no code implementations • 12 Mar 2020 • Zhize Wu, Thomas Weise, Le Zou, Fei Sun, Ming Tan
Differing from previous studies, we propose a new method called Denoising Autoencoder with Temporal and Categorical Constraints (DAE_CTC) to study skeletal representation from the perspective of skeleton reconstruction.
no code implementations • IJCNLP 2019 • Ming Tan, Dakuo Wang, Yupeng Gao, Haoyu Wang, Saloni Potdar, Xiaoxiao Guo, Shiyu Chang, Mo Yu
In multi-party chat, it is common for multiple conversations to occur concurrently, leading to intermingled conversation threads in chat logs.
1 code implementation • IJCNLP 2019 • Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang, Mo Yu
Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task.
no code implementations • 4 Jun 2019 • Dakuo Wang, Haoyu Wang, Mo Yu, Zahra Ashktorab, Ming Tan
We cross-referenced 117 project teams and their team-based Slack channels and identified 57 teams that appeared in both datasets, then we built a regression model to reveal the relationship between these group communication styles and the project team performance.
1 code implementation • ACL 2019 • Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, Saloni Potdar
Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph.
Ranked #10 on Relation Extraction on SemEval-2010 Task 8
no code implementations • COLING 2016 • Lidan Wang, Ming Tan, Jiawei Han
In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and a scalability standpoint.
3 code implementations • 11 Feb 2016 • Cicero dos Santos, Ming Tan, Bing Xiang, Bo-Wen Zhou
In this work, we propose Attentive Pooling (AP), a two-way attention mechanism for discriminative model training.
Ranked #2 on Question Answering on SemEvalCQA
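The two-way attention idea in the Attentive Pooling abstract can be sketched in a few lines: a soft-alignment matrix between question and answer token representations drives the pooling on both sides. This is a minimal numpy illustration; the dimensions and the bilinear parameter U are assumptions for the sketch, not the paper's exact configuration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(Q, A, U):
    # Q: (n, d) question token vectors; A: (m, d) answer token vectors;
    # U: (d, d) bilinear parameter (assumed, learned in practice).
    G = np.tanh(Q @ U @ A.T)          # (n, m) soft-alignment scores
    sigma_q = softmax(G.max(axis=1))  # importance of each question token
    sigma_a = softmax(G.max(axis=0))  # importance of each answer token
    r_q = Q.T @ sigma_q               # attention-pooled question vector
    r_a = A.T @ sigma_a               # attention-pooled answer vector
    return r_q, r_a

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))
A = rng.normal(size=(7, 8))
U = rng.normal(size=(8, 8))
r_q, r_a = attentive_pooling(Q, A, U)
print(r_q.shape, r_a.shape)  # (8,) (8,)
```

The key property is the "two-way" coupling: the answer influences which question tokens are pooled, and vice versa, rather than pooling each side independently.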
2 code implementations • 12 Nov 2015 • Ming Tan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou
One direction is to define a more composite representation for questions and answers by combining a convolutional neural network with the basic framework.
no code implementations • NeurIPS 2013 • Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang
We propose DirectBoost, a boosting method based on greedy coordinate descent that builds an ensemble of weak classifiers by directly minimizing empirical classification error over labeled training examples. Once the training classification error reaches a local coordinatewise minimum, DirectBoost switches to a greedy coordinate ascent algorithm that continues to add weak classifiers to maximize any arbitrarily defined target margins, until it reaches a local coordinatewise maximum of the margins in a certain sense.
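The descent phase described above can be illustrated with a toy sketch: greedily add, one coordinate at a time, the weak classifier from a fixed pool that most reduces the ensemble's empirical 0-1 error, stopping at a local coordinatewise minimum. This is a simplified illustration under assumed unit step sizes, not the paper's full algorithm (which also includes the margin-maximization ascent phase).

```python
import numpy as np

def zero_one_error(score, y):
    # Fraction of examples whose ensemble score disagrees in sign
    # with the label y in {-1, +1}.
    return np.mean(np.sign(score) != y)

def greedy_direct_boost(H, y, n_rounds=10):
    # H: (n_weak, n_samples) matrix of weak-classifier outputs in {-1, +1}.
    score = np.zeros(H.shape[1])
    weights = np.zeros(H.shape[0])
    for _ in range(n_rounds):
        # Try a unit step along each coordinate (weak classifier).
        errs = [zero_one_error(score + h, y) for h in H]
        best = int(np.argmin(errs))
        if errs[best] >= zero_one_error(score, y):
            break  # local coordinatewise minimum of the 0-1 error
        score += H[best]
        weights[best] += 1.0
    return weights, zero_one_error(score, y)

H = np.array([[1, -1, 1],   # weak classifier 0: correct on all examples
              [1, 1, 1]])   # weak classifier 1: wrong on example 1
y = np.array([1, -1, 1])
w, err = greedy_direct_boost(H, y)
print(w, err)  # selects classifier 0; training error reaches 0.0
```

Minimizing the 0-1 error directly, rather than a convex surrogate as in AdaBoost, is what motivates the coordinatewise local-optimum stopping criterion.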