no code implementations • 31 Oct 2023 • Wei Zhu, Ming Tan
Prompt tuning prepends a soft prompt to the input embeddings or hidden states and only optimizes the prompt to adapt pretrained models (PTMs) to downstream tasks.
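A minimal sketch of the soft-prompt mechanism described above, in PyTorch; the backbone is assumed to be any HuggingFace-style encoder exposing `get_input_embeddings()` and accepting `inputs_embeds`, and the prompt length is an illustrative choice rather than the paper's configuration.

```python
# Minimal soft-prompt-tuning sketch (illustrative; backbone and prompt length are assumptions).
import torch
import torch.nn as nn


class SoftPromptModel(nn.Module):
    def __init__(self, backbone, prompt_len=20):
        super().__init__()
        self.backbone = backbone
        # Freeze every pretrained parameter; only the soft prompt is trained.
        for p in self.backbone.parameters():
            p.requires_grad = False
        emb = backbone.get_input_embeddings()
        # Learnable soft prompt, initialized from random vocabulary embeddings.
        init_ids = torch.randint(0, emb.num_embeddings, (prompt_len,))
        self.soft_prompt = nn.Parameter(emb(init_ids).detach().clone())

    def forward(self, input_ids, attention_mask):
        token_emb = self.backbone.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt to the input embeddings.
        inputs_embeds = torch.cat([prompt, token_emb], dim=1)
        prompt_mask = torch.ones(batch, prompt.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(inputs_embeds=inputs_embeds, attention_mask=mask)
```

Only `soft_prompt` receives gradients, so the pretrained model itself stays untouched during adaptation.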
no code implementations • 5 Jul 2023 • Prateek Yadav, Qing Sun, Hantian Ding, Xiaopeng Li, Dejiao Zhang, Ming Tan, Xiaofei Ma, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Mohit Bansal, Bing Xiang
Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance.
1 code implementation • 13 Apr 2023 • Wenbin Zou, Hongxia Gao, Liang Chen, Yunchen Zhang, Mingchao Jiang, Zhongxin Yu, Ming Tan
Stereo image super-resolution aims to improve the quality of high-resolution stereo image pairs by exploiting complementary information across views.
no code implementations • 30 Jan 2023 • Zhize Wu, Changjiang Du, Le Zou, Ming Tan, Tong Xu, Fan Cheng, Fudong Nian, Thomas Weise
This so-called domain shift leads to a significant performance drop in image classification.
1 code implementation • 26 Oct 2022 • Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang
Using these benchmarks, we assess the performance of code generation models in a multi-lingual fashion and find that language models generalize to out-of-domain languages, that multi-lingual models outperform mono-lingual ones, that few-shot prompting can teach the model new languages, and that zero-shot translation abilities emerge even in mono-lingual settings.
no code implementations • 3 Oct 2022 • Nihal Jain, Dejiao Zhang, Wasi Uddin Ahmad, Zijian Wang, Feng Nan, Xiaopeng Li, Ming Tan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Xiaofei Ma, Bing Xiang
Specifically, we attain $44\%$ relative improvement on the Semantic Textual Similarity tasks and $34\%$ on Code-to-Code Search tasks.
3 code implementations • 23 Aug 2022 • Ren Yang, Radu Timofte, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu Li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei LI, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng
The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.
2 code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth
Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.
no code implementations • 24 Feb 2022 • Zhize Wu, Huanyi Li, XiaoFeng Wang, Zijun Wu, Le Zou, Lixiang Xu, Ming Tan
Household garbage images usually involve complex backgrounds, variable illumination, diverse angles, and changeable shapes, which makes garbage image classification difficult.
no code implementations • EMNLP (spnlp) 2020 • Ke Tran, Ming Tan
Finally, we use an auxiliary parser (AP) to filter the generated utterances.
no code implementations • 12 Mar 2020 • Zhize Wu, Thomas Weise, Le Zou, Fei Sun, Ming Tan
Differing from previous studies, we propose a new method called Denoising Autoencoder with Temporal and Categorical Constraints (DAE_CTC) to learn skeletal representations from the perspective of skeleton reconstruction.
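A minimal denoising-autoencoder sketch for skeleton feature vectors, to illustrate the reconstruction view mentioned above; the temporal and categorical constraints of DAE_CTC are not implemented here (they would enter as additional loss terms), and the layer sizes are assumptions.

```python
# Minimal denoising-autoencoder sketch (illustrative; constraints omitted, sizes assumed).
import torch
import torch.nn as nn


class DenoisingAE(nn.Module):
    def __init__(self, in_dim=75, hidden=128, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x):
        # Corrupt the input, then reconstruct the clean skeleton vector.
        noisy = x + self.noise_std * torch.randn_like(x)
        code = self.encoder(noisy)
        recon = self.decoder(code)
        return recon, code


# Training objective: reconstruction loss against the clean input, e.g.
# loss = nn.functional.mse_loss(recon, x)
```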
no code implementations • IJCNLP 2019 • Ming Tan, Dakuo Wang, Yupeng Gao, Haoyu Wang, Saloni Potdar, Xiaoxiao Guo, Shiyu Chang, Mo Yu
In multi-party chat, it is common for multiple conversations to occur concurrently, leading to intermingled conversation threads in chat logs.
1 code implementation • IJCNLP 2019 • Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang, Mo Yu
Out-of-domain (OOD) detection for low-resource text classification is a realistic but understudied task.
no code implementations • 4 Jun 2019 • Dakuo Wang, Haoyu Wang, Mo Yu, Zahra Ashktorab, Ming Tan
We cross-referenced 117 project teams and their team-based Slack channels and identified 57 teams that appeared in both datasets; we then built a regression model to reveal the relationship between these group communication styles and project team performance.
1 code implementation • ACL 2019 • Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, Saloni Potdar
Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph.
Ranked #19 on Relation Extraction on SemEval-2010 Task 8
no code implementations • COLING 2016 • Lidan Wang, Ming Tan, Jiawei Han
In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view.
3 code implementations • 11 Feb 2016 • Cicero dos Santos, Ming Tan, Bing Xiang, Bo-Wen Zhou
In this work, we propose Attentive Pooling (AP), a two-way attention mechanism for discriminative model training.
Ranked #2 on Question Answering on SemEvalCQA
2 code implementations • 12 Nov 2015 • Ming Tan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou
One direction is to define a more composite representation for questions and answers by combining a convolutional neural network with the basic framework.
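An illustrative sketch of the "combine a CNN with the basic framework" direction mentioned above: a 1-D convolution with max pooling turns token (or recurrent) states into fixed-size question/answer vectors, scored by cosine similarity with a pairwise hinge loss. The filter count, kernel size, and margin are assumptions, not the paper's settings.

```python
# Illustrative CNN encoder and pairwise ranking loss for answer selection.
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    def __init__(self, in_dim=300, n_filters=200, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, x):
        # x: (batch, seq_len, in_dim) token (or LSTM) representations.
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, n_filters, seq_len)
        return h.max(dim=2).values                    # (batch, n_filters)


def pairwise_hinge_loss(q, a_pos, a_neg, margin=0.2):
    # Push the positive answer above the negative one by a margin.
    s_pos = torch.cosine_similarity(q, a_pos, dim=1)
    s_neg = torch.cosine_similarity(q, a_neg, dim=1)
    return torch.clamp(margin - s_pos + s_neg, min=0).mean()
```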
no code implementations • NeurIPS 2013 • Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang
We propose DirectBoost, a boosting method based on a greedy coordinate descent algorithm that builds an ensemble of weak classifiers by directly minimizing the empirical classification error over labeled training examples. Once the training classification error reaches a local coordinatewise minimum, DirectBoost switches to a greedy coordinate ascent algorithm that continues to add weak classifiers so as to maximize any targeted, arbitrarily defined margins, until reaching a local coordinatewise maximum of the margins in a certain sense.
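A heavily simplified sketch of the error-minimization phase described above: weak classifiers (decision stumps) are added greedily, choosing at each step the stump and weight that most reduce the ensemble's 0-1 training error. The margin-maximization (coordinate ascent) phase is omitted, and the stump pool and weight grid are illustrative assumptions.

```python
# Simplified greedy error-minimization sketch (labels y are assumed to be in {-1, +1}).
import numpy as np


def stump_pool(X):
    # Candidate weak classifiers: threshold tests on single features.
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            yield lambda Z, j=j, t=t: np.where(Z[:, j] > t, 1, -1)


def greedy_error_boost(X, y, rounds=20, weight_grid=(0.25, 0.5, 1.0)):
    score = np.zeros(len(y))          # running ensemble score
    ensemble = []
    for _ in range(rounds):
        best = None
        for h in stump_pool(X):
            pred = h(X)
            for w in weight_grid:
                err = np.mean(np.sign(score + w * pred) != y)
                if best is None or err < best[0]:
                    best = (err, h, w, pred)
        err, h, w, pred = best
        ensemble.append((w, h))
        score += w * pred
    return ensemble  # predict with sign(sum(w * h(X) for w, h in ensemble))
```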