Search Results for author: Weishi Wang

Found 5 papers, 2 papers with code

Response Selection for Multi-Party Conversations with Dynamic Topic Tracking

no code implementations · EMNLP 2020 · Weishi Wang, Steven C.H. Hoi, Shafiq Joty

While participants in a multi-party, multi-turn conversation simultaneously engage in multiple conversation topics, existing response selection methods have been developed mainly for a two-party, single-conversation scenario.

Disentanglement · Multi-Task Learning · +1

RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair

no code implementations · 12 Sep 2023 · Weishi Wang, Yue Wang, Shafiq Joty, Steven C. H. Hoi

Automatic program repair (APR) is crucial for reducing developers' manual debugging effort and improving software reliability.

Language Modelling · Program Repair · +1

Retrieving Multimodal Information for Augmented Generation: A Survey

no code implementations · 20 Mar 2023 · Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty

As Large Language Models (LLMs) have become popular, an important trend has emerged of using multimodality to augment their generation ability, enabling LLMs to better interact with the world.

Retrieval

xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval

3 code implementations · 6 Mar 2023 · Mohammad Abdullah Matin Khan, M Saiful Bari, Xuan Long Do, Weishi Wang, Md Rizwan Parvez, Shafiq Joty

Recently, pre-trained large language models (LLMs) have shown impressive abilities in generating code from natural language descriptions, repairing buggy code, translating code between languages, and retrieving relevant code segments. (A loading sketch for the benchmark follows below.)

Program Repair · Retrieval
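
For readers who want to inspect the benchmark data, here is a hypothetical loading sketch using the Hugging Face datasets library; the Hub ID "NTU-NLP-sg/xCodeEval" and the config name "program_synthesis" are assumptions and should be checked against the official xCodeEval release.

```python
# Hypothetical xCodeEval loading sketch -- dataset ID and config name
# below are assumptions, not confirmed by this listing.
from datasets import load_dataset

ds = load_dataset(
    "NTU-NLP-sg/xCodeEval",   # assumed dataset ID on the Hugging Face Hub
    "program_synthesis",      # assumed task/config name
    split="train",
    trust_remote_code=True,   # in case the dataset ships a custom loading script
)
print(ds[0])  # inspect one example from the chosen task
```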

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation

5 code implementations · EMNLP 2021 · Yue Wang, Weishi Wang, Shafiq Joty, Steven C. H. Hoi

We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed by developer-assigned identifiers. (A brief usage sketch follows below.)

Clone Detection · Code Summarization · +4
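
A minimal usage sketch for CodeT5, assuming the publicly released Salesforce/codet5-base checkpoint on the Hugging Face Hub and the transformers library; it simply asks the model to fill in a masked identifier span, and is an illustration rather than the paper's training pipeline.

```python
# Minimal CodeT5 inference sketch (assumes the Salesforce/codet5-base
# checkpoint and the Hugging Face `transformers` library).
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Mask an identifier span with the T5-style sentinel token and let the
# model predict what should fill it.
code = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(code, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```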
