Search Results for author: Jianing Wang

Found 27 papers, 13 papers with code

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

1 code implementation21 Mar 2024 Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.

CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve Long-tail Recommendation

no code implementations11 Mar 2024 Junda Wu, Cheng-Chun Chang, Tong Yu, Zhankui He, Jianing Wang, Yupeng Hou, Julian McAuley

Based on the retrieved user-item interactions, the LLM can analyze shared and distinct preferences among users, and summarize the patterns indicating which types of users would be attracted by certain items.

Recommendation Systems Reinforcement Learning (RL) +1

InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment

1 code implementation13 Feb 2024 Jianing Wang, Junda Wu, Yupeng Hou, Yao Liu, Ming Gao, Julian McAuley

In this paper, we propose InstructGraph, a framework that empowers LLMs with the abilities of graph reasoning and generation by instruction tuning and preference alignment.


Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding

1 code implementation19 Oct 2023 Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li

The recent success of large pre-trained language models (PLMs) hinges heavily on massive labeled data, and they typically yield inferior performance in low-resource scenarios.

Prompting Large Language Models with Chain-of-Thought for Few-Shot Knowledge Base Question Generation

no code implementations12 Oct 2023 Yuanyuan Liang, Jianing Wang, Hanlun Zhu, Lei Wang, Weining Qian, Yunshi Lan

Inspired by Chain-of-Thought (CoT) prompting, an in-context learning strategy for reasoning, we formulate the KBQG task as a reasoning problem, where the generation of a complete question is split into a series of sub-question generation steps.
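Under that formulation, producing a question becomes an iterative loop over sub-questions; a schematic sketch of the idea (the `generate_subquestion` and `compose` callables stand in for LLM calls and are purely illustrative, not the paper's implementation):

```python
def generate_question(kb_facts, generate_subquestion, compose):
    """Build a complete question step by step: each KB fact yields a
    sub-question, and the partial chain conditions the next step."""
    sub_questions = []
    for fact in kb_facts:
        # Each step sees the current fact plus the chain so far.
        sub_questions.append(generate_subquestion(fact, sub_questions))
    # Fuse the chain of sub-questions into one complete question.
    return compose(sub_questions)

# Toy stand-ins for the two LLM calls:
subq = lambda fact, prev: f"what is the {fact[1]} {fact[0]}?"
fuse = lambda subs: " and then ".join(subs)
q = generate_question(
    [("Mount Fuji", "country of"), ("that country", "capital of")],
    subq, fuse,
)
```

The point of the sketch is only the control flow: sub-questions are generated sequentially, each conditioned on the previous ones, before being composed into the final question.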

In-Context Learning Question Generation +1

Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning

no code implementations26 Sep 2023 Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao

Large language models (LLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, eliminating the need for parameter updates and achieving competitive performance.
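The ICL setup described here amounts to packing a few labeled examples into the prompt itself; a minimal sketch of that prompt construction (the sentiment task, field names, and labels are illustrative, not the paper's setup):

```python
def build_icl_prompt(demonstrations, query):
    """Concatenate labeled demonstrations and an unlabeled query into a
    single text prompt; the LLM predicts the query's label from context
    alone, with no parameter updates."""
    lines = []
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [("The plot was gripping.", "positive"),
         ("A tedious, forgettable film.", "negative")]
prompt = build_icl_prompt(demos, "I enjoyed every minute.")
```

The prompt ends right before the label slot, so the model's next-token continuation serves as the prediction.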

Few-Shot Learning In-Context Learning +3

TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification

no code implementations29 Aug 2023 Jianing Wang, Chengyu Wang, Cen Chen, Ming Gao, Jun Huang, Aoying Zhou

We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.

Few-Shot Learning Few-Shot Text Classification +1

Boosting Language Models Reasoning with Chain-of-Knowledge Prompting

no code implementations10 Jun 2023 Jianing Wang, Qiushi Sun, Nuo Chen, Xiang Li, Ming Gao

To mitigate this brittleness, we propose novel Chain-of-Knowledge (CoK) prompting, which aims to elicit LLMs to generate explicit pieces of knowledge evidence in the form of structured triples.
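A hypothetical prompt template in that spirit, asking the model to state its evidence as (subject, relation, object) triples before answering (the wording is an assumption, not taken from the paper):

```python
def chain_of_knowledge_prompt(question):
    """Ask the model to list explicit evidence triples before
    committing to an answer, so the final answer can be checked
    against the stated knowledge."""
    return (
        f"Question: {question}\n"
        "First, list the knowledge you rely on as triples of the form\n"
        "(subject, relation, object), one per line.\n"
        "Then give the final answer on a line starting with 'Answer:'."
    )

prompt = chain_of_knowledge_prompt("Which country is Mount Fuji in?")
```

Because the evidence is emitted in a structured form, it can be parsed and verified separately from the free-text answer.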

Arithmetic Reasoning

TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills

1 code implementation23 May 2023 Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao

To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.

Clone Detection Code Summarization +2

HugNLP: A Unified and Comprehensive Library for Natural Language Processing

2 code implementations28 Feb 2023 Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao

In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.

Uncertainty-aware Self-training for Low-resource Neural Sequence Labeling

no code implementations17 Feb 2023 Jianing Wang, Chengyu Wang, Jun Huang, Ming Gao, Aoying Zhou

Neural sequence labeling (NSL) aims at assigning labels to input language tokens, covering a broad range of applications such as named entity recognition (NER) and slot filling.

Named Entity Recognition +3

Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization

no code implementations5 Jan 2023 Chengchun Shi, Zhengling Qi, Jianing Wang, Fan Zhou

When the initial policy is consistent, under some mild conditions, our method will yield a policy whose value converges to the optimal one at a faster rate than the initial policy, achieving the desired "value enhancement" property.

Decision Making Reinforcement Learning +1

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding

1 code implementation16 Oct 2022 Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao

In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM.

Language Modelling Natural Language Understanding

Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training

1 code implementation11 Oct 2022 Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He

Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.

Knowledge Graphs Language Modelling +2

Towards Unified Prompt Tuning for Few-shot Text Classification

1 code implementation11 May 2022 Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.

Few-Shot Learning Few-Shot Text Classification +4

Math-KG: Construction and Applications of Mathematical Knowledge Graph

1 code implementation8 May 2022 Jianing Wang

Recently, the explosion of online education platforms has made it easy to access online education resources.


KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering

1 code implementation6 May 2022 Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao

Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).

Contrastive Learning Extractive Question-Answering +5

Atlas-Based Segmentation of Intracochlear Anatomy in Metal Artifact Affected CT Images of the Ear with Co-trained Deep Neural Networks

no code implementations8 Jul 2021 Jianing Wang, Dingjie Su, Yubo Fan, Srijata Chakravorti, Jack H. Noble, Benoit M. Dawant

The segmentation of the ICA in the Post-CT images is subsequently obtained by transferring the predefined segmentation meshes of the ICA in the atlas image to the Post-CT images using the corresponding DDFs generated by the trained registration networks.


Non-Crossing Quantile Regression for Distributional Reinforcement Learning

no code implementations NeurIPS 2020 Fan Zhou, Jianing Wang, Xingdong Feng

Distributional reinforcement learning (DRL) estimates the distribution over future returns instead of the mean to more efficiently capture the intrinsic uncertainty of MDPs.
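Quantile-based DRL of this kind typically learns a set of return quantiles with a quantile (pinball) regression loss; a NumPy sketch of that loss plus one simple way to keep quantile estimates from crossing (sorting), which is only a stand-in for the paper's constrained approach:

```python
import numpy as np

def pinball_loss(pred_quantiles, target_samples, taus):
    """Quantile-regression loss: pred_quantiles has one estimate per
    quantile level in taus; target_samples is a batch of sampled
    returns. Under-estimates are weighted by tau, over-estimates
    by (1 - tau)."""
    # diff[i, k] = target_i - predicted quantile_k
    diff = target_samples[:, None] - pred_quantiles[None, :]
    loss = np.where(diff >= 0.0,
                    taus[None, :] * diff,
                    (taus[None, :] - 1.0) * diff)
    return loss.mean()

def enforce_non_crossing(pred_quantiles):
    """Crudest fix for crossed quantile estimates: sort them so the
    implied return CDF is monotone (the paper enforces this within
    the model itself rather than by post-hoc sorting)."""
    return np.sort(pred_quantiles)

taus = np.array([0.25, 0.5, 0.75])
crossed = np.array([1.0, 0.2, 0.8])    # quantile estimates out of order
fixed = enforce_non_crossing(crossed)  # -> [0.2, 0.8, 1.0]
loss = pinball_loss(fixed, np.array([0.5, 1.5]), taus)
```

The asymmetric weighting in the loss is what makes each output converge to its own quantile of the return distribution; without a monotonicity constraint, independently trained quantiles can cross, which the paper addresses.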

Atari Games Distributional Reinforcement Learning +3

RH-Net: Improving Neural Relation Extraction via Reinforcement Learning and Hierarchical Relational Searching

1 code implementation27 Oct 2020 Jianing Wang

We then propose a hierarchical relational searching module to share semantics from correlated instances between data-rich and data-poor classes.

Ranked #1 on Denoising on iris

Denoising Reinforcement Learning +3

Validation of image-guided cochlear implant programming techniques

no code implementations23 Sep 2019 Yiyuan Zhao, Jianing Wang, Rui Li, Robert F. Labadie, Benoit M. Dawant, Jack H. Noble

In this article, we create a ground truth dataset with conventional CT and micro-CT images of 35 temporal bone specimens to both rigorously characterize the accuracy of these two steps and assess how inaccuracies in these steps affect the overall results.

Anatomy Segmentation
