Search Results for author: Peiqi Wang

Found 10 papers, 0 papers with code

Diversity Measurement and Subset Selection for Instruction Tuning Datasets

no code implementations • 4 Feb 2024 • Peiqi Wang, Yikang Shen, Zhen Guo, Matthew Stallone, Yoon Kim, Polina Golland, Rameswar Panda

Our experiments demonstrate that the proposed diversity measure in the normalized weight gradient space is correlated with downstream instruction-following performance.

Instruction Following · Point Processes
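The abstract above refers to a diversity measure in a normalized weight-gradient space. As a hedged illustration (a simplified stand-in, not the paper's exact measure), one simple proxy is the mean pairwise cosine distance between L2-normalized per-example gradient vectors:

```python
import numpy as np

def gradient_space_diversity(grads: np.ndarray) -> float:
    """Illustrative diversity score for a set of per-example gradient
    vectors (rows of `grads`, at least two): mean pairwise cosine
    distance after L2 normalization. Not the paper's exact measure."""
    # Normalize each gradient vector to unit length.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    unit = grads / np.maximum(norms, 1e-12)
    # Cosine-similarity matrix; diversity = 1 - mean off-diagonal similarity.
    sim = unit @ unit.T
    n = sim.shape[0]
    off_diag = (sim.sum() - np.trace(sim)) / (n * (n - 1))
    return 1.0 - off_diag
```

Identical gradients score 0 (no diversity); mutually orthogonal gradients score 1.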

Improving Small Language Models on PubMedQA via Generative Data Augmentation

no code implementations • 12 May 2023 • Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu

Large Language Models (LLMs) have made remarkable advancements in the field of natural language processing.

Data Augmentation · Question Answering

Sample-Specific Debiasing for Better Image-Text Models

no code implementations • 25 Apr 2023 • Peiqi Wang, Yingcheng Liu, Ching-Yun Ko, William M. Wells, Seth Berkowitz, Steven Horng, Polina Golland

Self-supervised representation learning on image-text data facilitates crucial medical applications, such as image classification, visual grounding, and cross-modal retrieval.

Contrastive Learning · Cross-Modal Retrieval +4
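The abstract mentions self-supervised representation learning on image-text data; the debiasing itself is sample-specific, but the underlying objective in this line of work is typically a CLIP-style symmetric contrastive (InfoNCE) loss. A minimal sketch of that baseline loss (illustrative only; the temperature `tau` and the symmetric formulation are common defaults, not details taken from the paper):

```python
import numpy as np

def contrastive_loss(img: np.ndarray, txt: np.ndarray, tau: float = 0.07) -> float:
    """CLIP-style InfoNCE loss over batches of paired image/text embeddings
    (row i of `img` pairs with row i of `txt`). Illustrative baseline."""
    # L2-normalize both embedding sets.
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau  # pairwise similarity logits
    # Cross-entropy with matched pairs on the diagonal, in both directions.
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_i2t = -np.mean(np.diag(log_p_i2t))
    loss_t2i = -np.mean(np.diag(log_p_t2i))
    return 0.5 * (loss_i2t + loss_t2i)
```

Correctly matched pairs give a lower loss than mismatched ones, which is what drives the cross-modal alignment.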

Using Multiple Instance Learning to Build Multimodal Representations

no code implementations • 11 Dec 2022 • Peiqi Wang, William M. Wells, Seth Berkowitz, Steven Horng, Polina Golland

Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval.

Contrastive Learning · Cross-Modal Retrieval +5

QGAN: Quantize Generative Adversarial Networks to Extreme low-bits

no code implementations • 25 Sep 2019 • Peiqi Wang, Yu Ji, Xinfeng Xie, Yongqiang Lyu, Dongsheng Wang, Yuan Xie

Despite the success of model reduction for convolutional neural networks (CNNs), neural network quantization methods have not yet been studied for GANs, which face two main issues: the effectiveness of the quantization algorithms themselves and the instability of GAN training.

Quantization
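QGAN targets extreme low-bit quantization of GAN weights. As a hedged point of reference (a generic baseline, not QGAN's actual algorithm, which specifically addresses GAN training instability), plain uniform min-max quantization to `bits` bits looks like:

```python
import numpy as np

def quantize_uniform(w: np.ndarray, bits: int = 2) -> np.ndarray:
    """Generic uniform min-max quantization to 2**bits levels.
    Illustrative baseline only, not QGAN's method."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    # Snap each weight to the nearest of the evenly spaced levels,
    # then map back to the original range (de-quantize).
    q = np.round((w - lo) / scale)
    return q * scale + lo
```

At 1 bit every weight collapses to one of two values, which is why naive schemes like this tend to destroy GAN output quality and motivate GAN-specific quantization.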

FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture

no code implementations • 28 Jan 2019 • Yu Ji, Youyang Zhang, Xinfeng Xie, Shuangchen Li, Peiqi Wang, Xing Hu, Youhui Zhang, Yuan Xie

In this paper, we propose a full system stack solution, composed of a reconfigurable architecture design, Field Programmable Synapse Array (FPSA) and its software system including neural synthesizer, temporal-to-spatial mapper, and placement & routing.

QGAN: Quantized Generative Adversarial Networks

no code implementations • 24 Jan 2019 • Peiqi Wang, Dongsheng Wang, Yu Ji, Xinfeng Xie, Haoxuan Song, XuXin Liu, Yongqiang Lyu, Yuan Xie

The intensive computation and memory requirements of generative adversarial networks (GANs) hinder their real-world deployment on edge devices such as smartphones.

Quantization

Programmable Neural Network Trojan for Pre-Trained Feature Extractor

no code implementations • 23 Jan 2019 • Yu Ji, Zixin Liu, Xing Hu, Peiqi Wang, Youhui Zhang

Existing studies have explored the outsourced-training and transfer-learning attack scenarios on small, domain-specific datasets with limited numbers of fixed target classes.

Transfer Learning

HitNet: Hybrid Ternary Recurrent Neural Network

no code implementations • NeurIPS 2018 • Peiqi Wang, Xinfeng Xie, Lei Deng, Guoqi Li, Dongsheng Wang, Yuan Xie

For example, we improve the perplexity per word (PPW) of a ternary LSTM on the Penn Tree Bank (PTB) corpus from 126 (the state-of-the-art result, to the best of our knowledge) to 110.3, with a full-precision model at 97.2, and of a ternary GRU from 142 to 113.5, with a full-precision model at 102.7.

Quantization
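HitNet constrains recurrent weights to three levels. A common threshold-based ternarization scheme (TWN-style; HitNet's hybrid scheme differs in detail, so this is only a sketch of the general technique) can be written as:

```python
import numpy as np

def ternarize(w: np.ndarray, t: float = 0.7) -> np.ndarray:
    """Threshold-based ternarization to {-alpha, 0, +alpha}.
    TWN-style illustration; `t` = 0.7 is a conventional default,
    not a value taken from the HitNet paper."""
    delta = t * np.mean(np.abs(w))        # magnitude threshold
    mask = np.abs(w) > delta              # weights that stay nonzero
    # Scaling factor: mean magnitude of the surviving weights.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask
```

Small-magnitude weights are zeroed and the rest share one learned scale per tensor, so each weight needs only ~1.6 bits of storage.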
