Search Results for author: Jiawei Wu

Found 19 papers, 3 papers with code

VQSynery: Robust Drug Synergy Prediction With Vector Quantization Mechanism

no code implementations 5 Mar 2024 Jiawei Wu, Mingyuan Yan, Dianbo Liu

The pursuit of optimizing cancer therapies is significantly advanced by the accurate prediction of drug synergy.

Quantization

GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding

no code implementations 3 Feb 2024 Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You

Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs.
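For readers unfamiliar with the framework the abstract mentions, below is a minimal sketch of generic greedy speculative decoding (draft-then-verify). The callables `draft_step` and `target_step`, and the simple acceptance rule, are illustrative assumptions; they do not reflect the GliDe/CaPE method itself.

```python
# Minimal, greedy draft-then-verify sketch of speculative decoding.
# `draft_step` and `target_step` are assumed callables that take a token
# list and return the predicted next token; they stand in for the small
# draft model and the large target model. (In a real system the target
# verifies all drafted positions in a single forward pass.)

def speculative_decode(target_step, draft_step, prompt, k=4, max_new=64):
    tokens = list(prompt)
    n_prompt = len(tokens)
    while len(tokens) - n_prompt < max_new:
        # 1) The cheap draft model proposes k tokens autoregressively.
        proposal = list(tokens)
        for _ in range(k):
            proposal.append(draft_step(proposal))

        # 2) The target model checks the proposal position by position,
        #    keeping the longest prefix it agrees with.
        for i in range(k):
            prefix = proposal[:len(tokens) + i]
            target_token = target_step(prefix)
            if target_token != proposal[len(tokens) + i]:
                # Disagreement: discard the rest, keep the target's token.
                tokens = prefix + [target_token]
                break
        else:
            # All k drafted tokens were accepted.
            tokens = proposal
    return tokens
```

Because at least one token from the target model is kept per round, the loop always makes progress; the speedup comes from accepting several draft tokens per expensive target pass.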

Class-Specific Distribution Alignment for Semi-Supervised Medical Image Classification

no code implementations 29 Jul 2023 Zhongzheng Huang, Jiawei Wu, Tao Wang, Zuoyong Li, Anastasia Ioannou

Despite the success of deep neural networks in medical image classification, the problem remains challenging as data annotation is time-consuming, and the class distribution is imbalanced due to the relative scarcity of diseases.

Image Classification Semi-supervised Medical Image Classification

dugMatting: Decomposed-Uncertainty-Guided Matting

1 code implementation 2 Jun 2023 Jiawei Wu, Changqing Zhang, Zuoyong Li, Huazhu Fu, Xi Peng, Joey Tianyi Zhou

Cutting out an object and estimating its opacity mask, known as image matting, is a key task in image and video editing.

Image Matting Video Editing
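As background for the task described above, matting is usually framed through the compositing equation I = αF + (1 − α)B. The toy snippet below only illustrates applying an estimated alpha matte to composite a foreground onto a new background; the `composite` helper and the placeholder arrays are assumptions, not dugMatting's interface.

```python
import numpy as np

# Toy illustration of the compositing equation I = alpha * F + (1 - alpha) * B,
# where alpha is the per-pixel opacity matte a matting model would estimate.

def composite(foreground, background, alpha):
    alpha = alpha[..., None]              # broadcast the (H, W) matte over RGB
    return alpha * foreground + (1.0 - alpha) * background

fg = np.random.rand(64, 64, 3)            # stand-in foreground image
bg = np.zeros((64, 64, 3))                # stand-in replacement background
alpha = np.random.rand(64, 64)            # stand-in predicted alpha matte
new_image = composite(fg, bg, alpha)
```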

Continual Transfer Learning for Cross-Domain Click-Through Rate Prediction at Taobao

no code implementations 11 Aug 2022 Lixin Liu, Yanling Wang, Tianming Wang, Dong Guan, Jiawei Wu, Jingxu Chen, Rong Xiao, Wenxiang Zhu, Fei Fang

Therefore, it is crucial to perform cross-domain CTR prediction to transfer knowledge from large domains to small domains to alleviate the data sparsity issue.

Click-Through Rate Prediction Recommendation Systems +1

Improving Robustness and Generality of NLP Models Using Disentangled Representations

no code implementations 21 Sep 2020 Jiawei Wu, Xiaoya Li, Xiang Ao, Yuxian Meng, Fei Wu, Jiwei Li

We show that models trained with the proposed criteria provide better robustness and domain adaptation ability in a wide range of supervised learning tasks.

Domain Adaptation Representation Learning

Analyzing COVID-19 on Online Social Media: Trends, Sentiments and Emotions

no code implementations 29 May 2020 Xiaoya Li, Mingxin Zhou, Jiawei Wu, Arianna Yuan, Fei Wu, Jiwei Li

At the time of writing, the ongoing pandemic of coronavirus disease (COVID-19) has caused severe impacts on society, the economy, and people's daily lives.

TWEETQA: A Social Media Focused Question Answering Dataset

no code implementations ACL 2019 Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

With social media becoming increasingly popular, on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge.

Question Answering

Self-Supervised Dialogue Learning

no code implementations ACL 2019 Jiawei Wu, Xin Wang, William Yang Wang

The sequential order of utterances is often meaningful in coherent dialogues, and the order changes of utterances could lead to low-quality and incoherent conversations.

Self-Supervised Learning
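A rough sketch of the kind of order-based self-supervised signal the abstract describes: treat the original utterance order as a positive example and a shuffled order as a negative one. The helper below is hypothetical and omits the discriminator that would actually be trained on these pairs.

```python
import random

# Sketch of an order-based self-supervised signal for dialogues: the
# original utterance order is labeled coherent (1), a shuffled order is
# labeled incoherent (0). Names and the pair format are placeholders.

def make_order_pairs(dialogue):
    """dialogue: list of utterance strings in their original order."""
    shuffled = list(dialogue)
    while len(dialogue) > 1 and shuffled == dialogue:
        random.shuffle(shuffled)
    return [(list(dialogue), 1),   # coherent (original) order
            (shuffled, 0)]         # incoherent (shuffled) order

pairs = make_order_pairs(["Hi!", "Hey, how are you?", "Good, thanks."])
```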

VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research

2 code implementations ICCV 2019 Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, William Yang Wang

We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.

Machine Translation Translation +3

Extract and Edit: An Alternative to Back-Translation for Unsupervised Neural Machine Translation

no code implementations NAACL 2019 Jiawei Wu, Xin Wang, William Yang Wang

The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs.

Sentence Translation +1

Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing

1 code implementation NAACL 2019 Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance.

Entity Typing Inductive Bias

Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video Captioning

no code implementations 7 Nov 2018 Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang

Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus, and do not generalize to open vocabulary scenarios.

Video Captioning

Reinforced Co-Training

no code implementations NAACL 2018 Jiawei Wu, Lei Li, William Yang Wang

However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.

Clickbait Detection General Classification +3
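For contrast with the predetermined selection policy the abstract criticizes, here is a bare-bones, one-directional co-training loop that simply promotes the most confident pseudo-labels each round; the classifier and variable names are placeholders. The paper instead learns the selection policy rather than fixing it in advance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simplified (one-directional) co-training loop with a fixed,
# confidence-based selection heuristic. Everything here is a placeholder
# sketch of the conventional setup, not the Reinforced Co-Training method.

def co_train(X1, X2, y, U1, U2, rounds=5, per_round=10):
    """X1/X2: labeled data under view 1 / view 2; y: labels.
    U1/U2: the same unlabeled pool seen through the two views."""
    for _ in range(rounds):
        m1 = LogisticRegression(max_iter=1000).fit(X1, y)
        m2 = LogisticRegression(max_iter=1000).fit(X2, y)
        if len(U1) == 0:
            break
        # Predetermined policy: move the unlabeled examples view 1 is most
        # confident about into the labeled set with their pseudo-labels.
        conf = m1.predict_proba(U1).max(axis=1)
        pick = np.argsort(-conf)[:per_round]
        X1 = np.vstack([X1, U1[pick]])
        X2 = np.vstack([X2, U2[pick]])
        y = np.concatenate([y, m1.predict(U1[pick])])
        keep = np.setdiff1d(np.arange(len(U1)), pick)
        U1, U2 = U1[keep], U2[keep]
    return m1, m2
```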

Knowledge Representation via Joint Learning of Sequential Text and Knowledge Graphs

no code implementations 22 Sep 2016 Jiawei Wu, Ruobing Xie, Zhiyuan Liu, Maosong Sun

There are two main challenges for constructing knowledge representations from plain texts: (1) How to take full advantage of the sequential contexts of entities in plain texts for KRL.

Informativeness Knowledge Graphs +4
