Search Results for author: Jinpeng Wang

Found 41 papers, 20 papers with code

RAT: Retrieval-Augmented Transformer for Click-Through Rate Prediction

1 code implementation 2 Apr 2024 Yushen Li, Jinpeng Wang, Tao Dai, Jieming Zhu, Jun Yuan, Rui Zhang, Shu-Tao Xia

Predicting click-through rates (CTR) is a fundamental task for Web applications, where a key issue is to devise effective models for feature interactions.

Click-Through Rate Prediction Retrieval

Sequence-level Semantic Representation Fusion for Recommender Systems

1 code implementation 28 Feb 2024 Lanling Xu, Zhen Tian, Bingqian Li, Junjie Zhang, Jinpeng Wang, Mingchen Cai, Wayne Xin Zhao

The core idea of our approach is to perform sequence-level semantic fusion that better integrates global contexts.

Sequential Recommendation

Prompting Large Language Models for Recommender Systems: A Comprehensive Framework and Empirical Analysis

no code implementations 10 Jan 2024 Lanling Xu, Junjie Zhang, Bingqian Li, Jinpeng Wang, Mingchen Cai, Wayne Xin Zhao, Ji-Rong Wen

Regarding the use of LLMs as recommenders, we analyze how public availability, tuning strategies, model architecture, parameter scale, and context length affect recommendation results, following our classification of LLMs.

Prompt Engineering Recommendation Systems

Hypergraph-Guided Disentangled Spectrum Transformer Networks for Near-Infrared Facial Expression Recognition

no code implementations 10 Dec 2023 Bingjun Luo, Haowen Wang, Jinpeng Wang, Junjie Zhu, Xibin Zhao, Yue Gao

Thanks to its strong robustness to illumination variations, near-infrared (NIR) imaging can be an effective and essential complement to visible (VIS) facial expression recognition under low-light or completely dark conditions.

Facial Expression Recognition (FER)

What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Instruction Tuning

1 code implementation 2 Nov 2023 Yifan Du, Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Jinpeng Wang, Chuyuan Wang, Mingchen Cai, Ruihua Song, Ji-Rong Wen

By conducting a comprehensive empirical study, we find that instructions focused on complex visual reasoning tasks are particularly effective in improving the performance of MLLMs on evaluation benchmarks.

Visual Reasoning Zero-shot Generalization

GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval

1 code implementation 8 Oct 2023 Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, Shu-Tao Xia

Current PRVR methods adopt scanning-based clip construction to achieve explicit clip modeling, which is information-redundant and incurs a large storage overhead.

Partially Relevant Video Retrieval Retrieval +1

PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine

1 code implementation 23 Aug 2023 Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, Mingchen Cai

Moreover, to enhance the stability of prompt-effect evaluation, we propose a novel prompt bagging method involving forward and backward thinking, which is superior to majority voting and benefits both feedback and weight calculation in boosting.
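To make the bagging idea concrete, here is a minimal, hypothetical sketch (plain Python, not the authors' implementation, and without the forward/backward-thinking step): a prompt's quality is scored as the average accuracy over bootstrap-resampled subsets of a small dev set, which stabilizes the weight estimate used in boosting. The names `eval_fn` and `dev_set` are placeholders.

```python
import random

def bagged_prompt_score(prompt, dev_set, eval_fn, n_bags=5, seed=0):
    """Average a prompt's accuracy over bootstrap-resampled bags of a dev set.
    dev_set: list of (input, target) pairs; eval_fn(prompt, x, y) -> bool."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_bags):
        bag = [rng.choice(dev_set) for _ in dev_set]          # sample with replacement
        correct = sum(eval_fn(prompt, x, y) for x, y in bag)  # count correct predictions
        scores.append(correct / len(bag))
    return sum(scores) / n_bags                               # stabilized quality estimate
```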

Ensemble Learning Hallucination

MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation

1 code implementation 22 Aug 2023 Jinpeng Wang, Ziyun Zeng, Yunxiao Wang, Yuting Wang, Xingyu Lu, Tianxiang Li, Jun Yuan, Rui Zhang, Hai-Tao Zheng, Shu-Tao Xia

We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation (SR). On the user side, we design a Transformer-based encoder-decoder model, where a contextual encoder learns to capture sequence-level multi-modal user interests while a novel interest-aware decoder is developed to grasp item-modality-interest relations for better sequence representation.
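As a rough illustration of the user-side encoder described above, here is a minimal PyTorch sketch under assumed feature dimensions; it is not the released MISSRec code and omits the interest-aware decoder and the pre-training objectives.

```python
import torch
import torch.nn as nn

class MultiModalSeqEncoder(nn.Module):
    """Fuse pre-extracted text/image item features, then encode the
    interaction sequence with a Transformer to get a user representation."""
    def __init__(self, text_dim=768, img_dim=512, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.img_proj = nn.Linear(img_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, text_feat, img_feat):
        # text_feat: (B, L, text_dim), img_feat: (B, L, img_dim)
        items = self.text_proj(text_feat) + self.img_proj(img_feat)  # simple additive fusion
        hidden = self.encoder(items)                                  # (B, L, d_model)
        return hidden[:, -1]                                          # last position as the user vector
```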

Contrastive Learning Sequential Recommendation +1

Evaluating Object Hallucination in Large Vision-Language Models

2 code implementations 17 May 2023 YiFan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen

Despite the promising progress of LVLMs, we find that they suffer from the object hallucination problem, i.e., they tend to generate descriptions containing objects that are inconsistent with the target images.
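As a toy illustration of what object hallucination means operationally (a simplified string-matching check, not the evaluation protocol proposed in the paper): flag any object word that appears in the generated description but not in the image's annotated object set. The `vocabulary` and the annotations are assumed inputs.

```python
def hallucinated_objects(description, image_objects, vocabulary):
    """Return objects mentioned in the description but absent from the image."""
    words = set(description.lower().replace(",", " ").replace(".", " ").split())
    mentioned = {obj for obj in vocabulary if obj in words}
    return mentioned - set(image_objects)

# "dog" is mentioned but not annotated for the image -> counted as hallucinated.
print(hallucinated_objects("A dog sits next to a cat on the sofa.",
                           image_objects={"cat", "sofa"},
                           vocabulary={"dog", "cat", "sofa", "person"}))
```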

Hallucination Object

Keyword-Based Diverse Image Retrieval by Semantics-aware Contrastive Learning and Transformer

no code implementations 6 May 2023 Minyi Zhao, Jinpeng Wang, Dongliang Liao, Yiru Wang, Huanzhong Duan, Shuigeng Zhou

On the one hand, standard retrieval systems are usually biased toward common semantics and seldom exploit diversity-aware regularization during training, which makes it difficult to promote diversity through post-processing.

Contrastive Learning Image Retrieval +1

Contrastive Masked Autoencoders for Self-Supervised Video Hashing

1 code implementation 21 Nov 2022 Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, Shutao Xia

To capture video semantic information for better hash learning, we adopt an encoder-decoder structure that reconstructs the video from its temporally masked frames.
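A minimal sketch of the temporal-masking-and-reconstruction idea follows (assumed shapes and a GRU stand-in for the actual encoder/decoder; this is not the released code, and the contrastive and hashing branches are omitted).

```python
import torch
import torch.nn as nn

class TemporalMaskedAE(nn.Module):
    """Mask a fraction of frame features in time, encode the visible frames,
    and reconstruct the full sequence; the loss is computed on masked frames."""
    def __init__(self, feat_dim=512, hidden=256, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, feat_dim, batch_first=True)

    def forward(self, frames):                         # frames: (B, T, feat_dim)
        keep = torch.rand(frames.shape[:2], device=frames.device) > self.mask_ratio
        visible = frames * keep.unsqueeze(-1)          # zero out masked frames
        encoded, _ = self.encoder(visible)             # (B, T, hidden)
        recon, _ = self.decoder(encoded)               # (B, T, feat_dim)
        masked = ~keep
        return ((recon - frames) ** 2)[masked].mean()  # reconstruction loss on masked frames
```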

Retrieval Video Retrieval +2

Hybrid Contrastive Quantization for Efficient Cross-View Video Retrieval

1 code implementation 7 Feb 2022 Jinpeng Wang, Bin Chen, Dongliang Liao, Ziyun Zeng, Gongfu Li, Shu-Tao Xia, Jin Xu

By performing Asymmetric-Quantized Contrastive Learning (AQ-CL) across views, HCQ aligns texts and videos at coarse-grained and multiple fine-grained levels.

Contrastive Learning Quantization +4

Suppressing Static Visual Cues via Normalizing Flows for Self-Supervised Video Representation Learning

1 code implementation 7 Dec 2021 Manlin Zhang, Jinpeng Wang, Andy J. Ma

By modelling the static factors in a video as a random variable, the conditional distribution of each latent variable becomes a shifted and scaled normal distribution.
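For intuition, here is a generic conditional affine (shift-and-scale) flow step in PyTorch, where a static-cue feature predicts the per-dimension shift and log-scale of the latent variable. This is a standard flow building block under assumed dimensions, not the paper's full model.

```python
import torch
import torch.nn as nn

class ConditionalAffine(nn.Module):
    """One shift-and-scale flow step conditioned on a static-cue feature."""
    def __init__(self, latent_dim=64, cond_dim=128, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * latent_dim))

    def forward(self, z, static_feat):
        # z: (B, latent_dim), static_feat: (B, cond_dim)
        shift, log_scale = self.net(static_feat).chunk(2, dim=-1)
        z_out = (z - shift) * torch.exp(-log_scale)   # conditionally normalize the latent
        log_det = -log_scale.sum(dim=-1)              # change-of-variables log-determinant
        return z_out, log_det
```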

Contrastive Learning Representation Learning +1

Cross-Batch Negative Sampling for Training Two-Tower Recommenders

no code implementations 28 Oct 2021 Jinpeng Wang, Jieming Zhu, Xiuqiang He

The two-tower architecture has been widely applied for learning item and user representations, which is important for large-scale recommender systems.
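A minimal sketch of cross-batch negative sampling for such a two-tower model follows (simplified; the paper's handling of embedding staleness and its exact loss are omitted, and the queue size and temperature here are illustrative): a FIFO queue caches item embeddings from recent batches and reuses them as extra negatives.

```python
import torch
import torch.nn.functional as F

class CrossBatchNegatives:
    """FIFO cache of item embeddings from past batches, used as extra negatives."""
    def __init__(self, dim, queue_size=4096):
        self.queue = torch.zeros(0, dim)
        self.queue_size = queue_size

    def loss(self, user_emb, item_emb, temperature=0.1):
        # user_emb, item_emb: (B, dim); row i of each forms a positive pair
        negatives = torch.cat([item_emb, self.queue.to(item_emb.device)], dim=0)
        logits = user_emb @ negatives.t() / temperature          # (B, B + Q)
        labels = torch.arange(user_emb.size(0), device=user_emb.device)
        # enqueue the current (detached) item embeddings, trimming to the queue size
        self.queue = torch.cat([item_emb.detach().cpu(), self.queue])[: self.queue_size]
        return F.cross_entropy(logits, labels)
```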

Recommendation Systems

SimpleX: A Simple and Strong Baseline for Collaborative Filtering

1 code implementation 26 Sep 2021 Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He

While many existing studies focus on the design of more powerful interaction encoders, the impacts of loss functions and negative sampling ratios have not yet been well explored.
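For reference, here is a sketch of a cosine contrastive loss with a tunable negative-sampling weight, in the spirit of the loss functions studied in SimpleX; the margin and weight values are illustrative, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def cosine_contrastive_loss(user, pos_item, neg_items, margin=0.4, neg_weight=0.5):
    """user: (B, d), pos_item: (B, d), neg_items: (B, N, d)."""
    pos_sim = F.cosine_similarity(user, pos_item, dim=-1)                # (B,)
    neg_sim = F.cosine_similarity(user.unsqueeze(1), neg_items, dim=-1)  # (B, N)
    pos_loss = torch.relu(1.0 - pos_sim)                 # pull the positive item close
    neg_loss = torch.relu(neg_sim - margin).mean(dim=1)  # push negatives below the margin
    return (pos_loss + neg_weight * neg_loss).mean()
```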

Collaborative Filtering Recommendation Systems

Contrastive Quantization with Code Memory for Unsupervised Image Retrieval

1 code implementation 11 Sep 2021 Jinpeng Wang, Ziyun Zeng, Bin Chen, Tao Dai, Shu-Tao Xia

The high efficiency in computation and storage makes hashing (including binary hashing and quantization) a common strategy in large-scale retrieval systems.

Contrastive Learning Deep Hashing +1

ST-PIL: Spatial-Temporal Periodic Interest Learning for Next Point-of-Interest Recommendation

no code implementations 6 Apr 2021 Qiang Cui, Chenrui Zhang, Yafeng Zhang, Jinpeng Wang, Mingchen Cai

Specifically, in the long-term module, we learn temporal periodic interest at daily granularity and then apply intra-level attention to form the long-term interest.
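A simplified sketch of what such intra-level attention could look like is given below (assumed tensor shapes and a basic concatenation-based scoring function; not the official ST-PIL code): per-day interest embeddings are pooled into a single long-term interest vector, with the current context as the query.

```python
import torch
import torch.nn as nn

class IntraLevelAttention(nn.Module):
    """Pool per-day interest embeddings into one long-term interest vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, daily_interests, context):
        # daily_interests: (B, D, dim) -- one embedding per day slot
        # context:         (B, dim)    -- e.g. the current check-in context
        ctx = context.unsqueeze(1).expand_as(daily_interests)
        scores = self.score(torch.cat([daily_interests, ctx], dim=-1)).squeeze(-1)  # (B, D)
        weights = torch.softmax(scores, dim=1)
        return (weights.unsqueeze(-1) * daily_interests).sum(dim=1)                 # (B, dim)
```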

CANVASEMB: Learning Layout Representation with Large-scale Pre-training for Graphic Design

no code implementations 1 Jan 2021 Yuxi Xie, Danqing Huang, Jinpeng Wang, Chin-Yew Lin

Layout representation, which models visual elements in a canvas and their inter-relations, plays a crucial role in graphic design intelligence.

Image Captioning Multi-Task Learning +1

Learning Semantic Correspondences from Noisy Data-text Pairs by Local-to-Global Alignments

no code implementations COLING 2020 Feng Nie, Jinpeng Wang, Chin-Yew Lin

Large-scale datasets recently proposed for generation contain loosely corresponding data-text pairs, where some spans in the text cannot be aligned to their incomplete paired input.

Data-to-Text Generation

Removing the Background by Adding the Background: Towards Background Robust Self-supervised Video Representation Learning

2 code implementations CVPR 2021 Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, Xing Sun

We then force the model to pull the feature of the distracting video closer to that of the original video, so that the model is explicitly encouraged to resist the background influence and focus more on motion changes.
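A minimal sketch of this idea follows (assumed shapes and an illustrative frame-blending weight `alpha`; the paper's actual augmentation and objective may differ): blend a static frame from another video into every frame of the clip, then pull the two clip representations together.

```python
import torch.nn.functional as F

def add_static_background(video, distractor_frame, alpha=0.3):
    """video: (B, T, C, H, W); distractor_frame: (B, C, H, W)."""
    return (1 - alpha) * video + alpha * distractor_frame.unsqueeze(1)

def background_consistency_loss(encoder, video, distractor_frame):
    """Pull features of the original clip and its distracted version together,
    so the encoder relies on motion rather than background appearance."""
    z_orig = F.normalize(encoder(video), dim=-1)
    z_dist = F.normalize(encoder(add_static_background(video, distractor_frame)), dim=-1)
    return (1 - (z_orig * z_dist).sum(dim=-1)).mean()   # 1 - cosine similarity
```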

Representation Learning Self-Supervised Learning

Self-supervised Temporal Discriminative Learning for Video Representation Learning

1 code implementation 5 Aug 2020 Jinpeng Wang, Yiqi Lin, Andy J. Ma, Pong C. Yuen

Without labelled data for network pretraining, a temporal triplet is generated for each anchor video using segments from the same or different time intervals, so as to enhance the capacity for temporal feature representation.
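An illustrative triplet-loss formulation of this construction is shown below (assumed data-loader keys and margin; not the exact training recipe from the paper).

```python
import torch.nn.functional as F

def temporal_triplet_loss(encoder, clips, margin=0.5):
    """clips: dict with 'anchor', 'positive', 'negative' clip tensors, where the
    positive shares the anchor's time interval and the negative comes from a
    different interval of the same video (prepared by the data loader)."""
    za = F.normalize(encoder(clips["anchor"]), dim=-1)
    zp = F.normalize(encoder(clips["positive"]), dim=-1)
    zn = F.normalize(encoder(clips["negative"]), dim=-1)
    d_ap = (za - zp).pow(2).sum(dim=-1)      # distance to the positive
    d_an = (za - zn).pow(2).sum(dim=-1)      # distance to the negative
    return F.relu(d_ap - d_an + margin).mean()
```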

Action Recognition Representation Learning +1

Self-supervised learning using consistency regularization of spatio-temporal data augmentation for action recognition

1 code implementation 5 Aug 2020 Jinpeng Wang, Yiqi Lin, Andy J. Ma

Self-supervised learning has shown great potential for improving deep learning models in an unsupervised manner by constructing surrogate supervision signals directly from unlabeled data.

Action Recognition Data Augmentation +1

Improving Entity Linking by Modeling Latent Entity Type Information

no code implementations 6 Jan 2020 Shuang Chen, Jinpeng Wang, Feng Jiang, Chin-Yew Lin

Existing state-of-the-art neural entity linking models employ an attention-based bag-of-words context model and pre-trained entity embeddings bootstrapped from word embeddings to assess topic-level context compatibility.

Ranked #2 on Entity Disambiguation on AIDA-CoNLL (Micro-F1 metric)

Entity Disambiguation Entity Embeddings +3

An Encoder with non-Sequential Dependency for Neural Data-to-Text Generation

no code implementations WS 2019 Feng Nie, Jinpeng Wang, Rong pan, Chin-Yew Lin

Data-to-text generation aims to generate descriptions given structured input data (i.e., a table with multiple records).

Data-to-Text Generation

A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation

no code implementations ACL 2019 Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong pan, Chin-Yew Lin

Recent neural language generation systems often hallucinate content (i.e., produce irrelevant or contradictory facts), especially when trained on loosely corresponding pairs of input structure and text.

Hallucination Text Generation

Operation-guided Neural Networks for High Fidelity Data-To-Text Generation

no code implementations EMNLP 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, such models often produce descriptions that are inconsistent with the input structured data.

Data-to-Text Generation Quantization +1

Aggregated Semantic Matching for Short Text Entity Linking

no code implementations CONLL 2018 Feng Nie, Shuyan Zhou, Jing Liu, Jinpeng Wang, Chin-Yew Lin, Rong pan

The task of entity linking aims to identify concepts mentioned in text fragments and link them to a reference knowledge base.

Card Games Entity Linking +2

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data

1 code implementation EMNLP 2018 Guanghui Qin, Jin-Ge Yao, Xuening Wang, Jinpeng Wang, Chin-Yew Lin

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.

Grounded language learning Text Generation

Operations Guided Neural Networks for High Fidelity Data-To-Text Generation

1 code implementation 8 Sep 2018 Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong pan, Chin-Yew Lin

Even though the generated texts are mostly fluent and informative, such models often produce descriptions that are inconsistent with the input structured data.

Data-to-Text Generation Quantization +1

Incorporating Consistency Verification into Neural Data-to-Document Generation

no code implementations 15 Aug 2018 Feng Nie, Hailin Chen, Jinpeng Wang, Jin-Ge Yao, Chin-Yew Lin, Rong pan

Recent neural models for data-to-document generation have achieved remarkable progress in producing fluent and informative texts.

Reinforcement Learning (RL) +1

A Statistical Framework for Product Description Generation

no code implementations IJCNLP 2017 Jinpeng Wang, Yutai Hou, Jing Liu, Yunbo Cao, Chin-Yew Lin

In this paper, we present a statistical framework that generates accurate and fluent product descriptions from product attributes.

Attribute Data-to-Text Generation
