no code implementations • 24 Jul 2024 • Yongqi Li, Hongru Cai, Wenjie Wang, Leigang Qu, Yinwei Wei, Wenjie Li, Liqiang Nie, Tat-Seng Chua
Despite its great potential, existing generative approaches are limited by the following issues: insufficient visual information in identifiers, misalignment with high-level semantics, and a learning gap with respect to the retrieval target.
no code implementations • 16 Jul 2024 • Xiaohao Liu, Jie Wu, Zhulin Tao, Yunshan Ma, Yinwei Wei, Tat-Seng Chua
Recent methods utilize multimodal information through sophisticated extractors for bundling, but remain limited by inferior semantic understanding, the restricted scope of knowledge, and an inability to handle cold-start issues.
no code implementations • 18 Jun 2024 • Tuan-Luc Huynh, Thuy-Trang Vu, Weiqing Wang, Yinwei Wei, Trung Le, Dragan Gasevic, Yuan-Fang Li, Thanh-Toan Do
Differentiable Search Index (DSI) utilizes Pre-trained Language Models (PLMs) for efficient document retrieval without relying on external indexes.
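As a rough, hedged illustration of the DSI idea (not this paper's exact setup): a seq2seq PLM is trained to map document text to an identifier string, so retrieval reduces to generating an identifier for a query, with no external index lookup. The model name "t5-small" and the docid format below are placeholders.

```python
# Minimal sketch of DSI-style generative retrieval with a seq2seq PLM.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Indexing step (training): learn to map document text -> its identifier string.
doc_text, doc_id = "multimodal bundle recommendation survey", "doc_00042"
inputs = tokenizer(doc_text, return_tensors="pt")
labels = tokenizer(doc_id, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss   # optimize this over the whole corpus

# Retrieval step (inference): generate an identifier directly from the query.
query = tokenizer("papers about bundle construction", return_tensors="pt")
pred = model.generate(**query, max_length=8)
print(tokenizer.decode(pred[0], skip_special_tokens=True))
```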
no code implementations • 25 Apr 2024 • Han Liu, Yinwei Wei, Xuemeng Song, Weili Guan, Yuan-Fang Li, Liqiang Nie
Multimodal recommendation aims to recommend user-preferred candidates based on the user's historically interacted items and the associated multimodal information.
1 code implementation • 20 Apr 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Yinwei Wei, Hao Yang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'
1 code implementation • 30 Jan 2024 • Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, Tat-Seng Chua
To pursue the two objectives, we propose a novel data pruning method based on two scores, i.e., the influence score and the effort score, to efficiently identify the influential samples.
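A minimal sketch of such score-based pruning, assuming the two per-sample scores have already been computed; the scoring functions and the linear weighting are illustrative, not the paper's exact formulation.

```python
# Keep only the samples ranked highest by a combination of two scores.
import numpy as np

def prune_dataset(influence, effort, keep_ratio=0.2, alpha=0.5):
    """Return indices of the top `keep_ratio` fraction of samples, ranked by a
    weighted combination of influence and effort scores (alpha balances the two)."""
    influence = np.asarray(influence, dtype=float)
    effort = np.asarray(effort, dtype=float)
    combined = alpha * influence + (1.0 - alpha) * effort
    k = max(1, int(keep_ratio * len(combined)))
    return np.argsort(-combined)[:k]  # indices of the most influential samples

# Example: prune a toy dataset of 10 samples down to 30%.
idx = prune_dataset(np.random.rand(10), np.random.rand(10), keep_ratio=0.3)
```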
no code implementations • 16 Jan 2024 • Lidong Zeng, Zhedong Zheng, Yinwei Wei, Tat-Seng Chua
This paper delves into the text-guided image editing task, focusing on modifying a reference image according to user-specified textual feedback to embody specific attributes.
no code implementations • 22 Dec 2023 • Zhenyang Li, Fan Liu, Yinwei Wei, Zhiyong Cheng, Liqiang Nie, Mohan Kankanhalli
To obtain robust and independent representations for each factor associated with a specific attribute, we first disentangle the representations of features both within and across different modalities.
1 code implementation • 28 Nov 2023 • Yunshan Ma, Yingzhi He, Xiang Wang, Yinwei Wei, Xiaoyu Du, Yuyangzi Fu, Tat-Seng Chua
It does, however, have two limitations: 1) the two-view formulation does not fully exploit all the heterogeneous relations among users, bundles and items; and 2) the "early contrast and late fusion" framework is less effective in capturing user preference and difficult to generalize to multiple views.
1 code implementation • 28 Oct 2023 • Yunshan Ma, Xiaohao Liu, Yinwei Wei, Zhulin Tao, Xiang Wang, Tat-Seng Chua
Specifically, we use self-attention modules to combine the multimodal and multi-item features, and then leverage both item- and bundle-level contrastive learning to enhance representation learning, thereby countering the modality-missing, noise, and sparsity problems.
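A hedged PyTorch sketch of these two ingredients, with illustrative module names and dimensions rather than the paper's exact architecture: self-attention pools multimodal/multi-item features into a bundle representation, and an InfoNCE-style loss contrasts two views of the same items or bundles.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BundleFusion(nn.Module):
    """Self-attention over item/modality features, pooled into one bundle vector."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, item_feats):            # (batch, n_items_or_modalities, dim)
        fused, _ = self.attn(item_feats, item_feats, item_feats)
        return fused.mean(dim=1)              # pooled bundle representation

def info_nce(view_a, view_b, temperature=0.2):
    """Contrastive loss between two views of the same items/bundles."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)    # positives lie on the diagonal
```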
no code implementations • 13 Oct 2023 • Jiale Liu, Yu-Wei Zhan, Chong-Yu Zhang, Xin Luo, Zhen-Duo Chen, Yinwei Wei, Xin-Shun Xu
In FCIL, the local and global models may suffer from catastrophic forgetting on old classes caused by the arrival of new classes, and the data distributions across clients are non-independent and identically distributed (non-IID).
1 code implementation • 8 Aug 2023 • Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, Yongxin Ni, Xiang Wang
To be specific, we first introduce an ID-aware Multi-modal Transformer module in the item representation learning stage to facilitate information interaction among different features.
1 code implementation • 6 Aug 2023 • Peiguang Jing, Xianyi Liu, Ji Wang, Yinwei Wei, Liqiang Nie, Yuting Su
Emotion distribution learning has gained increasing attention with the growing tendency to express emotions through images.
1 code implementation • 20 Jul 2023 • Teng Sun, Juntong Ni, Wenjie Wang, Liqiang Jing, Yinwei Wei, Liqiang Nie
To this end, we propose a general debiasing framework based on Inverse Probability Weighting (IPW), which adaptively assigns small weights to samples with larger bias (i.e., more severe spurious correlations).
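A minimal sketch of an IPW-weighted objective, assuming a precomputed per-sample propensity as the bias proxy; the adaptive bias estimation described in the paper is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ipw_loss(logits, targets, propensity, eps=1e-6):
    """Weight each sample's cross-entropy by the inverse of its estimated
    propensity, so samples judged more biased contribute less to the loss."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = 1.0 / (propensity + eps)        # larger propensity -> smaller weight
    weights = weights / weights.sum()         # normalize for a stable loss scale
    return (weights * per_sample).sum()
```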
1 code implementation • SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023 • Yinwei Wei, Wenqi Liu, Fan Liu, Xiang Wang, Liqiang Nie, Tat-Seng Chua
Considering its challenges in effectiveness and efficiency, we propose a novel Transformer-based recommendation model, termed Light Graph Transformer (LightGT).
Ranked #1 on Multi-Media Recommendation on Kwai (Recall@10 metric)
no code implementations • 6 Jun 2023 • Bobo Li, Hao Fei, Fei Li, Shengqiong Wu, Lizi Liao, Yinwei Wei, Tat-Seng Chua, Donghong Ji
Conversation utterances are essentially organized and described by the underlying discourse, and thus dialogue disentanglement requires the full understanding and harnessing of the intrinsic discourse attribute.
no code implementations • 17 May 2023 • Xiaolin Chen, Xuemeng Song, Yinwei Wei, Liqiang Nie, Tat-Seng Chua
Thereafter, considering that attribute knowledge and relation knowledge can benefit responses to questions at different levels, we design a multi-level knowledge composition module in MDS-S2 to obtain the latent composed response representation.
1 code implementation • 15 Mar 2023 • Xiao Wang, Tian Gan, Yinwei Wei, Jianlong Wu, Dai Meng, Liqiang Nie
Existing methods mostly focus on analyzing video content, neglecting users' social influence and tag relation.
1 code implementation • 13 Jan 2023 • Han Liu, Yinwei Wei, Jianhua Yin, Liqiang Nie
Towards this end, existing methods tend to encode users by modeling their Hamming similarities with the items they historically interacted with, which are termed first-order similarities in this work.
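A small sketch of this first-order Hamming similarity, using the standard sign binarization and dot-product form common in hashing-based recommendation; it is illustrative rather than tied to this paper's implementation.

```python
import torch

def binarize(x):
    # Real-valued embeddings -> {-1, +1} codes (torch.sign maps 0 to 0; fine for a sketch).
    return torch.sign(x)

def hamming_similarity(user_code, item_codes):
    """For k-bit {-1, +1} codes, similarity = (k + dot(u, v)) / 2, i.e. the
    number of matching bits; higher means more similar."""
    k = user_code.size(-1)
    return (k + item_codes @ user_code) / 2   # item_codes: (n_items, k), user_code: (k,)
```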
no code implementations • 26 Dec 2022 • Wei Ji, Long Chen, Yinwei Wei, Yiming Wu, Tat-Seng Chua
In this work, we propose a novel multi-resolution temporal video sentence grounding network: MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module.
1 code implementation • 22 Dec 2022 • Yali Du, Yinwei Wei, Wei Ji, Fan Liu, Xin Luo, Liqiang Nie
The booming development and huge market of micro-videos have created new e-commerce channels for merchants.
1 code implementation • 20 Dec 2022 • Yinwei Wei, Xiang Wang, Liqiang Nie, Shaoyu Li, Dingxian Wang, Tat-Seng Chua
Knowledge Graphs (KGs), as side information, are often utilized to supplement collaborative filtering (CF) based recommendation models.
1 code implementation • 27 Sep 2022 • Fan Liu, Zhiyong Cheng, Huilin Chen, Yinwei Wei, Liqiang Nie, Mohan Kankanhalli
At the item level, a synthetic data generation module is proposed to generate a synthetic item corresponding to the selected item based on the user's preferences.
no code implementations • 21 Jul 2022 • Yudong Han, Jianhua Yin, Jianlong Wu, Yinwei Wei, Liqiang Nie
Visual Question Answering (VQA) is fundamentally compositional in nature, and many questions are simply answered by decomposing them into modular sub-problems.
1 code implementation • 25 Feb 2022 • Zhenyang Li, Yangyang Guo, Kejie Wang, Yinwei Wei, Liqiang Nie, Mohan Kankanhalli
Given that our framework is model-agnostic, we apply it to the existing popular baselines and validate its effectiveness on the benchmark dataset.
no code implementations • 24 Jan 2022 • Xue Dong, Xuemeng Song, Na Zheng, Yinwei Wei, Zhongzhou Zhao
Moreover, we can summarize a preferred attribute profile for each user, depicting his/her preferred item attributes.
1 code implementation • 12 Jul 2021 • Yinwei Wei, Xiang Wang, Qi Li, Liqiang Nie, Yan Li, Xuanping Li, Tat-Seng Chua
It aims to maximize the mutual dependencies between item content and collaborative signals.
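One common way to maximize such a mutual dependency is with a bilinear critic that discriminates matched from shuffled pairs of content and collaborative embeddings; the sketch below illustrates this general estimator and is not necessarily the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyCritic(nn.Module):
    """Bilinear critic between item content embeddings and collaborative embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, content, collab):       # both: (batch, dim), row-aligned pairs
        batch = content.size(0)
        pos = self.bilinear(content, collab)                          # matched pairs
        neg = self.bilinear(content, collab[torch.randperm(batch)])   # shuffled pairs
        # Minimizing this loss pushes the two embeddings to be mutually predictive.
        return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) +
                F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
```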
1 code implementation • ACM International Conference on Multimedia 2019 • Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Richang Hong, Tat-Seng Chua
Existing works on multimedia recommendation largely exploit multi-modal contents to enrich item representations, while less effort is made to leverage information interchange between users and items to enhance user representations and further capture users' fine-grained preferences on different modalities.
Ranked #1 on Multi-Media Recommendation on MovieLens 10M
1 code implementation • 27 Aug 2019 • Yinwei Wei, Zhiyong Cheng, Xuzheng Yu, Zhou Zhao, Lei Zhu, Liqiang Nie
The hashtags that a user assigns to a post (e.g., a micro-video) are the ones that, in her mind, best describe the aspects of the post content she is interested in.