1 code implementation • 3 Jan 2025 • Weizhi Zhang, Yuanchen Bei, Liangwei Yang, Henry Peng Zou, Peilin Zhou, Aiwei Liu, Yinghui Li, Hao Chen, Jianling Wang, Yu Wang, Feiran Huang, Sheng Zhou, Jiajun Bu, Allen Lin, James Caverlee, Fakhri Karray, Irwin King, Philip S. Yu
The cold-start problem is one of the long-standing challenges in recommender systems: accurately modeling new or interaction-limited users and items to provide better recommendations.
no code implementations • 19 Dec 2024 • Haoran Liu, Youzhi Luo, Tianxiao Li, James Caverlee, Martin Renqiang Min
We consider the conditional generation of 3D drug-like molecules with explicit control over molecular properties, such as drug-likeness (e.g., the Quantitative Estimate of Drug-likeness or the Synthetic Accessibility score) and effective binding to specific protein sites.
1 code implementation • 18 Dec 2024 • Xiangjue Dong, Maria Teleki, James Caverlee
Techniques that enhance inference through increased computation at test-time have recently gained attention.
1 code implementation • 30 Oct 2024 • Millennium Bismay, Xiangjue Dong, James Caverlee
This paper presents ReasoningRec, a reasoning-based recommendation framework that leverages Large Language Models (LLMs) to bridge the gap between recommendations and human-interpretable explanations.
1 code implementation • 6 Oct 2024 • Guanchu Wang, Yu-Neng Chuang, Ruixiang Tang, Shaochen Zhong, Jiayi Yuan, Hongye Jin, Zirui Liu, Vipin Chaudhary, Shuai Xu, James Caverlee, Xia Hu
To address this dilemma, we introduce TaylorMLP to protect the ownership of released LLMs and prevent their abuse.
1 code implementation • 27 Sep 2024 • Chengkai Liu, Jianling Wang, James Caverlee
Our theoretical analysis and experimental results show that optimizing alignment and uniformity with the proposed twin-encoder model yields better recommendation accuracy and training efficiency.
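The alignment and uniformity objectives mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration of the standard formulation (matched embeddings pulled together, all embeddings spread over the unit hypersphere); it is a hypothetical sketch, not the paper's actual implementation.

```python
import numpy as np

def alignment(x, y, alpha=2):
    """Alignment: matched user/item embedding pairs should lie close
    together on the unit hypersphere."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return (np.linalg.norm(x - y, axis=1) ** alpha).mean()

def uniformity(x, t=2):
    """Uniformity: embeddings should spread out over the hypersphere,
    measured as the log of the mean pairwise Gaussian potential."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(x), k=1)  # distinct pairs only
    return np.log(np.exp(-t * sq_dists[iu]).mean())
```

A contrastive recommender in this style would minimize a weighted sum of the two terms over observed user-item pairs.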
no code implementations • 18 Jul 2024 • Zhuoer Wang, Leonardo F. R. Ribeiro, Alexandros Papangelis, Rohan Mukherjee, Tzu-Yen Wang, Xinyan Zhao, Arijit Biswas, James Caverlee, Angeliki Metallinou
API call generation is the cornerstone of large language models' tool-use ability, providing access to the wider world.
1 code implementation • 18 Jun 2024 • Chengkai Liu, Jianghao Lin, Hanzhou Liu, Jianling Wang, James Caverlee
Sequential recommender systems aim to predict a user's next interaction by modeling user behavior with operators such as RNNs and attention mechanisms.
1 code implementation • 13 Apr 2024 • Jinhao Pan, Ziwei Zhu, Jianling Wang, Allen Lin, James Caverlee
In this paper, we identify two root causes of this mainstream bias: (i) discrepancy modeling, whereby CF algorithms focus on modeling mainstream users while neglecting niche users with unique preferences; and (ii) unsynchronized learning, where niche users require more training epochs than mainstream users to reach peak performance.
1 code implementation • 9 Mar 2024 • Rui Yang, Haoran Liu, Edison Marrese-Taylor, Qingcheng Zeng, Yu He Ke, Wanxin Li, Lechao Cheng, Qingyu Chen, James Caverlee, Yutaka Matsuo, Irene Li
In this work, we develop an augmented LLM framework, KG-Rank, which leverages a medical knowledge graph (KG) along with ranking and re-ranking techniques, to improve the factuality of long-form question answering (QA) in the medical domain.
2 code implementations • 6 Mar 2024 • Chengkai Liu, Jianghao Lin, Jianling Wang, Hanzhou Liu, James Caverlee
Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors.
no code implementations • 18 Feb 2024 • Jianling Wang, Haokai Lu, James Caverlee, Ed Chi, Minmin Chen
The reasoning and generalization capabilities of LLMs can help us better understand user preferences and item characteristics, offering exciting prospects to enhance recommendation systems.
1 code implementation • 17 Feb 2024 • Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee
Our experiments demonstrate that all tested LLMs exhibit explicit and/or implicit gender bias, even when gender stereotypes are not present in the inputs.
no code implementations • 15 Feb 2024 • Noveen Sachdeva, Benjamin Coleman, Wang-Cheng Kang, Jianmo Ni, Lichan Hong, Ed H. Chi, James Caverlee, Julian McAuley, Derek Zhiyuan Cheng
The training of large language models (LLMs) is expensive.
no code implementations • CVPR 2024 • Shubham Parashar, Zhiqiu Lin, Tian Liu, Xiangjue Dong, Yanan Li, Deva Ramanan, James Caverlee, Shu Kong
We address this by using large language models (LLMs) to count the number of pretraining texts that contain synonyms of these concepts.
no code implementations • 14 Nov 2023 • Yibo Wang, Xiangjue Dong, James Caverlee, Philip S. Yu
We further design a novel evaluation metric, the Non-detectable Attack Success Rate (NASR), which integrates both ASR and detectability for the attack task.
no code implementations • 1 Nov 2023 • Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee
Large Language Models (LLMs) can generate biased and toxic responses.
1 code implementation • 19 Oct 2023 • Xiangjue Dong, Ziwei Zhu, Zhuoer Wang, Maria Teleki, James Caverlee
Pre-trained Language Models are widely used in many important real-world applications.
no code implementations • 19 Oct 2023 • Zhuoer Wang, Yicheng Wang, Ziwei Zhu, James Caverlee
Question generation is a widely used data augmentation approach with extensive applications, and extracting qualified candidate answers from context passages is a critical step for most question generation systems.
no code implementations • 29 Aug 2023 • Haoran Liu, Bokun Wang, Jianling Wang, Xiangjue Dong, Tianbao Yang, James Caverlee
As powerful tools for representation learning on graphs, graph neural networks (GNNs) have played an important role in applications including social networks, recommendation systems, and online web services.
1 code implementation • 7 Jun 2023 • Xiangjue Dong, Yun He, Ziwei Zhu, James Caverlee
A key component of modern conversational systems is the Dialogue State Tracker (or DST), which models a user's goals and needs.
no code implementations • 11 May 2023 • Yingqiang Ge, Mostafa Rahmani, Athirai Irissappane, Jose Sepulveda, James Caverlee, Fei Wang
In real-world scenarios, most platforms collect both large-scale, naturally noisy implicit feedback and small-scale yet highly relevant explicit feedback.
no code implementations • 13 Feb 2023 • Allen Lin, Ziwei Zhu, Jianling Wang, James Caverlee
Conversational recommenders are emerging as a powerful tool to personalize a user's recommendation experience.
no code implementations • 26 Jan 2023 • Han Zhang, Ziwei Zhu, James Caverlee
However, most existing work focuses on a static setting or over a short-time window, leaving open questions about the long-term and dynamic impacts of news recommendations.
no code implementations • 25 Oct 2022 • Yin Zhang, Ruoxi Wang, Tiansheng Yao, Xinyang Yi, Lichan Hong, James Caverlee, Ed H. Chi, Derek Zhiyuan Cheng
In this work, we aim to improve tail item recommendations while maintaining the overall performance with less training and serving cost.
1 code implementation • 13 Oct 2022 • Xiangjue Dong, Jiaying Lu, Jianling Wang, James Caverlee
Through experiments, we validate the proposed QG model on both public datasets and a new WikiCQA dataset.
Ranked #2 on Open-Domain Question Answering on ELI5
no code implementations • 8 Aug 2022 • Allen Lin, Ziwei Zhu, Jianling Wang, James Caverlee
Conversational recommender systems have demonstrated great success.
no code implementations • 5 Aug 2022 • Allen Lin, Jianling Wang, Ziwei Zhu, James Caverlee
Conversational recommender systems (CRS) have shown great success in accurately capturing a user's current, detailed preferences through multi-round interaction, while effectively guiding users toward more personalized recommendations.
1 code implementation • 7 Jul 2022 • Ziwei Zhu, Yun He, Xing Zhao, James Caverlee
And how can we debias in this long-term, dynamic process?
1 code implementation • 14 Mar 2022 • Yun He, Xue Feng, Cheng Cheng, Geng Ji, Yunsong Guo, James Caverlee
Specifically, in each training iteration and adaptively for each part of the network, the gradient of an auxiliary loss is carefully reduced or enlarged to bring its magnitude closer to the gradient of the target loss, preventing auxiliary tasks from becoming so strong that they dominate the target task or so weak that they cannot help it.
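As a rough illustration of this adaptive magnitude matching, one can rescale each auxiliary gradient toward the target gradient's magnitude. This is a simplified sketch under my own assumptions (a single relaxation factor, per-layer adaptivity omitted), not the paper's algorithm.

```python
import numpy as np

def balance_aux_grad(g_target, g_aux, r=0.7):
    """Rescale an auxiliary-task gradient so its magnitude moves closer
    to that of the target-task gradient.  With r=1 the magnitudes match
    exactly; with r=0 the auxiliary gradient is left unchanged.
    (Hypothetical simplification of the adaptive strategy above.)"""
    a_norm = np.linalg.norm(g_aux)
    if a_norm == 0.0:
        return g_aux  # nothing to rescale
    scale = (1.0 - r) + r * np.linalg.norm(g_target) / a_norm
    return g_aux * scale
```

In training, a balancing step like this would run per network part and per iteration, before the target and (rescaled) auxiliary gradients are summed for the update.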
no code implementations • 28 Dec 2021 • Jianling Wang, Kaize Ding, Ziwei Zhu, James Caverlee
Session-based recommender systems aim to improve recommendations in short-term sessions that can be found across many platforms.
1 code implementation • 18 Dec 2021 • Kaize Ding, Jianling Wang, James Caverlee, Huan Liu
Inspired by the extensive success of deep learning, graph neural networks (GNNs) have been proposed to learn expressive node representations and demonstrated promising performance in various graph learning tasks.
1 code implementation • 13 Jul 2021 • Jianling Wang, Kaize Ding, James Caverlee
A fundamental challenge for sequential recommenders is to capture users' sequential patterns, modeling how they transition among items.
no code implementations • ACL (ECNLP) 2021 • Monika Daryani, James Caverlee
Fake reviews and review manipulation are growing problems on online marketplaces globally.
no code implementations • 12 Jun 2021 • Kaize Ding, Jianling Wang, Jundong Li, James Caverlee, Huan Liu
Graphs are widely used to model the relational structure of data, and the research of graph machine learning (ML) has a wide spectrum of applications ranging from drug design in molecular graphs to friendship recommendation in social networks.
no code implementations • 8 Apr 2021 • Jian Wu, Rajal Nivargi, Sree Sai Teja Lanka, Arjun Manoj Menon, Sai Ajay Modukuri, Nishanth Nakshatri, Xin Wei, Zhuoer Wang, James Caverlee, Sarah M. Rajtmajer, C. Lee Giles
In this paper, we investigate prediction of the reproducibility of SBS papers using machine learning methods based on a set of features.
1 code implementation • 14 Mar 2021 • Ziwei Zhu, Jianling Wang, James Caverlee
This paper is an extended and reorganized version of our SIGIR 2020 paper~\cite{zhu2020measuring}.
no code implementations • WSDM 2021 • Ziwei Zhu, Yun He, Xing Zhao, Yin Zhang, Jianling Wang, James Caverlee
This paper connects equal opportunity to popularity bias in implicit recommenders to introduce the problem of popularity-opportunity bias.
1 code implementation • EMNLP 2020 • Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, James Caverlee
Knowledge of a disease includes information of various aspects of the disease, such as signs and symptoms, diagnosis and treatment.
1 code implementation • EMNLP 2020 • Yun He, Zhuoer Wang, Yin Zhang, Ruihong Huang, James Caverlee
We present a new benchmark dataset called PARADE for paraphrase identification that requires specialized domain knowledge.
no code implementations • 6 Feb 2020 • Habeeb Hooshmand, James Caverlee
A large portion of the car-buying experience in the United States involves interactions at a car dealership.
1 code implementation • 30 Dec 2019 • Yun He, Yin Zhang, Weiwen Liu, James Caverlee
Complementary to methods that exploit specific content patterns (e.g., as in song-based playlists that rely on audio features), the proposed approach models the consistency of item lists based on human curation patterns, and so can be deployed across a wide range of varying item types (e.g., videos, images, books).
1 code implementation • 30 Dec 2019 • Yun He, Jianling Wang, Wei Niu, James Caverlee
User-generated item lists are a popular feature of many different platforms.
no code implementations • 9 Sep 2018 • Ziwei Zhu, Jianling Wang, Yin Zhang, James Caverlee
This paper highlights our ongoing efforts to create effective information curator recommendation models that can be personalized for individual users, while maintaining important fairness properties.
no code implementations • 28 Nov 2017 • Qingquan Song, Hancheng Ge, James Caverlee, Xia Hu
Tensor completion is the problem of filling in the missing or unobserved entries of partially observed tensors.
no code implementations • RANLP 2017 • Wenlin Yao, Zeyu Dai, Ruihong Huang, James Caverlee
The lack of large realistic datasets presents a bottleneck in online deception detection studies.