1 code implementation • 18 Feb 2025 • Shuo Xing, Yuping Wang, Peiran Li, Ruizheng Bai, Yueqi Wang, Chengxuan Qian, Huaxiu Yao, Zhengzhong Tu
The emergence of large Vision Language Models (VLMs) has broadened the scope and capabilities of single-modal Large Language Models (LLMs) by integrating visual modalities, thereby unlocking transformative cross-modal applications in a variety of real-world scenarios.
1 code implementation • 25 Sep 2024 • Yueqi Wang, Zhenrui Yue, Huimin Zeng, Dong Wang, Julian McAuley
Our fMRLRec captures item features at different granularities, learning informative representations for efficient recommendation across multiple dimensions.
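As a minimal sketch of the matryoshka-style idea described above, assume (hypothetically) that nested prefix slices of one full-size item embedding serve as representations at different granularities; the class name, dimensions, and slicing scheme below are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class MatryoshkaItemEncoder(nn.Module):
    """Illustrative sketch: one full-size item embedding whose nested
    prefix slices act as progressively coarser representations."""

    def __init__(self, num_items: int, full_dim: int = 256,
                 granularities=(32, 64, 128, 256)):
        super().__init__()
        self.embed = nn.Embedding(num_items, full_dim)
        self.granularities = granularities

    def forward(self, item_ids: torch.Tensor):
        full = self.embed(item_ids)  # (batch, full_dim)
        # Each slice reuses the first d dimensions of the same vector,
        # so smaller representations come "for free" at serving time.
        return {d: full[..., :d] for d in self.granularities}

encoder = MatryoshkaItemEncoder(num_items=10_000)
reps = encoder(torch.tensor([1, 2, 3]))
print({d: r.shape for d, r in reps.items()})
```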
3 code implementations • 4 Jun 2024 • Yueqi Wang, Zhankui He, Zhenrui Yue, Julian McAuley, Dong Wang
In the context of sequential recommendation, a pivotal open question is how bi-directional/auto-encoding (AE) attention compares to uni-directional/auto-regressive (AR) attention; prior conclusions about architectural and performance superiority remain mixed.
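To make the AE/AR distinction concrete, here is a short sketch of the two attention masks in PyTorch; it illustrates the generic mechanisms being compared, not the paper's specific models.

```python
import torch

def attention_mask(seq_len: int, autoregressive: bool) -> torch.Tensor:
    """Boolean mask where True marks positions a query may attend to.

    AR (uni-directional): position i sees only positions <= i.
    AE (bi-directional):  every position sees the whole sequence.
    """
    if autoregressive:
        return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    return torch.ones(seq_len, seq_len, dtype=torch.bool)

print(attention_mask(4, autoregressive=True))   # lower-triangular (AR)
print(attention_mask(4, autoregressive=False))  # all-ones (AE)
```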
no code implementations • 25 May 2024 • Jianling Wang, Haokai Lu, Yifan Liu, He Ma, Yueqi Wang, Yang Gu, Shuzhou Zhang, Ningren Han, Shuchao Bi, Lexi Baugher, Ed Chi, Minmin Chen
Traditional recommendation systems are subject to a strong feedback loop by learning from and reinforcing past user-item interactions, which in turn limits the discovery of novel user interests.
1 code implementation • 3 Oct 2023 • Zhenrui Yue, Yueqi Wang, Zhankui He, Huimin Zeng, Julian McAuley, Dong Wang
State-of-the-art sequential recommendation relies heavily on self-attention-based recommender models.
no code implementations • 23 Aug 2023 • Yueqi Wang, Yoni Halpern, Shuo Chang, Jingchen Feng, Elaine Ya Le, Longfei Li, Xujian Liang, Min-Cheng Huang, Shane Li, Alex Beutel, Yaping Zhang, Shuchao Bi
In this work, we incorporate explicit and implicit negative user feedback into the training objective of retrieval-stage sequential recommenders, using a "not-to-recommend" loss function that maximizes the log-likelihood of not recommending items with negative feedback.
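A minimal sketch of a "not-to-recommend" objective of the kind described, assuming a softmax retrieval model; maximizing log(1 - p(item)) for negative-feedback items is one plausible reading, and the function below is illustrative rather than the production loss.

```python
import torch
import torch.nn.functional as F

def not_to_recommend_loss(logits: torch.Tensor,
                          neg_item_ids: torch.Tensor) -> torch.Tensor:
    """Penalize probability mass on items with negative feedback.

    logits:       (batch, num_items) scores over the item vocabulary.
    neg_item_ids: (batch,) index of an item the user disliked.

    Maximizes log(1 - p(neg_item)), i.e. the log-likelihood of NOT
    recommending the disliked item.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    p_neg = log_probs.gather(1, neg_item_ids.unsqueeze(1)).exp().squeeze(1)
    # Clamp for numerical safety when p_neg approaches 1.
    return -torch.log((1.0 - p_neg).clamp_min(1e-8)).mean()

logits = torch.randn(2, 100)
loss = not_to_recommend_loss(logits, torch.tensor([3, 42]))
```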
no code implementations • 22 Feb 2023 • Junren Chen, Yueqi Wang, Michael K. Ng
Moreover, we extend our results to a low-rank regression model with matrix responses.
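As a hedged illustration of what a low-rank regression model with matrix responses can look like, here is a standard vectorized formulation, not necessarily the exact model studied in the paper: each matrix response is vectorized and regressed on covariates through a coefficient matrix constrained to low rank.

```latex
% Illustrative formulation: vectorize each matrix response Y_i and
% regress on covariates x_i through a low-rank coefficient matrix B.
\[
  \mathrm{vec}(Y_i) = B^{\top} x_i + \varepsilon_i, \qquad
  \operatorname{rank}(B) \le r, \qquad i = 1, \dots, n,
\]
% estimated, e.g., by nuclear-norm-penalized least squares:
\[
  \widehat{B} \in \arg\min_{B} \; \frac{1}{2n} \sum_{i=1}^{n}
  \bigl\lVert \mathrm{vec}(Y_i) - B^{\top} x_i \bigr\rVert_2^2
  + \lambda \lVert B \rVert_{*} .
\]
```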
2 code implementations • 29 Oct 2020 • Yueqi Wang, Yoonho Lee, Pallab Basu, Juho Lee, Yee Whye Teh, Liam Paninski, Ari Pakman
While graph neural networks (GNNs) have been successful in encoding graph structures, existing GNN-based methods for community detection are limited in two ways: they require the number of communities to be known in advance, and they lack a proper probabilistic formulation for handling uncertainty.
3 code implementations • 9 Nov 2019 • Iddo Drori, Darshan Thaker, Arjun Srivatsa, Daniel Jeong, Yueqi Wang, Linyong Nan, Fan Wu, Dimitri Leggas, Jinhao Lei, Weiyi Lu, Weilong Fu, Yuan Gao, Sashank Karri, Anand Kannan, Antonio Moretti, Mohammed AlQuraishi, Chen Keasar, Itsik Pe'er
Our dataset consists of amino acid sequences, Q8 secondary structures, position specific scoring matrices, multiple sequence alignment co-evolutionary features, backbone atom distance matrices, torsion angles, and 3D coordinates.
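To make the feature list concrete, here is a hedged sketch of what a single record in such a dataset might look like; the field names, shapes, and sequence length are illustrative assumptions, not the dataset's actual schema.

```python
import numpy as np

L = 128          # illustrative sequence length
NUM_AA = 21      # 20 amino acids plus gap/unknown

# One hypothetical record holding the feature types listed above.
record = {
    "sequence": "MKTAYIAKQR",                    # amino acid string
    "q8_secondary_structure": "CCHHHHEEEC",      # 8-class label per residue
    "pssm": np.zeros((L, NUM_AA)),               # position-specific scoring matrix
    "msa_coevolution": np.zeros((L, L)),         # MSA co-evolutionary couplings
    "distance_matrix": np.zeros((L, L)),         # backbone atom distances
    "torsion_angles": np.zeros((L, 2)),          # (phi, psi) per residue
    "coords_3d": np.zeros((L, 3)),               # backbone 3D coordinates
}
```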
no code implementations • AABI Symposium 2019 • Ari Pakman, Yueqi Wang, Liam Paninski
We introduce a neural architecture to perform amortized approximate Bayesian inference over latent random permutations of two sets of objects.
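A hedged sketch of the inference target: given two sets X = (x_1, ..., x_n) and Y = (y_1, ..., y_n) matched by a latent permutation, the posterior has the generic form below (illustrative notation, not necessarily the paper's).

```latex
\[
  p(\pi \mid X, Y) \;=\;
  \frac{p(\pi) \prod_{i=1}^{n} p\bigl(y_{\pi(i)} \mid x_i\bigr)}
       {\sum_{\pi' \in S_n} p(\pi') \prod_{i=1}^{n} p\bigl(y_{\pi'(i)} \mid x_i\bigr)} .
\]
% The sum over all n! permutations in S_n makes exact inference
% intractable, motivating an amortized neural approximation
% q_\theta(\pi \mid X, Y) trained once and reused across inputs.
```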
1 code implementation • NeurIPS Workshop Neuro_AI 2019 • Yueqi Wang, Ari Pakman, Catalin Mitelut, JinHyung Lee, Liam Paninski
We present a novel approach to spike sorting for high-density multielectrode probes using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering.
5 code implementations • ICML 2020 • Ari Pakman, Yueqi Wang, Catalin Mitelut, JinHyung Lee, Liam Paninski
Probabilistic clustering models (or equivalently, mixture models) are basic building blocks in countless statistical models and involve latent random variables over discrete spaces.
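For reference, the generic finite mixture form behind that statement, in standard notation (not specific to the paper): each observation carries a latent discrete assignment, and clustering is posterior inference over those assignments.

```latex
\[
  z_n \sim \mathrm{Categorical}(\pi), \qquad
  x_n \mid z_n = k \;\sim\; p(x \mid \theta_k), \qquad
  p(x_n) \;=\; \sum_{k} \pi_k \, p(x_n \mid \theta_k).
\]
% Probabilistic clustering amounts to posterior inference over the
% latent discrete assignments z_{1:N} given the observations x_{1:N}.
```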
2 code implementations • 17 Nov 2018 • Iddo Drori, Isht Dwivedi, Pranav Shrestha, Jeffrey Wan, Yueqi Wang, Yunchu He, Anthony Mazza, Hugh Krogh-Freeman, Dimitri Leggas, Kendal Sandridge, Linyong Nan, Kaveri Thakoor, Chinmay Joshi, Sonam Goenka, Chen Keasar, Itsik Pe'er
In the spirit of reproducible research, we make our data, models, and code available, aiming to set a gold standard for the purity of training and testing sets.