no code implementations • ICML 2020 • Shuang Li, Lu Wang, Ruizhi Zhang, xiaofu Chang, Xuqin Liu, Yao Xie, Yuan Qi, Le Song
We propose a modeling framework for event data, which excels in the small-data regime and has the ability to incorporate domain knowledge.
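The abstract does not name the model class, but frameworks for event data of this kind are typically built on temporal point processes. As a hedged illustration only (not the paper's framework), the sketch below computes the conditional intensity of a univariate Hawkes process with an exponential kernel; mu, alpha, and beta are hypothetical parameters.

import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an exponential kernel:
    lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta * (t - t_i))."""
    past = np.asarray([ti for ti in event_times if ti < t])
    if past.size == 0:
        return mu
    return mu + alpha * np.sum(beta * np.exp(-beta * (t - past)))

# Intensity right after a burst of events is elevated above the baseline mu,
# and decays back toward mu as time passes.
events = [1.0, 1.2, 1.3]
print(hawkes_intensity(1.4, events))   # > mu, reflecting self-excitation
print(hawkes_intensity(10.0, events))  # ~ mu, excitation has decayed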
no code implementations • 17 Nov 2023 • Yichi Zhang, Shiyao Hu, Chen Jiang, Yuan Cheng, Yuan Qi
The introduction of the Segment Anything Model (SAM) has marked a significant advancement in prompt-driven image segmentation.
1 code implementation • 12 Nov 2023 • Qiang Zhou, Zhibin Wang, Wei Chu, Yinghui Xu, Hao Li, Yuan Qi
Our experiments demonstrate that preserving the positional information of visual embeddings through the pool-adapter is particularly beneficial for tasks like visual grounding.
no code implementations • 25 Oct 2023 • Xiaohui Zhong, Lei Chen, Jun Liu, Chensen Lin, Yuan Qi, Hao Li
State-of-the-art ML-based weather forecast models, such as FuXi, have demonstrated superior statistical forecast performance in comparison to the high-resolution forecasts (HRES) of the European Centre for Medium-Range Weather Forecasts (ECMWF).
no code implementations • 20 Sep 2023 • Chen Jiang, Hong Liu, Xuzheng Yu, Qing Wang, Yuan Cheng, Jia Xu, Zhongyi Liu, Qingpei Guo, Wei Chu, Ming Yang, Yuan Qi
We thereby present a new Triplet Partial Margin Contrastive Learning (TPM-CL) module to construct partial order triplet samples by automatically generating fine-grained hard negatives for matched text-video pairs.
Ranked #3 on Video Retrieval on MSR-VTT-1kA
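The exact TPM-CL loss is not reproduced here; as a rough illustration of the underlying idea (contrastive learning with automatically mined hard negatives for matched text-video pairs), the following PyTorch sketch implements a generic in-batch hardest-negative triplet loss. Tensor names and the margin value are assumptions, not values from the paper.

import torch
import torch.nn.functional as F

def triplet_loss_with_hard_negatives(text_emb, video_emb, margin=0.2):
    """Generic in-batch hardest-negative triplet loss for matched text-video pairs.
    text_emb, video_emb: (B, D) L2-normalized embeddings; row i of each is a matched pair."""
    sim = text_emb @ video_emb.t()                 # (B, B) cosine similarities
    pos = sim.diag()                               # similarity of matched pairs
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf')).max(dim=1).values  # hardest negative per text
    return F.relu(margin + neg - pos).mean()

# Usage with random, normalized embeddings:
t = F.normalize(torch.randn(8, 256), dim=-1)
v = F.normalize(torch.randn(8, 256), dim=-1)
print(triplet_loss_with_hard_negatives(t, v))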
no code implementations • 22 Jun 2023 • Lei Chen, Xiaohui Zhong, Feng Zhang, Yuan Cheng, Yinghui Xu, Yuan Qi, Hao Li
Over the past few years, due to the rapid development of machine learning (ML) models for weather forecasting, state-of-the-art ML models have shown superior performance compared to the European Centre for Medium-Range Weather Forecasts (ECMWF)'s high-resolution forecast (HRES) in 10-day forecasts at a spatial resolution of 0.25 degrees.
1 code implementation • 21 Mar 2023 • Yiqi Liu, Yuan Qi
We discuss estimating conditional treatment effects in regression discontinuity designs with multiple scores.
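For context, a minimal single-score sharp RD estimate can be computed with two local linear fits around the cutoff; the sketch below shows only that textbook special case, not the paper's multi-score conditional-effect estimator, and the bandwidth and data are illustrative.

import numpy as np

def sharp_rd_effect(score, outcome, cutoff=0.0, bandwidth=0.5):
    """One-score sharp RD baseline: fit local linear regressions on each side of the
    cutoff within a bandwidth and take the difference of the fitted values at the cutoff."""
    left = (score < cutoff) & (score > cutoff - bandwidth)
    right = (score >= cutoff) & (score < cutoff + bandwidth)
    fit_l = np.polyfit(score[left], outcome[left], deg=1)
    fit_r = np.polyfit(score[right], outcome[right], deg=1)
    return np.polyval(fit_r, cutoff) - np.polyval(fit_l, cutoff)

# Synthetic data with a known jump of 2.0 at the cutoff.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 5000)
y = 0.5 * s + 2.0 * (s >= 0) + rng.normal(0, 0.1, 5000)
print(round(sharp_rd_effect(s, y), 2))   # approximately 2.0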
no code implementations • 16 Sep 2022 • Jiyan Zhang, Yue Xue, Yuan Qi, Jiale Wang
A new algorithm called accelerated projection-based consensus (APC) has recently emerged as a promising approach to solve large-scale systems of linear equations in a distributed fashion.
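As a rough sketch of the projection-plus-consensus idea (without the acceleration that gives APC its name), the following code solves a consistent linear system split row-wise across machines by alternating local orthogonal projections with an averaging step. It is an illustration under simplifying assumptions, not the APC algorithm itself.

import numpy as np

def averaged_projection_consensus(A_blocks, b_blocks, iters=500):
    """Toy (un-accelerated) projection-based consensus for a consistent system Ax = b
    split row-wise across machines. Each machine projects the shared iterate onto its
    own affine subspace {x : A_i x = b_i}; the iterates are then averaged."""
    n = A_blocks[0].shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        projections = []
        for A_i, b_i in zip(A_blocks, b_blocks):
            # Orthogonal projection onto {x : A_i x = b_i} via the pseudo-inverse.
            projections.append(x + np.linalg.pinv(A_i) @ (b_i - A_i @ x))
        x = np.mean(projections, axis=0)             # consensus (averaging) step
    return x

# Example: a consistent 4x3 system split across two machines.
A = np.random.randn(4, 3); x_true = np.random.randn(3); b = A @ x_true
x_hat = averaged_projection_consensus([A[:2], A[2:]], [b[:2], b[2:]])
print(np.linalg.norm(A @ x_hat - b))   # residual norm; should be small after enough iterations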
no code implementations • 3 Nov 2021 • Ke Tu, Peng Cui, Daixin Wang, Zhiqiang Zhang, Jun Zhou, Yuan Qi, Wenwu Zhu
Knowledge graph is generally incorporated into recommender systems to improve overall performance.
no code implementations • 3 Jul 2021 • Hui Li, Xing Fu, Ruofan Wu, Jinyu Xu, Kai Xiao, xiaofu Chang, Weiqiang Wang, Shuai Chen, Leilei Shi, Tao Xiong, Yuan Qi
Deep learning provides a promising way to extract effective representations from raw data in an end-to-end fashion and has proven its effectiveness in various domains such as computer vision, natural language processing, etc.
no code implementations • 1 Jan 2021 • Yan Feng, Tao Xiong, Ruofan Wu, Yuan Qi
We also initialize a discussion about the role of quantization and perturbation in FL algorithm design with privacy and communication constraints.
no code implementations • 1 Jan 2021 • Tao Xiong, Liang Zhu, Ruofan Wu, Yuan Qi
Specifically, we allow every node in the original graph to interact with a group of memory nodes.
no code implementations • EMNLP 2020 • Kunlong Chen, Weidi Xu, Xingyi Cheng, Zou Xiaochuan, Yuyu Zhang, Le Song, Taifeng Wang, Yuan Qi, Wei Chu
Numerical reasoning over texts, such as addition, subtraction, sorting and counting, is a challenging machine reading comprehension task, since it requires both natural language understanding and arithmetic computation.
Ranked #1 on Question Answering on DROP Test
no code implementations • 8 Aug 2020 • Dongbo Xi, Bowen Song, Fuzhen Zhuang, Yongchun Zhu, Shuai Chen, Tianyi Zhang, Yuan Qi, Qing He
In this paper, we propose the Dual Importance-aware Factorization Machines (DIFM), which exploits the internal field information in users' behavior sequences from dual perspectives, i.e., field value variations and field interactions, simultaneously for fraud detection.
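The dual importance-aware components are specific to the paper; as background, the sketch below computes a plain second-order factorization machine score using the standard O(nk) pairwise-interaction identity. All shapes and values are illustrative.

import numpy as np

def fm_prediction(x, w0, w, V):
    """Second-order factorization machine score for one example.
    x: (n,) feature vector, w0: bias, w: (n,) linear weights, V: (n, k) factor matrix."""
    linear = w0 + w @ x
    # O(nk) pairwise-interaction trick: 0.5 * sum_f [ (V^T x)_f^2 - ((V*V)^T (x*x))_f ]
    pairwise = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return linear + pairwise

n, k = 10, 4
x = np.random.rand(n); w0 = 0.1; w = np.random.randn(n); V = np.random.randn(n, k) * 0.1
print(fm_prediction(x, w0, w, V))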
2 code implementations • NeurIPS 2020 • Ziqi Liu, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou, Shuang Yang, Le Song, Yuan Qi
However, due to the intractable computation of optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
Ranked #1 on Node Property Prediction on ogbn-proteins
no code implementations • 19 May 2020 • Shijun Wang, Baocheng Zhu, Chen Li, Mingzhe Wu, James Zhang, Wei Chu, Yuan Qi
In this paper, we propose a general Riemannian proximal optimization algorithm with guaranteed convergence to solve Markov decision process (MDP) problems.
no code implementations • 19 May 2020 • Shijun Wang, Baocheng Zhu, Lintao Ma, Yuan Qi
In this paper, we consider optimizing a smooth, convex, lower semicontinuous function in Riemannian space with constraints.
1 code implementation • ACL 2020 • Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, Yuan Qi
This paper proposes to incorporate phonological and visual similarity knowledge into language models for CSC via a specialized graph convolutional network (SpellGCN).
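SpellGCN's architecture is not reproduced here; to illustrate the generic building block (graph convolution over a character-similarity graph), the sketch below applies one symmetrically normalized GCN propagation step. The adjacency matrix and dimensions are made up for the example.

import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) over a
    character-similarity graph; rows of H are character embeddings."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# Toy example: 5 characters, 16-dim embeddings, one propagation step.
A = (np.random.rand(5, 5) > 0.6).astype(float); A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
H = np.random.randn(5, 16); W = np.random.randn(16, 16) * 0.1
print(gcn_layer(A, H, W).shape)   # (5, 16)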
no code implementations • 19 Apr 2020 • Chao Qu, Hui Li, Chang Liu, Junwu Xiong, James Zhang, Wei Chu, Weiqiang Wang, Yuan Qi, Le Song
We propose a collaborative multi-agent reinforcement learning algorithm named variational policy propagation (VPP) to learn a joint policy through interactions over agents.
Multi-agent Reinforcement Learning • reinforcement-learning
no code implementations • 1 Apr 2020 • Jianbin Lin, Zhiqiang Zhang, Jun Zhou, Xiaolong Li, Jingli Fang, Yanming Fang, Quan Yu, Yuan Qi
Considering the above challenges and the special scenario in Ant Financial, we try to incorporate default prediction with network information to alleviate the cold-start problem.
no code implementations • 12 Mar 2020 • Zhigang Dai, Jinhua Fu, Qile Zhu, Hengbin Cui, Xiaolong Li, Yuan Qi
We revise the attention distribution to focus on the local and contextual semantic information by incorporating the relative position information between utterances.
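As a hedged illustration of adding relative position information to attention (not the paper's exact formulation), the PyTorch sketch below shifts scaled dot-product attention scores by a learned bias indexed by the clipped relative distance between positions; max_dist and the bias table are assumptions.

import torch

def attention_with_relative_bias(Q, K, V, rel_bias_table, max_dist=8):
    """Scaled dot-product attention whose scores are shifted by a learned bias
    indexed by the (clipped) relative distance between utterance positions.
    Q, K, V: (L, d); rel_bias_table: (2*max_dist + 1,) learnable biases."""
    L, d = Q.shape
    scores = Q @ K.t() / d ** 0.5                               # (L, L) content scores
    pos = torch.arange(L)
    rel = (pos[None, :] - pos[:, None]).clamp(-max_dist, max_dist) + max_dist
    scores = scores + rel_bias_table[rel]                       # add relative-position bias
    return torch.softmax(scores, dim=-1) @ V

L, d = 6, 32
Q = torch.randn(L, d); K = torch.randn(L, d); V = torch.randn(L, d)
bias = torch.zeros(2 * 8 + 1, requires_grad=True)
print(attention_with_relative_bias(Q, K, V, bias).shape)   # (6, 32)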
no code implementations • 10 Mar 2020 • Jianbin Lin, Daixin Wang, Lu Guan, Yin Zhao, Binqiang Zhao, Jun Zhou, Xiaolong Li, Yuan Qi
However, due to the huge number of users and items and the diversity and dynamic nature of user interests, designing a scalable recommendation system that can efficiently produce effective and diverse recommendation results in billion-scale scenarios is still a challenging and open problem for existing methods.
no code implementations • 10 Mar 2020 • Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren
It can attack text classification models with a higher success rate than existing methods, while providing acceptable quality for human readers.
no code implementations • 5 Mar 2020 • Chaochao Chen, Jun Zhou, Bingzhe Wu, Wenjin Fang, Li Wang, Yuan Qi, Xiaolin Zheng
Meanwhile, the public data that need to be accessed by all users are kept by the recommender to reduce the storage costs of users' devices.
no code implementations • 5 Mar 2020 • Cen Chen, Chen Liang, Jianbin Lin, Li Wang, Ziqi Liu, Xinxing Yang, Xiukun Wang, Jun Zhou, Yang Shuang, Yuan Qi
The insurance industry has been creating innovative products around the emerging online shopping activities.
no code implementations • 2 Mar 2020 • Liang Jiang, Zujie Wen, Zhongping Liang, Yafang Wang, Gerard de Melo, Zhe Li, Liangzhuang Ma, Jiaxing Zhang, Xiaolong Li, Yuan Qi
The long-term teacher draws on snapshots from several epochs ago in order to provide steadfast guidance and to guarantee teacher-student differences, while the short-term one yields more up-to-date cues with the goal of enabling higher-quality updates.
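A minimal sketch of the dual-teacher idea, under assumptions: the short-term teacher is modeled here as an EMA of the student and the long-term teacher as a frozen snapshot refreshed every few epochs; the decay and loss weights are hypothetical and the paper's actual update rules may differ.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 4)                     # student
short_term = copy.deepcopy(model)            # updated every step via EMA
long_term = copy.deepcopy(model)             # frozen snapshot from several epochs ago

def update_short_term(student, teacher, ema_decay=0.99):
    """Short-term teacher: exponential moving average of the student's weights."""
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema_decay).add_(p_s, alpha=1 - ema_decay)

def distillation_loss(x, student, short_term, long_term, w_short=0.5, w_long=0.5):
    """Student matches both teachers' soft predictions via KL divergence."""
    s = F.log_softmax(student(x), dim=-1)
    with torch.no_grad():
        t_short = F.softmax(short_term(x), dim=-1)
        t_long = F.softmax(long_term(x), dim=-1)
    return w_short * F.kl_div(s, t_short, reduction='batchmean') + \
           w_long * F.kl_div(s, t_long, reduction='batchmean')

x = torch.randn(8, 16)
print(distillation_loss(x, model, short_term, long_term))
update_short_term(model, short_term)
# The long-term teacher would be refreshed only every few epochs:
# long_term.load_state_dict(model.state_dict())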
1 code implementation • 28 Feb 2020 • Daixin Wang, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming Fang, Quan Yu, Jun Zhou, Shuang Yang, Yuan Qi
Additionally, only very few of the users in the network are labeled, which also poses a great challenge for achieving satisfactory fraud-detection performance using labeled data alone.
no code implementations • 27 Feb 2020 • Chen Liang, Ziqi Liu, Bin Liu, Jun Zhou, Xiaolong Li, Shuang Yang, Yuan Qi
In order to detect and prevent fraudulent insurance claims, we developed a novel data-driven procedure to identify groups of organized fraudsters, one of the major contributors to financial losses, by learning network information.
no code implementations • 27 Feb 2020 • Ziqi Liu, Dong Wang, Qianyu Yu, Zhiqiang Zhang, Yue Shen, Jian Ma, Wenliang Zhong, Jinjie Gu, Jun Zhou, Shuang Yang, Yuan Qi
In this paper, we present a graph representation learning method on top of transaction networks for merchant incentive optimization in mobile payment marketing.
no code implementations • 27 Feb 2020 • Chaochao Chen, Ziqi Liu, Jun Zhou, Xiaolong Li, Yuan Qi, Yujing Jiao, Xingyu Zhong
By analyzing the data, we make two main observations: sales seasonality emerges after we group different types of retail, and the sales (the target to forecast) follow a Tweedie distribution after transformation.
1 code implementation • ICLR 2020 • Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, Le Song
In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN.
no code implementations • 25 Sep 2019 • xiaofu Chang, Jianfeng Wen, Xuqin Liu, Yanming Fang, Le Song, Yuan Qi
To model the dependency between the latent dynamic representations of each node, we define a mixture of temporal cascades in which a node's neural representation depends not only on this node's previous representations but also on the previous representations of related nodes that have interacted with this node.
no code implementations • 18 Jun 2019 • Shaosheng Cao, Xinxing Yang, Cen Chen, Jun Zhou, Xiaolong Li, Yuan Qi
With the explosive growth of e-commerce and the booming of e-payment, detecting online transaction fraud in real time has become increasingly important to Fintech business.
no code implementations • 5 Jun 2019 • Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, Le Song
Effectively combining logic reasoning and probabilistic inference has been a long-standing goal of machine learning: the former has the ability to generalize with small training data, while the latter provides a principled framework for dealing with noisy data.
no code implementations • 7 Feb 2019 • Romain Lopez, Chenchen Li, Xiang Yan, Junwu Xiong, Michael I. Jordan, Yuan Qi, Le Song
We address a practical problem ubiquitous in modern marketing campaigns, in which a central agent tries to learn a policy for allocating strategic financial incentives to customers and observes only bandit feedback.
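The paper's policy-learning method is not shown here; as a standard contextual-bandit baseline for this kind of incentive-allocation problem, the sketch below implements LinUCB with one linear model per incentive level. Arm count, context dimension, and alpha are illustrative.

import numpy as np

class LinUCB:
    """Per-incentive-level LinUCB: a standard contextual-bandit baseline (not the paper's
    algorithm) for allocating one of n_arms incentives to a customer described by a
    d-dimensional context, learning from bandit feedback only."""
    def __init__(self, n_arms, d, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(d) for _ in range(n_arms)]      # ridge Gram matrices
        self.b = [np.zeros(d) for _ in range(n_arms)]    # reward-weighted contexts

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

bandit = LinUCB(n_arms=3, d=5)
x = np.random.rand(5)
arm = bandit.choose(x)
bandit.update(arm, x, reward=1.0)   # observed response to the chosen incentive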
no code implementations • NeurIPS 2019 • Chao Qu, Shie Mannor, Huan Xu, Yuan Qi, Le Song, Junwu Xiong
To the best of our knowledge, it is the first MARL algorithm with convergence guarantee in the control, off-policy and non-linear function approximation setting.
Multi-agent Reinforcement Learning • reinforcement-learning
no code implementations • ICLR 2020 • Hui Li, Kailiang Hu, Zhibang Ge, Tao Jiang, Yuan Qi, Le Song
Counterfactual Regret Minimization (CFR) is a fundamental and effective technique for solving Imperfect Information Games (IIG).
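For reference, the strategy-update rule at the core of CFR is regret matching; a minimal implementation is sketched below (this is the generic rule, not the specific variant studied in the paper).

import numpy as np

def regret_matching_strategy(cumulative_regret):
    """Regret matching: play actions in proportion to their positive cumulative regret;
    fall back to the uniform strategy if no action has positive regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.ones_like(positive) / positive.size

print(regret_matching_strategy(np.array([2.0, -1.0, 0.5])))   # [0.8, 0.0, 0.2]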
1 code implementation • 27 Dec 2018 • Xinshi Chen, Shuang Li, Hui Li, Shaohua Jiang, Yuan Qi, Le Song
There is great interest, as well as many challenges, in applying reinforcement learning (RL) to recommendation systems.
Model-based Reinforcement Learning • Recommendation Systems
1 code implementation • 26 Nov 2018 • Chenchen Li, Xiang Yan, Xiaotie Deng, Yuan Qi, Wei Chu, Le Song, Junlong Qiao, Jianshan He, Junwu Xiong
Uplift modeling aims to directly model the incremental impact of a treatment on an individual response.
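The paper's estimator is not reproduced here; a common two-model ("T-learner") baseline for uplift modeling is sketched below with scikit-learn, using a synthetic example where the true incremental effect is known. Model choice and data are assumptions.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_uplift(X, treatment, y, X_new):
    """Two-model uplift baseline: fit separate outcome models on the treated and
    control groups and score the difference of their predictions on new individuals."""
    model_t = GradientBoostingRegressor().fit(X[treatment == 1], y[treatment == 1])
    model_c = GradientBoostingRegressor().fit(X[treatment == 0], y[treatment == 0])
    return model_t.predict(X_new) - model_c.predict(X_new)   # estimated incremental impact

# Synthetic example: treatment adds +1 to the outcome only when x0 > 0.5.
rng = np.random.default_rng(0)
X = rng.random((2000, 3)); t = rng.integers(0, 2, 2000)
y = X[:, 0] + t * (X[:, 0] > 0.5) + rng.normal(0, 0.1, 2000)
print(t_learner_uplift(X, t, y, X[:5]).round(2))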
no code implementations • 23 Aug 2018 • Chenchen Li, Xiang Yan, Xiaotie Deng, Yuan Qi, Wei Chu, Le Song, Junlong Qiao, Jianshan He, Junwu Xiong
Then we develop a variant of Latent Dirichlet Allocation (LDA) to infer latent variables under the current market environment, which represents the preferences of customers and strategies of competitors.
3 code implementations • 3 Feb 2018 • Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, Le Song, Yuan Qi
We present GeniePath, a scalable approach for learning adaptive receptive fields of neural networks defined on permutation-invariant graph data.
no code implementations • ICML 2017 • Hao Peng, Shandian Zhe, Yuan Qi
Gaussian processes (GPs) are powerful non-parametric function estimators.
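As a minimal worked example of GP regression (standard usage, not the paper's specific contribution), the sketch below fits a GP with an RBF-plus-noise kernel to noisy samples of sin(x) and reports predictive means and standard deviations.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to noisy samples of sin(x) and predict with uncertainty estimates.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=30)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01))
gp.fit(X, y)
X_test = np.linspace(0, 6, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(np.column_stack([X_test.ravel(), mean, std]).round(2))   # x, predictive mean, std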
no code implementations • NeurIPS 2016 • Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-Chih Lee, Zenglin Xu, Yuan Qi, Zoubin Ghahramani
Tensor factorization is a powerful tool to analyse multi-way data.
1 code implementation • 2 Jan 2014 • Hao Peng, Yuan Qi
In this paper, we propose a new Bayesian approach, EigenGP, that learns both basis dictionary elements (eigenfunctions of a GP prior) and prior precisions in a sparse finite model.
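EigenGP's learned basis and precisions are not reproduced here; as background on the eigenfunction idea, the sketch below builds the standard Nystrom feature map from a set of inducing points, which approximates the full RBF kernel matrix. The inducing-point selection and lengthscale are illustrative.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def nystrom_features(X, Z, lengthscale=1.0):
    """Nystrom feature map built from inducing points Z: eigendecompose K_zz and map
    x -> k(x, Z) U diag(1/sqrt(lambda)), so that K(X, X) is approximated by Phi Phi^T.
    This sketches the eigenfunction-basis idea only; EigenGP additionally learns the
    basis points and prior precisions."""
    Kzz = rbf_kernel(Z, Z, lengthscale)
    lam, U = np.linalg.eigh(Kzz + 1e-8 * np.eye(len(Z)))
    return rbf_kernel(X, Z, lengthscale) @ U @ np.diag(1.0 / np.sqrt(lam))

X = np.random.randn(100, 2); Z = X[:10]
Phi = nystrom_features(X, Z)
K_approx = Phi @ Phi.T
print(np.abs(K_approx - rbf_kernel(X, X)).max())   # approximation error of the full kernel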
no code implementations • 12 Nov 2013 • Shandian Zhe, Yuan Qi, Youngja Park, Ian Molloy, Suresh Chari
To overcome this limitation, we present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor decomposition algorithm on MapReduce.
no code implementations • 26 Apr 2013 • Shandian Zhe, Zenglin Xu, Yuan Qi
To unify these two tasks, we present a new sparse Bayesian approach for joint association study and disease diagnosis.
no code implementations • NeurIPS 2011 • Feng Yan, Yuan Qi
To overcome this limitation, we present a novel hybrid model, EigenNet, that uses the eigenstructures of data to guide variable selection.
no code implementations • NeurIPS 2011 • Nan Ding, Yuan Qi, S. V. N. Vishwanathan
Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions.
no code implementations • NeurIPS 2009 • Feng Yan, Ningyi Xu, Yuan Qi
Extensive experiments showed that our parallel inference methods consistently produced LDA models with the same predictive power as sequential training methods, but with a 26x speedup for CGS and a 196x speedup for CVB on a GPU with 30 multiprocessors; moreover, the speedup scales almost linearly with the number of multiprocessors available.