no code implementations • ACL 2022 • Juncheng Wan, Dongyu Ru, Weinan Zhang, Yong Yu
In this work, we try to improve the span representation by utilizing retrieval-based span-level graphs, connecting spans and entities in the training data based on n-gram features.
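As a rough, self-contained illustration of the retrieval idea (not the paper's exact graph construction), the sketch below links spans that share character n-grams; the span list, n-gram length, and overlap threshold are made up for the example.

    from itertools import combinations

    def char_ngrams(span, n=3):
        """Return the set of character n-grams of a span (lowercased)."""
        s = span.lower()
        return {s[i:i + n] for i in range(len(s) - n + 1)}

    def build_span_graph(spans, n=3, min_overlap=2):
        """Connect two spans if they share at least `min_overlap` n-grams."""
        grams = {s: char_ngrams(s, n) for s in spans}
        edges = []
        for a, b in combinations(spans, 2):
            if len(grams[a] & grams[b]) >= min_overlap:
                edges.append((a, b))
        return edges

    spans = ["New York City", "New York", "San Francisco", "York University"]
    print(build_span_graph(spans))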
no code implementations • 28 Oct 2024 • Muyan Weng, Yunjia Xi, Weiwen Liu, Bo Chen, Jianghao Lin, Ruiming Tang, Weinan Zhang, Yong Yu
It captures the user's preferences and behavior patterns with three modules: a Disentangled Interest Miner to disentangle the user's preferences into interests and disinterests, a Sequential Preference Mixer to learn users' entangled preferences considering the context of feedback, and a Comparison-aware Pattern Extractor to capture the user's behavior patterns within each list.
no code implementations • 25 Oct 2024 • Kangning Zhang, Jiarui Jin, Yingjie Qin, Ruilong Su, Jianghao Lin, Yong Yu, Weinan Zhang
Furthermore, the unique nature of item-specific ID embeddings hinders information exchange among related items, and the storage requirement of ID embeddings grows with the number of items.
no code implementations • 21 Oct 2024 • JunJie Huang, Jiarui Qin, Jianghao Lin, Ziming Feng, Yong Yu, Weinan Zhang
Despite advancements in individual retrieval methods, multi-channel fusion, the process of efficiently merging multi-channel retrieval results, remains underexplored.
no code implementations • 7 Oct 2024 • Jinbo Hou, Kehai Qiu, Zitian Zhang, Yong Yu, Kezhi Wang, Stefano Capolongo, Jiliang Zhang, Zeyang Li, Jie Zhang
This paper aims to simultaneously optimize indoor wireless and daylight performance by adjusting the positions of windows and the beam directions of window-deployed reconfigurable intelligent surfaces (RISs) for RIS-aided outdoor-to-indoor (O2I) networks, utilizing large language models (LLMs) as optimizers.
no code implementations • 25 Sep 2024 • Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, Weinan Zhang
To address this issue, traditional methods attempt to learn a teacher policy with access to privileged information first and then learn a student policy to imitate the teacher's behavior with visual input.
no code implementations • 15 Sep 2024 • Qingyao Li, Wei Xia, Kounianhua Du, Xinyi Dai, Ruiming Tang, Yasheng Wang, Yong Yu, Weinan Zhang
More importantly, we construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search.
1 code implementation • 8 Sep 2024 • Jianghao Lin, Jiaqi Liu, Jiachen Zhu, Yunjia Xi, Chengkai Liu, Yangtian Zhang, Yong Yu, Weinan Zhang
While traditional recommendation techniques have made significant strides in the past decades, they still suffer from limited generalization performance caused by factors like inadequate collaborative signals, weak latent representations, and noisy data.
no code implementations • 20 Aug 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Muyan Weng, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Yong Yu, Weinan Zhang
Recommender systems (RSs) play a pervasive role in today's online services, yet their closed-loop nature constrains their access to open-world knowledge.
1 code implementation • 11 Aug 2024 • Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, Weiwen Liu, Ruiming Tang, Weinan Zhang, Yong Yu
This generation inefficiency stems from the autoregressive nature of LLMs, and a promising direction for acceleration is speculative decoding, a Draft-then-Verify paradigm that increases the number of generated tokens per decoding step.
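To make the Draft-then-Verify paradigm concrete, here is a toy sketch with hypothetical `draft_model` and `target_model` stand-ins; it only illustrates the accept-longest-agreeing-prefix idea, not the paper's actual acceleration method.

    import random

    random.seed(0)
    VOCAB = list("abcde")

    def draft_model(prefix):
        # Cheap, fast proposal (toy: biased random choice over a small vocabulary).
        return random.choice(VOCAB[:3])

    def target_model(prefix):
        # Expensive, accurate model (toy: deterministic rule on the prefix length).
        return VOCAB[len(prefix) % len(VOCAB)]

    def speculative_decode(prompt, steps=4, draft_len=3):
        out = list(prompt)
        for _ in range(steps):
            # Draft phase: propose several tokens cheaply.
            drafts = []
            for _ in range(draft_len):
                drafts.append(draft_model(out + drafts))
            # Verify phase: keep the longest prefix the target model agrees with,
            # then append one token produced by the target model itself.
            accepted = []
            for tok in drafts:
                if target_model(out + accepted) == tok:
                    accepted.append(tok)
                else:
                    break
            accepted.append(target_model(out + accepted))
            out.extend(accepted)
        return "".join(out)

    print(speculative_decode("ab"))

In real systems the draft and target models are a small and a large LM respectively, and verification is typically done in a single batched forward pass of the target model, which is where the per-step token gain comes from.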
no code implementations • 7 Aug 2024 • Jiachen Zhu, Jianghao Lin, Xinyi Dai, Bo Chen, Rong Shan, Jieming Zhu, Ruiming Tang, Yong Yu, Weinan Zhang
Thus, LLMs only see a small fraction of the datasets (e.g., less than 10%) instead of the whole datasets, limiting their exposure to the full training space.
no code implementations • 11 Jul 2024 • JunJie Huang, Jizheng Chen, Jianghao Lin, Jiarui Qin, Ziming Feng, Weinan Zhang, Yong Yu
By detailing the retrieval stage, which is fundamental for effective recommendation, this survey aims to bridge the existing knowledge gap and serve as a cornerstone for researchers interested in optimizing this critical component of cascade recommender systems.
1 code implementation • 6 Jul 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
The preferences embedded in the user's historical dialogue sessions and the current session exhibit continuity and sequentiality, and we refer to CRSs with this characteristic as sequential CRSs.
no code implementations • 2 Jul 2024 • Jianan Zhang, Zhiwei Wei, Boxun Liu, Xiayi Wang, Yong Yu, Rongqing Zhang
In dynamic autonomous driving environments, Artificial Intelligence-Generated Content (AIGC) technology can supplement vehicle perception and decision making by leveraging models' generative and predictive capabilities, and has the potential to enhance motion planning, trajectory prediction and traffic simulation.
1 code implementation • 1 Jul 2024 • Lingyue Fu, Hao Guan, Kounianhua Du, Jianghao Lin, Wei Xia, Weinan Zhang, Ruiming Tang, Yasheng Wang, Yong Yu
Knowledge Tracing (KT) aims to determine whether students will respond correctly to the next question, which is a crucial task in intelligent tutoring systems (ITS).
no code implementations • 4 Jun 2024 • Jianghao Lin, Xinyi Dai, Rong Shan, Bo Chen, Ruiming Tang, Yong Yu, Weinan Zhang
Hence, we propose and verify our core viewpoint: Large Language Models Make Sample-Efficient Recommender Systems.
1 code implementation • 29 May 2024 • Hanye Zhao, Xiaoshen Han, Zhengbang Zhu, Minghuan Liu, Yong Yu, Weinan Zhang
We propose Dynamics Diffusion (DyDiff for short), which can iteratively inject information from the learning policy into DMs.
no code implementations • 23 May 2024 • Lei Zheng, Ning li, Yanhuan Huang, Ruiwen Xu, Weinan Zhang, Yong Yu
In LIFT, the context of a target user's interaction is represented based on i) his own past behaviors and ii) the past and future behaviors of the retrieved similar interactions from other users.
no code implementations • 21 May 2024 • Qingyao Li, Wei Xia, Kounianhua Du, Qiji Zhang, Weinan Zhang, Ruiming Tang, Yong Yu
However, integrating LLMs into concept recommendation presents two urgent challenges: 1) How to construct text for concepts that effectively incorporate the human knowledge system?
no code implementations • 20 May 2024 • Kounianhua Du, Jizheng Chen, Jianghao Lin, Menghui Zhu, Bo Chen, Shuai Li, Yong Yu, Weinan Zhang
In this paper, we propose FINED to Feed INstance-wise information need with Essential and Disentangled parametric knowledge from past data for recommendation enhancement.
no code implementations • 3 May 2024 • Kounianhua Du, Jizheng Chen, Renting Rui, Huacan Chai, Lingyue Fu, Wei Xia, Yasheng Wang, Ruiming Tang, Yong Yu, Weinan Zhang
Despite the intelligence shown by general large language models, their specificity in code generation can still be improved because of the syntactic gap and vocabulary mismatch between natural language and different programming languages.
no code implementations • 24 Apr 2024 • Lei Zheng, Ning li, Weinan Zhang, Yong Yu
Current recommendation systems are significantly affected by a serious issue of temporal data shift, which is the inconsistency between the distribution of historical data and that of online data.
no code implementations • 17 Apr 2024 • Kangning Zhang, Yingjie Qin, Jiarui Jin, Yifan Liu, Ruilong Su, Weinan Zhang, Yong Yu
For sufficient information extraction, we introduce separate dual lines, including Behavior Line and Modal Line, in which the Modal-specific Encoder is applied to empower modal representations.
no code implementations • 15 Apr 2024 • JunJie Huang, Guohao Cai, Jieming Zhu, Zhenhua Dong, Ruiming Tang, Weinan Zhang, Yong Yu
RAR consists of two key sub-modules, which synergistically gather information from a vast pool of look-alike users and recall items, resulting in enriched user representations.
no code implementations • 14 Apr 2024 • Siyuan Feng, Jiawei Liu, Ruihang Lai, Charlie F. Ruan, Yong Yu, Lingming Zhang, Tianqi Chen
Because a traditional bottom-up development pipeline fails to close the gap in a timely manner, we introduce TapML, a top-down approach and tooling designed to streamline the deployment of ML systems on diverse platforms, optimized for developer productivity.
no code implementations • 11 Apr 2024 • Jiachen Zhu, Yichao Wang, Jianghao Lin, Jiarui Qin, Ruiming Tang, Weinan Zhang, Yong Yu
Furthermore, through causal graph analysis, we find that the scenario itself directly influences click behavior; yet existing approaches incorporate data from other scenarios when training the current scenario's model, and directly using those click behaviors introduces prediction biases.
no code implementations • 25 Mar 2024 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Chuhan Wu, Bo Chen, Ruiming Tang, Weinan Zhang, Yong Yu
The rise of large language models (LLMs) has opened new opportunities in Recommender Systems (RSs) by enhancing user behavior modeling and content understanding.
1 code implementation • 19 Mar 2024 • Yifan Liu, Kangning Zhang, Xiangyuan Ren, Yanhua Huang, Jiarui Jin, Yingjie Qin, Ruilong Su, Ruiwen Xu, Yong Yu, Weinan Zhang
Each alignment is characterized by a specific objective function and is integrated into our multimodal recommendation framework.
1 code implementation • 10 Mar 2024 • Ruiwen Zhou, Yingxuan Yang, Muning Wen, Ying Wen, Wenhao Wang, Chunling Xi, Guoqiang Xu, Yong Yu, Weinan Zhang
Many of these works utilize in-context examples to achieve generalization without fine-tuning, while few have considered how to select and effectively utilize these examples.
no code implementations • 8 Mar 2024 • Jingxiao Chen, Ziqin Gong, Minghuan Liu, Jun Wang, Yong Yu, Weinan Zhang
To overcome this problem and handle hard constraints effectively, we propose a novel learning-based method that uses look-ahead information as a feature to improve the legality of TSP with Time Windows (TSPTW) solutions.
1 code implementation • 6 Mar 2024 • Hangyu Wang, Jianghao Lin, Bo Chen, Yang Yang, Ruiming Tang, Weinan Zhang, Yong Yu
However, in order to protect user privacy and optimize utility, it is also crucial for LLMRec to intentionally forget specific user data, which is generally referred to as recommendation unlearning.
no code implementations • 29 Feb 2024 • Jingxiao Chen, Weiji Xie, Weinan Zhang, Yong Yu, Ying Wen
First, without knowledge of the game structure, it is impossible to interact with the opponents and apply self-play, a major learning paradigm for competitive games.
no code implementations • 23 Jan 2024 • Jiarui Jin, Zexue He, Mengyue Yang, Weinan Zhang, Yong Yu, Jun Wang, Julian McAuley
Subsequently, we minimize the mutual information between the observation estimation and the relevance estimation conditioned on the input features.
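A very rough sketch of the factorized click model this line refers to, with a simple batch correlation penalty standing in for the conditional mutual-information term; the paper's actual estimator (and its conditioning on the input features) is not reproduced here, and all values are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64

    # Hypothetical per-sample outputs of two model heads (synthetic values).
    obs_hat = rng.uniform(0.1, 0.9, size=n)   # estimated observation (examination) probability
    rel_hat = rng.uniform(0.1, 0.9, size=n)   # estimated relevance probability
    clicks = rng.integers(0, 2, size=n)       # observed click labels

    # Examination hypothesis: a click requires both observation and relevance.
    click_hat = obs_hat * rel_hat
    bce = -np.mean(clicks * np.log(click_hat) + (1 - clicks) * np.log(1 - click_hat))

    # Crude stand-in for the mutual-information penalty: a squared batch
    # correlation that discourages the two estimates from co-varying.
    corr = np.corrcoef(obs_hat, rel_hat)[0, 1]
    loss = bce + 0.1 * corr ** 2
    print(round(loss, 4))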
no code implementations • 21 Jan 2024 • Jiarui Qin, Weiwen Liu, Ruiming Tang, Weinan Zhang, Yong Yu
A personalized knowledge adaptation unit is devised to effectively exploit the information from the knowledge base by adapting the retrieved knowledge to the target samples.
1 code implementation • 29 Dec 2023 • Xinyuan Wu, Wentao Dong, Hang Lai, Yong Yu, Ying Wen
Quadruped robots have strong adaptability to extreme environments but may also experience faults.
no code implementations • 27 Dec 2023 • Qingyao Li, Lingyue Fu, Weiming Zhang, Xianyu Chen, Jingwei Yu, Wei Xia, Weinan Zhang, Ruiming Tang, Yong Yu
Solving the problems encountered by students poses a significant challenge for traditional deep learning models, as it requires not only a broad spectrum of subject knowledge but also the ability to understand what constitutes a student's individual difficulties.
1 code implementation • 30 Oct 2023 • Hangyu Wang, Jianghao Lin, Xiangyang Li, Bo Chen, Chenxu Zhu, Ruiming Tang, Weinan Zhang, Yong Yu
The traditional ID-based models for CTR prediction take as inputs the one-hot encoded ID features of tabular modality, which capture the collaborative signals via feature interaction modeling.
no code implementations • 13 Oct 2023 • Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, Weinan Zhang
Traditional CTR models convert the multi-field categorical data into ID features via one-hot encoding, and extract the collaborative signals among features.
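For readers unfamiliar with this setup, the following sketch shows a factorization-machine-style second-order interaction over one-hot ID features; the field sizes, embedding dimension, and weights are illustrative and not taken from any of the cited models.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy categorical fields: user_id, item_id, category_id.
    field_sizes = [5, 7, 3]
    offsets = np.cumsum([0] + field_sizes[:-1])
    num_features, embed_dim = sum(field_sizes), 4

    # One embedding vector and one linear weight per one-hot feature.
    V = rng.normal(0, 0.1, size=(num_features, embed_dim))
    w = rng.normal(0, 0.1, size=num_features)
    b = 0.0

    def fm_logit(sample):
        """sample: one index per field, e.g. (user_id, item_id, category_id)."""
        idx = offsets + np.asarray(sample)   # positions of the active one-hot features
        linear = b + w[idx].sum()
        emb = V[idx]                         # (num_fields, embed_dim)
        # Second-order interactions: 0.5 * (square of sum minus sum of squares).
        pairwise = 0.5 * ((emb.sum(0) ** 2) - (emb ** 2).sum(0)).sum()
        return linear + pairwise

    print(fm_logit((2, 4, 1)))  # raw CTR logit for one (user, item, category) sample

Deep CTR models replace the explicit pairwise term with learned interaction layers, but the one-hot-ID-plus-embedding input convention stays the same.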
1 code implementation • 11 Oct 2023 • Mingcheng Chen, Haoran Zhao, Yuxiang Zhao, Hulei Fan, Hongqiao Gao, Yong Yu, Zheng Tian
Data-driven black-box model-based optimization (MBO) problems arise in a great number of practical application scenarios, where the goal is to find a design over the whole space maximizing a black-box target function based on a static offline dataset.
1 code implementation • 11 Oct 2023 • Hangyu Wang, Ting Long, Liang Yin, Weinan Zhang, Wei Xia, Qichen Hong, Dingyin Xia, Ruiming Tang, Yong Yu
Besides, the students' response records contain valuable relational information between questions and knowledge concepts.
1 code implementation • 8 Sep 2023 • JinYuan Wang, Hai Zhao, Zhong Wang, Zeyang Zhu, Jinhao Xie, Yong Yu, Yongjian Fei, Yue Huang, Dawei Cheng
In recent years, great advances in pre-trained language models (PLMs) have sparked considerable research interest and achieved promising performance on dense passage retrieval, which aims at retrieving relevant passages from a massive corpus for given questions.
1 code implementation • 5 Sep 2023 • Lingyue Fu, Huacan Chai, Shuang Luo, Kounianhua Du, Weiming Zhang, Longteng Fan, Jiayi Lei, Renting Rui, Jianghao Lin, Yuchen Fang, Yifan Liu, Jingkuan Wang, Siyuan Qi, Kangning Zhang, Weinan Zhang, Yong Yu
With the emergence of Large Language Models (LLMs), there has been a significant improvement in the programming capabilities of models, attracting growing attention from researchers.
1 code implementation • 22 Aug 2023 • Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, Weinan Zhang
With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP) domains, LLM-enhanced recommender systems have received much attention and are being actively explored.
no code implementations • 5 Aug 2023 • Jiarui Jin, Xianyu Chen, Weinan Zhang, Mengyue Yang, Yang Wang, Yali Du, Yong Yu, Jun Wang
Noticing that these ranking metrics do not consider the effects of contextual dependence among the items in the list, we design a new family of simulation-based ranking metrics, of which existing metrics can be regarded as special cases.
1 code implementation • 3 Aug 2023 • Jianghao Lin, Yanru Qu, Wei Guo, Xinyi Dai, Ruiming Tang, Yong Yu, Weinan Zhang
The large capacity of neural models helps digest such massive amounts of data under the supervised learning paradigm, yet they fail to utilize this substantial data to its full potential, since the 1-bit click signal is not sufficient to guide the model to learn capable representations of features and instances.
no code implementations • 6 Jul 2023 • Yuchen Fang, Zhenggang Tang, Kan Ren, Weiqing Liu, Li Zhao, Jiang Bian, Dongsheng Li, Weinan Zhang, Yong Yu, Tie-Yan Liu
Order execution is a fundamental task in quantitative finance, aiming to complete the acquisition or liquidation of a number of trading orders for specific assets.
1 code implementation • 19 Jun 2023 • Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
In this work, we propose an Open-World Knowledge Augmented Recommendation Framework with Large Language Models, dubbed KAR, to acquire two types of external knowledge from LLMs -- the reasoning knowledge on user preferences and the factual knowledge on items.
1 code implementation • 9 Jun 2023 • Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
In this paper, we conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
no code implementations • 7 Jun 2023 • Xianyu Chen, Jian Shen, Wei Xia, Jiarui Jin, Yakun Song, Weinan Zhang, Weiwen Liu, Menghui Zhu, Ruiming Tang, Kai Dong, Dingyin Xia, Yong Yu
Noticing that existing approaches fail to consider the correlations of concepts in the path, we propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC), which formulates the recommendation task under a set-to-sequence paradigm.
1 code implementation • 27 May 2023 • Zhengbang Zhu, Minghuan Liu, Liyuan Mao, Bingyi Kang, Minkai Xu, Yong Yu, Stefano Ermon, Weinan Zhang
MADiff is realized with an attention-based diffusion model to model the complex coordination among behaviors of multiple agents.
no code implementations • 25 Dec 2022 • Jiarui Jin, Yangkun Wang, Weinan Zhang, Quan Gan, Xiang Song, Yong Yu, Zheng Zhang, David Wipf
However, existing methods lack elaborate designs regarding the distinctions between two tasks that have frequently been overlooked: (i) edges only constitute the topology in the node classification task but can be used as both the topology and the supervision (i.e., labels) in the edge prediction task; (ii) node classification makes a prediction for each individual node, while edge prediction is determined by each pair of nodes.
no code implementations • 15 Dec 2022 • Hang Lai, Weinan Zhang, Xialin He, Chen Yu, Zheng Tian, Yong Yu, Jun Wang
Deep reinforcement learning has recently emerged as an appealing alternative for legged locomotion over multiple terrains by training a policy in physical simulation and then transferring it to the real world (i.e., sim-to-real transfer).
1 code implementation • 17 Nov 2022 • Yunjia Xi, Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Rui Zhang, Ruiming Tang, Yong Yu
Moreover, simply applying a shared network for all the lists fails to capture the commonalities and distinctions in user behaviors on different lists.
no code implementations • 7 Nov 2022 • Zhengbang Zhu, Shenyu Zhang, Yuzheng Zhuang, Yuecheng Liu, Minghuan Liu, Liyuan Mao, Ziqin Gong, Shixiong Kai, Qiang Gu, Bin Wang, Siyuan Cheng, Xinyu Wang, Jianye Hao, Yong Yu
High-quality traffic flow generation is the core module in building simulators for autonomous driving.
no code implementations • 11 Oct 2022 • Zhengbang Zhu, Rongjun Qin, JunJie Huang, Xinyi Dai, Yang Yu, Yong Yu, Weinan Zhang
The increase in measured performance, however, can have two possible causes: a better understanding of user preferences, and a more proactive ability to exploit human bounded rationality and lure users into over-consumption.
1 code implementation • 18 Sep 2022 • Hua Wei, Jingxiao Chen, Xiyang Ji, Hongyang Qin, Minwen Deng, Siqin Li, Liang Wang, Weinan Zhang, Yong Yu, Lin Liu, Lanxiao Huang, Deheng Ye, Qiang Fu, Wei Yang
Compared to other environments studied in most previous work, ours presents new generalization challenges for competitive reinforcement learning.
no code implementations • 3 Aug 2022 • Jiarui Jin, Xianyu Chen, Weinan Zhang, Yuanbo Chen, Zaifan Jiang, Zekun Zhu, Zhewen Su, Yong Yu
Modeling users' multiple behaviors is an essential part of modern e-commerce; a widely adopted application is to jointly optimize click-through rate (CTR) and conversion rate (CVR) predictions.
no code implementations • 26 Jul 2022 • Zeren Huang, WenHao Chen, Weinan Zhang, Chuhan Shi, Furui Liu, Hui-Ling Zhen, Mingxuan Yuan, Jianye Hao, Yong Yu, Jun Wang
Deriving a good variable selection strategy in branch-and-bound is essential for the efficiency of modern mixed-integer programming (MIP) solvers.
2 code implementations • 9 Jul 2022 • Siyuan Feng, Bohan Hou, Hongyi Jin, Wuwei Lin, Junru Shao, Ruihang Lai, Zihao Ye, Lianmin Zheng, Cody Hao Yu, Yong Yu, Tianqi Chen
Finally, we build an end-to-end framework on top of our abstraction to automatically optimize deep learning models for given tensor computation primitives.
1 code implementation • 17 Jun 2022 • Lingyue Fu, Jianghao Lin, Weiwen Liu, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
However, with the development of user interface (UI) design, the layout of displayed items on a result page tends to be multi-block (i.e., multi-list) style instead of a single list, which requires different assumptions to model user behaviors more accurately.
1 code implementation • Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2021 • Jianghao Lin, Weiwen Liu, Xinyi Dai, Weinan Zhang, Shuai Li, Ruiming Tang, Xiuqiang He, Jianye Hao, Yong Yu
To better exploit search logs and model users' behavior patterns, numerous click models are proposed to extract users' implicit interaction feedback.
1 code implementation • 20 Apr 2022 • Yunjia Xi, Weiwen Liu, Jieming Zhu, Xilong Zhao, Xinyi Dai, Ruiming Tang, Weinan Zhang, Rui Zhang, Yong Yu
MIR combines low-level cross-item interaction and high-level set-to-list interaction, where we view the candidate items to be reranked as a set and the users' behavior history in chronological order as a list.
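A simplified illustration of set-to-list interaction (not MIR's actual architecture): each candidate in the set attends over the chronologically ordered history list via scaled dot-product attention; the shapes and random values below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8
    candidates = rng.normal(size=(5, d))   # set of items to be reranked
    history = rng.normal(size=(12, d))     # user's behavior history, in chronological order

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    # Each candidate (query) attends over the history (keys/values).
    scores = candidates @ history.T / np.sqrt(d)   # (5, 12)
    attn = softmax(scores, axis=-1)
    history_aware = attn @ history                 # (5, d) history-aware candidate features

    # Concatenate raw and history-aware features before a reranking score head.
    rerank_input = np.concatenate([candidates, history_aware], axis=-1)
    print(rerank_input.shape)  # (5, 16)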
2 code implementations • 4 Mar 2022 • Minghuan Liu, Zhengbang Zhu, Yuzheng Zhuang, Weinan Zhang, Jianye Hao, Yong Yu, Jun Wang
Recent progress in state-only imitation learning extends the scope of applicability of imitation learning to real-world settings by relieving the need for observing expert actions.
1 code implementation • 25 Feb 2022 • Ting Long, Yutong Xie, Xianyu Chen, Weinan Zhang, Qinxiang Cao, Yong Yu
We thoroughly evaluate our proposed MVG approach in the context of algorithm detection, an important and challenging subfield of PLP.
no code implementations • 9 Feb 2022 • Jiarui Jin, Xianyu Chen, Yuanbo Chen, Weinan Zhang, Renting Rui, Zaifan Jiang, Zhewen Su, Yong Yu
With the prevalence of the live broadcast business nowadays, a new type of recommendation service, called live broadcast recommendation, is widely used in many mobile e-commerce apps.
no code implementations • 7 Feb 2022 • Jiarui Jin, Xianyu Chen, Weinan Zhang, JunJie Huang, Ziming Feng, Yong Yu
More concretely, we first design a search-based module to retrieve a user's relevant historical behaviors, which are then mixed up with her recent records to be fed into a time-aware sequential network for capturing her time-sensitive demands.
no code implementations • 28 Jan 2022 • Ming Zhou, Jingxiao Chen, Ying Wen, Weinan Zhang, Yaodong Yang, Yong Yu, Jun Wang
Policy Space Response Oracle methods (PSRO) provide a general solution to learn Nash equilibrium in two-player zero-sum games but suffer from two drawbacks: (1) the computation inefficiency due to the need for consistent meta-game evaluation via simulations, and (2) the exploration inefficiency due to finding the best response against a fixed meta-strategy at every epoch.
no code implementations • 27 Jan 2022 • Weijun Hong, Menghui Zhu, Minghuan Liu, Weinan Zhang, Ming Zhou, Yong Yu, Peng Sun
Exploration is crucial for training an optimal reinforcement learning (RL) policy, where the key is to discriminate whether a visited state is novel.
1 code implementation • 27 Jan 2022 • Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
Neural architecture search (NAS) has shown encouraging results in automating the architecture design.
no code implementations • COLING 2022 • Juncheng Wan, Jian Yang, Shuming Ma, Dongdong Zhang, Weinan Zhang, Yong Yu, Zhoujun Li
While end-to-end neural machine translation (NMT) has achieved impressive progress, noisy input usually leads models to become fragile and unstable.
1 code implementation • NeurIPS 2021 • Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, Zhenguo Li
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency.
no code implementations • 16 Nov 2021 • Handong Ma, Jiawei Hou, Chenxu Zhu, Weinan Zhang, Ruiming Tang, Jincai Lai, Jieming Zhu, Xiuqiang He, Yong Yu
Pseudo relevance feedback (PRF) automatically performs query expansion based on top-retrieved documents to better represent the user's information need so as to improve the search results.
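A bare-bones sketch of the classical PRF mechanism the sentence describes, expanding a query with frequent terms from the top-ranked documents; the corpus, term weighting, and parameters are toy choices rather than the paper's method.

    from collections import Counter

    def expand_query(query, ranked_docs, top_k_docs=3, num_terms=2):
        """Add the most frequent non-query terms from the top-k retrieved documents."""
        query_terms = query.lower().split()
        counts = Counter()
        for doc in ranked_docs[:top_k_docs]:
            for term in doc.lower().split():
                if term not in query_terms:
                    counts[term] += 1
        expansion = [t for t, _ in counts.most_common(num_terms)]
        return query_terms + expansion

    docs = [
        "deep learning for ad click prediction",
        "click through rate prediction with deep models",
        "gardening tips for spring",
    ]
    print(expand_query("click prediction", docs))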
1 code implementation • EMNLP 2021 • Dongyu Ru, Changzhi Sun, Jiangtao Feng, Lin Qiu, Hao Zhou, Weinan Zhang, Yong Yu, Lei LI
LogiRE treats logic rules as latent variables and consists of two modules: a rule generator and a relation extractor.
Ranked #21 on Relation Extraction on DocRED
1 code implementation • 5 Nov 2021 • Chenxu Zhu, Bo Chen, Weinan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu
To address these three issues mentioned above, we propose Automatic Interaction Machine (AIM) with three core components, namely, Feature Interaction Search (FIS), Interaction Function Search (IFS) and Embedding Dimension Search (EDS), to select significant feature interactions, appropriate interaction functions and necessary embedding dimensions automatically in a unified framework.
no code implementations • 18 Oct 2021 • Yunjia Xi, Weiwen Liu, Xinyi Dai, Ruiming Tang, Weinan Zhang, Qing Liu, Xiuqiang He, Yong Yu
As a critical task for large-scale commercial recommender systems, reranking has shown the potential of improving recommendation results by uncovering mutual influence among items.
no code implementations • ICLR 2022 • Yangkun Wang, Jiarui Jin, Weinan Zhang, Yongyi Yang, Jiuhai Chen, Quan Gan, Yong Yu, Zheng Zhang, Zengfeng Huang, David Wipf
In this regard, it has recently been proposed to use a randomly-selected portion of the training labels as GNN inputs, concatenated with the original node features for making predictions on the remaining labels.
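Independently of any particular GNN, the label-as-input trick can be sketched as follows: one-hot encode the labels of a randomly selected subset of training nodes, leave zeros elsewhere, and concatenate with the node features (array sizes are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)
    num_nodes, feat_dim, num_classes = 10, 4, 3

    x = rng.normal(size=(num_nodes, feat_dim))          # original node features
    y = rng.integers(0, num_classes, size=num_nodes)    # node labels
    train_mask = np.zeros(num_nodes, dtype=bool)
    train_mask[:6] = True                                # first 6 nodes are training nodes

    # Randomly split training nodes: half provide labels as inputs,
    # the remaining labels are used as prediction targets.
    train_idx = np.where(train_mask)[0]
    input_label_idx = rng.choice(train_idx, size=len(train_idx) // 2, replace=False)

    label_channel = np.zeros((num_nodes, num_classes))
    label_channel[input_label_idx, y[input_label_idx]] = 1.0   # one-hot only where allowed

    gnn_input = np.concatenate([x, label_channel], axis=1)      # features fed to the GNN
    print(gnn_input.shape)  # (10, 7)

At inference time the same label channel is typically filled with all known training labels, so the model can propagate them to unlabeled nodes.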
no code implementations • ICLR 2022 • Jiarui Jin, Yangkun Wang, Kounianhua Du, Weinan Zhang, Zheng Zhang, David Wipf, Yong Yu, Quan Gan
Prevailing methods for relation prediction in heterogeneous graphs aim at learning latent representations (i. e., embeddings) of observed nodes and relations, and thus are limited to the transductive setting where the relation types must be known during training.
no code implementations • NeurIPS 2021 • Minghuan Liu, Zhengbang Zhu, Yuzheng Zhuang, Weinan Zhang, Jian Shen, Jianye Hao, Yong Yu, Jun Wang
State-only imitation learning (SOIL) enables agents to learn from massive demonstrations without explicit action or reward information.
no code implementations • ICLR 2022 • Jiarui Jin, Sijin Zhou, Weinan Zhang, Tong He, Yong Yu, Rasool Fakoor
Goal-oriented Reinforcement Learning (GoRL) is a promising approach for scaling up RL techniques on sparse reward environments requiring long horizon planning.
1 code implementation • 16 Aug 2021 • Mingcheng Chen, Zhenghui Wang, Zhiyun Zhao, Weinan Zhang, Xiawei Guo, Jian Shen, Yanru Qu, Jieli Lu, Min Xu, Yu Xu, Tiange Wang, Mian Li, Wei-Wei Tu, Yong Yu, Yufang Bi, Weiqing Wang, Guang Ning
To tackle the above challenges, we employ gradient boosting decision trees (GBDT) to handle data heterogeneity and introduce multi-task learning (MTL) to solve data insufficiency.
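One crude way to read the GBDT-plus-multi-task combination (not the paper's clinical setup) is to share features across related tasks and let a label-rich task's GBDT predictions feed a label-scarce task's model; the scikit-learn sketch below uses synthetic data and hypothetical task definitions.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Shared features for two related prediction tasks.
    X = rng.normal(size=(500, 10))
    y_main = (X[:, 0] + X[:, 1] > 0).astype(int)          # plenty of labels
    y_small = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)   # assume only 50 labels observed

    main_model = GradientBoostingClassifier(random_state=0).fit(X, y_main)

    # Crude cross-task sharing: use the main task's predicted probability
    # as an extra feature for the label-scarce task.
    X_aug = np.hstack([X, main_model.predict_proba(X)[:, [1]]])
    small_model = GradientBoostingClassifier(random_state=0).fit(X_aug[:50], y_small[:50])

    print(small_model.score(X_aug[50:], y_small[50:]))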
1 code implementation • 11 Aug 2021 • Jiarui Qin, Weinan Zhang, Rong Su, Zhirong Liu, Weiwen Liu, Ruiming Tang, Xiuqiang He, Yong Yu
Prediction over tabular data is an essential task in many data science applications such as recommender systems, online advertising, medical treatment, etc.
no code implementations • 28 May 2021 • Zeren Huang, Kerong Wang, Furui Liu, Hui-Ling Zhen, Weinan Zhang, Mingxuan Yuan, Jianye Hao, Yong Yu, Jun Wang
In online A/B testing on product planning problems with more than $10^7$ variables and constraints daily, Cut Ranking has achieved an average speedup ratio of 12.42% over the production solver without any loss of solution accuracy.
1 code implementation • 13 May 2021 • Menghui Zhu, Minghuan Liu, Jian Shen, Zhicheng Zhang, Sheng Chen, Weinan Zhang, Deheng Ye, Yong Yu, Qiang Fu, Wei Yang
In Goal-oriented Reinforcement learning, relabeling the raw goals in past experience to provide agents with hindsight ability is a major solution to the reward sparsity problem.
1 code implementation • 13 Apr 2021 • Xinyi Dai, Jianghao Lin, Weinan Zhang, Shuai Li, Weiwen Liu, Ruiming Tang, Xiuqiang He, Jianye Hao, Jun Wang, Yong Yu
Modern information retrieval systems, including web search, ads placement, and recommender systems, typically rely on learning from user feedback.
2 code implementations • 24 Mar 2021 • Yangkun Wang, Jiarui Jin, Weinan Zhang, Yong Yu, Zheng Zhang, David Wipf
Over the past few years, graph neural networks (GNN) and label propagation-based methods have made significant progress in addressing node classification tasks on graphs.
Ranked #1 on Node Property Prediction on ogbn-proteins
1 code implementation • ICLR 2021 • Yutong Xie, Chence Shi, Hao Zhou, Yuwei Yang, Weinan Zhang, Yong Yu, Lei LI
Searching for novel molecules with desired chemical properties is crucial in drug discovery.
no code implementations • 28 Jan 2021 • Yuchen Fang, Kan Ren, Weiqing Liu, Dong Zhou, Weinan Zhang, Jiang Bian, Yong Yu, Tie-Yan Liu
As a fundamental problem in algorithmic trading, order execution aims at fulfilling a specific trading order, either liquidation or acquirement, for a given instrument.
no code implementations • 1 Jan 2021 • Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei LI
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-ups, they still fall behind their autoregressive counterparts in prediction accuracy.
no code implementations • 1 Jan 2021 • Jiarui Jin, Sijin Zhou, Weinan Zhang, Rasool Fakoor, David Wipf, Tong He, Yong Yu, Zheng Zhang, Alex Smola
In reinforcement learning, a map with states and transitions built based on historical trajectories is often helpful in exploration and exploitation.
no code implementations • 1 Jan 2021 • Jiarui Jin, Cong Chen, Ming Zhou, Weinan Zhang, Rasool Fakoor, David Wipf, Yong Yu, Jun Wang, Alex Smola
Goal-oriented reinforcement learning algorithms are often good at exploration, not exploitation, while episodic algorithms excel at exploitation, not exploration.
1 code implementation • 9 Dec 2020 • Yunfei Liu, Yang Yang, Xianyu Chen, Jian Shen, Haifeng Zhang, Yong Yu
Knowledge tracing (KT) defines the task of predicting whether students can correctly answer questions based on their historical responses.
Ranked #3 on Knowledge Tracing on EdNet
1 code implementation • 7 Dec 2020 • Minkai Xu, Zhiming Zhou, Guansong Lu, Jian Tang, Weinan Zhang, Yong Yu
Wasserstein GANs (WGANs), built upon the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance, are among the most theoretically sound GAN models.
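For reference, the KR duality in question is the standard form

    W(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)] - \mathbb{E}_{x \sim \mathbb{P}_g}[f(x)],

where the critic $f$ must be 1-Lipschitz, which is why WGAN variants enforce this constraint through mechanisms such as weight clipping or a gradient penalty.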
1 code implementation • 25 Nov 2020 • Jiarui Jin, Kounianhua Du, Weinan Zhang, Jiarui Qin, Yuchen Fang, Yong Yu, Zheng Zhang, Alexander J. Smola
Heterogeneous information network (HIN) has been widely used to characterize entities of various types and their complex relations.
no code implementations • 1 Nov 2020 • Xinyi Dai, Jiawei Hou, Qing Liu, Yunjia Xi, Ruiming Tang, Weinan Zhang, Xiuqiang He, Jun Wang, Yong Yu
To this end, we propose a novel ranking framework called U-rank that directly optimizes the expected utility of the ranking list.
no code implementations • NeurIPS 2020 • Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu
The Frank-Wolfe algorithm is a classic method for constrained optimization problems.
1 code implementation • NeurIPS 2020 • Jian Shen, Han Zhao, Weinan Zhang, Yong Yu
However, due to the potential distribution mismatch between simulated data and real data, this could lead to degraded performance.
no code implementations • 9 Oct 2020 • Yong Yu
Although many research works and projects turn to this direction for energy saving, applying it to the optimization problem remains a challenging task.
no code implementations • 17 Sep 2020 • Chang Liu, Huichu Zhang, Wei-Nan Zhang, Guanjie Zheng, Yong Yu
The heavy traffic congestion problem has always been a concern for modern cities.
3 code implementations • 13 Sep 2020 • Yang Yang, Jian Shen, Yanru Qu, Yunfei Liu, Kerong Wang, Yaoming Zhu, Wei-Nan Zhang, Yong Yu
With the rapid development in online education, knowledge tracing (KT) has become a fundamental problem which traces students' knowledge status and predicts their performance on new questions.
Ranked #7 on Knowledge Tracing on EdNet
2 code implementations • ACL 2021 • Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Wei-Nan Zhang, Yong Yu, Lei LI
With GLM, we develop Glancing Transformer (GLAT) for machine translation.
Ranked #68 on Machine Translation on WMT2014 English-German
1 code implementation • ICML 2020 • Hang Lai, Jian Shen, Wei-Nan Zhang, Yong Yu
Model-based reinforcement learning approaches leverage a forward dynamics model to support planning and decision making, which, however, may fail catastrophically if the model is inaccurate.
1 code implementation • 1 Jul 2020 • Jiarui Jin, Jiarui Qin, Yuchen Fang, Kounianhua Du, Wei-Nan Zhang, Yong Yu, Zheng Zhang, Alexander J. Smola
To the best of our knowledge, this is the first work providing an efficient neighborhood-based interaction model in the HIN-based recommendations.
no code implementations • 18 Jun 2020 • Sijin Zhou, Xinyi Dai, Haokun Chen, Wei-Nan Zhang, Kan Ren, Ruiming Tang, Xiuqiang He, Yong Yu
Interactive recommender system (IRS) has drawn huge attention because of its flexible recommendation strategy and the consideration of optimal long-term user experiences.
1 code implementation • 28 May 2020 • Jiarui Qin, Wei-Nan Zhang, Xin Wu, Jiarui Jin, Yuchen Fang, Yong Yu
These retrieved behaviors are then fed into a deep model to make the final prediction instead of simply using the most recent ones.
1 code implementation • 30 Apr 2020 • Jiarui Jin, Yuchen Fang, Wei-Nan Zhang, Kan Ren, Guorui Zhou, Jian Xu, Yong Yu, Jun Wang, Xiaoqiang Zhu, Kun Gai
Position bias is a critical problem in information retrieval when dealing with implicit yet biased user feedback data.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dongyu Ru, Jiangtao Feng, Lin Qiu, Hao Zhou, Mingxuan Wang, Wei-Nan Zhang, Yong Yu, Lei LI
We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently.
1 code implementation • 3 Apr 2020 • Yuxuan Song, Minkai Xu, Lantao Yu, Hao Zhou, Shuo Shao, Yong Yu
In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme.
4 code implementations • 25 Mar 2020 • Bin Liu, Chenxu Zhu, Guilin Li, Wei-Nan Zhang, Jincai Lai, Ruiming Tang, Xiuqiang He, Zhenguo Li, Yong Yu
By implementing a regularized optimizer over the architecture parameters, the model can automatically identify and remove the redundant feature interactions during the training process of the model.
Ranked #31 on Click-Through Rate Prediction on Criteo
no code implementations • 14 Mar 2020 • Guansong Lu, Zhiming Zhou, Jian Shen, Cheng Chen, Wei-Nan Zhang, Yong Yu
Recent advances in large-scale optimal transport have greatly extended its application scenarios in machine learning.
1 code implementation • ICLR 2020 • Minghuan Liu, Ming Zhou, Wei-Nan Zhang, Yuzheng Zhuang, Jun Wang, Wulong Liu, Yong Yu
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents' policies, which can recover agents' policies that can regenerate similar interactions.
no code implementations • 21 Nov 2019 • Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Wei-Nan Zhang, Yong Yu
Domain adaptation aims to leverage the supervision signal of source domain to obtain an accurate model for target domain, where the labels are not available.
1 code implementation • 10 Nov 2019 • Jiarui Qin, Kan Ren, Yuchen Fang, Wei-Nan Zhang, Yong Yu
Various sequential recommendation methods have been proposed to model dynamic user behaviors.
no code implementations • IJCNLP 2019 • Lihua Qian, Lin Qiu, Wei-Nan Zhang, Xin Jiang, Yong Yu
Paraphrasing plays an important role in various natural language processing (NLP) tasks, such as question answering, information retrieval and sentence simplification.
no code implementations • 7 Oct 2019 • Ming Zhou, Jiarui Jin, Wei-Nan Zhang, Zhiwei Qin, Yan Jiao, Chenxi Wang, Guobin Wu, Yong Yu, Jieping Ye
Improving the efficiency of dispatching orders to vehicles is a research hotspot in online ride-hailing systems.
no code implementations • 10 Sep 2019 • Liheng Chen, Hongyi Guo, Yali Du, Fei Fang, Haifeng Zhang, Yaoming Zhu, Ming Zhou, Wei-Nan Zhang, Qing Wang, Yong Yu
Although existing works formulate this problem into a centralized learning with decentralized execution framework, which avoids the non-stationary problem in training, their decentralized execution paradigm limits the agents' capability to coordinate.
2 code implementations • 15 Aug 2019 • Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Wei-Nan Zhang, Lei LI
Our experiments in machine translation show CTNMT gains of up to 3 BLEU on the WMT14 English-German language pair, which even surpasses the previous state-of-the-art pre-training-aided NMT by 1.4 BLEU.
1 code implementation • KDD '19 2019 • Zheyi Pan, Yuxuan Liang, Weifeng Wang, Yong Yu, Yu Zheng, Junbo Zhang
Predicting urban traffic is of great importance to intelligent transportation systems and public safety, yet is very challenging because of two aspects: 1) complex spatio-temporal correlations of urban traffic, including spatial correlations between locations along with temporal correlations among timestamps; 2) diversity of such spatio-temporal correlations, which vary from location to location and depend on the surrounding geographical information, e.g., points of interest and road networks.
1 code implementation • 25 May 2019 • Yaoming Zhu, Juncheng Wan, Zhiming Zhou, Liheng Chen, Lin Qiu, Wei-Nan Zhang, Xin Jiang, Yong Yu
A knowledge base is one of the main forms of representing information in a structured way.
1 code implementation • ACL 2019 • Yunxuan Xiao, Yanru Qu, Lin Qiu, Hao Zhou, Lei LI, Wei-Nan Zhang, Yong Yu
However, many difficult questions require multiple supporting evidence from scattered text among two or more documents.
Ranked #33 on Question Answering on HotpotQA
1 code implementation • 13 May 2019 • Huichu Zhang, Siyuan Feng, Chang Liu, Yaoyao Ding, Yichen Zhu, Zihan Zhou, Wei-Nan Zhang, Yong Yu, Haiming Jin, Zhenhui Li
The most commonly used open-source traffic simulator, SUMO, is, however, not scalable to large road networks and large traffic flows, which hinders the study of reinforcement learning on traffic scenarios.
2 code implementations • 7 May 2019 • Kan Ren, Jiarui Qin, Lei Zheng, Zhengyu Yang, Wei-Nan Zhang, Yong Yu
The problem is formulated as to forecast the probability distribution of market price for each ad auction.
1 code implementation • 2 May 2019 • Kan Ren, Jiarui Qin, Yuchen Fang, Wei-Nan Zhang, Lei Zheng, Weijie Bian, Guorui Zhou, Jian Xu, Yong Yu, Xiaoqiang Zhu, Kun Gai
In order to tackle these challenges, in this paper, we propose a Hierarchical Periodic Memory Network for lifelong sequential modeling with personalized memorization of sequential patterns for each user.
1 code implementation • 2 Apr 2019 • Zhiming Zhou, Jian Shen, Yuxuan Song, Wei-Nan Zhang, Yong Yu
Lipschitz continuity has recently become popular in generative adversarial networks (GANs).
1 code implementation • 4 Mar 2019 • Zhou Fan, Rui Su, Wei-Nan Zhang, Yong Yu
In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks to decompose the structured action space into simpler action spaces along with a critic network to guide the training of all sub-actor networks.
1 code implementation • 15 Feb 2019 • Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Wei-Nan Zhang, Yong Yu, Zhihua Zhang
By contrast, Wasserstein GAN (WGAN), where the discriminative function is restricted to 1-Lipschitz, does not suffer from such a gradient uninformativeness problem.
no code implementations • 15 Nov 2018 • Guansong Lu, Zhiming Zhou, Yuxuan Song, Kan Ren, Yong Yu
CycleGAN is capable of learning a one-to-one mapping between two data distributions without paired examples, achieving the task of unsupervised data translation.
no code implementations • 14 Nov 2018 • Haifeng Zhang, Zilong Guo, Han Cai, Chris Wang, Wei-Nan Zhang, Yong Yu, Wenxin Li, Jun Wang
With the rapid growth of the express industry, intelligent warehouses that employ autonomous robots for carrying parcels have been widely used to handle the vast express volume.
no code implementations • 14 Nov 2018 • Haokun Chen, Xinyi Dai, Han Cai, Wei-Nan Zhang, Xuejian Wang, Ruiming Tang, Yuzhou Zhang, Yong Yu
Reinforcement learning (RL) has recently been introduced to interactive recommender systems (IRS) because of its nature of learning from dynamic interactions and planning for long-run performance.
3 code implementations • ICLR 2019 • Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Wei-Nan Zhang, Yong Yu
Adam has been shown to be unable to converge to the optimal solution in certain cases.
no code implementations • 28 Sep 2018 • Zheyi Pan, Yuxuan Liang, Junbo Zhang, Xiuwen Yi, Yong Yu, Yu Zheng
In this paper, we propose a general framework (HyperST-Net) based on hypernetworks for deep ST models.
no code implementations • 12 Sep 2018 • Liheng Chen, Yanru Qu, Zhenghui Wang, Lin Qiu, Wei-Nan Zhang, Ken Chen, Shaodian Zhang, Yong Yu
TGE-PS uses Pairs Sampling (PS) to improve the sampling strategy of RW, being able to reduce ~99% training samples while preserving competitive performance.
1 code implementation • 7 Sep 2018 • Kan Ren, Jiarui Qin, Lei Zheng, Zhengyu Yang, Wei-Nan Zhang, Lin Qiu, Yong Yu
By capturing the time dependency through modeling the conditional probability of the event for each sample, our method predicts the likelihood of the true event occurrence and estimates the survival rate over time, i.e., the probability of the non-occurrence of the event, for the censored data.
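A compact numeric sketch of the discrete-time survival view described here: per-step conditional event probabilities (hazards) yield a survival curve, and the likelihood differs for observed versus censored samples; the hazard values below are made up.

    import numpy as np

    # Conditional probability that the event occurs at step t, given survival up to t.
    hazard = np.array([0.1, 0.2, 0.3, 0.4])

    # Survival rate over time: probability the event has not occurred by the end of step t.
    survival = np.cumprod(1.0 - hazard)
    print(survival)  # [0.9, 0.72, 0.504, 0.3024]

    def log_likelihood(event_step, observed):
        """Log-likelihood of one sample under the discrete-time model.

        observed=True : the event is seen at `event_step`.
        observed=False: the sample is censored after `event_step` (event not yet seen).
        """
        surv_before = np.prod(1.0 - hazard[:event_step])
        if observed:
            return np.log(surv_before * hazard[event_step])
        return np.log(surv_before * (1.0 - hazard[event_step]))

    print(log_likelihood(2, observed=True), log_likelihood(2, observed=False))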
1 code implementation • 11 Aug 2018 • Kan Ren, Yuchen Fang, Wei-Nan Zhang, Shuhao Liu, Jiajun Li, Ya zhang, Yong Yu, Jun Wang
To achieve this, we utilize sequence-to-sequence prediction for user clicks, and combine both post-view and post-click attribution patterns together for the final conversion estimation.
1 code implementation • 2 Jul 2018 • Zhiming Zhou, Yuxuan Song, Lantao Yu, Hongwei Wang, Jiadong Liang, Wei-Nan Zhang, Zhihua Zhang, Yong Yu
In this paper, we investigate the underlying factor that leads to failure and success in the training of GANs.
8 code implementations • 1 Jul 2018 • Yanru Qu, Bohui Fang, Wei-Nan Zhang, Ruiming Tang, Minzhe Niu, Huifeng Guo, Yong Yu, Xiuqiang He
User response prediction is a crucial component for personalized information retrieval and filtering scenarios, such as recommender system and web search.
3 code implementations • ICML 2018 • Han Cai, Jiacheng Yang, Wei-Nan Zhang, Song Han, Yong Yu
We introduce a new function-preserving transformation for efficient neural architecture search.
no code implementations • NAACL 2018 • Zhenghui Wang, Yanru Qu, Li-Heng Chen, Jian Shen, Wei-Nan Zhang, Shaodian Zhang, Yimei Gao, Gen Gu, Ken Chen, Yong Yu
We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining.
2 code implementations • ICLR 2019 • Sidi Lu, Lantao Yu, Siyuan Feng, Yaoming Zhu, Wei-Nan Zhang, Yong Yu
In this paper, we study the generative models of sequential discrete data.