1 code implementation • ACL 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
Based on this analysis, we propose KGTuner, an efficient two-stage search algorithm that explores HP configurations on a small subgraph in the first stage and transfers the top-performing configurations for fine-tuning on the large full graph in the second stage.
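A minimal sketch of the two-stage idea described above, assuming hypothetical `eval_on_subgraph` and `eval_on_full_graph` callbacks that return validation scores; this is illustrative, not the actual KGTuner implementation:

```python
import random

def two_stage_hp_search(search_space, eval_on_subgraph, eval_on_full_graph,
                        n_stage1_trials=100, top_k=10):
    """Sketch of a two-stage hyper-parameter search in the spirit of KGTuner.

    Stage 1: cheaply evaluate many sampled configurations on a small subgraph.
    Stage 2: re-evaluate only the top-k configurations on the full graph.
    """
    # Stage 1: broad, cheap exploration on the subgraph.
    sampled = [
        {name: random.choice(values) for name, values in search_space.items()}
        for _ in range(n_stage1_trials)
    ]
    stage1 = [(cfg, eval_on_subgraph(cfg)) for cfg in sampled]
    stage1.sort(key=lambda pair: pair[1], reverse=True)

    # Stage 2: re-rank the survivors on the full graph.
    finalists = [cfg for cfg, _ in stage1[:top_k]]
    stage2 = [(cfg, eval_on_full_graph(cfg)) for cfg in finalists]
    return max(stage2, key=lambda pair: pair[1])


# Example usage with a toy search space and dummy evaluators.
space = {"lr": [1e-4, 1e-3, 1e-2], "dim": [128, 256, 512], "batch": [256, 512]}
best_cfg, best_score = two_stage_hp_search(
    space,
    eval_on_subgraph=lambda cfg: -abs(cfg["lr"] - 1e-3) + cfg["dim"] / 1024,
    eval_on_full_graph=lambda cfg: cfg["dim"] / 512 - abs(cfg["lr"] - 1e-3),
)
print(best_cfg, round(best_score, 3))
```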
1 code implementation • 14 Mar 2025 • Weichen Zhan, Zile Zhou, Zhiheng Zheng, Chen Gao, Jinqiang Cui, Yong Li, Xinlei Chen, Xiao-Ping Zhang
We evaluate several SOTA MLLMs across various aspects of spatial reasoning, such as relative and absolute spatial relationships, situational reasoning, and object-centric spatial attributes.
no code implementations • 12 Mar 2025 • Yong Li, Yi Ren, Xuesong Niu, Yi Ding, Xiu-Shen Wei, Cuntai Guan
To prevent excessive feature dropout, a progressive training strategy is used, allowing for selective exclusion of sensitive features at any model layer.
no code implementations • 12 Mar 2025 • Yong Li, Menglin Liu, Zhen Cui, Yi Ding, Yuan Zong, Wenming Zheng, Shiguang Shan, Cuntai Guan
To achieve the feature decoupling, D$^2$CA is trained to disentangle AU and domain factors by assessing the quality of synthesized faces in cross-domain scenarios when either AU or domain attributes are modified.
no code implementations • 9 Mar 2025 • Tao Feng, Yunke Zhang, Xiaochen Fan, Huandong Wang, Yong Li
Experimental results in urban mobility prediction tasks further show that the proposed method can effectively reduce confounding effects and enhance performance of urban computing tasks.
no code implementations • 9 Mar 2025 • Tao Feng, Yunke Zhang, Huandong Wang, Yong Li
The core idea of our EPR-GAIL framework is to model user consumption behaviors as a complex EPR decision process, which consists of purchase, exploration, and preference decisions.
no code implementations • 9 Mar 2025 • Tao Feng, Yunke Zhang, Huandong Wang, Yong Li
However, due to the common issues of missing regional features and a lack of OD flow data, predicting OD flow in developing cities is quite daunting.
no code implementations • 8 Mar 2025 • Baining Zhao, Jianjie Fang, Zichao Dai, Ziyou Wang, Jirong Zha, Weichen Zhang, Chen Gao, Yue Wang, Jinqiang Cui, Xinlei Chen, Yong Li
Large multimodal models exhibit remarkable intelligence, yet their embodied cognitive abilities during motion in open-ended urban 3D space remain to be explored.
1 code implementation • 7 Mar 2025 • Yu Zhang, Shutong Qiao, JiaQi Zhang, Tzu-Heng Lin, Chen Gao, Yong Li
This paper explores the transformative potential of large language model agents in enhancing search and recommendation systems.
1 code implementation • 26 Feb 2025 • Yuwei Yan, Yu Shang, Qingbin Zeng, Yu Li, Keyu Zhao, Zhiheng Zheng, Xuefei Ning, Tianji Wu, Shengen Yan, Yu Wang, Fengli Xu, Yong Li
The AgentSociety Challenge is the first competition in the Web Conference that aims to explore the potential of Large Language Model (LLM) agents in modeling user behavior and enhancing recommender systems on web platforms.
1 code implementation • 26 Feb 2025 • Yinzhou Tang, Jinghua Piao, Huandong Wang, Shaw Rajib, Yong Li
Experiments demonstrate that $I^3$ achieves boosts of 31.94% in AUC, 18.03% in Precision, 29.17% in Recall, and 22.73% in F1-score for predicting infrastructure failures, and a 28.52% reduction in RMSE for cascade volume forecasts, compared to leading models.
no code implementations • 25 Feb 2025 • Hongyi Chen, Jingtao Ding, Jianhai Shu, Xinchun Yu, Xiaojun Liang, Yong Li, Xiao-Ping Zhang
Complex nonlinear system control faces challenges in achieving sample-efficient, reliable performance.
no code implementations • 25 Feb 2025 • Hongyi Chen, Jingtao Ding, Xiaojun Liang, Yong Li, Xiao-Ping Zhang
The source localization problem in graph information propagation is crucial for managing various network disruptions, from misinformation spread to infrastructure failures.
1 code implementation • 24 Feb 2025 • Ruikun Li, Huandong Wang, Qingmin Liao, Yong Li
Energy landscapes play a crucial role in shaping dynamics of many real-world complex systems.
no code implementations • 20 Feb 2025 • Yu Meng, Kaiyuan Li, Chenran Huang, Chen Gao, Xinlei Chen, Yong Li, XiaoPing Zhang
To address this challenge, we propose Per-Layer Per-Head Vision Token Pruning (PLPHP), a two-level fine-grained pruning method including Layer-Level Retention Rate Allocation and Head-Level Vision Token Pruning.
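A toy sketch of such two-level pruning, assuming per-head attention scores over the vision tokens are already available; the function name and tensor layout are illustrative and not PLPHP's actual code:

```python
import torch

def prune_vision_tokens(vision_tokens, attn_scores, layer_retention_rate):
    """Sketch of two-level vision-token pruning in the spirit of PLPHP.

    vision_tokens: (num_heads, num_tokens, dim) per-head token features.
    attn_scores:   (num_heads, num_tokens) attention mass each head assigns
                   to each vision token (assumed to be available).
    layer_retention_rate: fraction of tokens this layer may keep
                          (the layer-level allocation step).
    Returns a list with the kept token tensor for each head.
    """
    num_heads, num_tokens, _ = vision_tokens.shape
    keep = max(1, int(num_tokens * layer_retention_rate))
    kept_per_head = []
    for h in range(num_heads):
        # Head-level pruning: keep the tokens this head attends to most.
        top_idx = torch.topk(attn_scores[h], k=keep).indices
        kept_per_head.append(vision_tokens[h, top_idx])
    return kept_per_head


tokens = torch.randn(8, 576, 64)          # 8 heads, 576 vision tokens
scores = torch.rand(8, 576)               # hypothetical attention scores
kept = prune_vision_tokens(tokens, scores, layer_retention_rate=0.25)
print(kept[0].shape)                       # -> torch.Size([144, 64])
```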
1 code implementation • 18 Feb 2025 • Yong Zhao, Kai Xu, Zhengqiu Zhu, Yue Hu, Zhiheng Zheng, Yingfeng Chen, Yatai Ji, Chen Gao, Yong Li, Jincai Huang
To bridge this gap, we introduce CityEQA, a new task where an embodied agent answers open-vocabulary questions through active exploration in dynamic city spaces.
no code implementations • 18 Feb 2025 • Ruiying Peng, Kaiyuan Li, Weichen Zhang, Chen Gao, Xinlei Chen, Yong Li
Recently, 3D-LLMs, which combine point-cloud encoders with large models, have been proposed to tackle complex tasks in embodied intelligence and scene understanding.
no code implementations • 17 Feb 2025 • Wenrui Xu, Dalin Lyu, Weihang Wang, Jie Feng, Chen Gao, Yong Li
The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities.
no code implementations • 17 Feb 2025 • Bingbing Fan, Lin Chen, Songwei Li, Jian Yuan, Fengli Xu, Pan Hui, Yong Li
We design a Reflective LLM Coder to digest social media content into insights consistent with real-world feedback, and eventually produce a codebook capturing key dimensions that signal segregation experience, such as cultural resonance and appeal, accessibility and convenience, and community engagement and local involvement.
no code implementations • 16 Feb 2025 • Zhi Sheng, Yuan Yuan, Yudi Zhang, Depeng Jin, Yong Li
Existing spatiotemporal prediction models are predominantly deterministic, focusing on primary spatiotemporal patterns.
no code implementations • 14 Feb 2025 • Yong Li, Han Gao
As artificial intelligence becomes more prevalent in our lives, people are enjoying the convenience it brings, but they are also facing hidden threats, such as data poisoning and adversarial attacks.
no code implementations • 12 Feb 2025 • Peiwan Wang, Chenhao Cui, Yong Li
In recent years, the dominance of machine learning in stock market forecasting has been evident.
no code implementations • 12 Feb 2025 • Jinghua Piao, Yuwei Yan, Jun Zhang, Nian Li, Junbo Yan, Xiaochong Lan, Zhihong Lu, Zhiheng Zheng, Jing Yi Wang, Di Zhou, Chen Gao, Fengli Xu, Fang Zhang, Ke Rong, Jun Su, Yong Li
In this paper, we propose AgentSociety, a large-scale social simulator that integrates LLM-driven agents, a realistic societal environment, and a powerful large-scale simulation engine.
no code implementations • 7 Feb 2025 • Yong Li, Yingjing Huang, Gengchen Mai, Fan Zhang
In this work, we propose an innovative self-supervised learning framework that leverages temporal and spatial attributes of street view imagery to learn image representations of the dynamic urban environment for diverse downstream tasks.
no code implementations • 26 Jan 2025 • An Yang, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoyan Huang, Jiandong Jiang, Jianhong Tu, Jianwei Zhang, Jingren Zhou, Junyang Lin, Kai Dang, Kexin Yang, Le Yu, Mei Li, Minmin Sun, Qin Zhu, Rui Men, Tao He, Weijia Xu, Wenbiao Yin, Wenyuan Yu, Xiafei Qiu, Xingzhang Ren, Xinlong Yang, Yong Li, Zhiying Xu, Zipeng Zhang
By leveraging our inference framework, the Qwen2.5-1M models achieve a remarkable 3x to 7x prefill speedup in scenarios with 1 million tokens of context.
no code implementations • 23 Jan 2025 • Zhi Sheng, Yuan Yuan, Jingtao Ding, Yong Li
In this paper, we introduce a novel perspective by emphasizing the role of noise in the denoising process.
no code implementations • 23 Jan 2025 • Qingyue Long, Can Rong, Huandong Wang, Yong Li
Based on these common patterns, we can construct a general framework that enables a single model to address different tasks.
no code implementations • 16 Jan 2025 • Fengli Xu, Qianyue Hao, Zefang Zong, Jingwei Wang, Yunke Zhang, Jingyi Wang, Xiaochong Lan, Jiahui Gong, Tianjian Ouyang, Fanjin Meng, Chenyang Shao, Yuwei Yan, Qinglong Yang, Yiwen Song, Sijian Ren, Xinyuan Hu, Yu Li, Jie Feng, Chen Gao, Yong Li
This innovative paradigm enables LLMs to mimic complex human reasoning processes, such as tree search and reflective thinking.
no code implementations • 16 Jan 2025 • Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, Yong Li
While large language models (LLMs) present significant potential for supporting numerous real-world applications and delivering positive social impacts, they still face significant challenges in terms of the inherent risk of privacy leakage, hallucinated outputs, and value misalignment, and they can be maliciously exploited to generate toxic content for unethical purposes after being jailbroken.
no code implementations • 11 Jan 2025 • En Xu, Can Rong, Jingtao Ding, Yong Li
The evolutionary processes of complex systems contain critical information regarding their functional characteristics.
no code implementations • 7 Jan 2025 • Jinze Yu, Yiqun Wang, Zhengda Lu, Jianwei Guo, Yong Li, Hongxing Qin, Xiaopeng Zhang
In the inference stage, we eliminate the effects of scattering and attenuation on the Gaussians and directly project them onto a 2D plane to obtain a clear view.
1 code implementation • 31 Dec 2024 • Shi-Feng Peng, Guolei Sun, Yong Li, Hongsong Wang, Guo-Sen Xie
In contrast, the large-scale visual model SAM, pre-trained on tens of millions of images from various domains and classes, possesses excellent generalizability.
no code implementations • 25 Dec 2024 • Jiajia Chen, Jiancan Wu, Jiawei Chen, Chongming Gao, Yong Li, Xiang Wang
Collaborative recommendation fundamentally involves learning high-quality user and item representations from interaction data.
no code implementations • 21 Dec 2024 • Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Shen Li, Yong Li, Fei Chao, Zhanpeng Zeng, Rongrong Ji
Data-free quantization (DFQ), which facilitates model quantization without real data to address increasing concerns about data security, has garnered significant attention within the model compression community.
no code implementations • 19 Dec 2024 • Qingyue Long, Yuan Yuan, Yong Li
We propose a universal human mobility prediction model (named UniMob), which can be applied to both individual trajectory and crowd flow.
2 code implementations • 16 Dec 2024 • Yuanzhi Wang, Yong Li, Mengyi Liu, Xiaoya Zhang, Xin Liu, Zhen Cui, Antoni B. Chan
Therefore, the controllability of video editing remains a formidable challenge.
1 code implementation • 10 Dec 2024 • JiaQi Zhang, Chen Gao, Liyuan Zhang, Yong Li, Hongzhi Yin
To address this, we propose Chain-of-User-Thought (COUT), a novel embodied reasoning paradigm that takes a chain of thought from basic action thinking to explicit and implicit personalized preference thought to incorporate personalized factors into autonomous agent learning.
no code implementations • 7 Dec 2024 • Haiyang Jiang, Tong Chen, Wentao Zhang, Nguyen Quoc Viet Hung, Yuan Yuan, Yong Li, Lizhen Cui
Urban flow prediction is a classic spatial-temporal forecasting task that estimates the amount of future traffic flow for a given location.
no code implementations • 6 Dec 2024 • Yuheng Zhang, Yuan Yuan, Jingtao Ding, Jian Yuan, Yong Li
In this paper, we propose CoDiffMob, a diffusion method for urban mobility generation with collaborative noise priors, which emphasizes the critical role of noise in diffusion models for generating mobility data.
no code implementations • 27 Nov 2024 • Dong Han, Yong Li, Joachim Denzler
With the advancement of face reconstruction (FR) systems, privacy-preserving face recognition (PPFR) has gained popularity for its secure face recognition, enhanced facial privacy protection, and robustness to various attacks.
no code implementations • 24 Nov 2024 • Yong Li
Providing optimal portfolio selection for investors has long been an active research topic in academia.
no code implementations • 21 Nov 2024 • Jingtao Ding, Yunke Zhang, Yu Shang, Yuheng Zhang, Zefang Zong, Jie Feng, Yuan Yuan, Hongyuan Su, Nian Li, Nicholas Sukiennik, Fengli Xu, Yong Li
The concept of world models has garnered significant attention due to advancements in multimodal large language models such as GPT-4 and video generation models such as Sora, which are central to the pursuit of artificial general intelligence.
no code implementations • 20 Nov 2024 • Jing Yi Wang, Nicholas Sukiennik, Tong Li, Weikang Su, Qianyue Hao, Jingbo Xu, Zihan Huang, Fengli Xu, Yong Li
The rapid evolution of large language models (LLMs) and their capacity to simulate human cognition and behavior has given rise to LLM-based frameworks and tools that are evaluated and applied based on their ability to perform tasks traditionally performed by humans, namely those involving cognition, decision-making, and social interaction.
1 code implementation • 20 Nov 2024 • Yuan Yuan, Jingtao Ding, Chonghua Han, Depeng Jin, Yong Li
In this paper, we build UniFlow, a foundational model for general urban flow prediction that unifies both grid-based and graph-based data.
1 code implementation • 19 Nov 2024 • Yuan Yuan, Chonghua Han, Jingtao Ding, Depeng Jin, Yong Li
This allows the model to unify both multi-data and multi-task learning, and effectively support a wide range of spatio-temporal applications.
no code implementations • 16 Nov 2024 • Long Peng, Wenbo Li, Jiaming Guo, Xin Di, Haoze Sun, Yong Li, Renjing Pei, Yang Wang, Yang Cao, Zheng-Jun Zha
Real-world image super-resolution (Real SR) aims to generate high-fidelity, detail-rich high-resolution (HR) images from low-resolution (LR) counterparts.
no code implementations • 14 Nov 2024 • Chen-Long Duan, Yong Li, Xiu-Shen Wei, Lin Zhao
In this paper, we introduce a novel pre-training framework for object detection, called Dynamic Rebalancing Contrastive Learning with Dual Reconstruction (2DRCL).
no code implementations • 14 Nov 2024 • Shutong Qiao, Chen Gao, Yong Li, Hongzhi Yin
Recent studies have shown that the rich semantic information in the text can effectively supplement the deficiencies of behavioral data.
no code implementations • 12 Nov 2024 • Weibo Zhao, Yubin Shi, Xinyu Lyu, Wanchen Sui, Shen Li, Yong Li
Quantization stands as a pivotal technique for large language model (LLM) serving, yet it poses significant challenges particularly in achieving effective low-bit quantization.
no code implementations • 9 Nov 2024 • Yu Liu, Shu Yang, Jingtao Ding, Quanming Yao, Yong Li
To tackle this issue, in this paper, we generalize the hyperedge expansion in hypergraph learning and propose an equivalent transformation for HKG modeling, referred to as TransEQ.
no code implementations • 6 Nov 2024 • Zihan Yu, Jingtao Ding, Yong Li
To solve this problem, we propose a novel search objective based on the minimum description length, which reflects the distance from the target and decreases monotonically as the search approaches the correct form of the target formula.
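A rough sketch of an MDL-style objective of this kind: the total description length combines the bits needed to encode the candidate formula and the bits needed to encode its residual errors, so the score shrinks as the candidate approaches the target formula. The exact encoding below is an illustrative assumption, not the paper's objective:

```python
import numpy as np

def mdl_score(expr_complexity, predictions, targets, eps=1e-12):
    """Sketch of a minimum-description-length style search objective.

    Lower is better: a larger formula costs more model bits, and a worse
    fit costs more bits to encode the residual errors.
    """
    n = len(targets)
    residual_var = float(np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)) + eps
    model_bits = expr_complexity * np.log2(expr_complexity + 1)   # formula encoding
    data_bits = 0.5 * n * np.log2(residual_var + 1.0)             # residual encoding
    return model_bits + data_bits


x = np.linspace(0, 1, 100)
y = 3 * x + 2
print(mdl_score(expr_complexity=3, predictions=3 * x + 2, targets=y))    # exact fit
print(mdl_score(expr_complexity=3, predictions=2.5 * x + 2, targets=y))  # worse fit
```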
no code implementations • 6 Nov 2024 • Pengju Wang, Bochao Liu, Weijia Guo, Yong Li, Shiming Ge
By applying knowledge distillation, we effectively transfer global generalized knowledge and historical personalized knowledge to the local model, thus mitigating catastrophic forgetting and enhancing the general performance of personalized models.
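A minimal sketch of a loss combining the two distillation signals, assuming the local model can query both the global aggregated model and its own previous personalized model; the weighting and temperature are illustrative, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def dual_distillation_loss(student_logits, global_logits, past_personal_logits,
                           labels, temperature=2.0, alpha=0.5, beta=0.3):
    """Sketch of distilling global and historical personalized knowledge.

    The local (student) model learns from the ground-truth labels, from the
    global aggregated model (generalized knowledge), and from its own
    previous personalized model (to mitigate catastrophic forgetting).
    """
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    kl_global = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                         F.softmax(global_logits / t, dim=-1),
                         reduction="batchmean") * t * t
    kl_past = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                       F.softmax(past_personal_logits / t, dim=-1),
                       reduction="batchmean") * t * t
    return ce + alpha * kl_global + beta * kl_past


logits = torch.randn(16, 10, requires_grad=True)
loss = dual_distillation_loss(logits, torch.randn(16, 10), torch.randn(16, 10),
                              labels=torch.randint(0, 10, (16,)))
loss.backward()
print(float(loss))
```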
1 code implementation • 4 Nov 2024 • Lei Chen, Chen Gao, Xiaoyi Du, Hengliang Luo, Depeng Jin, Yong Li, Meng Wang
The basic idea of LLM4IDRec is to employ an LLM to augment ID data: if the augmented ID data improves recommendation performance, this demonstrates the LLM's ability to interpret ID data effectively, exploring an innovative way to integrate LLMs into ID-based recommendation.
no code implementations • 3 Nov 2024 • Shuo Tan, Rui Liu, Xianlei Long, Kai Wan, Linqi Song, Yong Li
Deploying Convolutional Neural Networks (CNNs) on resource-constrained devices necessitates efficient management of computational resources, often via distributed systems susceptible to latency from straggler nodes.
no code implementations • 1 Nov 2024 • Junheng Peng, Yingtian Liu, Mingwei Wang, Yong Li, Huating Li
In this paper, we propose a zero-shot self-consistency learning strategy and employ an extremely lightweight network for seismic data reconstruction.
no code implementations • 29 Oct 2024 • Zhilun Zhou, Jingyang Fan, Yu Liu, Fengli Xu, Depeng Jin, Yong Li
Motivated by the remarkable abilities of large language models (LLMs) in commonsense reasoning, embedding, and multi-agent collaboration, in this work, we synergize LLM agents and knowledge graph for socioeconomic prediction.
1 code implementation • 27 Oct 2024 • Yuwei Du, Jie Feng, Jie Zhao, Yong Li
In TrajAgent, we first develop UniEnv, an execution environment with a unified data and model interface, to support the execution and training of various models.
no code implementations • 20 Oct 2024 • Haoye Chai, Xiaoqian Qi, Shiyuan Zhang, Yong Li
Mobile traffic forecasting allows operators to anticipate network dynamics and performance in advance, offering substantial potential for enhancing service quality and improving user experience.
no code implementations • 12 Oct 2024 • Chen Gao, Baining Zhao, Weichen Zhang, Jinzhu Mao, Jun Zhang, Zhiheng Zheng, Fanhang Man, Jianjie Fang, Zile Zhou, Jinqiang Cui, Xinlei Chen, Yong Li
To address it, in this paper, we construct a benchmark platform for embodied intelligence evaluation in real-world city environments.
no code implementations • 11 Oct 2024 • Yuwei Yan, Qingbin Zeng, Zhiheng Zheng, Jingzhe Yuan, Jie Feng, Jun Zhang, Fengli Xu, Yong Li
Besides, the substantial speedup of OpenCity allows us to establish an urban simulation benchmark for LLM agents for the first time, comparing simulated urban activities with real-world data in 6 major cities around the globe.
no code implementations • 11 Oct 2024 • Yanfeng Jiang, Zelan Yang, Bohua Chen, Shen Li, Yong Li, Tao Li
To address the above issue, we propose a novel distribution-driven delta compression framework DeltaDQ, which utilizes Group-wise Dropout and Separate Quantization to achieve ultra-high compression for the delta weight.
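A rough sketch of the group-wise dropout plus separate (per-group) quantization idea, with illustrative group size, drop rate, and bit width; this is not the DeltaDQ implementation:

```python
import torch

def compress_delta(w_finetuned, w_base, group_size=64, drop_rate=0.5, num_bits=4):
    """Sketch of group-wise dropout + separate quantization of delta weights.

    1) Compute the delta between the fine-tuned and base weights.
    2) Split the delta into fixed-size groups and randomly drop whole groups.
    3) Quantize each surviving group separately with its own scale.
    Returns the reconstructed (lossy) delta for inspection.
    """
    delta = (w_finetuned - w_base).flatten()
    pad = (-len(delta)) % group_size
    delta = torch.cat([delta, delta.new_zeros(pad)]).view(-1, group_size)

    keep_mask = torch.rand(delta.shape[0]) >= drop_rate        # group-wise dropout
    qmax = 2 ** (num_bits - 1) - 1
    recon = torch.zeros_like(delta)
    for g in torch.nonzero(keep_mask).flatten():
        scale = delta[g].abs().max() / qmax + 1e-12             # per-group scale
        q = torch.clamp(torch.round(delta[g] / scale), -qmax - 1, qmax)
        recon[g] = q * scale                                     # dequantize
    return recon.flatten()[: w_finetuned.numel()].view_as(w_finetuned)


base = torch.randn(256, 256)
finetuned = base + 0.01 * torch.randn(256, 256)
approx_delta = compress_delta(finetuned, base)
print(approx_delta.shape, float(approx_delta.abs().mean()))
```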
1 code implementation • 10 Oct 2024 • Qianyue Hao, Jingyang Fan, Fengli Xu, Jian Yuan, Yong Li
Second, logical relationships between papers are implicit, and directly prompting an LLM to predict citations may result in surface-level textual similarities rather than the deeper logical reasoning.
1 code implementation • 8 Oct 2024 • Yu Shang, Yu Li, Keyu Zhao, Likai Ma, Jiahe Liu, Fengli Xu, Yong Li
We believe that the modular design space and the AgentSquare search framework offer a platform for fully exploiting the potential of prior successful designs and consolidating the collective efforts of the research community.
no code implementations • 4 Oct 2024 • Dong Han, Salaheldin Mohamed, Yong Li
There is a potential risk that T2I models can generate unsafe images with uncomfortable content.
no code implementations • 27 Sep 2024 • Salaheldin Mohamed, Dong Han, Yong Li
To address this, we leverage the pre-trained UNet from Stable Diffusion to incorporate the target face image directly into the generation process.
no code implementations • 10 Sep 2024 • Junzheng Zhang, Weijia Guo, Bochao Liu, Ruixin Shi, Yong Li, Shiming Ge
After that, the discriminative representation distillation further considers a pretrained face recognizer as the discriminative teacher to supervise the learning of the student head via cross-resolution relational contrastive distillation.
no code implementations • 5 Sep 2024 • Yujie Wang, Shenhan Zhu, Fangcheng Fu, Xupeng Miao, Jie Zhang, Juan Zhu, Fan Hong, Yong Li, Bin Cui
Recent foundation models are capable of handling multiple tasks and multiple data modalities with the unified base model structure and several specialized model components.
no code implementations • 4 Sep 2024 • Shiming Ge, Bochao Liu, Pengju Wang, Yong Li, Dan Zeng
In this work, we propose a discriminative-generative distillation approach to learn privacy-preserving deep models.
no code implementations • 3 Sep 2024 • Hongyuan Su, Yu Zheng, Jingtao Ding, Depeng Jin, Yong Li
The facility location problem (FLP) is a classical combinatorial optimization challenge aimed at strategically laying out facilities to maximize their accessibility.
1 code implementation • 26 Aug 2024 • Jie Feng, Yuwei Du, Jie Zhao, Yong Li
In AgentMove, we first decompose the mobility prediction task into three sub-tasks and then design corresponding modules to complete them: a spatial-temporal memory for individual mobility pattern mining, a world knowledge generator for modeling the effects of urban structure, and a collective knowledge extractor for capturing shared patterns among the population.
1 code implementation • 19 Aug 2024 • Jiahui Gong, Jingtao Ding, Fanjin Meng, Guilong Chen, Hong Chen, Shen Zhao, Haisheng Lu, Yong Li
Mobile devices, especially smartphones, can support rich functions and have developed into indispensable tools in daily life.
1 code implementation • 19 Aug 2024 • Chang Liu, Jingtao Ding, Yiwen Song, Yong Li
Predicting the resilience of complex networks, which represents the ability to retain fundamental functionality amidst external perturbations or internal failures, plays a critical role in understanding and improving real-world complex systems.
1 code implementation • 16 Aug 2024 • Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang
Existing studies have partially addressed this need through an exploration of the pre-training data detection problem, which is an instance of a membership inference attack (MIA).
no code implementations • 8 Aug 2024 • Qingbin Zeng, Qinglong Yang, Shunan Dong, Heming Du, Liang Zheng, Fengli Xu, Yong Li
In the absence of navigation instructions, such abilities are vital for the agent to make high-quality decisions in long-range city navigation.
no code implementations • 31 Jul 2024 • Yongqing Xu, Haoqing Qi, Zhiqin Wang, Xiang Zhang, Yong Li, Tony Q. S. Quek
Mobile crowdsensing (MCS) enables data collection from massive devices to achieve a wide sensing range.
1 code implementation • 23 Jul 2024 • Huandong Wang, Changzheng Gao, Yuchen Wu, Depeng Jin, Lina Yao, Yong Li
In the training process, only the generated trajectories and their rewards obtained based on personal discriminators are shared between the server and devices, whose privacy is further preserved by our proposed perturbation mechanisms with theoretical proof to satisfy differential privacy.
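A minimal sketch of perturbing the locally computed rewards with the standard Gaussian mechanism before sharing them with the server; the clipping bound and (epsilon, delta) values are illustrative, and the paper's own perturbation mechanisms may differ:

```python
import numpy as np

def privatize_rewards(rewards, clip=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Sketch of perturbing locally computed rewards before sharing.

    Each reward is clipped to bound its sensitivity, then Gaussian noise
    calibrated to (epsilon, delta)-differential privacy is added, so the
    server only ever sees noisy rewards.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(np.asarray(rewards, dtype=float), -clip, clip)
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # Gaussian mechanism
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)


local_rewards = [0.3, 0.9, -0.4, 1.7]        # 1.7 will be clipped to 1.0
print(privatize_rewards(local_rewards, epsilon=2.0))
```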
1 code implementation • 16 Jul 2024 • Yu Shang, Yuming Lin, Yu Zheng, Hangyu Fan, Jingtao Ding, Jie Feng, Jiansheng Chen, Li Tian, Yong Li
Toward this problem, we propose UrbanWorld, the first generative urban world model that can automatically create a customized, realistic and interactive 3D urban world with flexible control conditions.
no code implementations • 6 Jul 2024 • Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems.
1 code implementation • 26 Jun 2024 • Yi Ding, Chengxuan Tong, Shuailei Zhang, Muyun Jiang, Yong Li, Kevin Lim Jun Liang, Cuntai Guan
Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn the temporal contextual information.
1 code implementation • 20 Jun 2024 • Jie Feng, Jun Zhang, Tianhui Liu, Xin Zhang, Tianjian Ouyang, Junbo Yan, Yuwei Du, Siqi Guo, Yong Li
The challenge in constructing a systematic evaluation benchmark for urban research lies in the diversity of urban data, the complexity of application scenarios and the highly dynamic nature of the urban environment.
1 code implementation • 20 Jun 2024 • Jie Feng, Yuwei Du, Tianhui Liu, Siqi Guo, Yuming Lin, Yong Li
In this paper, we propose CityGPT, a systematic framework for enhancing the capability of LLMs on understanding urban space and solving the related urban tasks by building a city-scale world model in the model.
1 code implementation • 17 Jun 2024 • Yanxin Xi, Yu Liu, Zhicheng Liu, Sasu Tarkoma, Pan Hui, Yong Li
The Sustainable Development Goals (SDGs) aim to resolve societal challenges, such as eradicating poverty and improving the lives of vulnerable populations in impoverished areas.
1 code implementation • 15 Jun 2024 • Jun Zhang, Wenxuan Ao, Junbo Yan, Depeng Jin, Yong Li
However, existing microscopic traffic simulators are inefficient in large-scale scenarios and thus fail to support the adoption of these methods in large-scale transportation system optimization scenarios.
no code implementations • 11 Jun 2024 • Hao Yu, Zelan Yang, Shen Li, Yong Li, Jianxin Wu
The advent of pre-trained large language models (LLMs) has revolutionized various natural language processing tasks.
no code implementations • 4 Jun 2024 • Jinwei Zeng, Chao Yu, Xinyi Yang, Wenxuan Ao, Qianyue Hao, Jian Yuan, Yong Li, Yu Wang, Huazhong Yang
Our method, CityLight, features a universal representation module that not only aligns the state representations of intersections by reindexing their phases based on their semantics and designing heterogeneity-preserving observations, but also encodes the narrowed relative traffic relation types to project the neighborhood intersections onto a uniform relative traffic impact space.
no code implementations • 3 Jun 2024 • Guangyi Liu, Yongqi Zhang, Yong Li, Quanming Yao
The task of reasoning over Knowledge Graphs (KGs) poses a significant challenge for Large Language Models (LLMs) due to the complex structure and large amounts of irrelevant information.
no code implementations • 30 May 2024 • Shaohua Wang, Xing Xie, Yong Li, Danhuai Guo, Zhi Cai, Yu Liu, Yang Yue, Xiao Pan, Feng Lu, Huayi Wu, Zhipeng Gui, Zhiming Ding, Bolong Zheng, Fuzheng Zhang, Jingyuan Wang, Zhengchao Chen, Hao Lu, Jiayi Li, Peng Yue, Wenhao Yu, Yao Yao, Leilei Sun, Yong Zhang, Longbiao Chen, Xiaoping Du, Xiang Li, Xueying Zhang, Kun Qin, Zhaoya Gong, Weihua Dong, Xiaofeng Meng
This report focuses on spatial data intelligent large models, delving into the principles, methods, and cutting-edge applications of these models.
1 code implementation • 29 May 2024 • Daniele Dell'Erba, Yong Li, Sven Schewe
We propose DFAMiner, a passive learning tool for learning minimal separating deterministic finite automata (DFA) from a set of labelled samples.
no code implementations • 27 May 2024 • Shiming Ge, Weijia Guo, Chenyu Li, Junzheng Zhang, Yong Li, Dan Zeng
First, we leverage a generative encoder pretrained for face inpainting and finetune it to represent masked faces into category-aware descriptors.
no code implementations • 26 May 2024 • Yong Li, Han Gao
On the other hand, the accuracy of models carrying backdoors on normal samples is no different from that of clean models. In this article, by observing the characteristics of backdoor attacks, we provide a new model training method (PT) that freezes part of the model to train a model that can isolate suspicious samples.
1 code implementation • 20 May 2024 • Nian Li, Xin Ban, Cheng Ling, Chen Gao, Lantao Hu, Peng Jiang, Kun Gai, Yong Li, Qingmin Liao
In this paper, we propose to model user Fatigue in interest learning for sequential Recommendations (FRec).
1 code implementation • 25 Apr 2024 • Yi Ding, Yong Li, Hao Sun, Rui Liu, Chengxuan Tong, Chenyu Liu, Xinliang Zhou, Cuntai Guan
Effectively learning the temporal dynamics in electroencephalogram (EEG) signals is challenging yet essential for decoding brain activities using brain-computer interfaces (BCIs).
1 code implementation • 7 Apr 2024 • Xingyu Su, Xiaojie Zhu, Yang Li, Yong Li, Chi Chen, Paulo Esteves-Veríssimo
Amidst the surge in deep learning-based password guessing models, challenges of generating high-quality passwords and reducing duplicate passwords persist.
no code implementations • 16 Mar 2024 • Xiaochong Lan, Yiming Cheng, Li Sheng, Chen Gao, Yong Li
Depression detection aims to determine whether an individual suffers from depression by analyzing their history of posts on social media, which can significantly aid in early detection and intervention.
no code implementations • 6 Mar 2024 • Yong Li, Shiguang Shan
We formulate the self-supervised AU representation learning signals as two-fold: (1) AU representation should be frame-wisely discriminative within a short video clip; (2) facial frames sampled from different identities but showing analogous facial AUs should have consistent AU representations.
no code implementations • 1 Mar 2024 • Jinzhu Mao, Dongyun Zou, Li Sheng, Siyi Liu, Chen Gao, Yue Wang, Yong Li
Identifying critical nodes in networks is a classical decision-making task, and many methods struggle to strike a balance between adaptability and utility.
2 code implementations • 29 Feb 2024 • Xingchen Zou, Yibo Yan, Xixuan Hao, Yuehong Hu, Haomin Wen, Erdong Liu, Junbo Zhang, Yong Li, Tianrui Li, Yu Zheng, Yuxuan Liang
As cities continue to burgeon, Urban Computing emerges as a pivotal discipline for sustainable development by harnessing the power of cross-domain data fusion from diverse sources (e.g., geographical, traffic, social media, and environmental data) and modalities (e.g., spatio-temporal, visual, and textual modalities).
no code implementations • 27 Feb 2024 • Zhilun Zhou, Yuming Lin, Depeng Jin, Yong Li
To deal with the different facilities needs of residents, we initiate a discussion among the residents in each community about the plan, where residents provide feedback based on their profiles.
no code implementations • 23 Feb 2024 • Jingtao Ding, Chang Liu, Yu Zheng, Yunke Zhang, Zihan Yu, Ruikun Li, Hongyi Chen, Jinghua Piao, Huandong Wang, Jiazhen Liu, Yong Li
Complex networks pervade various real-world systems, from the natural environment to human societies.
no code implementations • 21 Feb 2024 • Shutong Qiao, Chen Gao, Junhao Wen, Wei Zhou, Qun Luo, Peixuan Chen, Yong Li
However, constrained by high time and space costs, as well as the brief and anonymous nature of session data, the first LLM recommendation framework suitable for industrial deployment has yet to emerge in the field of SBR.
1 code implementation • 19 Feb 2024 • Yuan Yuan, Jingtao Ding, Jie Feng, Depeng Jin, Yong Li
Urban spatio-temporal prediction is crucial for informed decision-making, such as traffic management, resource optimization, and emergency response.
1 code implementation • 19 Feb 2024 • Yuan Yuan, Chenyang Shao, Jingtao Ding, Depeng Jin, Yong Li
Spatio-temporal modeling is foundational for smart city applications, yet it is often hindered by data scarcity in many cities and regions.
1 code implementation • 18 Feb 2024 • Lin Chen, Fengli Xu, Nian Li, Zhenyu Han, Meng Wang, Yong Li, Pan Hui
ReStruct uses a grammar translator to encode the meta-structures into natural language sentences, and leverages the reasoning power of LLMs to evaluate their semantic feasibility.
no code implementations • 15 Feb 2024 • Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, Yong Li
We find that mechanistic mobility models, such as the gravity model, can effectively map mobility intentions to physical mobility behaviours.
1 code implementation • 15 Feb 2024 • Pengyang Shao, Yonghui Yang, Chen Gao, Lei Chen, Kun Zhang, Chenyi Zhuang, Le Wu, Yong Li, Meng Wang
Specifically, to explore heterogeneity, we propose a semantic-aware graph neural networks based CD model.
no code implementations • 10 Feb 2024 • Yongqing Xu, Yong Li, Tony Q. S. Quek
Cognitive radio (CR) and integrated sensing and communication (ISAC) are both critical technologies for the sixth generation (6G) wireless networks.
1 code implementation • 8 Feb 2024 • Hongyi Chen, Jingtao Ding, Yong Li, Yue Wang, Xiao-Ping Zhang
In this paper, we propose a social physics-informed diffusion model named SPDiff to mitigate the above gap.
1 code implementation • 7 Feb 2024 • Jinwei Zeng, Yu Liu, Jingtao Ding, Jian Yuan, Yong Li
To relieve this issue by utilizing the strong pattern recognition of artificial intelligence, we incorporate two sources of open data representative of the transportation demand and capacity factors, the origin-destination (OD) flow data and the road network data, to build a hierarchical heterogeneous graph learning method for on-road carbon emission estimation (HENCE).
no code implementations • 4 Feb 2024 • Yu Shang, Yu Li, Fengli Xu, Yong Li
If these intuitive thoughts exhibit conflicts, SoT will invoke the reflective reasoning of scaled-up language models to emulate the intervention of System 2, which will override the intuitive thoughts and rectify the reasoning results.
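A toy sketch of this System 1 / System 2 dispatch, with `small_model` and `large_model` as hypothetical callables; SoT's actual conflict detection and reflective prompting are more elaborate:

```python
from collections import Counter

def system_of_thought(question, small_model, large_model, n_samples=3):
    """Sketch of a System 1 / System 2 dispatcher in the spirit of SoT.

    The lightweight model produces several "intuitive" answers (System 1).
    If they agree, the consensus is returned directly; if they conflict,
    the scaled-up model is invoked to reflect and override (System 2).
    """
    intuitive = [small_model(question) for _ in range(n_samples)]
    answer, votes = Counter(intuitive).most_common(1)[0]
    if votes == n_samples:                        # no conflict: trust the fast path
        return answer, "system1"
    return large_model(question), "system2"       # conflict: reflective path


# Toy usage with stub models.
ans, path = system_of_thought(
    "2 + 2 * 3 = ?",
    small_model=lambda q: "8",                    # consistent intuition
    large_model=lambda q: "8 (2 + 6)",
)
print(ans, path)
```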
no code implementations • 2 Feb 2024 • Siyi Liu, Chen Gao, Yong Li
Hyperparameter optimization is critical in modern machine learning, requiring expert knowledge, numerous trials, and high computational and human resources.
no code implementations • 24 Jan 2024 • Dong Han, Yong Li, Joachim Denzler
Lastly, secure multiparty computation is implemented for safely computing the embedding distance during model inference.
no code implementations • 24 Jan 2024 • Zhilun Zhou, Yuming Lin, Yong Li
Participatory urban planning is the mainstream of modern urban planning and involves the active engagement of different stakeholders.
1 code implementation • 16 Jan 2024 • Xin Zhang, Yu Liu, Yuming Lin, Qingmin Liao, Yong Li
Urban villages, defined as informal residential areas in or around urban centers, are characterized by inadequate infrastructures and poor living conditions, closely related to the Sustainable Development Goals (SDGs) on poverty, adequate housing, and sustainable cities.
no code implementations • 19 Dec 2023 • Jie Liu, Yijia Cao, Yong Li, Yixiu Guo, Wei Deng
Accurately predicting line loss rates is vital for effective line loss management in distribution networks, especially over short-term multi-horizons ranging from one hour to one week.
no code implementations • 19 Dec 2023 • Chen Gao, Xiaochong Lan, Nian Li, Yuan Yuan, Jingtao Ding, Zhilun Zhou, Fengli Xu, Yong Li
Finally, since this area is new and quickly evolving, we discuss the open problems and promising future directions.
1 code implementation • 19 Dec 2023 • Fengli Xu, Jun Zhang, Chen Gao, Jie Feng, Yong Li
Urban environments, characterized by their complex, multi-layered networks encompassing physical, social, economic, and environmental dimensions, face significant challenges in the face of rapid urbanization.
1 code implementation • 13 Dec 2023 • Haoran Ye, Jiarui Wang, Helan Liang, Zhiguang Cao, Yong Li, Fanzhang Li
The recent end-to-end neural solvers have shown promise for small-scale routing problems but suffered from limited real-time scaling-up performance.
no code implementations • 13 Dec 2023 • Huan Yan, Yong Li
Intelligent transportation systems are vital for modern traffic management and optimization, greatly improving traffic efficiency and safety.
1 code implementation • 14 Nov 2023 • GuanYu Lin, Chen Gao, Yu Zheng, Jianxin Chang, Yanan Niu, Yang song, Kun Gai, Zhiheng Li, Depeng Jin, Yong Li, Meng Wang
Recently proposed cross-domain sequential recommendation models such as PiNet and DASL have a common drawback of relying heavily on overlapped users in different domains, which limits their usage in practical recommender systems.
1 code implementation • 14 Nov 2023 • GuanYu Lin, Chen Gao, Yu Zheng, Yinfeng Li, Jianxin Chang, Yanan Niu, Yang song, Kun Gai, Zhiheng Li, Depeng Jin, Yong Li
In this paper, we propose a meta-learning method to annotate the unlabeled data from loss and gradient perspectives, which considers the noises in both positive and negative instances.
2 code implementations • 10 Nov 2023 • Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang
However, this hypothesis heavily relies on the overfitting of target models, which will be mitigated by multiple regularization methods and the generalization of LLMs.
1 code implementation • 9 Nov 2023 • Zhenyu Han, Yanxin Xi, Tong Xia, Yu Liu, Yong Li
The built environment supports all daily activities and shapes our health.
no code implementations • 4 Nov 2023 • Yong Li, Zhiguo Zhao, Yunli Chen, Rui Tian
To address these challenges, our research introduces a parallel spatial transformation (PST)-based framework for large-scale, multi-view, multi-sensor scenarios.
no code implementations • 30 Oct 2023 • Huiyao Shu, Ang Wang, Ziji Shi, Hanyu Zhao, Yong Li, Lu Lu
However, a memory-efficient execution plan that includes a reasonable operator execution order and tensor memory layout can significantly increase the models' memory efficiency and reduce overheads from high-level techniques.
1 code implementation • 16 Oct 2023 • Xiaochong Lan, Chen Gao, Depeng Jin, Yong Li
Next, in the reasoning-enhanced debating stage, for each potential stance, we designate a specific LLM-based agent to advocate for it, guiding the LLM to detect logical connections between text features and stance, tackling the second challenge.
Ranked #1 on Stance Detection on P-Stance
1 code implementation • 16 Oct 2023 • Nian Li, Chen Gao, Mingyu Li, Yong Li, Qingmin Liao
Existing agent modeling typically employs predetermined rules or learning-based neural networks for decision-making.
2 code implementations • 13 Oct 2023 • Ling Yue, Yongqi Zhang, Quanming Yao, Yong Li, Xian Wu, Ziheng Zhang, Zhenxi Lin, Yefeng Zheng
Knowledge graph (KG) embedding is a fundamental task in natural language processing, and various methods have been proposed to explore semantic patterns in distinctive ways.
Ranked #1 on Link Property Prediction on ogbl-biokg
no code implementations • 3 Oct 2023 • Xuanming Hu, Wei Fan, Dongjie Wang, Pengyang Wang, Yong Li, Yanjie Fu
We design several experiments to show that our framework outperforms other generative models on the urban planning task.
1 code implementation • NeurIPS 2023 • Haoran Ye, Jiarui Wang, Zhiguang Cao, Helan Liang, Yong Li
As a Neural Combinatorial Optimization method, DeepACO performs better than or on par with problem-specific methods on canonical routing problems.
1 code implementation • 19 Sep 2023 • Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, Shuaiwen Leon Song
Therefore, we propose Flash-LLM for enabling low-cost and highly-efficient large generative model inference with the sophisticated support of unstructured sparsity on high-performance but highly restrictive Tensor Cores.
1 code implementation • 19 Sep 2023 • Zhilun Zhou, Jingtao Ding, Yu Liu, Depeng Jin, Yong Li
To capture the effect of multiple factors on urban flow, such as region features and urban environment, we employ diffusion model to generate urban flow for regions under different conditions.
no code implementations • 28 Aug 2023 • Yuhan Quan, Jingtao Ding, Chen Gao, Nian Li, Lingling Yi, Depeng Jin, Yong Li
Micro-videos platforms such as TikTok are extremely popular nowadays.
no code implementations • 26 Aug 2023 • Jian Zhu, Wen Cheng, Yu Cui, Chang Tang, Yuyang Dai, Yong Li, Lingfang Zeng
Hash representation learning of multi-view heterogeneous data is the key to improving the accuracy of multimedia retrieval.
no code implementations • 25 Aug 2023 • Yunzhu Pan, Nian Li, Chen Gao, Jianxin Chang, Yanan Niu, Yang song, Depeng Jin, Yong Li
Specifically, in short-video recommendation, the easiest-to-collect user feedback is the skipping behavior, which leads to two critical challenges for the recommendation model.
1 code implementation • 23 Aug 2023 • Wenjie Fu, Huandong Wang, Liyuan Zhang, Chen Gao, Yong Li, Tao Jiang
Membership Inference Attack (MIA) identifies whether a record exists in a machine learning model's training set by querying the model.
1 code implementation • 17 Aug 2023 • Yuanzhi Wang, Yong Li, Xiaoya Zhang, Xin Liu, Anbo Dai, Antoni B. Chan, Zhen Cui
In addition to the utilization of a pretrained T2I 2D Unet for spatial content manipulation, we establish a dedicated temporal Unet architecture to faithfully capture the temporal coherence of the input video sequences.
1 code implementation • 8 Aug 2023 • Yunzhu Pan, Chen Gao, Jianxin Chang, Yanan Niu, Yang song, Kun Gai, Depeng Jin, Yong Li
To enhance the robustness of our model, we then introduce a multi-task learning module to simultaneously optimize two kinds of feedback -- passive-negative feedback and traditional randomly-sampled negative feedback.
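A minimal sketch of such a multi-task objective using BPR-style pairwise terms for the two kinds of negatives; the loss form and weights are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def multi_task_feedback_loss(scores_pos, scores_passive_neg, scores_rand_neg,
                             weight_passive=1.0, weight_rand=0.5):
    """Sketch of jointly optimizing two kinds of negative feedback.

    Positive items should score higher than both passive-negative items
    (e.g., skipped videos) and randomly sampled negatives; each pairwise
    term uses a BPR-style loss.
    """
    loss_passive = -F.logsigmoid(scores_pos - scores_passive_neg).mean()
    loss_rand = -F.logsigmoid(scores_pos - scores_rand_neg).mean()
    return weight_passive * loss_passive + weight_rand * loss_rand


pos = torch.randn(32, requires_grad=True)
loss = multi_task_feedback_loss(pos, torch.randn(32), torch.randn(32))
loss.backward()
print(float(loss))
```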
no code implementations • 7 Aug 2023 • Taichi Liu, Chen Gao, Zhenyu Wang, Dong Li, Jianye Hao, Depeng Jin, Yong Li
Graph Neural Network (GNN)-based models have become the mainstream approach for recommender systems.
2 code implementations • 5 Aug 2023 • Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie zhou, Liang He, Xipeng Qiu
Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms such as GitHub (https://github.com/icalk-nlp/EduChat) and Hugging Face (https://huggingface.co/ecnu-icalk).
1 code implementation • 1 Aug 2023 • Yanxin Xi, Yu Liu, Tong Li, Jintao Ding, Yunke Zhang, Sasu Tarkoma, Yong Li, Pan Hui
Satellite imagery in particular is a potential data source for studying sustainable urban development.
no code implementations • 31 Jul 2023 • Xiaochong Lan, Chen Gao, Shiqi Wen, Xiuqi Chen, Yingge Che, Han Zhang, Huazhou Wei, Hengliang Luo, Yong Li
To address these two challenges, we design a system of living NEeds predictiON named NEON, consisting of three phases: feature mining, feature fusion, and multi-task prediction.
1 code implementation • 19 Jul 2023 • Feiran Hu, Peng Wang, Yangyang Li, Chenlong Duan, Zijian Zhu, Fei Wang, Faen Zhang, Yong Li, Xiu-Shen Wei
The SnakeCLEF2023 competition aims at the development of advanced algorithms for snake species identification through the analysis of images and accompanying metadata.
1 code implementation • 19 Jul 2023 • Jinzhu Mao, Liu Cao, Chen Gao, Huandong Wang, Hangyu Fan, Depeng Jin, Yong Li
Understanding and characterizing the vulnerability of urban infrastructures, i.e., the engineering facilities essential for the regular running of cities that naturally exist in the form of networks, is of great value to us.
1 code implementation • 12 Jul 2023 • Yan Wen, Chen Gao, Lingling Yi, Liwei Qiu, Yaqing Wang, Yong Li
Automated Machine Learning (AutoML) techniques have recently been introduced to design Collaborative Filtering (CF) models in a data-specific manner.
no code implementations • 3 Jul 2023 • Xinhang Li, Xiangyu Zhao, Yejing Wang, Yu Liu, Yong Li, Cheng Long, Yong Zhang, Chunxiao Xing
As a representative information retrieval task, site recommendation, which aims at predicting the optimal sites for a brand or an institution to open new branches in an automatic data-driven way, is beneficial and crucial for brand development in modern business.
no code implementations • 17 Jun 2023 • Huandong Wang, Huan Yan, Can Rong, Yuan Yuan, Fenyu Jiang, Zhenyu Han, Hongjie Sui, Depeng Jin, Yong Li
In this survey, we will systematically review the literature on multi-scale simulation of complex systems from the perspective of knowledge and data.
no code implementations • 14 Jun 2023 • Tong Li, Li Yu, Yibo Ma, Tong Duan, Wenzhen Huang, Yan Zhou, Depeng Jin, Yong Li, Tao Jiang
We show that the decline in carbon efficiency leads to a carbon efficiency trap, estimated to cause additional carbon emissions of 23.82 ± 1.07 megatons in China.
no code implementations • 8 Jun 2023 • Can Rong, Jingtao Ding, Zhicheng Liu, Yong Li
Origin-Destination (OD) networks provide an estimate of the flow of people from every region to every other region in the city, which is an important research topic in transportation, urban simulation, etc.
no code implementations • 6 Jun 2023 • Can Rong, Huandong Wang, Yong Li
Origin-destination (OD) flow, which contains valuable population mobility information including direction and volume, is critical in many urban applications, such as urban planning, transportation management, etc.
1 code implementation • 24 May 2023 • Jiajia Chen, Jiancan Wu, Jiawei Chen, Xin Xin, Yong Li, Xiangnan He
Through theoretical analyses, we identify two fundamental factors: (1) with graph convolution (i.e., neighborhood aggregation), popular items exert larger influence than tail items on neighbor users, making the users move towards popular items in the representation space; (2) after multiple times of graph convolution, popular items would affect more high-order neighbors and become more influential.
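A toy numerical illustration of the first factor, using random item embeddings and LightGCN-style mean aggregation; it only shows that the popular item's embedding enters every user's aggregated representation, and is not the paper's full theoretical analysis:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy bipartite interaction graph: item 0 is popular (all users interact
# with it), while items 1 and 2 are tail items with a single neighbor each.
rng = np.random.default_rng(0)
item_emb = rng.normal(size=(3, 4))
interactions = {"u1": [0, 1], "u2": [0, 2], "u3": [0]}

# One round of mean aggregation on the user side: the popular item's
# embedding contributes to every user's representation, whereas each tail
# item only influences its single neighbor.
for user, items in interactions.items():
    agg = item_emb[items].mean(axis=0)
    sims = {f"item{i}": round(cosine(agg, item_emb[i]), 2) for i in range(3)}
    print(user, sims)
```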
1 code implementation • 22 May 2023 • Yu Zheng, Hongyuan Su, Jingtao Ding, Depeng Jin, Yong Li
Existing re-blocking or heuristic methods are either time-consuming which cannot generalize to different slums, or yield sub-optimal road plans in terms of accessibility and construction costs.
2 code implementations • 21 May 2023 • Yuan Yuan, Jingtao Ding, Chenyang Shao, Depeng Jin, Yong Li
To enhance the learning of each step, an elaborated spatio-temporal co-attention module is proposed to capture the interdependence between the event time and space adaptively.
no code implementations • 18 May 2023 • Bochao Liu, Pengju Wang, Weijia Guo, Yong Li, Liansheng Zhuang, Weiping Wang, Shiming Ge
In this work, we present a new private generative modeling approach where samples are generated via Hamiltonian dynamics with gradients of the private dataset estimated by a well-trained network.
no code implementations • 15 May 2023 • Suguman Bansal, Yong Li, Lucas Martinelli Tabajara, Moshe Y. Vardi, Andrew Wells
Our central result is that LTLf model checking of non-terminating transducers is \emph{exponentially harder} than that of terminating transducers.
no code implementations • 10 May 2023 • Wang-Yu Tong, Yong Li, Shou-Dong Ye, An-Jing Wang, Yan-Yan Tang, Mei-Li Li, Zhong-Fan Yu, Ting-Ting Xia, Qing-Yang Liu, Si-Qi Zhu
RNA-guided gene editing based on the CRISPR-Cas system is currently the most effective genome editing technique.
1 code implementation • 6 Apr 2023 • Yu Zhang, Xiaoguang Di, Junde Wu, Rao Fu, Yong Li, Yue Wang, Yanwu Xu, Guohui YANG, Chunhui Wang
In this paper, to make the learning easier in low-light image enhancement, we introduce FLW-Net (Fast and LightWeight Network) and two relative loss functions.
1 code implementation • CVPR 2023 • Yong Li, Yuanzhi Wang, Zhen Cui
Specifically, the representation of each modality is decoupled into two parts, i.e., modality-irrelevant and modality-exclusive spaces, in a self-regression manner.
1 code implementation • 22 Mar 2023 • Haiquan Qiu, Yongqi Zhang, Yong Li, Quanming Yao
These results further inspire us to propose a novel labeling strategy to learn more rules in KG reasoning.
1 code implementation • 15 Mar 2023 • Yuhan Quan, Jingtao Ding, Chen Gao, Lingling Yi, Depeng Jin, Yong Li
Graph Neural Network (GNN)-based social recommendation models improve the prediction accuracy of user preference by leveraging GNNs to exploit the preference similarity contained in social relations.
no code implementations • 3 Mar 2023 • Yongqing Xu, Yong Li, J. Andrew Zhang, Marco Di Renzo, Tony Q. S. Quek
However, due to multiple performance metrics used for communication and sensing, the limited degrees-of-freedom (DoF) in optimizing ISAC systems poses a challenge.
1 code implementation • 25 Feb 2023 • Yu Liu, Xin Zhang, Jingtao Ding, Yanxin Xi, Yong Li
To address such issues, in this paper, we propose a Knowledge-infused Contrastive Learning (KnowCL) model for urban imagery-based socioeconomic prediction.
no code implementations • 22 Feb 2023 • Huiming Chen, Huandong Wang, Qingyue Long, Depeng Jin, Yong Li
Based on these frameworks, we have instantiated FedOpt algorithms.
1 code implementation • 9 Feb 2023 • Yuan Yuan, Huandong Wang, Jingtao Ding, Depeng Jin, Yong Li
To enhance the fidelity and utility of the generated activity data, our core idea is to model the evolution of human needs as the underlying mechanism that drives activity generation in the simulation model.
1 code implementation • 8 Feb 2023 • GuanYu Lin, Chen Gao, Yu Zheng, Jianxin Chang, Yanan Niu, Yang song, Zhiheng Li, Depeng Jin, Yong Li
In this paper, we propose Dual-interest Factorization-heads Attention for Sequential Recommendation (short for DFAR) consisting of feedback-aware encoding layer, dual-interest disentangling layer and prediction layer.
no code implementations • ICLR 2023 • Hongzhi Shi, Jingtao Ding, Yufan Cao, Quanming Yao, Li Liu, Yong Li
The essence of our method is to model the formula skeleton with a message-passing flow, which helps transform the discovery of the skeleton into the search for the message-passing flow.
no code implementations • 1 Feb 2023 • Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin
However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space.
1 code implementation • 2 Jan 2023 • Pengfei Wen, Zhi-Sheng Ye, Yong Li, Shaowei Chen, Pu Xie, Shuai Zhao
Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models.
no code implementations • CVPR 2023 • Wei Huang, Chang Chen, Yong Li, Jiacheng Li, Cheng Li, Fenglong Song, Youliang Yan, Zhiwei Xiong
In contrast to existing methods, we instead utilize the difference between images to build a better representation space, where the distinct style features are extracted and stored as the bases of representation.
1 code implementation • ICCV 2023 • Yuanzhi Wang, Zhen Cui, Yong Li
Recovering missed modality is popular in incomplete multimodal learning because it usually benefits downstream tasks.
no code implementations • 6 Nov 2022 • Zhen Cheng, Tao Wang, Yong Li, Fenglong Song, Chang Chen, Zhiwei Xiong
To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions.
no code implementations • AAAI 2022 • Zefang Zong, Meng Zheng, Yong Li, Depeng Jin
It is of great importance to efficiently provide high-quality solutions to the cooperative PDP.
1 code implementation • 11 Oct 2022 • Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He
Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.
1 code implementation • 18 Sep 2022 • GuanYu Lin, Chen Gao, Yinfeng Li, Yu Zheng, Zhiheng Li, Depeng Jin, Dong Li, Jianye Hao, Yong Li
Such user-centric recommendation will make it impossible for the provider to expose their new items, failing to consider the accordant interactions between user and item dimensions.
no code implementations • 17 Sep 2022 • Xiaocong Chen, Siyu Wang, Lina Yao, Lianyong Qi, Yong Li
It is more challenging to balance exploration and exploitation in DRL-based RS, where the RS agent needs to deeply explore informative trajectories and exploit them efficiently in the context of recommender systems.
1 code implementation • 26 Aug 2022 • Chen Gao, Yu Zheng, Wenjie Wang, Fuli Feng, Xiangnan He, Yong Li
Existing recommender systems extract user preferences based on the correlation in data, such as behavioral correlation in collaborative filtering, feature-feature, or feature-behavior correlation in click-through rate prediction.
1 code implementation • 14 Aug 2022 • Yinfeng Li, Chen Gao, Quanming Yao, Tong Li, Depeng Jin, Yong Li
In particular, we first unify the fine-grained user similarity and the complex matching between user preferences and spatiotemporal activity into a heterogeneous hypergraph.
1 code implementation • 10 Aug 2022 • Yu Zheng, Chen Gao, Jingtao Ding, Lingling Yi, Depeng Jin, Yong Li, Meng Wang
Recommender systems are prone to be misled by biases in the data.
no code implementations • 8 Aug 2022 • Zhilong Chen, Jinghua Piao, Xiaochong Lan, Hancheng Cao, Chen Gao, Zhicong Lu, Yong Li
Recommender systems are playing an increasingly important role in alleviating information overload and supporting users' various needs, e.g., consumption, socialization, and entertainment.
2 code implementations • 5 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
While hyper-parameters (HPs) are important for knowledge graph (KG) learning, existing methods fail to search them efficiently.
no code implementations • 29 Apr 2022 • Xiaoxiao Xu, Zhiwei Fang, Qian Yu, Ruoran Huang, Chaosheng Fan, Yong Li, Yang He, Changping Peng, Zhangang Lin, Jingping Shao
The exposure sequence is being actively studied for user interest modeling in Click-Through Rate (CTR) prediction.
1 code implementation • 26 Apr 2022 • Jie Shuai, Kun Zhang, Le Wu, Peijie Sun, Richang Hong, Meng Wang, Yong Li
Second, while most current models suffer from limited user behaviors, can we exploit the unique self-supervised signals in the review-aware graph to guide two recommendation components better?
1 code implementation • 11 Apr 2022 • Yuanxing Zhang, Langshi Chen, Siran Yang, Man Yuan, Huimin Yi, Jie Zhang, Jiamang Wang, Jianbo Dong, Yunlong Xu, Yue Song, Yong Li, Di Zhang, Wei Lin, Lin Qu, Bo Zheng
However, we observe that GPU devices in training recommender systems are underutilized, and they cannot attain the expected throughput improvement that has been achieved in CV and NLP areas.
no code implementations • 8 Apr 2022 • Yong Li, Heng Wang, Xiang Ye
Motivated by ANIL, we rethink the role of adaption in the feature extractor of CNAPs, which is a state-of-the-art representative few-shot method.
1 code implementation • 26 Feb 2022 • Yu Zheng, Chen Gao, Jianxin Chang, Yanan Niu, Yang song, Depeng Jin, Yong Li
Modeling user's long-term and short-term interests is crucial for accurate recommendation.
no code implementations • 17 Jan 2022 • Liang Chen, Qibiao Peng, Jintang Li, Yang Liu, Jiawei Chen, Yong Li, Zibin Zheng
To address such a challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node.
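A minimal sketch of injecting such a single-node trigger into a graph with networkx; the trigger features, and the assumption that the victim GNN has been trained to misclassify nodes adjacent to the trigger, are hypothetical:

```python
import networkx as nx

def inject_single_node_trigger(graph, target_node, trigger_features):
    """Sketch of a single-node backdoor trigger on a graph.

    A dedicated trigger node with fixed features is added to the graph, and
    the backdoor is "activated" for a target node simply by connecting the
    trigger node to it.
    """
    trigger = "trigger_node"
    graph.add_node(trigger, x=trigger_features)
    graph.add_edge(trigger, target_node)          # activation: one extra edge
    return graph


g = nx.karate_club_graph()
g = inject_single_node_trigger(g, target_node=0, trigger_features=[1.0, 0.0, 0.0])
print(g.number_of_nodes(), g.number_of_edges(), list(g.neighbors("trigger_node")))
```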
no code implementations • 14 Jan 2022 • Baole Ai, Zhou Qin, Wenting Shen, Yong Li
Graph Neural Networks (GNNs) have shown promising results in various tasks, among which link prediction is an important one.
no code implementations • 15 Dec 2021 • Huiming Chen, Huandong Wang, Quanming Yao, Yong Li, Depeng Jin, Qiang Yang
Federated optimization (FedOpt), which targets at collaboratively training a learning model across a large number of distributed clients, is vital for federated learning.
no code implementations • NeurIPS 2021 • Chen Gao, Yinfeng Li, Quanming Yao, Depeng Jin, Yong Li
Deep sparse networks (DSNs), of which the crux is exploring the high-order feature interactions, have become the state-of-the-art on the prediction task with high-sparsity features.
1 code implementation • 5 Nov 2021 • Zirui Zhu, Chen Gao, Xu Chen, Nian Li, Depeng Jin, Yong Li
With the hypergraph convolutional networks, the social relations can be modeled in a more fine-grained manner, which more accurately depicts real users' preferences, and benefits the recommendation performance.
no code implementations • 1 Nov 2021 • Huandong Wang, Qiaohong Yu, Yu Liu, Depeng Jin, Yong Li
Further, a complex embedding model with elaborately designed scoring functions is proposed to measure the plausibility of facts in STKG to solve the knowledge graph completion problem, which considers temporal dynamics of the mobility patterns and utilizes PoI categories as the auxiliary information and background knowledge.
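As a generic stand-in for a complex-valued plausibility score (a ComplEx-style trilinear product), not the paper's elaborately designed scoring functions over the spatiotemporal KG:

```python
import numpy as np

def complex_score(head, rel, tail):
    """Sketch of a complex embedding plausibility score.

    head/rel/tail are complex embedding vectors; the score is the real part
    of the trilinear product <h, r, conj(t)>, with higher values meaning a
    more plausible fact.
    """
    return float(np.real(np.sum(head * rel * np.conj(tail))))


rng = np.random.default_rng(0)
dim = 8
h = rng.normal(size=dim) + 1j * rng.normal(size=dim)
r = rng.normal(size=dim) + 1j * rng.normal(size=dim)
t = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(round(complex_score(h, r, t), 3))
```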
no code implementations • 1 Nov 2021 • Yu Liu, Jingtao Ding, Yong Li
Specifically, motivated by distilled knowledge and rich semantics in KG, we firstly construct an urban KG (UrbanKG) with cities' key elements and semantic relationships captured.
no code implementations • 1 Nov 2021 • Chang Liu, Chen Gao, Depeng Jin, Yong Li
We first conduct information propagation on two sub-graphs to learn the representations of POIs and users.
no code implementations • 8 Oct 2021 • Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang
Recent expeditious developments in deep learning algorithms, distributed training, and even hardware design for large models have enabled training extreme-scale models, such as GPT-3 and Switch Transformer, which possess hundreds of billions or even trillions of parameters.
no code implementations • submitted to TOIS 2021 • Chen Gao, Yu Zheng, Nian Li, Yinfeng Li, Yingrong Qin, Jinghua Piao, Yuhan Quan, Jianxin Chang, Depeng Jin, Xiangnan He, Yong Li
In this survey, we conduct a comprehensive review of the literature on graph neural network-based recommender systems.
1 code implementation • 20 Aug 2021 • Xiawei Guo, Yuhan Quan, Huan Zhao, Quanming Yao, Yong Li, WeiWei Tu
Tabular data prediction (TDP) is one of the most popular industrial applications, and various methods have been designed to improve the prediction performance.
no code implementations • ICCV 2021 • Tao Wang, Yong Li, Jingyang Peng, Yipeng Ma, Xian Wang, Fenglong Song, Youliang Yan
One is a 1D weight vector used for image-level scenario adaptation, the other is a 3D weight map aimed for pixel-wise category fusion.
2 code implementations • 16 Aug 2021 • Yu Zheng, Chen Gao, Liang Chen, Depeng Jin, Yong Li
In recent years, much effort has been devoted to improving the accuracy or relevance of recommendation systems.
1 code implementation • 13 Aug 2021 • Erzhuo Shao, Jie Feng, Yingheng Wang, Tong Xia, Yong Li
Thus, obtaining fine-grained population distribution from coarse-grained distribution becomes an important problem.
no code implementations • 11 Aug 2021 • Yong Li, Yufei Sun, Zhen Cui, Shiguang Shan, Jian Yang
To mitigate racial bias and meanwhile preserve robust FR, we abstract face identity-related representation as a signal denoising problem and propose a progressive cross transformer (PCT) method for fair face recognition.
no code implementations • 10 Aug 2021 • Zefang Zong, Jingwei Wang, Tao Feng, Tong Xia, Depeng Jin, Yong Li
For each problem, we comprehensively introduce the existing DRL solutions.