no code implementations • 19 Dec 2024 • David M. Bossens, Shanshan Feng, Yew-Soon Ong
As AI systems are integrated into social networks, there are AI safety concerns that AI-generated content may dominate the web, e.g., in popularity or impact on beliefs.
no code implementations • 11 Dec 2024 • Zihao Han, Baoquan Zhang, Lisai Zhang, Shanshan Feng, Kenghong Lin, Guotao Liang, Yunming Ye, Xiaochen Qi, Guangming Ye
Although these methods have shown superior performance, in this paper we find that 1) existing methods suffer from a schedule-restoration mismatch, i.e., there is usually a large discrepancy between the theoretical schedule and the practical restoration process, which means the schedule is not fully leveraged for restoring images; and 2) the key reason for this issue is that the restoration processes of individual pixels are actually asynchronous, yet existing methods impose a synchronous noise schedule, i.e., all pixels share the same noise schedule.
no code implementations • 19 Nov 2024 • Baoquan Zhang, Shanshan Feng, Bingqi Shan, Xutao Li, Yunming Ye, Yew-Soon Ong
To address this issue, in this paper, we regard the gradient and its flow as meta-knowledge and then propose a novel Neural Ordinary Differential Equation (ODE)-based meta-optimizer, called MetaNODE, to optimize prototypes.
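A minimal sketch, assuming a PyTorch setup, of how an ODE-based meta-optimizer could refine class prototypes by integrating a learned gradient flow with a simple Euler solver; the GradFlowNet module and the integration schedule are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GradFlowNet(nn.Module):
    """Hypothetical meta-learner that predicts the prototype 'gradient flow' d(prototype)/dt."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, prototypes, t):
        # Predict the instantaneous update direction at integration time t.
        return self.net(prototypes)

def refine_prototypes(init_protos, flow_net, n_steps=10, t_max=1.0):
    """Euler integration of the learned ODE, starting from mean-based prototypes."""
    protos = init_protos
    dt = t_max / n_steps
    for i in range(n_steps):
        t = torch.tensor(i * dt)
        protos = protos + dt * flow_net(protos, t)  # one Euler step along the learned flow
    return protos

# Usage: init_protos = class-wise mean of support features, shape [n_classes, dim]
flow_net = GradFlowNet(dim=64)
init_protos = torch.randn(5, 64)
refined = refine_prototypes(init_protos, flow_net)
```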
no code implementations • 16 Nov 2024 • Jiao Liu, Zhu Sun, Shanshan Feng, Yew-Soon Ong
In the evolutionary computing community, the remarkable language-handling capabilities and reasoning power of large language models (LLMs) have significantly enhanced the functionality of evolutionary algorithms (EAs), enabling them to tackle optimization problems involving structured language or program code.
1 code implementation • 29 May 2024 • Lanting Fang, Yulian Yang, Kai Wang, Shanshan Feng, Kaiyu Feng, Jie Gui, Shuliang Wang, Yew-Soon Ong
We aim to predict future links within the dynamic graph while simultaneously providing causal explanations for these predictions.
no code implementations • 9 May 2024 • Zhuoxuan Jiang, Haoyuan Peng, Shanshan Feng, Fan Li, Dongsheng Li
Self-correction is emerging as a promising approach to mitigate the issue of hallucination in Large Language Models (LLMs).
1 code implementation • 2 Apr 2024 • Shanshan Feng, Haoming Lyu, Caishun Chen, Yew-Soon Ong
However, the generalization abilities of LLMs remain unexplored for next POI recommendation, where users' geographical movement patterns need to be extracted.
no code implementations • 12 Mar 2024 • Ziqi Yin, Shanshan Feng, Shang Liu, Gao Cong, Yew Soon Ong, Bin Cui
With the proliferation of spatio-textual data, Top-k KNN spatial keyword queries (TkQs), which return a list of objects based on a ranking function that considers both spatial and textual relevance, have found many real-life applications.
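A minimal sketch of the kind of ranking function a TkQ evaluates, combining spatial proximity with textual relevance; the weight alpha, the normalized distance, and the term-overlap measure are illustrative assumptions (a real system would typically use BM25/TF-IDF and an index rather than a linear scan):

```python
import heapq
import math

def spatial_score(query_loc, obj_loc, max_dist=1.0):
    """Normalized spatial proximity in [0, 1]; closer objects score higher."""
    d = math.dist(query_loc, obj_loc)
    return max(0.0, 1.0 - d / max_dist)

def text_score(query_terms, obj_terms):
    """Simple term-overlap relevance as a stand-in for a proper text relevance model."""
    query_terms, obj_terms = set(query_terms), set(obj_terms)
    return len(query_terms & obj_terms) / max(len(query_terms), 1)

def topk_spatial_keyword(query_loc, query_terms, objects, k=10, alpha=0.5):
    """Return the top-k objects under alpha * spatial + (1 - alpha) * textual."""
    scored = (
        (alpha * spatial_score(query_loc, loc) + (1 - alpha) * text_score(query_terms, terms), oid)
        for oid, loc, terms in objects
    )
    return heapq.nlargest(k, scored)

# objects: iterable of (object_id, (x, y), [keywords])
results = topk_spatial_keyword(
    (0.2, 0.4), ["coffee", "wifi"],
    [("a", (0.1, 0.5), ["coffee"]), ("b", (0.9, 0.9), ["wifi", "coffee"])],
    k=2, alpha=0.6,
)
```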
1 code implementation • 26 Sep 2023 • Huiwei Lin, Shanshan Feng, Baoquan Zhang, Xutao Li, Yunming Ye
Our previous work proposes a novel replay-based method called proxy-based contrastive replay (PCR), which addresses these shortcomings by combining the complementary advantages of both replay manners.
no code implementations • 8 Sep 2023 • Huiwei Lin, Shanshan Feng, Baoquan Zhang, Hongliang Qiao, Xutao Li, Yunming Ye
By decomposing the dot-product logits into an angle factor and a norm factor, we empirically find that the bias problem mainly occurs in the angle factor, which can be used to learn novel knowledge as cosine logits.
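A minimal PyTorch sketch of the decomposition described above (variable names are illustrative): a dot-product logit factors into a norm term and an angle (cosine) term, and the cosine term can then be used on its own as the logit:

```python
import torch

def decompose_logits(features, class_weights, eps=1e-8):
    """Split dot-product logits into norm and angle factors.

    dot(w, x) = ||w|| * ||x|| * cos(theta)  ->  norm factor * angle factor
    """
    dot_logits = features @ class_weights.t()                                      # [B, C]
    norm_factor = features.norm(dim=1, keepdim=True) * class_weights.norm(dim=1)   # [B, C]
    angle_factor = dot_logits / norm_factor.clamp_min(eps)                         # cosine logits
    return dot_logits, norm_factor, angle_factor

features = torch.randn(4, 64)        # batch of embeddings
class_weights = torch.randn(10, 64)  # classifier weights, one row per class
_, _, cosine_logits = decompose_logits(features, class_weights)
# cosine_logits equals the cosine similarity between each feature and each class weight
```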
no code implementations • 30 Aug 2023 • Dezhao Yang, Jianghong Ma, Shanshan Feng, Haijun Zhang, Zhao Zhang
Specifically, the denoising process considers both social network structure and user interaction interests in a global view.
no code implementations • 30 Aug 2023 • Kangzhe Liu, Jianghong Ma, Shanshan Feng, Haijun Zhang, Zhao Zhang
It is centered on multi-category video games and consists of two components: Balance-driven Implicit Preferences Learning for data pre-processing and a Clustering-based Diversified Recommendation Module for final prediction.
1 code implementation • CVPR 2023 • Huiwei Lin, Baoquan Zhang, Shanshan Feng, Xutao Li, Yunming Ye
It aims to continuously learn new classes from a data stream whose samples are seen only once, and it suffers from the catastrophic forgetting issue, i.e., forgetting the historical knowledge of old classes.
no code implementations • COLING 2022 • Ziming Huang, Zhuoxuan Jiang, Ke Wang, Juntao Li, Shanshan Feng, Xian-Ling Mao
Although most existing methods can fulfil this requirement, they can only model single-source dialog data and cannot effectively capture the underlying knowledge of relations among data and subtasks.
no code implementations • 10 Oct 2022 • Zhuoxuan Jiang, Lingfeng Qiao, Di Yin, Shanshan Feng, Bo Ren
Recent language generative models are mostly trained on large-scale datasets, whereas in some real-world scenarios the training datasets are expensive to obtain and tend to be small-scale.
1 code implementation • IEEE Transactions on Geoscience and Remote Sensing 2022 • Kuai Dai, Xutao Li, Yunming Ye, Shanshan Feng, Danyu Qin, Rui Ye
To address the sequential error accumulation issue, MSTCGAN adopts a parallel prediction framework that produces future image sequences via a one-hot time condition input.
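A minimal sketch of the parallel-prediction idea (the generator interface is a hypothetical placeholder): each future frame is generated from the same observed context, selected by a one-hot time code, rather than recursively from previously predicted frames, so errors do not accumulate over the sequence:

```python
import torch

def predict_sequence(generator, context_frames, horizon):
    """Generate all future frames conditioned on a one-hot lead-time code.

    generator: hypothetical model taking (context_frames, time_code) -> frame.
    Each frame depends only on the observed context, never on earlier predictions.
    """
    frames = []
    for t in range(horizon):
        time_code = torch.zeros(horizon)
        time_code[t] = 1.0                        # one-hot lead-time condition
        frames.append(generator(context_frames, time_code))
    return torch.stack(frames)                    # [horizon, ...] predicted sequence
```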
no code implementations • 3 Mar 2022 • Baoquan Zhang, Hao Jiang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
Then, resorting to this prior, we split each few-shot task into a set of subtasks at different concept levels and then perform class prediction via a decision tree model.
no code implementations • 21 Jan 2022 • Jiahong Liu, Menglin Yang, Min Zhou, Shanshan Feng, Philippe Fournier-Viger
Inspired by the recently emerging self-supervised learning paradigm, in this study we attempt to enhance the representation power of hyperbolic graph models by drawing upon the advantages of contrastive learning.
no code implementations • 9 Oct 2021 • Baoquan Zhang, Shanshan Feng, Xutao Li, Yunming Ye, Rui Ye
In this framework, a scene graph construction module is carefully designed to represent each test remote sensing image or each scene class as a scene graph, where the nodes reflect the co-occurring objects while the edges capture the spatial correlations between them.
1 code implementation • 11 Aug 2021 • Baoquan Zhang, Xutao Li, Yunming Ye, Shanshan Feng
In this paper, 1) we figure out the reason, i.e., in the pre-trained feature space the base classes already form compact clusters while novel classes spread as groups with large variances, which implies that fine-tuning the feature extractor is less meaningful; and 2) instead of fine-tuning the feature extractor, we focus on estimating more representative prototypes.
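A minimal sketch of prototype estimation in a frozen pre-trained feature space; the simple shrinkage toward the global support mean is an illustrative stand-in for more sophisticated prototype rectification, not the paper's exact estimator:

```python
import torch
import torch.nn.functional as F

def estimate_prototypes(support_feats, support_labels, n_classes, shrink=0.2):
    """Mean prototype per class in a frozen feature space, with simple shrinkage.

    support_feats: [N, D] features from the pre-trained (not fine-tuned) extractor.
    shrink: pull each class mean toward the global mean to reduce few-shot variance.
    """
    global_mean = support_feats.mean(dim=0)
    protos = []
    for c in range(n_classes):
        class_mean = support_feats[support_labels == c].mean(dim=0)
        protos.append((1 - shrink) * class_mean + shrink * global_mean)
    return torch.stack(protos)

def classify(query_feats, prototypes):
    """Nearest-prototype classification by cosine similarity."""
    q = F.normalize(query_feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    return (q @ p.t()).argmax(dim=1)
```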
1 code implementation • 7 Jun 2021 • Sanshi Yu, Zhuoxuan Jiang, Dong-Dong Chen, Shanshan Feng, Dongsheng Li, Qi Liu, JinFeng Yi
Hence, the key is to make full use of rich interaction information among streamers, users, and products.
no code implementations • 2 Jun 2021 • Zanbo Wang, Wei Wei, Xianling Mao, Shanshan Feng, Pan Zhou, Zhiyong He, Sheng Jiang
To this end, we propose a model called Global Context enhanced Document-level NER (GCDoc), which leverages global contextual information at two levels, i.e., both the word and sentence levels.
1 code implementation • 26 Mar 2021 • Baoquan Zhang, Xutao Li, Shanshan Feng, Yunming Ye, Rui Ye
Although the existing meta-optimizers can also be adapted to our framework, they all overlook a crucial gradient bias issue, i.e., the mean-based gradient estimation is also biased on sparse data.
no code implementations • 10 Dec 2020 • Ziyang Wang, Wei Wei, Xian-Ling Mao, Xiao-Li Li, Shanshan Feng
In RNMSR, we propose to learn user preference at both the instance level and the group level: (i) at the instance level, we employ GNNs on a similarity-based item-pairwise session graph to capture users' instance-level preferences.
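A minimal sketch of building a similarity-based item-pairwise graph for one session, which a GNN could then operate on; the cosine similarity measure and the threshold are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def build_session_graph(item_embeddings, session_items, sim_threshold=0.5):
    """Connect item pairs within a session whose embedding similarity is high.

    item_embeddings: [n_items, D] lookup table of item embeddings.
    session_items:   list of item ids in one session.
    Returns an edge_index tensor of shape [2, n_edges] (PyG-style), with local indices.
    """
    feats = F.normalize(item_embeddings[torch.tensor(session_items)], dim=1)
    sim = feats @ feats.t()                      # pairwise cosine similarity
    src, dst = torch.where(sim > sim_threshold)
    mask = src != dst                            # drop self-loops
    return torch.stack([src[mask], dst[mask]])

# Usage: edge_index = build_session_graph(embedding_table, [12, 7, 7, 42])
```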
no code implementations • 20 Nov 2020 • Ziyang Wang, Wei Wei, Gao Cong, Xiao-Li Li, Xian-Ling Mao, Minghui Qiu, Shanshan Feng
Based on BGNN, we propose a novel approach, called Session-based Recommendation with Global Information (SRGI), which infers user preferences by fully exploring global item transitions over all sessions from two different perspectives: (i) the Fusion-based Model (SRGI-FM), which recursively incorporates the neighbor embeddings of each node on the global graph into the learning process of session-level item representations; and (ii) the Constrained-based Model (SRGI-CM), which treats the global-level item-transition information as a constraint to ensure that the learned item embeddings are consistent with the global item transitions.
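A minimal sketch of the fusion idea in SRGI-FM (hypothetical function and data structures, not the paper's exact layer): each session item's embedding is blended with the mean embedding of its neighbors on the global item-transition graph before session-level learning:

```python
import torch

def fuse_with_global_neighbors(item_emb, neighbors, session_items, fuse_weight=0.5):
    """Blend each session item's embedding with its global-graph neighborhood mean.

    item_emb:      [n_items, D] embedding table.
    neighbors:     dict mapping item id -> list of neighbor item ids on the global graph.
    session_items: list of item ids in the current session.
    """
    fused = []
    for item in session_items:
        own = item_emb[item]
        nbrs = neighbors.get(item, [])
        if nbrs:
            nbr_mean = item_emb[torch.tensor(nbrs)].mean(dim=0)
            own = (1 - fuse_weight) * own + fuse_weight * nbr_mean
        fused.append(own)
    return torch.stack(fused)   # [len(session_items), D] session-level item representations
```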
no code implementations • 16 Nov 2020 • Ziyang Wang, Wei Wei, Xian-Ling Mao, Guibing Guo, Pan Zhou, Shanshan Feng
Due to the huge commercial interests behind online reviews, a tremendous number of spammers manufacture spam reviews for product reputation manipulation.
no code implementations • 15 Nov 2020 • Wei Wei, Jiayi Liu, Xianling Mao, Guibin Guo, Feida Zhu, Pan Zhou, Yuchong Hu, Shanshan Feng
The consistency of a response with a given post at both the semantic and emotional levels is essential for a dialogue system to deliver human-like interactions.
no code implementations • 13 Nov 2020 • Zhiyong He, Zanbo Wang, Wei Wei, Shanshan Feng, Xianling Mao, Sheng Jiang
Sequence labeling (SL) is a fundamental research problem encompassing a variety of tasks, e.g., part-of-speech (POS) tagging, named entity recognition (NER), text chunking, etc.
no code implementations • EMNLP 2017 • Zhuoxuan Jiang, Shanshan Feng, Gao Cong, Chunyan Miao, Xiaoming Li
Recent years have witnessed the proliferation of Massive Open Online Courses (MOOCs).