no code implementations • 20 Nov 2024 • Zi Wang, Fei Wu, Feng Yu, Yurui Zhou, Jia Hu, Geyong Min
Edge-AI, the convergence of edge computing and artificial intelligence (AI), has become a promising paradigm that enables the deployment of advanced AI models at the network edge, close to users.
1 code implementation • 17 Oct 2024 • Shangda Wu, Yashan Wang, Ruibin Yuan, Zhancheng Guo, Xu Tan, Ge Zhang, Monan Zhou, Jing Chen, Xuefeng Mu, Yuejie Gao, Yuanliang Dong, Jiafeng Liu, Xiaobing Li, Feng Yu, Maosong Sun
Current music information retrieval systems face challenges in managing linguistic diversity and integrating various musical modalities.
no code implementations • 27 Aug 2024 • Gengmo Zhou, Zhen Wang, Feng Yu, Guolin Ke, Zhewei Wei, Zhifeng Gao
Virtual Screening is an essential technique in the early phases of drug discovery, aimed at identifying promising drug candidates from vast molecular libraries.
no code implementations • 6 Jul 2024 • Longfei Huang, Feng Yu, Zhihao Guan, Zhonghua Wan, Yang Yang
Recent studies have enhanced the zero-shot performance of multimodal base models in referring expression comprehension tasks by introducing visual prompts.
1 code implementation • CVPR 2024 • Feng Yu, Teng Zhang, Gilad Lerman
We present the subspace-constrained Tyler's estimator (STE) designed for recovering a low-dimensional subspace within a dataset that may be highly corrupted with outliers.
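The entry above concerns recovering a low-dimensional subspace from data that may be heavily corrupted by outliers. Below is a minimal, illustrative iteratively reweighted PCA sketch in the same spirit; the weighting rule, initialization, and stopping criterion are assumptions for demonstration and do not reproduce the exact STE update.

```python
import numpy as np

def reweighted_subspace(X, d, n_iter=50, eps=1e-8):
    """Illustrative robust subspace recovery via iteratively reweighted PCA.

    X : (n, D) data matrix (rows are samples), d : target subspace dimension.
    Points far from the current subspace are down-weighted, in the spirit of
    Tyler-type robust estimators (NOT the exact STE algorithm).
    """
    # Initialize with ordinary PCA.
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    V = Vt[:d].T                        # (D, d) orthonormal basis
    for _ in range(n_iter):
        resid = X - (X @ V) @ V.T       # residuals orthogonal to the current subspace
        w = 1.0 / (np.linalg.norm(resid, axis=1) + eps)   # down-weight far-away points
        C = (X * w[:, None]).T @ X      # weighted scatter matrix
        _, eigvecs = np.linalg.eigh(C)
        V = eigvecs[:, -d:]             # top-d eigenvectors span the new estimate
    return V

# Usage: 200 inliers on a 2-D subspace of R^10 plus 100 scattered outliers.
rng = np.random.default_rng(0)
inliers = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
outliers = rng.normal(scale=5.0, size=(100, 10))
basis = reweighted_subspace(np.vstack([inliers, outliers]), d=2)
print(basis.shape)  # (10, 2)
```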
no code implementations • 27 Mar 2024 • Gilad Lerman, Feng Yu, Teng Zhang
It further shows that under the generalized haystack model, STE initialized by Tyler's M-estimator (TME) can recover the subspace even when the fraction of inliers is too small for TME to handle.
no code implementations • 15 Jan 2024 • Guoxin Wang, Sheng Shi, Shan An, Fengmei Fan, Wenshu Ge, Qi Wang, Feng Yu, Zhiren Wang
Previous research on the diagnosis of Bipolar disorder has mainly focused on resting-state functional magnetic resonance imaging.
no code implementations • 4 Jan 2024 • Feng Yu, Lixin Shen, Guohui Song
Sparse Bayesian Learning (SBL) models are extensively used in signal processing and machine learning for promoting sparsity through hierarchical priors.
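As a concrete reference point for the hierarchical-prior mechanism mentioned above, here is a minimal, textbook-style Sparse Bayesian Learning (relevance vector machine) loop. The fixed-point update and the fixed noise precision are standard choices for illustration, not the specific model studied in this paper.

```python
import numpy as np

def sbl(Phi, y, n_iter=100, beta=100.0, prune=1e6):
    """Minimal Sparse Bayesian Learning (RVM-style) regression.

    Phi : (n, m) design matrix, y : (n,) targets, beta : noise precision
    (kept fixed here for brevity). Per-weight precisions alpha are updated
    with the classic type-II maximum-likelihood fixed point, which drives
    most alphas to very large values and hence most weights to zero.
    """
    n, m = Phi.shape
    alpha = np.ones(m)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior covariance
        mu = beta * Sigma @ Phi.T @ y                               # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)                        # "well-determinedness"
        alpha = gamma / (mu ** 2 + 1e-12)                           # fixed-point update
        alpha = np.minimum(alpha, prune)                            # cap for numerical stability
    return mu, alpha

# Usage: only 3 of 50 columns generate the signal.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(80, 50))
w_true = np.zeros(50); w_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = Phi @ w_true + 0.05 * rng.normal(size=80)
mu, alpha = sbl(Phi, y)
print(np.flatnonzero(np.abs(mu) > 0.5))   # recovered support, e.g. [ 3 17 41]
```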
1 code implementation • 27 Nov 2023 • Yan Pei, Jiahui Xu, Qianhao Chen, Chenhao Wang, Feng Yu, Lisan Zhang, Wei Luo
Finally, a Decoder layer is employed to reconstruct the artifact-reduced EEG signal.
no code implementations • 4 Oct 2023 • Guoxin Wang, Xuyang Cao, Shan An, Fengmei Fan, Chao Zhang, Jinsong Wang, Feng Yu, Zhiren Wang
In this work, we propose a multi-dimension-embedding-aware modality fusion transformer (MFFormer) for schizophrenia and bipolar disorder classification using rs-fMRI and T1-weighted structural MRI (T1w sMRI).
no code implementations • 14 May 2022 • Hsin-Hsiung Huang, Feng Yu, Xing Fan, Teng Zhang
While matrix variate regression models have been studied in many existing works, classical statistical and computational methods for estimating the regression coefficients are highly affected by high-dimensional and noisy matrix-valued predictors.
1 code implementation • 10 May 2022 • Jiafeng Liu, Yuanliang Dong, Zehua Cheng, Xinran Zhang, Xiaobing Li, Feng Yu, Maosong Sun
In this work, we propose a permutation invariant language model, SymphonyNet, as a solution for symbolic symphony music generation.
Ranked #1 on Audio Generation on Symphony music
1 code implementation • 3 Mar 2022 • Yiu-ming Cheung, Juyong Jiang, Feng Yu, Jian Lou
Despite enormous research interest and rapid application of federated learning (FL) to various areas, existing studies mostly focus on supervised federated learning under the horizontally partitioned local dataset setting.
no code implementations • 28 Sep 2021 • Feng Yu, He Li, Sige Bian, Yongming Tang
We construct a dataset consisting entirely of face video sequences for network training and evaluation, and conduct hyper-parameter optimization in our experiments.
no code implementations • 20 Feb 2021 • Xing Fan, Marianna Pensky, Feng Yu, Teng Zhang
The paper considers a Mixture Multilayer Stochastic Block Model (MMLSBM), where layers can be partitioned into groups of similar networks, and networks in each group are equipped with a distinct Stochastic Block Model.
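To make the model in the entry above concrete, the sketch below samples a multilayer network in which layers are partitioned into groups and every layer in a group shares that group's block connectivity matrix. The shared community labels and the particular parameter values are simplifying assumptions for illustration.

```python
import numpy as np

def sample_mmlsbm(n_nodes, layer_groups, communities, B_list, rng):
    """Sample a multilayer network where each layer group has its own SBM.

    layer_groups : list giving the group index of each layer.
    communities  : (n_nodes,) community label of each node (shared across layers here).
    B_list       : list of (K, K) block connection-probability matrices, one per group.
    """
    layers = []
    for g in layer_groups:
        P = B_list[g][np.ix_(communities, communities)]       # per-edge probabilities
        A = (rng.random((n_nodes, n_nodes)) < P).astype(int)  # Bernoulli edges
        A = np.triu(A, 1); A = A + A.T                        # undirected, no self-loops
        layers.append(A)
    return layers

rng = np.random.default_rng(0)
communities = rng.integers(0, 2, size=60)                 # 2 communities
B_assortative = np.array([[0.30, 0.05], [0.05, 0.30]])
B_core_periphery = np.array([[0.40, 0.20], [0.20, 0.05]])
nets = sample_mmlsbm(60, [0, 0, 1, 1, 1], communities,
                     [B_assortative, B_core_periphery], rng)
print(len(nets), nets[0].shape)  # 5 layers of 60x60 adjacency matrices
```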
2 code implementations • 11 Jan 2021 • Yichen Xu, Yanqiao Zhu, Feng Yu, Qiang Liu, Shu Wu
To better model complex feature interaction, in this paper we propose a novel DisentanglEd Self-atTentIve NEtwork (DESTINE) framework for CTR prediction that explicitly decouples the computation of unary feature importance from pairwise interaction.
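The DESTINE entry above hinges on separating unary feature importance from pairwise interactions inside self-attention. The sketch below shows one generic way to keep the two roles in separate terms of the attention score; it is an illustrative decomposition, not necessarily the exact DESTINE formulation.

```python
import numpy as np

def decoupled_attention(E, Wq, Wk, Wv, wu):
    """Attention over feature-field embeddings with an explicit unary term.

    E : (f, d) embeddings of the f feature fields of one sample.
    The pairwise score depends on query/key pairs, while the unary score
    depends on each feature alone, so the two contributions are computed
    separately and only combined in the attention logits.
    """
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    pairwise = Q @ K.T / np.sqrt(K.shape[1])       # (f, f) interaction scores
    unary = (E @ wu).reshape(1, -1)                # (1, f) per-feature importance
    scores = pairwise + unary                      # decoupled contributions added back
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V                                # (f, d) refined feature representations

rng = np.random.default_rng(0)
f, d = 8, 16
out = decoupled_attention(rng.normal(size=(f, d)),
                          rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                          rng.normal(size=(d, d)), rng.normal(size=d))
print(out.shape)  # (8, 16)
```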
1 code implementation • 27 Oct 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
On the node attribute level, we corrupt node features by adding more noise to unimportant features, forcing the model to recognize the underlying semantic information.
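A minimal sketch of the attribute-level corruption described above: feature dimensions judged less important are masked with higher probability. The importance proxy used here (mean absolute feature value) is a stand-in assumption; the paper's actual importance weights differ.

```python
import numpy as np

def adaptive_feature_mask(X, p_min=0.1, p_max=0.7, rng=None):
    """Drop node features with probability inversely related to their importance.

    X : (n_nodes, n_feats) feature matrix. Importance is approximated here by the
    mean absolute value of each feature dimension (an illustrative proxy only).
    """
    rng = rng or np.random.default_rng()
    importance = np.abs(X).mean(axis=0)
    ranked = importance / (importance.max() + 1e-12)
    p_drop = p_max - (p_max - p_min) * ranked       # important dims -> low drop probability
    keep = rng.random(X.shape[1]) >= p_drop
    return X * keep                                  # corrupted view for contrastive learning

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
X_view = adaptive_feature_mask(X, rng=rng)
print((X_view == 0).all(axis=0).sum(), "feature dimensions masked")
```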
no code implementations • 3 Sep 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Shu Wu, Liang Wang
In CAGNN, we perform clustering on the node embeddings and update the model parameters by predicting the cluster assignments.
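The training signal in the entry above comes from predicting cluster assignments of the node embeddings. Below is a small sketch of that self-training loop using k-means pseudo-labels and a linear softmax head; the graph encoder, cluster refinement, and loss details of CAGNN itself are omitted, so this is a simplified stand-in.

```python
import numpy as np

def kmeans(Z, k, n_iter=20, rng=None):
    """Plain k-means on embeddings Z (n, d); returns hard cluster assignments."""
    rng = rng or np.random.default_rng()
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(n_iter):
        assign = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([Z[assign == c].mean(0) if (assign == c).any() else centers[c]
                            for c in range(k)])
    return assign

def self_training_step(Z, W, k, lr=0.1, rng=None):
    """One self-supervised step: cluster the embeddings, then update a softmax
    head to predict its own cluster assignments (pseudo-labels).  In the actual
    model the gradient would also flow into the graph encoder producing Z."""
    y = kmeans(Z, k, rng=rng)
    logits = Z @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    grad = Z.T @ (p - np.eye(k)[y]) / len(Z)        # cross-entropy gradient w.r.t. W
    return W - lr * grad, y

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))                      # stand-in for GNN node embeddings
W = rng.normal(size=(16, 4)) * 0.01
W, pseudo_labels = self_training_step(Z, W, k=4, rng=rng)
print(np.bincount(pseudo_labels))
```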
no code implementations • 17 Aug 2020 • Zeyu Cui, Feng Yu, Shu Wu, Qiang Liu, Liang Wang
In this way, items are represented at the attribute level, which provides fine-grained item information for recommendation.
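A minimal illustration of the attribute-level representation described above: an item embedding is assembled from the embeddings of its attribute values, so items sharing attributes share parameters. The attribute vocabularies and the mean aggregation are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Hypothetical attribute vocabularies and their embedding tables.
attr_vocab = {"brand": 50, "category": 20, "color": 10}
tables = {name: rng.normal(size=(size, d)) for name, size in attr_vocab.items()}

def item_embedding(attrs):
    """Represent an item by aggregating the embeddings of its attribute values."""
    vecs = [tables[name][idx] for name, idx in attrs.items()]
    return np.mean(vecs, axis=0)      # fine-grained, attribute-level representation

item = {"brand": 3, "category": 7, "color": 1}
print(item_embedding(item).shape)     # (8,)
```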
no code implementations • 29 Jun 2020 • Shu Wu, Feng Yu, Xueli Yu, Qiang Liu, Liang Wang, Tieniu Tan, Jie Shao, Fan Huang
Click-through rate (CTR) prediction plays a central role in computational advertising and recommender systems.
Ranked #33 on Click-Through Rate Prediction on Criteo
3 code implementations • 7 Jun 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
Moreover, our unsupervised method even surpasses its supervised counterparts on transductive tasks, demonstrating its great potential in real-world applications.
Ranked #1 on Node Classification on DBLP
1 code implementation • 6 May 2020 • Feng Yu, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
However, these methods compress a session into one fixed representation vector without considering the target items to be predicted.
Ranked #3 on Session-Based Recommendations on yoochoose1
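The limitation noted in the entry above (a single fixed session vector that ignores the target item) is typically addressed with target-aware attention: the session representation is recomputed for each candidate item. A minimal sketch follows, with the bilinear scoring form as an assumption rather than the paper's exact attention design.

```python
import numpy as np

def target_aware_session_repr(item_embs, target_emb, W):
    """Session representation that adapts to the candidate (target) item.

    item_embs : (L, d) embeddings of the clicked items in the session.
    target_emb: (d,) embedding of the candidate item being scored.
    Attention weights depend on the target, so each candidate sees a
    differently weighted summary of the same session.
    """
    scores = item_embs @ W @ target_emb          # (L,) relevance to this target
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ item_embs                    # (d,) target-aware session vector

rng = np.random.default_rng(0)
d, L = 16, 5
session = rng.normal(size=(L, d))
W = rng.normal(size=(d, d)) * 0.1
for candidate in rng.normal(size=(3, d)):
    s = target_aware_session_repr(session, candidate, W)
    print(float(s @ candidate))                   # score of each candidate item
```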
1 code implementation • 1 Jan 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity grows linearly with the order of interactions and the number of feature fields.
Ranked #2 on Click-Through Rate Prediction on KKBox
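The complexity claim in the entry above can be illustrated with Newton's identities: order-k interaction sums are elementary symmetric polynomials of the feature embeddings, and these are computable from power sums in time linear in the order. The sketch below shows orders 2 and 3 and checks them against brute force; it uses the standard identity, not necessarily IM's exact parameterization.

```python
import numpy as np
from itertools import combinations

def interactions_via_power_sums(V):
    """Exact 2nd- and 3rd-order FM interaction terms from power sums.

    V : (f, d) embeddings of the f active feature fields.
    e2 and e3 are elementary symmetric polynomials (elementwise over the
    embedding dimension), obtained from the power sums p_k = sum_i V_i ** k.
    """
    p1, p2, p3 = V.sum(0), (V ** 2).sum(0), (V ** 3).sum(0)
    e2 = (p1 ** 2 - p2) / 2
    e3 = (p1 ** 3 - 3 * p1 * p2 + 2 * p3) / 6
    return e2.sum(), e3.sum()

def interactions_bruteforce(V):
    e2 = sum((V[i] * V[j]).sum() for i, j in combinations(range(len(V)), 2))
    e3 = sum((V[i] * V[j] * V[k]).sum() for i, j, k in combinations(range(len(V)), 3))
    return e2, e3

V = np.random.default_rng(0).normal(size=(6, 4))
print(np.allclose(interactions_via_power_sums(V), interactions_bruteforce(V)))  # True
```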
no code implementations • CIKM 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity grows linearly with the order of interactions and the number of feature fields.
1 code implementation • 6 Aug 2019 • Feng Yu, Yi Yang, Teng Zhang
In comparison, this work proposes to decompose the objective function into two components, where one component is the loss function plus part of the total variation penalty, and the other component is the remaining total variation penalty.
Optimization and Control • Computation
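One natural instance of the decomposition described in the entry above, written for a 1-D signal with an anisotropic total-variation penalty, is to split the difference terms into two halves whose terms involve disjoint variable pairs. The odd/even split shown here is only an illustrative choice; the paper's decomposition may differ.

```latex
\min_{x}\;
\underbrace{f(x) + \lambda \sum_{i\ \mathrm{odd}} |x_{i+1}-x_i|}_{\text{component 1: loss + part of TV}}
\;+\;
\underbrace{\lambda \sum_{i\ \mathrm{even}} |x_{i+1}-x_i|}_{\text{component 2: remaining TV}}
```

Within each half the absolute-difference terms touch disjoint coordinate pairs, which is what makes splitting-type algorithms convenient to apply to the two components separately.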
no code implementations • 29 Apr 2019 • Wei Luo, Feng Yu
Learning long-term dependencies remains difficult for recurrent neural networks (RNNs) despite their recent success in sequence modeling.
no code implementations • 29 Sep 2016 • Qiang Liu, Shu Wu, Feng Yu, Liang Wang, Tieniu Tan
In this paper, we propose a novel representation learning method, Information Credibility Evaluation (ICE), to learn representations of information credibility on social media.
4 code implementations • 1 Jan 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising calls for better estimation of the click-through rate of ads.
no code implementations • CIKM 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising calls for better estimation of the click-through rate of ads.