Search Results for author: Maolin Wang

Found 14 papers, 7 papers with code

Cumulative Distribution Function based General Temporal Point Processes

no code implementations • 1 Feb 2024 • Maolin Wang, Yu Pan, Zenglin Xu, Ruocheng Guo, Xiangyu Zhao, Wanyu Wang, Yiqi Wang, Zitao Liu, Langming Liu

Our contributions encompass the introduction of a pioneering CDF-based TPP model, the development of a methodology for incorporating past event information into future event prediction, and empirical validation of CuFun's effectiveness through extensive experimentation on synthetic and real-world datasets.

Information Retrieval • Point Processes +1
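The entry above describes modeling a temporal point process through the cumulative distribution function of the next inter-event time rather than an intensity function. The paper's actual CuFun architecture is not shown here; as a minimal illustrative sketch, the toy CDF below (a simple exponential form, my assumption for illustration) shows the properties any CDF-based parameterization must preserve, and how inverting the CDF yields a next-event-time prediction:

```python
import math

def inter_event_cdf(t, rate=1.5):
    """Toy CDF for the time until the next event: F(t) = 1 - exp(-rate * t).
    Any valid CDF-based TPP parameterization must, like this one, start at 0,
    increase monotonically, and approach 1 as t grows."""
    return 1.0 - math.exp(-rate * t)

def sample_next_time(u, rate=1.5):
    """Inverse-transform sampling: invert the CDF at u in (0, 1) to draw a
    waiting time; u = 0.5 gives the median predicted next-event time."""
    return -math.log(1.0 - u) / rate
```

A learned model would replace the fixed `rate` with a monotone network conditioned on past events, but the CDF constraints are the same.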

Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup

no code implementations • 10 Dec 2023 • Maolin Wang, Yao Zhao, Jiajia Liu, Jingdong Chen, Chenyi Zhuang, Jinjie Gu, Ruocheng Guo, Xiangyu Zhao

In our research, we constructed a dataset, the Multimodal Advertisement Audition Dataset (MAAD), from real-world scenarios within Alipay, and conducted experiments to validate the reliability of our proposed strategy.

Model Compression

Federated Knowledge Graph Completion via Latent Embedding Sharing and Tensor Factorization

no code implementations • 17 Nov 2023 • Maolin Wang, Dun Zeng, Zenglin Xu, Ruocheng Guo, Xiangyu Zhao

To address these issues, we propose Federated Latent Embedding Sharing Tensor factorization (FLEST), a novel approach that applies federated tensor factorization to KG completion.
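The snippet above describes knowledge-graph completion via tensor factorization. FLEST's federated protocol is not detailed in this listing; as a minimal sketch of the underlying factorization idea, the code below scores a triple (head, relation, tail) with a CP/DistMult-style trilinear product over latent embeddings — the embedding shapes and scoring form are my illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = rng.normal(size=(5, dim))   # latent entity embeddings
relations = rng.normal(size=(3, dim))  # latent relation embeddings

def score(h, r, t):
    """CP/DistMult-style tensor-factorization score for a triple (h, r, t):
    the trilinear product sum_k e_h[k] * w_r[k] * e_t[k]. Higher scores mark
    more plausible triples; training fits the latent factors to observed KG
    entries."""
    return float(np.sum(entities[h] * relations[r] * entities[t]))
```

In a federated setting, clients would share (aggregated) latent factors like these rather than their raw triples.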

Embedding in Recommender Systems: A Survey

1 code implementation • 28 Oct 2023 • Xiangyu Zhao, Maolin Wang, Xinjian Zhao, Jiansheng Li, Shucheng Zhou, Dawei Yin, Qing Li, Jiliang Tang, Ruocheng Guo

This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques.

AutoML • Collaborative Filtering +3

Tensorized Hypergraph Neural Networks

no code implementations • 5 Jun 2023 • Maolin Wang, Yaoming Zhen, Yu Pan, Yao Zhao, Chenyi Zhuang, Zenglin Xu, Ruocheng Guo, Xiangyu Zhao

THNN is a faithful hypergraph modeling framework through high-order outer product feature message passing and is a natural tensor extension of the adjacency-matrix-based graph neural networks.
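The snippet above names high-order outer product feature message passing as THNN's core operation. As an illustrative sketch (the function name and the reduction to a single hyperedge are my assumptions, not the paper's implementation), the code below forms the outer-product tensor of the features of all nodes on one hyperedge — the multiplicative high-order interaction that a pairwise adjacency matrix cannot express:

```python
import numpy as np

def hyperedge_outer_message(features):
    """High-order outer product of the node features on one hyperedge.
    For 3 nodes with d-dim features this yields a d x d x d tensor whose
    entry [i, j, k] is f0[i] * f1[j] * f2[k], capturing joint interactions
    among all nodes of the hyperedge at once."""
    msg = features[0]
    for f in features[1:]:
        msg = np.multiply.outer(msg, f)
    return msg
```

A full model would contract this tensor against learned weight cores to keep the computation tractable.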

User Retention-oriented Recommendation with Decision Transformer

1 code implementation • 11 Mar 2023 • Kesen Zhao, Lixin Zou, Xiangyu Zhao, Maolin Wang, Dawei Yin

However, deploying the DT in recommendation is a non-trivial problem because of the following challenges: (1) deficiency in modeling the numerical reward value; (2) data discrepancy between the policy learning and recommendation generation; (3) unreliable offline performance evaluation.

Contrastive Learning • counterfactual +1

Tensor Networks Meet Neural Networks: A Survey and Future Perspectives

1 code implementation • 22 Jan 2023 • Maolin Wang, Yu Pan, Zenglin Xu, Xiangli Yang, Guangxi Li, Andrzej Cichocki

Interestingly, although these two types of networks originate from different observations, they are inherently linked through the common multilinearity structure underlying both TNs and NNs, thereby motivating a significant number of intellectual developments regarding combinations of TNs and NNs.

Tensor Networks
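The survey above points to the multilinear structure shared by tensor networks and neural networks. A concrete instance of that link, sketched below under my own assumptions (a two-core tensor-train split via truncated SVD; the survey covers many more decompositions), is factoring a dense weight matrix into small cores so a layer stores far fewer parameters:

```python
import numpy as np

def tt_two_cores(W, m1, m2, n1, n2, rank):
    """Split an (m1*m2) x (n1*n2) weight matrix into two tensor-train cores
    via a truncated SVD -- the basic multilinear step linking TNs and NNs."""
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    core1 = (U[:, :rank] * s[:rank]).reshape(m1, n1, rank)
    core2 = Vt[:rank].reshape(rank, m2, n2)
    return core1, core2

def tt_reconstruct(core1, core2):
    """Contract the two cores back into the original matrix layout."""
    m1, n1, r = core1.shape
    _, m2, n2 = core2.shape
    T = np.einsum('air,rbj->aibj', core1, core2)   # (m1, n1, m2, n2)
    return T.transpose(0, 2, 1, 3).reshape(m1 * m2, n1 * n2)
```

At full rank the reconstruction is exact; truncating `rank` trades accuracy for compression, which is the compression mechanism tensorized layers exploit.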

DeceFL: A Principled Decentralized Federated Learning Framework

1 code implementation • 15 Jul 2021 • Ye Yuan, Jun Liu, Dou Jin, Zuogong Yue, Ruijuan Chen, Maolin Wang, Chuan Sun, Lei Xu, Feng Hua, Xin He, Xinlei Yi, Tao Yang, Hai-Tao Zhang, Shaochun Sui, Han Ding

Although there has been a joint effort to tackle this critical issue through privacy-preserving machine learning frameworks such as federated learning, most state-of-the-art frameworks are still built in a centralized way: a central client is needed to collect and distribute model information (rather than the data itself) from every other client, leading to high communication pressure and high vulnerability when the central client fails or is attacked.

Federated Learning • Privacy Preserving
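The snippet above contrasts centralized aggregation with a decentralized design. DeceFL's actual protocol is not given in this listing; as a generic illustrative sketch of the decentralized idea (ring topology and uniform mixing weights are my assumptions), the code below has every client average its model only with its neighbors, so no central client ever collects anything, yet repeated rounds drive all clients to consensus:

```python
import numpy as np

def gossip_round(params, mix=1 / 3):
    """One decentralized averaging round on a ring: client i replaces its
    model parameter with a uniform mix of its own value and its two ring
    neighbors'. The global average is preserved every round."""
    n = len(params)
    return np.array([
        (1 - 2 * mix) * params[i]
        + mix * params[(i - 1) % n]
        + mix * params[(i + 1) % n]
        for i in range(n)
    ])
```

With real models, `params[i]` would be each client's weight vector after a local training step rather than a scalar.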

TedNet: A Pytorch Toolkit for Tensor Decomposition Networks

1 code implementation • 11 Apr 2021 • Yu Pan, Maolin Wang, Zenglin Xu

Tensor Decomposition Networks (TDNs) prevail for their inherent compact architectures.

Tensor Decomposition

NITI: Training Integer Neural Networks Using Integer-only Arithmetic

1 code implementation • 28 Sep 2020 • Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K. -H. So

While integer arithmetic has been widely adopted for improved performance in deep quantized neural network inference, training remains a task primarily executed using floating point arithmetic.
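The snippet above motivates training with integer-only arithmetic. NITI's full training scheme is not reproduced here; as a minimal sketch of the kind of primitive involved (the function and the power-of-two rescale are my illustrative assumptions), the code below performs an int8 matrix multiply accumulated in int32, then rescales with an arithmetic right shift instead of a floating-point multiply:

```python
import numpy as np

def int_matmul_rescale(a_int8, b_int8, shift):
    """Integer-only linear-layer step: int8 x int8 matmul accumulated in
    int32 to avoid overflow, then rescaled back toward int8 range with a
    power-of-two right shift (floor division by 2**shift) -- no floats."""
    acc = a_int8.astype(np.int32) @ b_int8.astype(np.int32)
    return (acc >> shift).astype(np.int8)
```

Choosing `shift` per tensor plays the role that floating-point scale factors play in conventional quantized inference.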

Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition

1 code implementation • NIPS Workshop CDNNRIA 2018 • Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu

Recurrent Neural Networks (RNNs) and their variants, such as Long-Short Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling.

Action Recognition • Temporal Action Localization +1
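The entry above compresses RNN weights with the tensor-ring (TR) format. As an illustrative sketch (a generic two-core reconstruction; the paper's exact core shapes and training procedure are not in this listing), the code below rebuilds a tensor from TR cores of shape (r, d_k, r) by contracting the chain and closing the ring with a trace — storing the small cores instead of the full weight tensor is where the compression comes from:

```python
import numpy as np

def tr_reconstruct(cores):
    """Rebuild a tensor from tensor-ring cores G_k of shape (r, d_k, r):
    contract the cores in a chain, then take the trace over the bond index
    that closes the ring. The result has shape (d_0, ..., d_{K-1})."""
    out = cores[0]
    for G in cores[1:]:
        out = np.einsum('a...b,bcd->a...cd', out, G)
    return np.einsum('a...a->...', out)  # trace over the ring bond
```

With ring rank 1 the format degenerates to a plain outer product; larger ranks let the cores represent richer weight tensors at a parameter count linear in the number of modes.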
