no code implementations • 28 Mar 2024 • Yiyuan Yang, Guodong Long, Tao Shen, Jing Jiang, Michael Blumenstein
To address challenges in this new setting, we explore a simple yet effective solution to learn a comprehensive foundation model.
1 code implementation • 18 Jan 2024 • Zhijie Zhong, Zhiwen Yu, Yiyuan Yang, Weizheng Wang, Kaixiang Yang
In this study, we introduce PatchAD, a novel multi-scale patch-based MLP-Mixer architecture that leverages contrastive learning for representation extraction and anomaly detection.
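The two ingredients named above, splitting a series into patches at several scales and mixing them with MLP-Mixer-style layers, can be sketched in a few lines. This is a minimal illustration only, not the PatchAD architecture itself; the patch lengths, the `patchify` and `mixer_layer` names, and the random toy weights are all assumptions for demonstration.

```python
import numpy as np

def patchify(series, patch_len):
    """Split a 1-D series into non-overlapping patches of length patch_len."""
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

def mixer_layer(x, rng):
    """One toy MLP-Mixer step: mix across patches (tokens), then within patches."""
    n, d = x.shape
    w_tok = rng.standard_normal((n, n)) * 0.1   # token-mixing weights (random, for illustration)
    w_ch = rng.standard_normal((d, d)) * 0.1    # channel-mixing weights
    x = x + np.tanh(w_tok @ x)                  # exchange information across patches
    x = x + np.tanh(x @ w_ch)                   # exchange information within each patch
    return x

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 8 * np.pi, 96))
# "multi-scale": apply the same patch-and-mix idea at several patch lengths
reps = {p: mixer_layer(patchify(series, p), rng) for p in (4, 8, 16)}
```

Each entry in `reps` is a representation of the same series at a different temporal granularity; a real model would learn the mixing weights and compare the scales contrastively.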
no code implementations • 18 Dec 2023 • Chengyuan Zhu, Yiyuan Yang, Kaixiang Yang, Haifeng Zhang, Qinmin Yang, C. L. Philip Chen
This refinement is crucial in effectively identifying genuine threats to pipelines, thus enhancing the safety of energy transportation.
1 code implementation • NeurIPS 2023 • Kaichen Zhou, Jia-Xing Zhong, Sangyun Shin, Kai Lu, Yiyuan Yang, Andrew Markham, Niki Trigoni
The introduction of neural radiance fields has greatly improved the effectiveness of view synthesis for monocular videos.
2 code implementations • 17 Jun 2023 • Yiyuan Yang, Chaoli Zhang, Tian Zhou, Qingsong Wen, Liang Sun
On the other hand, contrastive learning aims to find a representation that clearly distinguishes any instance from the others, which yields a more natural and discriminative representation for time series anomaly detection.
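The "distinguish any instance from the others" objective is commonly implemented as an InfoNCE-style loss over paired views of each instance. The sketch below is a generic NumPy version of that loss, not the method of the paper above; the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: embedding z1[i] should match its paired view z2[i]
    and be dissimilar from every other z2[j] in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize views
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # cross-entropy with the diagonal (the positive pairs) as targets
    return -np.log(np.diag(probs)).mean()
```

In an anomaly-detection setting, a point whose two augmented views disagree strongly under the learned representation (a high contrastive loss at test time) can be flagged as anomalous.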
1 code implementation • 7 Apr 2023 • Yiyuan Yang, Rongshang Li, Qiquan Shi, Xijun Li, Gang Hu, Xing Li, Mingxuan Yuan
This paper proposes a novel Stream-Graph neural network-based Data Prefetcher (SGDP).
no code implementations • 14 Apr 2021 • Yiyuan Yang, Kenneth Li, Fernanda Eliott, Maithilee Kunda
People's visual experiences of the world are easy to carve up and examine along natural language boundaries, e.g., by category labels, attribute labels, etc.
1 code implementation • 16 Dec 2019 • Yiyuan Yang, Riqiang Gao, Yucheng Tang, Sanja L. Antic, Steve Deppen, Yuankai Huo, Kim L. Sandler, Pierre P. Massion, Bennett A. Landman
To improve performance on the primary task, we propose an Internal-Transfer Weighting (ITW) strategy to suppress the loss functions on auxiliary tasks for the final stages of training.
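The idea of suppressing auxiliary-task losses late in training can be sketched as a simple weight schedule. This is a generic illustration of the principle, not the paper's actual ITW formulation; the function name, the linear decay, and the `suppress_frac` parameter are assumptions.

```python
def itw_weights(epoch, total_epochs, n_aux, suppress_frac=0.2):
    """Toy internal-transfer-style weighting: auxiliary-task loss weights
    decay linearly to zero over the final fraction of training, so the
    primary task dominates the last stages.

    Returns [primary_weight, aux_weight, aux_weight, ...].
    """
    start = int(total_epochs * (1 - suppress_frac))  # epoch where suppression begins
    if epoch < start:
        w_aux = 1.0
    else:
        w_aux = max(0.0, 1.0 - (epoch - start) / max(1, total_epochs - start))
    return [1.0] + [w_aux] * n_aux
```

The total loss at each step would then be `sum(w * l for w, l in zip(weights, [primary_loss, *aux_losses]))`, with the auxiliary terms fading out near the end of training.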