no code implementations • 28 Dec 2024 • Haiming Yao, Wei Luo, Tao Zhou, Ang Gao, Xue Wang
The integration of deep learning technology has significantly improved the efficiency and accuracy of intelligent Raman spectroscopy (RS) recognition.
no code implementations • 11 Dec 2024 • Haiming Yao, Wei Luo, Xue Wang
Raman spectroscopy, as a label-free detection technology, has been widely utilized in the clinical diagnosis of pathogenic bacteria.
no code implementations • 11 Dec 2024 • Haiming Yao, Wei Luo, Ang Gao, Tao Zhou, Xue Wang
Raman spectroscopy has attracted significant attention in various biochemical detection fields, especially in the rapid identification of pathogenic bacteria.
no code implementations • 9 Dec 2024 • Hongjuan Li, Hui Kang, Geng Sun, Jiahui Li, Jiacheng Wang, Xue Wang, Dusit Niyato, Victor C. M. Leung
Thus, by jointly optimizing the excitation current weights and hover positions of the UAVs as well as the sequence of data transmission to the various BSs, we formulate an uplink interference mitigation multi-objective optimization problem (MOOP) to simultaneously mitigate interference, enhance transmission efficiency, and improve energy efficiency.
no code implementations • 4 Dec 2024 • Bingjie Song, Xin Huang, Ruting Xie, Xue Wang, Qing Wang
Specifically, the query features from the content image preserve geometric consistency across multiple views, while the key and value features from the style image are used to guide the stylistic transfer.
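The query/key/value arrangement described above can be illustrated with a minimal NumPy sketch; the single-head scaled dot-product form, the function name, and all shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_attention_style_transfer(content_feats, style_feats, dim=64):
    """Inject style via attention: queries from content, keys/values from style.

    content_feats: (n_content, dim), style_feats: (n_style, dim).
    Returns stylized features of shape (n_content, dim).
    """
    q = content_feats                    # queries preserve content geometry
    k, v = style_feats, style_feats      # keys/values carry style statistics
    scores = q @ k.T / np.sqrt(dim)      # scaled dot-product similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over style tokens
    return weights @ v                   # content-aligned blend of style values

rng = np.random.default_rng(0)
content = rng.standard_normal((16, 64))
style = rng.standard_normal((32, 64))
out = cross_attention_style_transfer(content, style)
```

Each content token attends over all style tokens, so the output keeps the content's row structure while its values come entirely from the style features.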
1 code implementation • 14 Oct 2024 • Junkang Wu, Xue Wang, Zhengyi Yang, Jiancan Wu, Jinyang Gao, Bolin Ding, Xiang Wang, Xiangnan He
Aligning large language models (LLMs) with human values and intentions is crucial for their utility, honesty, and safety.
1 code implementation • 16 Aug 2024 • Xue Wang, Tian Zhou, Jianqing Zhu, Jialin Liu, Kun Yuan, Tao Yao, Wotao Yin, Rong Jin, HanQin Cai
Attention based models have achieved many remarkable breakthroughs in numerous applications.
1 code implementation • 12 Jun 2024 • Yi-Fan Zhang, Qingsong Wen, Chaoyou Fu, Xue Wang, Zhang Zhang, Liang Wang, Rong Jin
Seeing clearly with high resolution is a foundation of Large Multimodal Models (LMMs), which has been proven to be vital for visual perception and reasoning.
1 code implementation • 22 Mar 2024 • Yifan Zhang, Weiqi Chen, Zhaoyang Zhu, Dalin Qin, Liang Sun, Xue Wang, Qingsong Wen, Zhang Zhang, Liang Wang, Rong Jin
For the state-of-the-art (SOTA) model, the MSE is reduced by $33.3\%$.
1 code implementation • 8 Mar 2024 • Yi-Fan Zhang, Weichen Yu, Qingsong Wen, Xue Wang, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
In the realms of computer vision and natural language processing, Large Vision-Language Models (LVLMs) have become indispensable tools, proficient in generating textual descriptions based on visual inputs.
no code implementations • 8 Feb 2024 • Peisong Niu, Tian Zhou, Xue Wang, Liang Sun, Rong Jin
Time series forecasting is essential for many practical applications, with the adoption of transformer-based models on the rise due to their impressive performance in NLP and CV.
no code implementations • 5 Feb 2024 • Yuan Gao, Haokun Chen, Xiang Wang, Zhicai Wang, Xue Wang, Jinyang Gao, Bolin Ding
Our research demonstrates the efficacy of leveraging AIGS and the DiffsFormer architecture to mitigate data scarcity in stock forecasting tasks.
2 code implementations • ICLR 2024 • Donghao Luo, Xue Wang
As a pure convolutional structure, ModernTCN still achieves consistent state-of-the-art performance on five mainstream time series analysis tasks while maintaining the efficiency advantage of convolution-based models, thereby providing a better balance of efficiency and performance than state-of-the-art Transformer-based and MLP-based models.
no code implementations • 28 Nov 2023 • Yifan Zhang, Xue Wang, Tian Zhou, Kun Yuan, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
We demonstrate the effectiveness of \abbr through comprehensive experiments on multiple OOD detection benchmarks; extensive empirical studies show that \abbr significantly improves OOD detection performance over state-of-the-art methods.
1 code implementation • 24 Nov 2023 • Peisong Niu, Tian Zhou, Xue Wang, Liang Sun, Rong Jin
In the burgeoning domain of Large Language Models (LLMs), there is growing interest in applying LLMs to time series forecasting, with multiple studies focusing on leveraging textual prompts to further enhance their predictive prowess.
6 code implementations • 16 Oct 2023 • Ming Jin, Qingsong Wen, Yuxuan Liang, Chaoli Zhang, Siqiao Xue, Xue Wang, James Zhang, Yi Wang, Haifeng Chen, XiaoLi Li, Shirui Pan, Vincent S. Tseng, Yu Zheng, Lei Chen, Hui Xiong
In this survey, we offer a comprehensive and up-to-date review of large models tailored (or adapted) for time series and spatio-temporal data, spanning four key facets: data types, model categories, model scopes, and application areas/tasks.
2 code implementations • NeurIPS 2023 • Yi-Fan Zhang, Qingsong Wen, Xue Wang, Weiqi Chen, Liang Sun, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
Online updating of time series forecasting models aims to address the concept drifting problem by efficiently updating forecasting models based on streaming data.
no code implementations • 4 Jun 2023 • Donghao Luo, Xue Wang
The past few years have witnessed the rapid development in multivariate time series forecasting.
1 code implementation • 25 Apr 2023 • Yi-Fan Zhang, Xue Wang, Kexin Jin, Kun Yuan, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
In particular, when the adaptation target is a series of domains, the adaptation accuracy of AdaNPC is 50% higher than advanced TTA methods.
3 code implementations • 23 Feb 2023 • Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, Rong Jin
The main challenge blocking the development of pre-trained models for time series analysis is the lack of large amounts of training data.
1 code implementation • The Eleventh International Conference on Learning Representations (ICLR 2023) 2023 • Yifan Zhang, Xue Wang, Jian Liang, Zhang Zhang, Liang Wang, Rong Jin, Tieniu Tan
A fundamental challenge for machine learning models is how to generalize learned models for out-of-distribution (OOD) data.
Ranked #8 on Domain Adaptation on Office-Home
no code implementations • ICCV 2023 • Xue Wang, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, Kui Ren
Considering the insufficient study on such complex causal questions, we make the first attempt to explain different causal questions by contrastive explanations in a unified framework, i.e., Counterfactual Contrastive Explanation (CCE), which visually and intuitively explains the aforementioned questions via a novel positive-negative saliency-based explanation scheme.
no code implementations • 1 Nov 2022 • Haiming Yao, Xue Wang, Wenyong Yu
The extensive experiments conducted demonstrate that the proposed ST-MAE method can advance state-of-the-art performance on multiple benchmarks across application scenarios with superior inference efficiency, exhibiting great potential as a uniform model for unsupervised visual anomaly detection.
no code implementations • 24 Jun 2022 • Tian Zhou, Jianqing Zhu, Xue Wang, Ziqing Ma, Qingsong Wen, Liang Sun, Rong Jin
Various deep learning models, especially some of the latest Transformer-based approaches, have greatly improved the state-of-the-art performance for long-term time series forecasting. However, those Transformer-based models suffer severe performance deterioration with prolonged input length, which prohibits them from using extended historical information. Moreover, these methods tend to handle complex examples in long-term forecasting with increased model complexity, which often leads to a significant increase in computation and less robust performance (e.g., overfitting).
no code implementations • 22 Jun 2022 • Haiming Yao, Wenyong Yu, Xue Wang
Subsequently, a contrastive-learning-based memory feature module (CMFM) is proposed to obtain discriminative representations and construct a normal feature memory bank in the latent space, which can be employed as a substitute for defective features and enables fast anomaly scoring at the patch level.
3 code implementations • 18 May 2022 • Tian Zhou, Ziqing Ma, Xue Wang, Qingsong Wen, Liang Sun, Tao Yao, Wotao Yin, Rong Jin
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information.
Ranked #3 on Time Series Forecasting on ETTh2 (96) Univariate
no code implementations • 1 Apr 2022 • Qi Zhang, Xin Huang, Ying Feng, Xue Wang, Hongdong Li, Qing Wang
A two-stage network is developed for novel view synthesis.
no code implementations • 1 Apr 2022 • Yaning Li, Xue Wang, Hao Zhu, Guoqing Zhou, Qing Wang
Existing light field representations, such as epipolar plane image (EPI) and sub-aperture images, do not consider the structural characteristics across the views, so they usually require additional disparity and spatial structure cues for follow-up tasks.
no code implementations • CVPR 2023 • Bingxu Mu, Zhenxing Niu, Le Wang, Xue Wang, Rong Jin, Gang Hua
Deep neural networks (DNNs) are known to be vulnerable to both backdoor attacks as well as adversarial attacks.
3 code implementations • 30 Jan 2022 • Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, Rong Jin
Although Transformer-based methods have significantly improved state-of-the-art results for long-term series forecasting, they are not only computationally expensive but, more importantly, are unable to capture the global view of time series (e.g., the overall trend).
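As one hedged illustration of the "overall trend" this excerpt refers to, a simple moving-average decomposition splits a series into a smooth trend and a residual seasonal part; the kernel size and edge-padding scheme below are illustrative choices, not the paper's actual decomposition.

```python
import numpy as np

def series_decompose(x, kernel=25):
    """Split a 1-D series into trend and seasonal parts via moving average."""
    pad = kernel // 2
    # Replicate the endpoints so the smoothed output keeps the input length.
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    return trend, x - trend

# Toy series: a slow ramp (trend) plus a fast oscillation (seasonality).
x = np.sin(np.linspace(0, 8 * np.pi, 200)) + np.linspace(0, 2, 200)
trend, seasonal = series_decompose(x)
```

By construction the two parts sum back to the original series, which is what lets a forecaster model the global trend and the local fluctuations separately.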
no code implementations • CVPR 2022 • Zexing Du, Xue Wang, Guoqing Zhou, Qing Wang
To deal with the great number of untrimmed videos produced every day, we propose an efficient unsupervised action segmentation method by detecting boundaries, named action boundary detection (ABD).
no code implementations • 2 Dec 2021 • Zhongyun Hu, Ntumba Elie Nsampi, Xue Wang, Qing Wang
Before solving these two sub-problems, we first learn a shading-aware illumination descriptor via a well-designed neural rendering framework, of which the key is a shading bases module that generates multiple shading bases from the foreground image.
no code implementations • 8 Sep 2021 • Pichao Wang, Xue Wang, Hao Luo, Jingkai Zhou, Zhipeng Zhou, Fan Wang, Hao Li, Rong Jin
In this paper, we further investigate this problem and extend the above conclusion: early convolutions alone do not account for stable training; rather, it is the scaled ReLU operation in the convolutional stem (conv-stem) that matters.
no code implementations • 27 Jun 2021 • William A. Barnett, Xue Wang, Hai-Chuan Xu, Wei-Xing Zhou
We derive the default cascade model and the fire-sale spillover model in a unified interdependent framework.
1 code implementation • 28 May 2021 • Pichao Wang, Xue Wang, Fan Wang, Ming Lin, Shuning Chang, Hao Li, Rong Jin
A key component in vision transformers is the fully-connected self-attention, which is more powerful than CNNs in modelling long-range dependencies.
no code implementations • 23 Jan 2021 • Yaning Li, Xue Wang, Hao Zhu, Guoqing Zhou, Qing Wang
Based on these two observations, we propose a learning-based FSS reconstruction approach for one-time aliasing removing over the whole focal stack.
no code implementations • IEEE Transactions on Vehicular Technology 2020 • Xue Wang
However, energy consumption is a critical issue for the D2D communication, especially for D2D relay networks.
no code implementations • 27 Feb 2020 • Qingsong Wen, Liang Sun, Fan Yang, Xiaomin Song, Jingkun Gao, Xue Wang, Huan Xu
In this paper, we systematically review different data augmentation methods for time series.
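A few of the augmentation families such a review typically covers can be sketched in NumPy; the specific transforms and parameter values below are generic illustrations, not the particular methods surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(x, sigma=0.03):
    """Add small Gaussian noise to each time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Multiply the whole series by a random factor near 1."""
    return x * rng.normal(1.0, sigma)

def window_slice(x, ratio=0.9):
    """Crop a random contiguous window, keeping the original ordering."""
    n = int(len(x) * ratio)
    start = rng.integers(0, len(x) - n + 1)
    return x[start:start + n]

series = np.sin(np.linspace(0, 4 * np.pi, 200))
augmented = [jitter(series), scale(series), window_slice(series)]
```

Jittering and scaling preserve the series length, while window slicing shortens it, so downstream models may need resampling or padding after cropping-style transforms.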
no code implementations • 7 Dec 2018 • Xue Wang, Mike Mingcheng Wei, Tao Yao
We propose a minimax concave penalized multi-armed bandit algorithm under generalized linear model (G-MCP-Bandit) for a decision-maker facing high-dimensional data in an online learning and decision-making process.
no code implementations • ICML 2018 • Xue Wang, Mingcheng Wei, Tao Yao
In addition, we develop a linear approximation method, the 2-step Weighted Lasso procedure, to identify the MCP estimator for the MCP-Bandit algorithm under non-i.i.d.
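A rough sketch of how a weighted-Lasso step can approximate an MCP-penalized estimator: run a plain Lasso first, then re-solve a weighted Lasso with weights taken from the MCP penalty derivative at the first estimate, so large coefficients are penalized less. The coordinate-descent solver, the weight formula, and all parameter values below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def weighted_lasso(X, y, weights, lam=0.1, n_iter=200):
    """Coordinate descent for 0.5/n * ||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            thr = lam * weights[j]
            beta[j] = np.sign(rho) * max(abs(rho) - thr, 0.0) / col_sq[j]
    return beta

def two_step_mcp(X, y, lam=0.1, gamma=3.0):
    """Step 1: plain Lasso.  Step 2: weighted Lasso with MCP-derivative weights."""
    beta1 = weighted_lasso(X, y, np.ones(X.shape[1]), lam)
    # MCP derivative (lam - |b|/gamma)_+ shrinks the penalty on coefficients
    # that step 1 already found to be large, reducing Lasso's bias.
    w = np.maximum(lam - np.abs(beta1) / gamma, 0.0) / lam
    return weighted_lasso(X, y, w, lam)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
true = np.zeros(20)
true[:3] = [2.0, -1.5, 1.0]
y = X @ true + 0.1 * rng.standard_normal(100)
beta = two_step_mcp(X, y)
```

On this sparse toy problem, the second step leaves the three true signals essentially unpenalized while the null coordinates keep the full Lasso shrinkage.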
no code implementations • 13 Dec 2017 • Rohit Pandey, Marie White, Pavel Pidlypenskyi, Xue Wang, Christine Kaeser-Chen
Mobile virtual reality (VR) head mounted displays (HMD) have become popular among consumers in recent years.