no code implementations • 7 Jan 2025 • Yuxiao Hu, Qian Li, Dongxiao Zhang, Jinyue Yan, Yuntian Chen
We propose Context-Alignment, a new paradigm that aligns TS with a linguistic component in the language environments familiar to LLMs to enable LLMs to contextualize and comprehend TS data, thereby activating their capabilities.
no code implementations • 5 Dec 2024 • Xiangnan Yu, Hao Xu, Zhiping Mao, Hongguang Sun, Yong Zhang, Dongxiao Zhang, Yuntian Chen
In complex physical systems, conventional differential equations often fall short in capturing non-local and memory effects, as they are limited to local dynamics and integer-order interactions.
no code implementations • 1 Aug 2024 • Dayin Chen, Xiaodan Shi, Mingkun Jiang, Haoran Zhang, Dongxiao Zhang, Yuntian Chen, Jinyue Yan
We develop a brand new NAS search space that incorporates various data processing techniques from state-of-the-art (SOTA) TSF models and typical PVPF deep learning models.
no code implementations • 19 Jun 2024 • Longfei Ma, Nan Cheng, Xiucheng Wang, Jiong Chen, Yinjun Gao, Dongxiao Zhang, Jun-Jie Zhang
The development of Digital Twins (DTs) represents a transformative advance for simulating and optimizing complex systems in a controlled digital space.
no code implementations • 7 Jun 2024 • Zhongzheng Wang, Yuntian Chen, Guodong Chen, Dongxiao Zhang
This study introduces the multimodal latent dynamic (MLD) model, a deep learning framework for fast flow prediction and well control optimization in GCS.
no code implementations • 7 Jun 2024 • Lei Xu, Yulong Chen, Yuntian Chen, Longfeng Nie, Xuetao Wei, Liang Xue, Dongxiao Zhang
Notably, as the data volume and the number of local epochs increase within a threshold, model performance improves and the variance of performance errors decreases.
no code implementations • 6 Jun 2024 • Yongan Zhang, Junfeng Zhao, Jian Li, Xuanran Wang, Youzhuang Sun, Yuntian Chen, Dongxiao Zhang
The proposed FAL effectively reduces noise interference in predicting formation resistivity from cased transient electromagnetic well logging curves, better learns high-frequency features, and thereby enhances the prediction accuracy and noise resistance of the neural network model.
no code implementations • 6 Jun 2024 • Jiaxin Gao, Qinglong Cao, Yuntian Chen, Dongxiao Zhang
PV-Client employs an ENhanced Transformer module to capture complex interactions of various features in PV systems, and utilizes a linear module to learn trend information in PV power.
1 code implementation • 14 May 2024 • Qinglong Cao, Yuntian Chen, Lu Lu, Hao Sun, Zhenzhong Zeng, Xiaokang Yang, Dongxiao Zhang
Our framework paves the way for sustainable and inclusive VLM research, transcending the barriers between academia and industry.
no code implementations • 13 May 2024 • Mengge Du, Yuntian Chen, Zhongzheng Wang, Longfeng Nie, Dongxiao Zhang
The first strategy is to take LLMs as a black-box optimizer and achieve equation self-improvement based on historical samples and their performance.
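A minimal sketch of this first strategy, treating the LLM as a black-box optimizer over candidate equations; `llm_propose` and `evaluate_fitness` are hypothetical placeholders standing in for the prompting and scoring machinery, not the authors' actual interface.

```python
# Sketch: iteratively show the LLM the best-scoring equations so far and ask it
# to propose improved candidates. `llm_propose` and `evaluate_fitness` are
# hypothetical callables supplied by the user.
def optimize_equations(llm_propose, evaluate_fitness, n_rounds=10, pool_size=5):
    history = []  # list of (error, equation_string), lower error is better
    for _ in range(n_rounds):
        prompt = "Improve on these candidate equations (lower error is better):\n"
        prompt += "\n".join(f"{eq}  error={err:.3g}" for err, eq in history[:pool_size])
        for eq in llm_propose(prompt):            # LLM returns a list of equation strings
            history.append((evaluate_fitness(eq), eq))
        history.sort(key=lambda t: t[0])          # keep best candidates at the front
    return history[0] if history else None
```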
no code implementations • 16 Apr 2024 • Dayin Chen, Xiaodan Shi, Haoran Zhang, Xuan Song, Dongxiao Zhang, Yuntian Chen, Jinyue Yan
In this study, we propose a distributed phone-based ambient temperature estimation system which enables collaboration among multiple phones to accurately measure the ambient temperature in different areas of an indoor space.
no code implementations • 14 Apr 2024 • Wenchao Wu, Hao Xu, Dongxiao Zhang, Fanyang Mo
We present an innovative integration of artificial intelligence with column chromatography, aiming to resolve inefficiencies and standardize data collection in the chemical separation and purification domain.
no code implementations • CVPR 2024 • Qian Li, Yuxiao Hu, Yinpeng Dong, Dongxiao Zhang, Yuntian Chen
Adversarial training is often formulated as a min-max problem; however, concentrating only on the worst adversarial examples causes alternating repetitive confusion of the model, i.e., previously defended or correctly classified samples are not defensible or accurately classifiable in subsequent adversarial training.
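For reference, a minimal PGD adversarial-training step illustrating the standard min-max formulation the paper critiques (inner maximization over perturbations, outer minimization of the loss); this is the common baseline, not the authors' proposed remedy, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_adv_train_step(model, x, y, optimizer, eps=8/255, alpha=2/255, steps=7):
    # Inner maximization: search for a worst-case perturbation within the eps-ball.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: update the model on the adversarial batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x + delta.detach()), y).backward()
    optimizer.step()
```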
1 code implementation • 2 Dec 2023 • Changqi Sun, Hao Xu, Yuntian Chen, Dongxiao Zhang
Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for "black-box" deep learning models.
no code implementations • 1 Dec 2023 • Longfeng Nie, Yuntian Chen, Dongxiao Zhang, Xinyue Liu, Wentian Yuan
Specifically, the temporal and spatial characteristics of remote sensing data from the Himawari-8 satellite are extracted and fused by a recurrent neural network (RNN) and convolution operations, respectively.
no code implementations • 1 Dec 2023 • Longfeng Nie, Yuntian Chen, Mengge Du, Changqi Sun, Dongxiao Zhang
Compared with widely used semantic segmentation networks, including SegNet, PSPNet, DeepLabV3+, UNet, and ResUnet, our proposed model CldNet, with an accuracy of 80.89±2.18%, is state-of-the-art in identifying cloud types, improving on these networks by 32%, 46%, 22%, 2%, and 39%, respectively.
no code implementations • 27 Sep 2023 • Hao Xu, Yuntian Chen, Zhenzhong Zeng, Nina Li, Jian Li, Dongxiao Zhang
Through this AI-driven knowledge discovery, we uncover previously undisclosed explicit equations that shed light on the connection between terrain features and precipitation patterns.
no code implementations • 24 Sep 2023 • Hao Xu, Wei Fan, Ambrose C. Taylor, Dongxiao Zhang, Lecheng Ruan, Rundong Shi
Computational solid mechanics has become an indispensable approach in engineering, and numerical investigation of fracture in composites is essential as composites are widely used in structural applications.
1 code implementation • 14 Sep 2023 • Mengge Du, Yuntian Chen, Longfeng Nie, Siyu Lou, Dongxiao Zhang
The embedding phase integrates the initially identified PDE from the discovering process as a physical constraint into the predictive model for robust training.
1 code implementation • 3 Jul 2023 • Hao Xu, Yuntian Chen, Dongxiao Zhang
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
1 code implementation • CVPR 2023 • Qian Li, Yuxiao Hu, Ye Liu, Dongxiao Zhang, Xin Jin, Yuntian Chen
Classical adversarial attacks for Face Recognition (FR) models typically generate discrete examples for target identity with a single state image.
no code implementations • 7 Nov 2022 • Hao Xu, Jinglong Lin, Dongxiao Zhang, Fanyang Mo
A new research framework is proposed to incorporate machine learning techniques into the field of experimental chemistry to facilitate chromatographic enantioseparation.
no code implementations • 5 Oct 2022 • Jiaxin Gao, WenBo Hu, Dongxiao Zhang, Yuntian Chen
Accurate electrical load forecasting is beneficial for better scheduling of electricity generation and saving electrical energy.
1 code implementation • 4 Oct 2022 • Mengge Du, Yuntian Chen, Dongxiao Zhang
The working mechanisms of complex natural systems tend to abide by concise and profound partial differential equations (PDEs).
1 code implementation • 5 Aug 2022 • Hao Xu, Junsheng Zeng, Dongxiao Zhang
The PIC is also employed to discover unrevealed macroscale governing equations from microscopic simulation data in an actual physical scene.
no code implementations • 20 Jun 2022 • Yuntian Chen, Dongxiao Zhang, Qun Zhao, Dexun Liu
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning, and is demonstrated via optimization of shale gas development.
no code implementations • 28 May 2022 • Jian Li, Dongxiao Zhang, Tianhao He, Qiang Zheng
In this work, a novel coupled theory-guided neural network (TgNN) based surrogate model is built to facilitate computation efficiency under the premise of satisfactory accuracy.
1 code implementation • 11 May 2022 • Mengge Du, Yuntian Chen, Dongxiao Zhang
Imposing physical constraints on neural networks as a method of knowledge embedding has achieved great progress in solving physical problems described by governing equations.
no code implementations • 6 May 2022 • Qiang Zheng, Xiaoguang Yin, Dongxiao Zhang
To realize accuracy and efficiency simultaneously in battery modeling, we propose to build a data-driven surrogate for a battery system while incorporating the underlying physics as constraints.
no code implementations • 30 Apr 2022 • Tianhao He, Haibin Chang, Dongxiao Zhang
Furthermore, based on the constructed TgU-net surrogate, a data assimilation method is employed to identify the physical process and parameters simultaneously.
no code implementations • 16 Apr 2022 • Hao Xu, Yuntian Chen, Dongxiao Zhang
The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the "black box" model.
no code implementations • 15 Apr 2022 • Nanzhe Wang, Haibin Chang, Xiangzhao Kong, Martin O. Saar, Dongxiao Zhang
In this work, we propose a closed-loop optimization framework, based on deep learning surrogates, for the well control optimization of geothermal reservoirs.
no code implementations • 15 Feb 2022 • Yuntian Chen, Dongxiao Zhang
Scientific research's mandate is to comprehend and explore the world, as well as to improve it based on experience and knowledge.
no code implementations • 31 Dec 2021 • Nanzhe Wang, Qinzhuo Liao, Haibin Chang, Dongxiao Zhang
The results show that the deep learning method can provide equivalent upscaling accuracy to the numerical method, and efficiency can be improved significantly compared to numerical upscaling.
no code implementations • 14 Nov 2021 • Rui Xu, Dongxiao Zhang, Nanzhe Wang
The surrogate models are used to conduct uncertainty quantification considering a stochastic permeability field, as well as to infer unknown permeability information based on limited well production data and observation data of formation properties.
no code implementations • 12 Oct 2021 • Nanzhe Wang, Haibin Chang, Dongxiao Zhang
Pressure and saturation are coupled with each other in the governing equations, and thus the two networks are also mutually conditioned in the training process by the discretized governing equations, which also increases the difficulty of model training.
no code implementations • 28 Sep 2021 • Xing Luo, Dongxiao Zhang
Consequently, to improve day-ahead PVPG forecasting accuracy, as well as eliminate the impacts of concept drift, this paper proposes an adaptive LSTM (AD-LSTM) model, which is a DL framework that can not only acquire general knowledge from historical data, but also dynamically learn specific knowledge from newly-arrived data.
no code implementations • 25 Sep 2021 • Pengfei Tang, Junsheng Zeng, Dongxiao Zhang, Heng Li
The results demonstrate that constructing the settling surrogate with the MFNN can reduce the need for high-fidelity data, and thus computational cost, by 80%, while the accuracy loss is less than 5% compared to a high-fidelity surrogate.
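A minimal sketch of a multi-fidelity network in this spirit: a low-fidelity surrogate is trained on abundant cheap data, and a small correction network maps the input together with the low-fidelity prediction to the scarce high-fidelity targets. The layer sizes and architecture are illustrative assumptions, not the authors' exact MFNN.

```python
import torch
import torch.nn as nn

class MFNN(nn.Module):
    def __init__(self, dim_in, dim_out, hidden=64):
        super().__init__()
        self.lf_net = nn.Sequential(nn.Linear(dim_in, hidden), nn.Tanh(),
                                    nn.Linear(hidden, dim_out))
        self.hf_net = nn.Sequential(nn.Linear(dim_in + dim_out, hidden), nn.Tanh(),
                                    nn.Linear(hidden, dim_out))

    def forward(self, x):
        y_lf = self.lf_net(x)                              # cheap low-fidelity estimate
        return self.hf_net(torch.cat([x, y_lf], dim=-1))   # learned high-fidelity correction
```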
no code implementations • 5 Aug 2021 • Qiang Zheng, Dongxiao Zhang
In order to obtain diverse reconstructions, the discrete latent codes are modeled using conditional GPT in an autoregressive manner, while incorporating conditional information from a given slice, rock type, and porosity.
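A minimal sketch of conditional autoregressive sampling over discrete latent codes, as described above; `transformer` is a hypothetical model returning next-code logits given the conditioning tokens (slice, rock type, porosity) and the codes generated so far, and the real codebook and architecture are not shown.

```python
import torch

@torch.no_grad()
def sample_codes(transformer, cond_tokens, n_codes, temperature=1.0):
    codes = cond_tokens.clone()                       # start from the conditioning prefix
    for _ in range(n_codes):
        logits = transformer(codes)[:, -1, :] / temperature
        next_code = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        codes = torch.cat([codes, next_code], dim=1)  # append sampled code autoregressively
    return codes[:, cond_tokens.shape[1]:]            # drop the prefix, keep the latent codes
```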
2 code implementations • 9 Jun 2021 • Yuntian Chen, Yingtao Luo, Qiang Liu, Hao Xu, Dongxiao Zhang
Partial differential equations (PDEs) are concise and understandable representations of domain knowledge, which are essential for deepening our understanding of physical processes and predicting future responses.
no code implementations • 31 May 2021 • Junsheng Zeng, Hao Xu, Yuntian Chen, Dongxiao Zhang
Although deep learning has been successfully applied to a variety of science and engineering problems owing to its strong high-dimensional nonlinear mapping capability, it is of limited use in scientific knowledge discovery.
no code implementations • 31 May 2021 • Hao Xu, Dongxiao Zhang
In the framework, a preliminary result of potential terms provided by the deep learning-genetic algorithm is added into the loss function of the PINN as physical constraints to improve the accuracy of derivative calculation.
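A minimal sketch of embedding a preliminary PDE into a PINN loss as a physical constraint; here the residual of an illustrative equation u_t = a·u_xx with a provisional coefficient `a` stands in for the candidate terms that the deep learning-genetic algorithm would supply.

```python
import torch

def pinn_loss(net, x, t, u_obs, a=0.1, lam=1.0):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=-1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    data_loss = torch.mean((u - u_obs) ** 2)       # fit to observations
    residual = torch.mean((u_t - a * u_xx) ** 2)   # preliminary PDE as physical constraint
    return data_loss + lam * residual
```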
1 code implementation • 11 Dec 2020 • Yuntian Chen, Dou Huang, Dongxiao Zhang, Junsheng Zeng, Nanzhe Wang, Haoran Zhang, Jinyue Yan
Machine learning models have been successfully used in many scientific and engineering fields.
no code implementations • 29 Nov 2020 • Qiang Zheng, Dongxiao Zhang
In fact, the proposed framework can realize the targets of MPS and TPS simultaneously by incorporating high-order information directly from rock images with the GANs scheme, while preserving low-order counterparts through conditioning.
no code implementations • 24 Nov 2020 • Hao Xu, Dongxiao Zhang, Nanzhe Wang
Our proposed algorithm is also able to discover PDEs with high-order derivatives or heterogeneous parameters accurately with sparse and noisy data.
no code implementations • 17 Nov 2020 • Nanzhe Wang, Haibin Chang, Dongxiao Zhang
In order to achieve the theory-guided training, the governing equations of the studied problems can be discretized and the finite difference scheme of the equations can be embedded into the training of CNN.
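A minimal sketch of a finite-difference residual used as a training constraint for a CNN surrogate: a fixed 5-point Laplacian stencil is applied to the predicted field (shape B×1×H×W) and the residual of an illustrative Laplace equation is penalized alongside the data misfit. The specific equation and weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[[[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]]])          # 5-point finite-difference stencil

def theory_guided_loss(pred, target, lam=1.0, h=1.0):
    data_loss = F.mse_loss(pred, target)
    residual = F.conv2d(pred, LAPLACIAN) / h**2        # interior FD residual of the Laplace eq.
    return data_loss + lam * residual.pow(2).mean()
```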
no code implementations • 8 Sep 2020 • Rui Xu, Dongxiao Zhang, Miao Rong, Nanzhe Wang
In the weak form, high order derivatives in the PDE can be transferred to the test functions by performing integration-by-parts, which reduces computational error.
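For a second-order term, integration by parts shifts one derivative order from the solution onto the test function, so the weak form only needs first derivatives of the solution (the boundary term vanishes for compactly supported test functions):

```latex
\int_\Omega (\nabla^2 u)\, v \, d\Omega
  = -\int_\Omega \nabla u \cdot \nabla v \, d\Omega
    + \oint_{\partial\Omega} (\nabla u \cdot \mathbf{n})\, v \, dS
```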
no code implementations • 24 Aug 2020 • Miao Rong, Dongxiao Zhang, Nanzhe Wang
In this paper, the Lagrangian dual-based TgNN (TgNN-LD) is proposed to improve the effectiveness of TgNN.
no code implementations • 28 Jul 2020 • Nanzhe Wang, Haibin Chang, Dongxiao Zhang
The first category is deep-learning surrogate-based inversion methods, in which the Theory-guided Neural Network (TgNN) is constructed as a deep-learning surrogate for problems with uncertain model parameters.
no code implementations • 2 Jun 2020 • Tianhao He, Dongxiao Zhang
In this study, a theory-guided generative adversarial network (TgGAN) is proposed to solve dynamic partial differential equations (PDEs).
no code implementations • 16 May 2020 • Hao Xu, Dongxiao Zhang, Junsheng Zeng
Next, a genetic algorithm is utilized to discover the form of PDEs and the corresponding coefficients with an incomplete candidate library.
1 code implementation • 26 Apr 2020 • Yuntian Chen, Dongxiao Zhang
In this study, we propose an ensemble long short-term memory (EnLSTM) network, which can be trained on a small dataset and process sequential data.
no code implementations • 26 Apr 2020 • Yuntian Chen, Dongxiao Zhang
In the training process, the model prediction is mapped through the projection matrix to the space of values that conform to the physical mechanism, and the model is then trained on the indirect labels.
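A minimal sketch of training against indirect labels through a projection matrix, as described above: the network output is mapped by a known linear operator P into the observable, physically constrained space, and the loss is computed there. P, the shapes, and the loss form are illustrative assumptions.

```python
import torch

def indirect_label_loss(net, x, indirect_label, P):
    y_pred = net(x)               # prediction in the unobserved target space
    projected = y_pred @ P.T      # map predictions to the space of the indirect labels
    return torch.mean((projected - indirect_label) ** 2)
```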
no code implementations • 25 Apr 2020 • Nanzhe Wang, Haibin Chang, Dongxiao Zhang
The trained neural network can predict solutions of subsurface flow problems with new stochastic parameters.
no code implementations • 21 Jan 2020 • Hao Xu, Haibin Chang, Dongxiao Zhang
In the proposed framework, a deep neural network that is trained with available data of a physical problem is utilized to generate meta-data and calculate derivatives, and the genetic algorithm is then employed to discover the underlying PDE.
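A minimal sketch of this two-stage idea: a trained network is queried on a dense grid ("meta-data") and automatic differentiation supplies candidate derivative terms; a genetic algorithm would then search over subsets of this library. Only the library construction and the fitness of one candidate subset are sketched, with an illustrative set of terms.

```python
import torch

def build_library(net, x, t):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=-1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t, {"u": u, "u_x": u_x, "u_xx": u_xx, "u*u_x": u * u_x}

def candidate_fitness(u_t, library, terms):
    A = torch.cat([library[k] for k in terms], dim=1).detach()   # chosen candidate terms
    b = u_t.detach()
    coeffs = torch.linalg.lstsq(A, b).solution                   # least-squares coefficients
    return torch.mean((A @ coeffs - b) ** 2).item()              # residual used as GA fitness
```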
no code implementations • 24 Oct 2019 • Nanzhe Wang, Dongxiao Zhang, Haibin Chang, Heng Li
The TgNN can achieve higher accuracy than the ordinary Artificial Neural Network (ANN) because the former provides physically feasible predictions and can be more readily generalized beyond the regimes covered with the training data.
no code implementations • 13 Aug 2019 • Hao Xu, Haibin Chang, Dongxiao Zhang
However, prior to achieving this goal, major challenges remain to be resolved, including learning PDEs from noisy and limited discrete data.
no code implementations • 29 Oct 2018 • Haibin Chang, Dongxiao Zhang
Using the training data set, a data-driven method is developed to learn the governing equation of the considered physical problem by identifying the occurring (or dominant) processes and selecting the proper empirical model.