Search Results for author: Miguel A. Bessa

Found 9 papers, 3 papers with code

Engineering software 2.0 by interpolating neural networks: unifying training, solving, and calibration

no code implementations • 16 Apr 2024 • Chanwook Park, Sourav Saha, Jiachen Guo, Xiaoyu Xie, Satyajit Mojumder, Miguel A. Bessa, Dong Qian, Wei Chen, Gregory J. Wagner, Jian Cao, Wing Kam Liu

The evolution of artificial intelligence (AI) and neural network theories has revolutionized the way software is programmed, shifting from hard-coded sequences of instructions to vast neural networks.

Tensor Decomposition

Gradient-free neural topology optimization

no code implementations • 7 Mar 2024 • Gawel Kus, Miguel A. Bessa

Gradient-free optimizers can tackle problems regardless of the smoothness or differentiability of the objective function, but they require many more iterations to converge than gradient-based algorithms.
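The trade-off described in this teaser can be illustrated with a minimal gradient-free optimizer. The compass (pattern) search below is a generic sketch, not the paper's method: it uses only function evaluations, so the objective may be non-smooth or non-differentiable.

```python
def pattern_search(f, x0, step=1.0, tol=1e-3):
    """Generic gradient-free compass search: probe +/- step along each
    coordinate, keep any improvement, halve the step when stuck."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5  # stalled: refine the search scale
    return x, fx

# Non-differentiable objective (kinks at the optimum): sum of |x_i - 1|.
f = lambda x: sum(abs(xi - 1.0) for xi in x)
x_best, f_best = pattern_search(f, [0.0, 0.0, 0.0])
```

Note that every iteration spends 2 * dim function evaluations just to probe directions, which is the evaluation cost the abstract contrasts with gradient-based updates.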

Continual learning for surface defect segmentation by subnetwork creation and selection

no code implementations • 8 Dec 2023 • Aleksandr Dekhovich, Miguel A. Bessa

We introduce a new continual (or lifelong) learning algorithm called LDA-CP&S that performs segmentation tasks without undergoing catastrophic forgetting.

Continual Learning Segmentation

iPINNs: Incremental learning for Physics-informed neural networks

no code implementations • 10 Apr 2023 • Aleksandr Dekhovich, Marcel H. F. Sluiter, David M. J. Tax, Miguel A. Bessa

Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).

Incremental Learning Multi-Task Learning
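The core PINN idea, minimizing the residual of the governing equation at collocation points together with the boundary condition rather than fitting data, can be sketched in miniature. The one-parameter model and the grid sweep below are illustrative stand-ins (not the iPINNs algorithm) for a neural network and gradient descent.

```python
import math

# ODE: u'(t) = -u(t), u(0) = 1  (exact solution: exp(-t)).
# Hypothetical one-parameter "network": u_w(t) = exp(w * t).

def physics_loss(w, ts, h=1e-4):
    """PINN-style loss: squared ODE residual u' + u at collocation
    points, plus a penalty for the initial condition u(0) = 1.
    The derivative is taken by central differences for simplicity."""
    u = lambda t: math.exp(w * t)
    res = sum(((u(t + h) - u(t - h)) / (2 * h) + u(t)) ** 2 for t in ts)
    bc = (u(0.0) - 1.0) ** 2
    return res + bc

ts = [i / 10 for i in range(1, 11)]        # collocation points in (0, 1]
# crude parameter sweep stands in for gradient-based training
w_best = min((w / 100 for w in range(-200, 1)),
             key=lambda w: physics_loss(w, ts))
```

The sweep recovers w = -1, i.e. u(t) = exp(-t), without ever seeing solution data: the physics residual alone pins down the parameter.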

Cooperative data-driven modeling

1 code implementation • 23 Nov 2022 • Aleksandr Dekhovich, O. Taylan Turan, Jiaxiang Yi, Miguel A. Bessa

However, artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one.

Continual Learning
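The forgetting phenomenon this abstract refers to can be shown with a toy example, assuming a single-weight linear model trained sequentially by plain gradient descent with no continual-learning mechanism (replay, regularization, or subnetworks):

```python
def train(w, data, lr=0.1, epochs=100):
    """Plain SGD on squared error for a one-weight model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

loss = lambda w, data: sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 2.0)]    # task A wants w = 2
task_b = [(1.0, -1.0)]   # task B wants w = -1

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero after learning task A
w = train(w, task_b)              # sequential training on task B
loss_a_after = loss(w, task_a)    # large: task A has been forgotten
```

Training on task B drags the shared weight to the new optimum and erases task A; methods like the one in this paper aim to prevent exactly this.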

Continual Prune-and-Select: Class-incremental learning with specialized subnetworks

1 code implementation • 9 Aug 2022 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa

In particular, CP&S is capable of sequentially learning 10 tasks from ImageNet-1000 while maintaining accuracy of around 94% with negligible forgetting, a first-of-its-kind result in class-incremental learning.

Class Incremental Learning Incremental Learning

Adaptivity for clustering-based reduced-order modeling of localized history-dependent phenomena

no code implementations • 24 Sep 2021 • Bernardo P. Ferreira, F. M. Andrade Pires, Miguel A. Bessa

This paper proposes a novel Adaptive Clustering-based Reduced-Order Modeling (ACROM) framework to significantly improve and extend the recent family of clustering-based reduced-order models (CROMs).

Clustering

Neural network relief: a pruning algorithm based on neural activity

1 code implementation • 22 Sep 2021 • Aleksandr Dekhovich, David M. J. Tax, Marcel H. F. Sluiter, Miguel A. Bessa

Current deep neural networks (DNNs) are overparameterized and use most of their neuronal connections during inference for each task.
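Pruning exploits exactly this overparameterization. As a generic illustration, the sketch below uses plain magnitude pruning, a standard baseline and not the activity-based criterion this paper proposes:

```python
def magnitude_prune(weights, keep_frac):
    """Zero out all but the largest-magnitude weights.
    keep_frac: fraction of weights to retain (ties may keep extras)."""
    k = max(1, int(len(weights) * keep_frac))
    thresh = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= thresh else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
pruned = magnitude_prune(w, keep_frac=0.5)
```

Half the connections are removed while the dominant weights, which carry most of the layer's signal, survive; activity-based criteria like the paper's instead look at how much each connection contributes during inference.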

Spiderweb nanomechanical resonators via Bayesian optimization: inspired by nature and guided by machine learning

no code implementations • 10 Aug 2021 • Dongil Shin, Andrea Cupertino, Matthijs H. J. de Jong, Peter G. Steeneken, Miguel A. Bessa, Richard A. Norte

From ultra-sensitive detectors of fundamental forces to quantum networks and sensors, mechanical resonators are enabling next-generation technologies to operate in room temperature environments.

Bayesian Optimization BIG-bench Machine Learning
