Understanding 3D point cloud models has become a pressing challenge for real-world recognition tasks such as autonomous driving systems.
Deep transfer learning techniques attempt to tackle two limitations of deep learning, namely the dependency on extensive training data and the high training costs, by reusing previously acquired knowledge.
In this study, we survey applications of evolutionary and nature-inspired algorithms in Data Science and Data Analytics across three main topics: pre-processing, supervised algorithms, and unsupervised algorithms.
In recent years, smart healthcare IoT devices have become ubiquitous, but they often operate in isolated networks because of institutional policies.
There is a trade-off between the number of frames processed and the speed of the captioning process.
Generative Adversarial Networks (GANs) are machine learning methods that are used in many important and novel applications.
A successful deep learning model depends on extensive training data as well as processing power and time (collectively known as training costs).
Then, we propose a framework of these solutions, called universal smart cities decision making, with three main sections, data capturing, data analysis, and decision making, to optimize smart mobility within smart cities.
Embodied AI aims to train an agent that can See (Computer Vision), Talk (NLP), Navigate and Interact with its environment (Reinforcement Learning), and Reason (General Intelligence), all at the same time.
Medical Imaging is one of the growing fields in the world of computer vision.
The proposed system operates as follows: it reads a video; identifies and selects representative image frames; captions those frames; applies NLP and text summarization to all generated captions; and finally generates a title and an abstract for the video.
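The steps above can be sketched as a small pipeline. This is a minimal illustration with stub stages; all function names and the toy "frames" are assumptions, and a real system would plug in a key-frame detector, an image-captioning model, and a text summarizer at each step.

```python
# Sketch of the described video-to-text pipeline with stub stages.

def select_key_frames(video_frames, step=2):
    # naive uniform sampling as a stand-in for key-frame detection
    return video_frames[::step]

def caption_frame(frame):
    # stand-in for an image-captioning model
    return f"a scene showing {frame}"

def summarize(captions):
    # stand-in for NLP text summarization: produce a title and an abstract
    title = captions[0]
    abstract = " ".join(captions)
    return title, abstract

def describe_video(video_frames):
    frames = select_key_frames(video_frames)
    captions = [caption_frame(f) for f in frames]
    return summarize(captions)

title, abstract = describe_video(
    ["person walking", "person running", "person sitting", "person standing"]
)
```

Each stage is deliberately isolated behind a function boundary, so swapping a stub for a trained model does not change the surrounding pipeline code.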
With the above in mind, this paper proposes a video captioning framework that aims to describe the activities in a video and estimate a person's daily physical activity level.
The availability of abundant labeled data in some domains in recent years led researchers to introduce a methodology called transfer learning, which utilizes existing data in situations where collecting new annotated data is difficult.
However, the more universal an algorithm is, the more feature dimensions it must work with, which inevitably gives rise to the Curse of Dimensionality (CoD).
DRDr II is a hybrid of the machine learning and deep learning worlds.
However, this assumption may not always hold in real-world applications, where the training and test data come from different distributions due to many factors, e.g., collecting the training and test sets from different sources, or having an outdated training set because the data have changed over time.
The goal of DeepMSRF is to first identify the speaker's gender and then recognize his or her name for any given video stream.
This paper addresses the problem of identifying two main types of lesions - Exudates and Microaneurysms - caused by Diabetic Retinopathy (DR) in the eyes of diabetic patients.
The ever-increasing number of Internet of Things (IoT) devices has created a new computing paradigm, called edge computing, where most of the computations are performed at the edge devices, rather than on centralized servers.
In [1, 2], we have explored the theoretical aspects of feature extraction optimization processes for solving large-scale problems and overcoming machine learning limitations.
Dimension reduction, together with evolutionary algorithms (EAs), lends itself to mitigating CoD and solving complex problems efficiently in terms of time complexity.
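As a concrete toy example of combining EAs with dimension reduction, the sketch below uses a simple genetic algorithm to search for a small feature subset. The per-feature utilities, the cost penalty, and the fitness function are all illustrative assumptions; in practice the fitness would score a model trained on the selected features.

```python
import random

# Toy genetic algorithm for feature-subset selection.
UTILITY = [0.9, 0.1, 0.8, 0.05, 0.7, 0.02]  # assumed per-feature usefulness
COST = 0.15                                  # penalty per selected feature

def fitness(mask):
    # reward useful features, penalize subset size (dimension reduction)
    gain = sum(u for u, m in zip(UTILITY, mask) if m)
    return gain - COST * sum(mask)

def mutate(mask, rate=0.2):
    # flip each bit with a small probability
    return [1 - m if random.random() < rate else m for m in mask]

def crossover(a, b):
    # single-point crossover of two parent masks
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(n_features=6, pop_size=20, generations=40, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the fitter half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best_mask = evolve()
```

Because the penalty term charges every selected feature, the search pressure favors compact subsets, which is exactly the dimension-reduction behavior the text describes.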
We focus on data science as a crucial area, and specifically on the curse of dimensionality (CoD), which arises from the large amount of generated, sensed, and collected data.
However, the increase in features leads to the problem of the curse of dimensionality (CoD), which is considered to be an NP-hard problem.
A widespread practice is to use the same type of activation function in all neurons in a given layer.
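To make the contrast concrete, the sketch below implements a dense layer that accepts one activation function per neuron, so the usual homogeneous case and a per-neuron mix can be compared side by side. The weights and inputs are arbitrary illustrative values, not taken from any specific model.

```python
import math

# A dense layer where each neuron may use its own activation function.

def relu(x):
    return max(0.0, x)

def dense_layer(inputs, weights, activations):
    # one activation per neuron (one per row of the weight matrix)
    return [act(sum(w * x for w, x in zip(row, inputs)))
            for row, act in zip(weights, activations)]

x = [1.0, -2.0]
W = [[0.5, 0.5],   # neuron 1: pre-activation = -0.5
     [1.0, 0.0]]   # neuron 2: pre-activation = 1.0

homogeneous = dense_layer(x, W, [relu, relu])          # widespread practice
heterogeneous = dense_layer(x, W, [relu, math.tanh])   # per-neuron mix
```

The only change between the two calls is the activation list, which is the knob a heterogeneous-activation design would tune per neuron rather than per layer.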