TOMAZ, M. E. F.; http://lattes.cnpq.br/2575683216341148; TOMAZ, Micael Espínola Fonseca.
Abstract:
Artificial Neural Networks (ANNs) are a current and recurring topic across technological fields, as their many applications attest. Training an ANN requires a large dataset that represents the input and output possibilities of the problem being modeled, which results in a high demand for computational processing. This work presents an investigation into the application of Compressed Sampling as a pre-processing technique for an ANN. The main objective is to evaluate the performance of this technique across different datasets, considering its efficiency in reducing the dimensionality of the input data while maintaining the accuracy of the results. The research addresses fundamental concepts regarding digital image representation, sparsity, and the theory of Compressed Sampling, and details the implemented methodology, including the architecture of the neural networks. The results show that, despite a possible reduction in model accuracy, Compressed Sampling can provide significant gains in computational efficiency. This study contributes to understanding the potential of Compressed Sampling in machine learning applications and highlights future research directions in this area.
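To make the idea concrete, the sketch below illustrates (it is not the thesis code) how Compressed Sampling can act as a pre-processing step: each flattened image x of dimension n is projected onto m random measurements y = Φx with m ≪ n, and a small fully connected network is trained on y instead of x. The measurement matrix Φ, the dimensions n and m, the random placeholder data, and the network architecture are illustrative assumptions; the actual datasets and architectures are described in the body of the work.

```python
# Minimal sketch of Compressed Sampling as ANN pre-processing (assumptions noted above).
import numpy as np
import tensorflow as tf

n = 28 * 28          # original input dimension (e.g., a 28x28 grayscale image) -- assumed
m = 200              # number of compressed measurements, m << n -- assumed
num_classes = 10     # assumed number of output classes

rng = np.random.default_rng(0)
Phi = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))  # random Gaussian measurement matrix

def compress(images):
    """Flatten images, normalize them, and project onto m random measurements."""
    x = images.reshape(len(images), n).astype(np.float32) / 255.0
    return x @ Phi.T  # shape: (batch, m)

# Hypothetical placeholder data; the thesis evaluates several real datasets.
x_train = rng.integers(0, 256, size=(1000, 28, 28))
y_train = rng.integers(0, num_classes, size=1000)

# Small dense classifier trained on the compressed measurements y = Phi @ x.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(m,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(compress(x_train), y_train, epochs=2, batch_size=64, verbose=0)
```

The computational gain in this setup comes from the network operating on m-dimensional measurements rather than n-dimensional images, at the possible cost of some accuracy, which is the trade-off investigated in the work.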