SOUSA, R. P.; http://lattes.cnpq.br/3300067187001570; SOUSA, Robson Pequeno de.
Abstract:
The main objective of this thesis is to develop an analytic method for designing translation-invariant operators via neural network training. A new neural network architecture, called the Modular Morphological Neural Network (MMNN), is defined using a fundamental result on minimal representations of translation-invariant set mappings in Mathematical Morphology, proposed by Banon and Barrera (1991).
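For orientation, the representation result can be stated roughly as follows; the notation below is an assumed standard form, not a quotation from the thesis. Under mild continuity assumptions, every translation-invariant set operator \(\psi\) admits a representation as a union of interval (hit-or-miss-type) operators taken over a basis \(\mathbf{B}(\psi)\) of its kernel:

\[
\psi(X) \;=\; \bigcup_{[A,B] \in \mathbf{B}(\psi)} \lambda_{[A,B]}(X),
\qquad
\lambda_{[A,B]}(X) \;=\; \{\, x : A_x \subseteq X \subseteq B_x \,\}
\;=\; (X \ominus A) \cap (X^{c} \ominus B^{c}),
\]

where \(A_x\) denotes \(A\) translated by \(x\) and erosion is taken as \(X \ominus A = \{x : A_x \subseteq X\}\).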
The MMNN general architecture is capable of learning both binary and gray-scale translation-invariant operators. Its training combines ideas from the Back-Propagation (BP) algorithm with the methodology proposed by Pessoa and Maragos (1997) for overcoming the non-differentiability of rank functions.
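To illustrate the smoothing idea, the Python/NumPy sketch below approximates the gradient of a rank function by replacing its exact unit-impulse derivative with a normalized Gaussian bump; the function names, the Gaussian kernel, and the sigma value are illustrative assumptions, not the thesis's exact formulation.

import numpy as np

def rank_forward(x, r):
    # r-th largest element of x (r = 1 gives the maximum).
    return np.sort(x)[::-1][r - 1]

def rank_backward(x, y, sigma=0.05):
    # The exact derivative of a rank output is a unit impulse at the
    # selected sample, which stalls gradient descent; smooth it with a
    # normalized Gaussian bump so nearby inputs also get gradient mass.
    q = np.exp(-((y - x) / sigma) ** 2)
    return q / q.sum()

x = np.array([0.2, 0.9, 0.5, 0.4])
y = rank_forward(x, r=2)           # second largest value: 0.5
print(y, rank_backward(x, y))      # weight concentrates at the 0.5 entry

In training, such a normalized vector plays the role of the rank node's input gradient, so error signals can back-propagate through otherwise non-differentiable rank selections.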
An alternative MMNN training method based on Genetic Algorithms (GA) is also developed, and a comparative analysis of BP versus GA training is provided on problems of image restoration and pattern recognition (a generic sketch of such a GA search appears below). The MMNN structure can be viewed as a special case of the Morphological/Rank/Linear Neural Network (MRL-NN) proposed by Pessoa and Maragos (1997), but with a specific architecture and specific training rules.
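For intuition about the GA alternative, here is a minimal real-coded GA over a flat weight vector; the population size, uniform crossover, Gaussian mutation, and every name below are illustrative assumptions, not the specific operators used in the thesis.

import numpy as np

rng = np.random.default_rng(0)

def ga_train(loss, dim, pop_size=40, generations=200, elite=4, scale=0.1):
    # Minimize a training loss over flat weight vectors: keep the elite,
    # breed the rest by uniform crossover plus Gaussian mutation.
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        order = np.argsort([loss(w) for w in pop])  # lower loss = fitter
        parents = pop[order[:elite]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(dim) < 0.5            # uniform crossover
            children.append(np.where(mask, a, b)
                            + rng.normal(scale=scale, size=dim))
        pop = np.vstack([parents, np.asarray(children)])
    return min(pop, key=loss)

# Toy usage: recover a target weight vector under squared error.
target = np.array([0.5, -1.0, 2.0])
print(ga_train(lambda w: float(np.sum((w - target) ** 2)), dim=3))

Unlike BP, such a search needs only loss evaluations, which makes it applicable even where rank nodes render gradients unreliable.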
The effectiveness of the proposed BP and GA training algorithms for MMNNs is encouraging, offering alternative design tools for the important class of translation-invariant operators.