GADELHA, Guilherme Monteiro. http://lattes.cnpq.br/4071050262331837
Abstract:
Multi-task learning (MTL) is a design paradigm for neural networks that aims to improve generalization by solving multiple tasks simultaneously in a single network. MTL has been applied successfully in fields such as Natural Language Processing, Speech Recognition, Computer Vision, and Drug Discovery. Neural Architecture Search (NAS) is a subfield of Deep Learning that proposes methods to design neural networks automatically, searching for and arranging layers and blocks so as to maximize an objective function. Few methods in the literature currently explore the use of NAS to build MTL networks. In this context, this work presents a sequence of comparative experiments between multi-task networks, single-task networks, and networks created with a neural architecture search strategy. These experiments aim to better understand the differences between these neural network design paradigms and to compare the results achieved by each. We investigated neural network architectures for different use cases, covering the ICAO-FVC, MNIST, Fashion-MNIST, CelebA, and CIFAR-10 datasets. Additionally, we experimented with a well-established NAS dataset used to benchmark newly proposed methods in the field. Our experiments revealed that the NAS technique, developed through Reinforcement Learning, is capable of discovering optimal architectures in less time than the current state-of-the-art technique based on Regularized Evolution. Furthermore, this technique demonstrated competitive results, in terms of accuracy and equal error rate, across various multi-task learning datasets. While it may not be the top performer on ICAO-FVC, it still delivers a competitive outcome and holds the potential to uncover architectures even better than the best handcrafted one.