FREITAS, Rennan Rocha de.
Abstract:
Convolutional neural networks have achieved human-like accuracy in various computer vision tasks. However, the complexity of these models and their growing number of parameters produce internal representations and decisions that are not easily comprehensible, so these networks are often treated as black-box algorithms. As a result, it is difficult to adopt such models in critical settings that require explanations of their outputs, such as the medical context. This study aims to train a brain tumor classifier on a dataset of magnetic resonance imaging (MRI) scans, and to apply, evaluate, and compare interpretability techniques on this classifier. The resulting classifier reached an accuracy of 95%, and a subset of the test images was explained using 11 feature-attribution interpretability techniques. The techniques were then compared both subjectively and objectively, revealing that RISE achieved the best objective score among those evaluated.
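For illustration only, the sketch below shows one way a RISE-style feature-attribution map (Petsiuk et al., 2018) could be computed for a single image: random low-resolution binary masks are upsampled, applied to the input, and averaged weighted by the classifier's score for the target class. It assumes a PyTorch classifier and uses hypothetical names (`model`, `image`, `target_class`); it is a minimal sketch, not the implementation used in this study.

```python
# Minimal RISE-style saliency sketch (assumes a PyTorch classifier returning logits).
# `model`, `image`, and `target_class` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def rise_saliency(model, image, target_class, n_masks=2000, grid=7, p_keep=0.5):
    """image: (C, H, W) tensor already preprocessed as the model expects."""
    model.eval()
    _, H, W = image.shape
    device = image.device
    saliency = torch.zeros(H, W, device=device)

    with torch.no_grad():
        for _ in range(n_masks):
            # Random low-resolution binary mask, upsampled to image size
            # (bilinear upsampling gives the smooth masks used by RISE).
            low = (torch.rand(1, 1, grid, grid, device=device) < p_keep).float()
            mask = F.interpolate(low, size=(H, W), mode="bilinear",
                                 align_corners=False)[0, 0]
            # Probability of the target class on the masked input.
            masked = (image * mask).unsqueeze(0)
            score = torch.softmax(model(masked), dim=1)[0, target_class]
            # Accumulate masks weighted by the score they produced.
            saliency += score * mask

    # Normalize by the number of masks and the keep probability.
    return saliency / (n_masks * p_keep)
```

The original RISE formulation also applies a random spatial shift to each upsampled mask for better coverage; that detail is omitted here for brevity.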