http://lattes.cnpq.br/1930583565675269; SILVA, Anderson Gustafson Freire da.
Abstract:
Software testing aims to find errors through the execution of a particular system. In other words, it attempts to detect divergences between the current and intended behavior of software under development. However, testing is not a trivial activity. Complex systems in particular, which require robust test suites, can consume up to 50% of development time and financial resources in testing. Model-Based Testing (MBT) can facilitate this process by enabling the automatic generation of test cases from models that describe the behavior or functionality of the system under test. In the MBT context, using use cases described in natural language as models, we observed that about 86% of the generated manual-execution suite becomes obsolete when the system models evolve and a new suite is generated. Such model evolutions may stem from changing requirements, incorrect requirements elicitation, or refactoring to improve the quality of the use cases. In addition, we detected that some of these tests, about 52%, become obsolete due to typo fixes or writing improvements that do not modify system behavior, and could therefore be reused without much effort. Discarding such tests implies the loss of their historical data (e.g., run/results history and the associated system version). Based on this, the objective of this paper is to automatically identify tests impacted by syntactic modifications, introduced during the evolution of the system models, facilitating reuse and the preservation of historical data.
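
As a rough illustration of the underlying idea only (the abstract does not specify the actual technique, and the function and parameter names below are hypothetical), a use-case step edit could be heuristically classified as syntactic when, after normalizing casing, punctuation, and whitespace, the old and new texts are identical or nearly identical. A minimal Python sketch, assuming a simple string-similarity threshold:

    import difflib
    import re
    import string

    def normalize(step: str) -> str:
        # Lowercase, strip punctuation, and collapse whitespace.
        step = step.lower()
        step = step.translate(str.maketrans("", "", string.punctuation))
        return re.sub(r"\s+", " ", step).strip()

    def is_syntactic_change(old_step: str, new_step: str,
                            threshold: float = 0.9) -> bool:
        # Treat the edit as syntactic if only casing, punctuation, or
        # spacing changed, or if the normalized texts remain very
        # similar (e.g., a typo fix). The 0.9 threshold is an assumed
        # value for illustration, not taken from the paper.
        a, b = normalize(old_step), normalize(new_step)
        if a == b:
            return True
        return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

    # A typo fix keeps the step reusable, while a behavioral change does not:
    print(is_syntactic_change("User click the login buton.",
                              "User clicks the login button."))   # True
    print(is_syntactic_change("User clicks the login button.",
                              "System rejects expired credentials."))  # False

Tests whose steps pass such a check could keep their identity across model versions, preserving run/results history instead of being regenerated from scratch.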