NASCIMENTO, André Jordão do.
Abstract:
One of the biggest problems in applications within the Big Data ecosystem is the availability and quality of data for AI models and other targeted analyses. Such applications need high-quality data, since the results of their services depend on the integrity of the information used in the process. For textual data in particular, the information supplied to text-processing applications should be as reliable as possible. An application was therefore developed to manage the continuous collection and processing of textual data. The application's context is the collection of textual data from the Reddit social network: using the API provided by the platform, data is ingested from a specific community. From the collected data, the tool orchestrates all the tasks that manage the collection, processing, and availability of this data. To test the tool, the available data is fed to an NLP model that uses LDA (Latent Dirichlet Allocation) to map topics in the texts extracted from the site. The application is built on the concepts of streaming data and continuous, automatic text processing, in order to maintain a solid, high-quality database for text analysis.