ALMEIDA, Gabryelle Soares Herculano de.
Abstract:
The field of Music Information Retrieval (MIR) encompasses a variety of topics, including
music transcription, source separation, and recognition of instruments and/or musical genres. A
practical example of this field is Spotify, which uses recommendation systems capable of learning
patterns in the content users play and suggesting similar music. However, instrument
recognition remains a challenge, and its difficulty varies with the dataset used. In this
context, the objective of this research is to train a model capable of detecting and
identifying instruments and to evaluate it on several well-known datasets in the field of MIR.
For this purpose, the OpenMIC-2018 dataset is used for training, and three datasets are used to
evaluate the model: MTG-Jamendo, NSynth, and audio from live performances whose instruments were
separated using Demucs. Accuracy will be one of the criteria used to assess the model’s performance. By addressing
this issue, we hope to contribute to advancements in the field of MIR, enabling more personalized
music recommendations through more accurate recommendation systems. Additionally, we seek to
provide insights to the MIR community that support music analysis and related applications.