SOUSA, M. L.; http://lattes.cnpq.br/0443265785880303; SOUSA, Marley Lobão de.
Abstract:
This work compares different computer number representations in terms of logic-cell count, timing, and power when they are applied to machine-learning modules in the FPGA context. The comparison was made by analyzing the characteristics of the hardware generated by the Intel® Quartus® Prime synthesizer. In this way, the number of logic elements and the maximum frequency reached were evaluated for adder and multiplier arithmetic blocks. In addition, it was verified that training neural networks with either Posit or fixed-point arithmetic, both using 16-bit representations, produces the same final accuracy. When comparing power consumption between Posits and floats with the same number of bits, Posits did not prove advantageous. However, when the floats used a larger number of bits, the results were very promising for Posit arithmetic. Moreover, for the neural-network application, the logic-element, timing, and power metrics favored Posit as the total number of representation bits increased. Therefore, in the FPGA context, Posit arithmetic has a higher resource cost, which may guide the decision of whether or not to use hardware accelerators in the application of interest.
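For readers unfamiliar with the Posit format compared above, the sketch below decodes a 16-bit Posit bit pattern (sign, run-length regime, exponent, fraction) into a float. It is an illustrative sketch only, assuming the classic 16-bit configuration with es = 1; the function name is hypothetical and not from this work.

```python
def posit16_to_float(p, es=1):
    """Decode a 16-bit Posit bit pattern (given as an int) to a float.

    Illustrative sketch: assumes the classic posit<16,1> layout
    (sign | regime run | es exponent bits | fraction).
    """
    NBITS = 16
    mask = (1 << NBITS) - 1
    p &= mask
    if p == 0:
        return 0.0
    if p == 1 << (NBITS - 1):
        return float("nan")              # NaR: the single "not a real" pattern
    sign = -1.0 if (p >> (NBITS - 1)) else 1.0
    if sign < 0:
        p = (-p) & mask                  # negative posits decode via two's complement
    bits = bin(p)[2:].zfill(NBITS)[1:]   # drop the sign bit
    r0 = bits[0]
    run = len(bits) - len(bits.lstrip(r0))   # regime run length
    k = run - 1 if r0 == "1" else -run       # regime value
    rest = bits[run + 1:]                    # skip the regime terminator bit
    exp = int(rest[:es] or "0", 2)
    if len(rest) < es:                       # truncated exponent field pads with zeros
        exp <<= es - len(rest)
    frac_bits = rest[es:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    # value = sign * (2^(2^es))^k * 2^exp * (1 + fraction)
    return sign * 2.0 ** (k * (1 << es) + exp) * (1.0 + frac)
```

The regime field is what distinguishes Posits from fixed point and floats: its variable run length trades fraction bits for dynamic range, which is the source of the hardware-cost differences measured above.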