RÊGO, M. G. R.; http://lattes.cnpq.br/7689109069639555; RÊGO, Matheus Gaudencio do.
Abstract:
In introductory programming courses it is common to ask students to solve exercises that
require code as a solution. When evaluating a piece of code, a teacher or student often compares how that solution relates to other codes. This comparison gives insight into different strategies for solving a problem, or information about what students are doing for a given problem. Evaluating codes and understanding how they are related is a costly task. We propose a novel approach that uses automatic code comparison strategies to compare a
pool of programs. Using the results of this approach, we created visualizations for the resulting data. First, we present a study on how automatic code comparison strategies relate to the way teachers compare codes. Then we use the code similarity information to create a graph visualization that was later used by students during a peer assessment task. Finally, we create a heatmap that shows how different questions are related and which topics each of those questions explores, and we conduct a qualitative study on how that information can be used and is useful for a teacher. Our first experiment shows that teachers may not have high agreement among themselves when comparing codes, but it is possible to have a personalized, fine-tuned algorithm that achieves a high similarity score with some teachers. Code comparison does not seem to improve the technical quality of the feedback created by students during the peer assessment task, but the visualization improves students' confidence in their peer assessment capability. The last study shows that we can obtain information about how different exercises are related and how some topics are being used in such exercises.