Automated assessment of programming assignments: visual feedback, assignment mobility, and assessment of students' testing skills

Perustieteiden korkeakoulu | Doctoral thesis (article-based)
Date
2011
Language
en
Pages
Online publication (1771 KB, 73 pages)
Series
Aalto University publication series DOCTORAL DISSERTATIONS, 131/2011
Abstract
The main objective of this thesis is to improve the automated assessment of programming assignments from the perspective of assessment tool developers. We have developed visual feedback on the functionality of students' programs and explored methods to control the level of detail in such feedback. We have found that visual feedback does not require major changes to existing assessment platforms: most modern platforms are web based, which makes it possible to describe visualizations in JavaScript and HTML embedded in the textual feedback. Our preliminary results on the effectiveness of automatic visual feedback indicate that students perform equally well with visual and textual feedback. However, visual feedback based on automatically extracted object graphs can take less time to prepare than textual feedback of good quality.

We have also developed programming assignments that are easier to port from one server environment to another because assessment is performed on the client side. This not only makes it easier to use the same assignments in different server environments but also removes the need to sandbox the execution of students' programs. The approach is likely to become more important as interactive study materials grow in popularity. Client-side assessment is better suited to self-study material than to grading, because assessment results sent by a client are often too easy to falsify.

Testing is an important part of programming, and automated assessment should also cover students' self-written tests. We have analyzed how students behave when they are rewarded for structural test coverage (e.g. line coverage) and found that this can lead them to write tests with good coverage but poor ability to detect faulty programs. Mutation analysis, in which a large number of (faulty) programs are automatically derived from the program under test, turns out to be an effective way to detect tests that would otherwise fool our assessment systems. Applying mutation analysis directly to grading is problematic, however, because some of the derived programs are equivalent to the original, and some assignments or solution strategies generate more equivalent mutants than others.
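As a minimal sketch of the embedding idea summarized above (the object graph, labels, and rendering here are invented for illustration and are not taken from the thesis or its tools), an automatically extracted object graph could be turned into an HTML fragment and concatenated with the ordinary textual test report before a web-based platform displays it:

// A hypothetical object graph extracted from the student's program state:
// nodes are objects, edges are references between them.
const objectGraph = {
  nodes: [
    { id: 'list', label: 'LinkedList' },
    { id: 'n1', label: 'Node("a")' },
    { id: 'n2', label: 'Node("b")' },
  ],
  edges: [
    { from: 'list', to: 'n1', label: 'head' },
    { from: 'n1', to: 'n2', label: 'next' },
  ],
};

// Render the graph as a nested HTML list so that it travels inside the same
// textual feedback channel the platform already supports; a real system
// might emit SVG or a richer JavaScript widget instead.
function graphToHtml(graph) {
  const items = graph.edges.map(e => {
    const from = graph.nodes.find(n => n.id === e.from);
    const to = graph.nodes.find(n => n.id === e.to);
    return `<li>${from.label} ${e.label} &rarr; ${to.label}</li>`;
  }).join('\n');
  return `<ul class="object-graph">\n${items}\n</ul>`;
}

// Combine plain textual feedback with the embedded visualization.
const feedback = [
  '<p>Test testAppend failed: expected the list to contain two nodes.</p>',
  graphToHtml(objectGraph),
].join('\n');

console.log(feedback);

The contrast between structural coverage and mutation analysis can be sketched in the same spirit with a toy example (again invented; the mutants are written by hand here, whereas a real mutation tool derives them automatically from the source): a test suite can execute every line of a correct max function and still kill only some of its mutants.

// Program under test: returns the larger of two numbers.
const original = (a, b) => (a > b ? a : b);

// Hand-derived mutants: small faulty variants of the program under test.
const mutants = [
  (a, b) => (a >= b ? a : b),   // relational operator replaced (equivalent to the original for max)
  (a, b) => (a < b ? a : b),    // condition negated
  (a, b) => (a > b ? b : a),    // branches swapped
];

// A weak student test suite: it passes on the original and exercises every line.
const studentTests = [
  max => max(2, 1) === 2,
  max => max(3, 3) === 3,
];

// A mutant is "killed" if at least one test fails when run against it.
function mutationScore(tests, mutantFns) {
  const killed = mutantFns.filter(m => tests.some(t => !t(m))).length;
  return killed / mutantFns.length;
}

console.log('All tests pass on original:', studentTests.every(t => t(original)));
console.log('Mutation score:', mutationScore(studentTests, mutants));

In this toy case the >= mutant behaves exactly like the original, so even a perfect test suite cannot kill it; the achievable mutation score therefore depends on the assignment as well as on the tests, which is the difficulty with grading directly on mutation score noted above.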
Supervising professor
Malmi, Lauri, Prof.
Thesis advisor
Korhonen, Ari, Dr.
Karavirta, Ville, Dr.
Keywords
teaching programming, automated assessment, visual feedback, software visualization, portability, mobility, testing, mutation analysis
Parts
  • [Publication 1]: Ari Korhonen, Lauri Malmi, Jussi Nikander, and Petri Tenhunen. Interaction and feedback in automatically assessed algorithm simulation exercises. Journal of Information Technology Education, vol. 2, pages 241-255, 2003.
  • [Publication 2]: Petri Ihantola, Tuukka Ahoniemi, Ville Karavirta, and Otto Seppälä. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the 10th Koli Calling International Conference on Computing Education Research (Koli Calling '10), ACM, New York, NY, USA, pages 86-93, 2010.
  • [Publication 3]: Petri Ihantola. Creating and visualizing test data from programming exercises. Informatics in Education, 6(1), pages 81-102, January 2007.
  • [Publication 4]: Petri Ihantola, Ville Karavirta and Otto Seppälä. Automated visual feedback from programming assignments. In Proceedings of the 6th Program Visualization Workshop (PVW '11), Technische Universität Darmstadt, Darmstadt, pages 87-95, 2011. © 2011 by authors.
  • [Publication 5]: Petri Ihantola, Ville Karavirta, Ari Korhonen, and Jussi Nikander. Taxonomy of effortless creation of algorithm visualizations. In Proceedings of the 2005 International Workshop on Computing Education Research (ICER '05), ACM, New York, NY, USA, pages 123-133, 2005.
  • [Publication 6]: Guido Rößling, Myles McNally, Pierluigi Crescenzi, Atanas Radenski, Petri Ihantola, and M. Gloria Sánchez-Torrubia. Adapting Moodle to better support CS education. In Proceedings of the 2010 ITiCSE working group reports (ITiCSE-WGR '10), ACM, New York, NY, USA, pages 15-27, 2010.
  • [Publication 7]: Petri Ihantola and Ville Karavirta. Two-dimensional Parson's puzzles: the concept, tools, and first observations. Journal of Information Technology Education: Innovations in Practice, vol. 10, pages 119-132, 2011.
  • [Publication 8]: Ville Karavirta and Petri Ihantola. Automatic assessment of JavaScript exercises. In Proceedings of 1st Educators' Day on Web Engineering Curricula (WECU 2010), Volume 607 of CEUR-WS, Vienna, Austria, pages P9:1-P9:10, 2010. © 2010 by authors.
  • [Publication 9]: Kalle Aaltonen, Petri Ihantola, and Otto Seppälä. Mutation analysis vs. code coverage in automated assessment of students' testing skills. In Proceedings of the ACM International Conference Companion on Object-Oriented Programming Systems Languages and Applications Companion (SPLASH '10), ACM, New York, NY, USA, pages 153-160, 2010.