Manchester’s contribution to the Institute of Coding is focussed on three main strands:
- RoboTA: A Continuous Marking and Feedback System for Software Engineering
- Customised Competence Profiles for Employers
- Empirical Analysis of Learning Challenges in Software Engineering
RoboTA: A Continuous Marking and Feedback System for Software Engineering
We are converting our current GitHub-based system for the automated marking of team software engineering projects to provide continuous formative feedback to students. On every push, merge, issue-tracker update and code review, our Jenkins CI server re-assesses the team's work and generates a personalised report showing how the team is progressing towards the goals of the current assessment.
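The re-assessment loop described above can be sketched as a set of goal checks re-run against the repository state after each event. This is an illustrative sketch only; the class and function names are invented here and are not RoboTA's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of continuous re-assessment: after each repository
# event (push, merge, issue update, code review), every goal for the
# current assessment is re-checked against the latest repository state.

@dataclass
class AssessmentGoal:
    name: str
    check: Callable[[dict], bool]  # True if the repo state meets this goal

def reassess(repo_state: dict, goals: list) -> dict:
    """Return a per-goal pass/fail report for one team."""
    return {g.name: g.check(repo_state) for g in goals}

goals = [
    AssessmentGoal("feature branches used", lambda s: s["branches"] > 1),
    AssessmentGoal("tests present", lambda s: s["test_files"] > 0),
]

# Simulated repository state after a push event
report = reassess({"branches": 3, "test_files": 0}, goals)
# report == {"feature branches used": True, "tests present": False}
```

A real pipeline would derive the repository state from the GitHub API and CI build results rather than a plain dictionary, but the per-event re-check structure is the same.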
As part of this work, we are developing and evaluating new metrics for software engineering product and process quality, including:
- Clean and consistent application of standard DVCS workflows for collaborative team coding.
- The management of code health and quality, through the use of automated build and code quality reports.
- The successful use of code review to increase code quality across a team/throughout a project.
- The use of automated tests to maintain and manage code quality, including test-first, test-driven and behaviour-driven approaches to coding.
- The use of modern evolutionary design practices, including refactoring, design-for-testability and design patterns.
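As a concrete example of the kind of process metric listed above, code-review uptake could be measured as the fraction of merged changes that received at least one review. This is a hypothetical sketch, not one of RoboTA's actual metrics; the data shape is invented for illustration.

```python
def review_coverage(pull_requests):
    """Fraction of merged pull requests that received at least one review.

    Illustrative metric sketch: each pull request is a dict with
    'merged' (bool) and 'reviews' (count of review comments).
    """
    merged = [pr for pr in pull_requests if pr["merged"]]
    if not merged:
        return 0.0  # no merged work yet, nothing to assess
    reviewed = sum(1 for pr in merged if pr["reviews"] > 0)
    return reviewed / len(merged)

prs = [
    {"merged": True, "reviews": 2},
    {"merged": True, "reviews": 0},
    {"merged": False, "reviews": 1},  # unmerged PRs are excluded
]
coverage = review_coverage(prs)
# coverage == 0.5
```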
Customised Competence Profiles for Employers
The RoboTA system, and the library of metrics we are creating for it, can be used flexibly to assess student code in a variety of ways. This opens up the possibility for our industry partners to create competence profiles by combining different metrics and weighting them to reflect their own standards for process and product quality.
A student who is interested in working for a particular employer will be able to ask for their work to be assessed against that employer's competence profile, and will receive a report showing the areas and skills they could usefully work on before applying.
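One minimal way to realise such a profile is as a weighted combination of normalised metric scores, with the weights chosen by the employer. The metric names and weights below are invented for illustration; this is a sketch of the idea, not RoboTA's implementation.

```python
def profile_score(metric_scores, weights):
    """Combine normalised metric scores (0..1) using employer-chosen weights.

    Hypothetical sketch: metric_scores maps metric name -> student's score,
    weights maps metric name -> importance to this employer.
    """
    total = sum(weights.values())
    return sum(metric_scores[m] * w for m, w in weights.items()) / total

# A student's (invented) scores on three metrics
scores = {"dvcs_workflow": 0.9, "code_review": 0.5, "testing": 0.7}

# An employer profile that values testing most heavily
employer_profile = {"dvcs_workflow": 1, "code_review": 2, "testing": 3}

overall = profile_score(scores, employer_profile)
# overall == (0.9*1 + 0.5*2 + 0.7*3) / 6 ≈ 0.667
```

A per-metric breakdown of the same data would then show the student which weighted areas (here, code review) most depress their score for that employer.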
Empirical Analysis of Learning Challenges in Software Engineering
We are using our historical collection of student source code repositories and continuous-build histories to analyse empirically how students learn these techniques, including collating teaching resources based on frequently occurring errors and methods to correct them. The results will be packaged as teaching materials for use in teaching at IoC partner sites, and will feed back into the development of the RoboTA metrics.
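The "frequently occurring errors" part of this analysis amounts to counting failure categories across many build histories. The sketch below assumes each history has already been reduced to a list of category labels extracted from CI logs; the categories and function name are invented examples.

```python
from collections import Counter

def frequent_failures(build_histories, top_n=3):
    """Rank the most common failure categories across many student builds.

    Illustrative sketch: each history is a list of failure-category
    strings (the categorisation step itself is assumed to have happened
    upstream, e.g. by parsing CI logs).
    """
    counts = Counter(cat for history in build_histories for cat in history)
    return counts.most_common(top_n)

histories = [
    ["compile_error", "test_failure"],
    ["test_failure", "merge_conflict"],
    ["test_failure"],
]

top = frequent_failures(histories, top_n=1)
# top == [("test_failure", 3)]
```

Ranked categories like these are exactly what would drive the collation of targeted teaching resources for the most common mistakes.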