---
title: 'Engagement in Computing Lab Work: trialling the Moodle Workshop Activity'
author: ED76003A (Module 3) / 33347617
mainfont: Helvetica Neue
linestretch: 1.5
papersize: a4paper
geometry: margin=1.5in
header-includes: \renewcommand{\abstractname}{Declaration}
abstract: '\noindent By submitting this, I declare it to be my own work. Any quotations from other sources have been credited to their authors. Approximate word count: 1900 words'
---

There is a well-documented problem with the second year in UK Higher Education (Milsom et al., 2014): it nestles between the infinite possibilities of the first year and the focus on the immediate future of the third year, and students can end up feeling demotivated and disengaged. The phenomenon is not unique to the UK; the “sophomore slump” is often discussed in US Higher Education, and it may be more general still – Computer Engineering has a common pitfall known as the “second-system effect” (Brooks, 1995), whereby, after learning about and prototyping a solution to a particular problem, a designer’s second attempt tends to fail through over-elaboration, incorporating all the ideas cautiously set aside the first time. Comparisons between students on programmes with a manifest slump and those on non-slumping programmes suggest that observable characteristics, such as meta-cognitive self-regulation and independence of learning belief, are correlated with the presence or absence of a second-year slump (Darwent and Stewart, 2013).

This essay reflects on student motivation and engagement; in particular, it documents one attempt to increase student engagement with the lab work for one of the modules on which I teach, Perception and Multimedia Computing, a level 5 module forming part of the BSc programmes in Creative Computing, Music Computing and Games Programming offered by the Computing department at Goldsmiths.

The context for this module is that it attempts to bring together knowledge and skills for creative practice, and to encourage the development of those skills and of principled investigation, particularly using computer programming as a way of exploring perception. This makes it a challenging module: students have historically found it one of the harder of their level 5 modules (based on marks achieved and anecdotal feedback). Lab work in this module is intended to reinforce or prefigure material delivered in lectures; to allow students to validate information given; and to provide a starting point for students’ individual further learning (the students are explicitly told that fully completing a lab sheet should take more time than is timetabled for the formal laboratory session).

In addition, over the course of the 2014–15 academic session there was a concerted attempt at departmental level to evaluate systems to enhance students’ learning experience and to help departmental pedagogy scale to larger student numbers; colleagues worked with Kadenze^1, codecircle^2 and Trello^3 among others. Although not part of the plan, it seemed to me that the departmental evaluation would benefit from a deeper appreciation of the capabilities of the College VLE (based on Moodle^4). Part of the plan for teaching larger cohorts was to have a greater component of assessment and feedback coming from peers; as well as reducing the burden on individual lecturers, peer feedback is useful not only to the recipients but doubly to those who give it: once because they must engage with the work of others in the cohort, and again because they must evaluate that work critically. It encourages students to develop their critical faculties, develop skills, integrate knowledge and foster constructive collaboration (Dochy et al., 1999).

This interest in peer feedback is shared by the EU-funded *Practice and Performance Analysis Inspiring Social Education* (PRAISE) project (see <http://www.iiia.csic.es/praise/>), which investigates and enhances feedback among practitioner communities.

Lab work is an essential part of Computing pedagogy in Higher Education; practical sessions “are a key aspect of all courses” (Fry et al., p.287), allowing students to put into practice material that they have seen and heard about in lectures, and to develop intuitions and skills related to computational systems – leading them towards deeper Computational Thinking (Wing, 2006). In the specific context of this module, each lab session can be thought of as a formative assignment: the students receive group and individual oral feedback on their work, and they are encouraged to develop their lab work further when they come to their summative coursework assignments.

Therefore – to learn more about Moodle’s capabilities for learning enhancement, to attempt to increase engagement with laboratory work amongst the student body, and to help the students improve their critical assessment of their own and their peers’ work – I began by scheduling a formative assessment based on a somewhat open-ended exercise in the very first lab sheet, distributed in the first week of the Autumn Term 2014, using the Moodle ‘Workshop’ activity (Moodle, 2015). The students’ brief was to write a computer program responding to the material in the first lecture, illustrating some aspect of perceptual bistability; they were to upload their program and a very brief writeup, and then assess their own submission and a subset of their peers’ according to a number of yes/no questions and a free-text response.
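By way of illustration – this is a hypothetical sketch of my own, not a student submission, and the choice of Python with the matplotlib plotting library is an assumption rather than the module’s prescribed toolchain – a minimal response to the brief might draw a Necker cube, perhaps the best-known bistable figure:

```python
# Minimal Necker cube: the twelve edges of a cube drawn as a flat
# wireframe.  With no depth cues, the figure is ambiguous, and
# perception flips between two equally valid 3D interpretations.
import matplotlib.pyplot as plt

# 2D projection of the cube's corners: a "front" square and a "back"
# square offset diagonally.
front = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
back = [(0.8, 0.8), (2.8, 0.8), (2.8, 2.8), (0.8, 2.8)]

fig, ax = plt.subplots(figsize=(4, 4))

# Draw each square as a closed loop.
for square in (front, back):
    xs, ys = zip(*(square + [square[0]]))
    ax.plot(xs, ys, color="black")

# Connect corresponding corners of the two squares.
for (x0, y0), (x1, y1) in zip(front, back):
    ax.plot([x0, x1], [y0, y1], color="black")

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Stared at for a few seconds, the wireframe alternates between two orientations – either square can be seen as the front face – and identifying and briefly explaining an effect of this kind is the sort of observation the accompanying writeup would record.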

In order to engage those students motivated more extrinsically than intrinsically (Fry et al., chap. 3), I explicitly told the cohort that the execution and evaluation criteria were similar to those for the students’ summative assignments, though it was also clear to the students that no component of this work counted towards their marks – dealing with the perceived problems of peers providing grades for each other (Falchikov, 1995) was out of scope for this trial. Student participation in the submission and assessment ended up being reasonably high: 37 students (around 90% of those enrolled) submitted their response to the brief, and around 90% of those gave feedback on at least one of their peers’ submissions.

After the first of these ‘Workshop’ activities, I scheduled a second at the end of a syllabus section, where again the students were asked to respond to a relatively open-ended brief, with the same assessment criteria. Participation in this exercise was lower, despite substantial chivvying and chasing of individuals on my part: even after a deadline extension, only 29 students (75% of the total) submitted responses to the brief, and 80% of those gave feedback on the submissions of others. This brought us to the point in the term where the first summative assignment needed to be distributed, which suggested to me that take-up of further formative activities would be even lower until the submission deadline – at which point my day-to-day involvement with the module’s teaching was reduced, as my co-teaching colleague took over lectures and labs.

Despite the relatively high participation in these activities, I am not satisfied with the outcome of the trial. The students did not give feedback about this particular activity, but the decline in participation in (and enthusiasm for) the second Workshop is the main reason for my dissatisfaction; in addition, the remainder of the module saw a steady decline in student engagement generally. The rest of this essay is an attempt to understand why, and to suggest ways of improving outcomes in the next academic session.

One decision that I believe was right was to run this activity from the very start of the module: the sooner students receive formative feedback, the sooner they can integrate their response to it into their own practice, hopefully leading to better learning and better performance in their summative assignments (Fry et al., p.133). Of course, the start of the second year is too late to begin the process of encouraging students to engage with each other’s work, and of course there are structures in the first year to help them do so (not only group work but smaller activities in individual lectures and lab sessions), but there is no uniformity in approach – some of the work with the multiple computer systems described above was to see whether it was possible to standardize approaches somewhat.

However, from my experience at least, it seems that the Moodle ‘Workshop’ activity is not suitable for peer assessment as part of a module, and that it barely satisfies the simpler requirements of a platform for peer feedback. It suffers from a poor user interface, both for the individual students and for the lecturer – simple things such as having to choose a number between 0 and 100 from a drop-down list; it is also inflexible in ways that hinder: any mistake in the assessment criteria at setup time that is not detected before students start participating can only be rectified by having all students resubmit their feedback, and late submissions cannot be handled at all. It may be that participation in the second exercise was lower because my own attitude towards the tool was coloured by my experience in the first, where I did have to ask students to resubmit their feedback because of an error I had made in the initial setup phase several weeks before. In Computing in particular, VLEs and the other computer systems students interact with do double duty: not only are they there for pedagogical and administrative reasons, but they serve as exemplars of Computing systems, and delivery of the curriculum is expected to condition students’ expectations towards high-quality software and tools (QAA, 2007).

As well as the errors I made in setting up the first Workshop, I think another contributor to the dissatisfaction with and lower participation in the second may have been the lack of my own direct involvement in the feedback component of the first: I did not include myself among those assessing the submissions (though I did look at them informally). Though this was done with the best of intentions – attempting to foster a self-motivated community of peer support – in retrospect the start of the second year is too early in the students’ development to expect such a community to emerge unaided.

Despite these somewhat negative thoughts and an unsatisfactory overall experience, I plan to repeat and even extend this trial in the coming academic session: because maintaining engagement is critical, and fostering a supportive peer community ought to help; and because increasing student numbers makes finding scalable pedagogical solutions more important than ever. Two simple remedies for the problems described above suggest themselves: firstly, now that I am aware of the traps in the Moodle Workshop setup, they can be avoided; secondly, at least for the first Workshop and probably for more, I should be an explicit participant in the feedback portion of the Workshop – budgeting time for this will be essential, as the only way to preserve the perception of fairness will be to assess all the submissions, not just a subset.

I also plan to make additional structural changes to the assessment of the module: part of the summative assessment in the following year will be a reflection on the lab work, and I will make this known to the students from the start, hopefully helping extrinsically-motivated students to see the benefits of engagement. Making the feedback cycle more rapid than this year’s should also help: instead of delaying the assessment part of the Workshop until most students have submitted, I will impose a strict and short deadline, and run more of these Workshops (perhaps fortnightly, which would allow for four such activities before the coursework brief is given out). This offers the possibility of a virtuous cycle of activity and feedback: the participation rate in any individual ‘Workshop’ will likely be lower, but the overall participation and spread of benefits will hopefully be higher.

References

Brooks, F P, Jr (1995) ‘The Mythical Man-Month: Essays on Software Engineering’ (anniversary edn.). Addison-Wesley

Darwent, S and Stewart, M (2013) ‘Are student characteristics implicated in the Sophomore Slump?’ Second year experience Project Report. At: http://secondyearexperience.ljmu.ac.uk/wp-content/uploads/2012/12/Are-student-characteristics-implemented-in-the-Sophomore-Slump.docx [last accessed 30 Aug 2015]

Dochy, F, Segers, M and Sluijsmans, D (1999) ‘The use of self-, peer and co-assessment in higher education: A review’. Studies in Higher Education 24(3), 331–350

Falchikov, N (1995) ‘Peer Feedback Marking: Developing Peer Assessment’. Innovations in Education & Training International 32(2), 175–187

Fry, H, Ketteridge, S and Marshall, S (eds.) (2008) ‘A Handbook for Teaching and Learning in Higher Education’. Routledge

Milsom, C, Stewart, M, Yorke, M and Zaitseva, E (eds.) (2014) ‘Stepping up to the second year at university: academic, psychological and social dimensions’. Routledge

Moodle (2015) ‘Workshop module’. At: https://docs.moodle.org/29/en/Workshop_module [last accessed 31 Aug 2015]

QAA (2007) ‘Subject Benchmark Statement: Computing’

Wing, J M (2006) ‘Computational Thinking’. Communications of the ACM 49(3), 33–35