algorithms and data structures term2
http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/
Tue, 08 May 2018 21:17:43 +0200
<p>I presented some of the work on teaching algorithms and data
structures at
the
<a href="https://www.european-lisp-symposium.org/2018/index.html">2018 European Lisp Symposium</a>.</p>
<p>Given that I wanted to go to the symposium (and <a href="http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/">I’m glad I
did!</a>), the most economical way to go was
to present research work – because then there was a reasonable
chance that <a href="https://www.gold.ac.uk/">my employer</a> would fund the
expenses (spoiler: they did; thank you!). It might perhaps be
surprising to hear that they don’t usually directly fund attending
events where one is not presenting; on the other hand, it’s perhaps
reasonable on the basis that part of an academic’s job as a scholar
and researcher is to be creating and disseminating new knowledge, and
of course universities, like any organizations, need to prioritise
spending money on things which bring value or further the
organization’s mission.</p>
<p>In any case, I found that I wanted to write about the teaching work
that I have been doing, and in particular I chose to write about a
small, Lisp-related aspect. Specifically, it is now fairly normal in
technical subjects to perform a lot of automated testing of students;
it relieves staff of the burden of assessing things which can be
mechanically assessed, and delivers automatically-generated feedback
to individual students; this frees up staff time to
perform targeted interventions, give better feedback on more
qualitative aspects of the curriculum, or work fewer weekends of the
year. A large part of my teaching work for the last 18 months has
been developing material for these automated tests, and working on the
infrastructure underlying them, for my and colleagues’ teaching.</p>
<p>So, the more that we can test automatically <em>and meaningfully</em>, the
more time we have to spend on other things. The main novelty here,
and the Lisp-related hook for the paper I submitted to ELS, was being
able to give meaningful feedback on numerical answer questions which
probed whether students were developing a good mental model of the
meaning of pseudocode. That’s a bit vague; let’s be specific and
consider the <code>break</code> and <code>continue</code> keywords:</p>
<pre><code>x ← 0
for 0 ≤ i < 9
    x ← x + i
    if x > 17
        continue
    end if
    x ← x + 1
end for
return x
</code></pre>
<p>The above pseudocode is typical of what a student might see; the
question would be “what does the above block of pseudocode return?”,
which is mildly arithmetically challenging, particularly under time
pressure, but the conceptual aspect that was being tested here was
whether the student understood the effect of <code>continue</code>. Therefore,
it is important to give the student specific feedback; the more
specific, the better. So if a student answered 20 to this question
(as if the <code>continue</code> acted as a <code>break</code>), they would receive a
specific feedback message reminding them about the difference between
the two operators; if they answered 45, they received a message
reminding them that <code>continue</code> has a particular meaning in loops; and
any other answers received generic feedback.</p>
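<p>As a sketch of how those specific answers arise (the original machinery
was written in Emacs Lisp; this Python rendering, with its invented
<code>run</code> and <code>mark</code> helpers, is purely illustrative),
one can evaluate the loop under each plausible misreading
of <code>continue</code> and map the resulting wrong answers to feedback:</p>

```python
def run(semantics):
    """Evaluate the example loop, treating `continue` under the given
    semantics: "continue" (the correct reading), "break" (the confusion
    yielding 20), or "noop" (ignoring it, yielding 45)."""
    x = 0
    for i in range(9):                 # for 0 <= i < 9
        x = x + i
        if x > 17:
            if semantics == "break":
                break
            if semantics == "continue":
                continue
            # "noop": fall through to the final increment
        x = x + 1
    return x

# Anticipated wrong answers mapped to targeted feedback messages.
FEEDBACK = {
    run("break"): "continue skips to the next iteration; break exits the loop.",
    run("noop"): "continue has a particular meaning inside loops.",
}

def mark(answer):
    if answer == run("continue"):
        return "Correct!"
    return FEEDBACK.get(answer, "Incorrect; trace the loop step by step.")
```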
<p>Having just one of these questions does no good, though. Students
will go to almost any lengths to avoid learning things, and it is easy
to communicate answers to multiple-choice and short-answer questions
among a cohort. So, I needed hundreds of these questions: at least
one per student, but in fact by design the students could take these
multiple-choice quizzes multiple times, as they are primarily an aid
for the students themselves, to help them discover what they know.</p>
<p>Now of course I could treat the above pseudocode fragment as a
template, parameterise it (initial value, loop bounds, increment) and
compute the values needing the specific feedback in terms of the
values of the parameters. But this generalizes badly: what happens
when I decide that I want to vary the operators (say to introduce
multiplication) or modify the structure somewhat (<em>e.g.</em> by swapping
the two increments before and after the <code>continue</code>)? The
parameterisation gets more and more complicated, the chances of (my)
error increase, and perhaps most importantly it’s not any fun.</p>
<p>Instead, what did I do? With some sense of grim inevitability, I
evolved (or maybe accreted) an interpreter (in emacs lisp) for a
sexp-based representation of this pseudocode. At the start of the
year it was pretty simple; towards the end it had developed into an
almost reasonable mini-language. Writing the interpreter is
straightforward, though the way it evolved into one gigantic <code>case</code>
statement for supported operators rather than having reasonable
semantics is a bit of a shame; as a bonus, implementing a
pretty-printer for the sexp-based pseudocode, with correct indentation
and keyword highlighting, is straightforward. Then armed with the
pseudocode I will ask the students to interpret, I can mutate it in
ways that I anticipate students might think like (replacing <code>continue</code>
with <code>break</code> or <code>progn</code>) and interpret that form to see which wrong
answer should generate what feedback.</p>
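<p>The interpreter itself was in Emacs Lisp; purely to illustrate the
shape of the approach, here is a heavily reduced sketch in Python, with
nested tuples standing in for the sexp representation (all names and
structure here are invented for the sketch, not the original
implementation):</p>

```python
# Heavily reduced sketch (in Python, not the original Emacs Lisp) of
# interpreting a sexp-based pseudocode representation, then mutating
# it to compute the answers behind anticipated student mistakes.
class Continue(Exception): pass
class Break(Exception): pass

def interpret(form, env):
    # Like the original, this is one big case statement over the
    # supported operators.
    if isinstance(form, int):
        return form
    if isinstance(form, str):                  # variable reference
        return env[form]
    op, *args = form
    if op == 'set':                            # (set var expr)
        env[args[0]] = interpret(args[1], env)
    elif op == 'for':                          # (for var lo hi body...)
        var, lo, hi, *body = args
        for i in range(interpret(lo, env), interpret(hi, env)):
            env[var] = i
            try:
                for b in body:
                    interpret(b, env)
            except Continue:
                pass                           # next iteration
            except Break:
                break
    elif op == 'if':                           # (if test then...)
        if interpret(args[0], env):
            for b in args[1:]:
                interpret(b, env)
    elif op == 'continue':
        raise Continue()
    elif op == 'break':
        raise Break()
    elif op == 'progn':                        # evaluate args in order
        for b in args:
            interpret(b, env)
    elif op == '+':
        return interpret(args[0], env) + interpret(args[1], env)
    elif op == '>':
        return interpret(args[0], env) > interpret(args[1], env)

def run(program):
    env = {}
    for form in program[:-1]:
        interpret(form, env)
    return interpret(program[-1], env)

def mutate(form, old, new):
    # Swap one operator for another throughout, then re-interpret to
    # see which wrong answer should generate which feedback.
    if isinstance(form, tuple):
        return tuple(mutate(f, old, new) for f in form)
    if isinstance(form, list):
        return [mutate(f, old, new) for f in form]
    return new if form == old else form

PROGRAM = [
    ('set', 'x', 0),
    ('for', 'i', 0, 9,
     ('set', 'x', ('+', 'x', 'i')),
     ('if', ('>', 'x', 17), ('continue',)),
     ('set', 'x', ('+', 'x', 1))),
    'x',
]
```

<p>Mutating <code>continue</code> to <code>break</code> or
to <code>progn</code> and re-running the program yields exactly the
distractor answers discussed above.</p>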
<p>Anyway, that was the hook. There's some evidence
in <a href="https://research.gold.ac.uk/23155/">the paper</a> that the general
approach of repeated micro-assessment, and also the consideration
of likely student mistakes and giving specific feedback, actually
works. And now that the (provisional) results are in, how does this
term compare with <a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term1/">last term</a>?
We can look at the relationship between this term’s marks and last
term’s. What should we be looking for? Generally, I would expect
marks in the second term’s coursework to be broadly similar to the
marks in the first term – all else being equal, students who put in a
lot of effort and are confident with the material in term 1 are likely
to have an easier time integrating the slightly more advanced material
in term 2. That’s not a deterministic rule, though; some students
will have been given a wake-up call by the term 1 marks, and equally
some students might decide to coast.</p>
<p><a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/term2-vs-term1.png"><img src="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/200x-term2-vs-term1.png" width="200" height="150" alt="plot of term 2 marks against term 1: a = 0.82, R² = 0.67" class="img" /></a></p>
<p>I’ve asked R to draw the regression line in the above picture; a
straight line fit seems reasonable based on the plot. What are the
statistics of that line?</p>
<pre><code>R> summary(lm(Term2~Term1, data=d))

Call:
lm(formula = Term2 ~ Term1, data = d)

Residuals:
    Min      1Q  Median      3Q     Max
-41.752  -6.032   1.138   6.107  31.155

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.18414    4.09773   0.777    0.439
Term1        0.82056    0.05485  14.961   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 10.46 on 107 degrees of freedom
  (32 observations deleted due to missingness)
Multiple R-squared:  0.6766,  Adjusted R-squared:  0.6736
F-statistic: 223.8 on 1 and 107 DF,  p-value: < 2.2e-16
</code></pre>
<p>Looking at the summary above, we have a strong positive relationship
between term 1 and term 2 marks. The intercept is approximately zero
(if you got no marks in term 1, you should expect no marks in term 2),
and the slope is less than one: on average, each mark a student got in
term 1 tended to convert to 0.8 marks in term 2 – this is plausibly
explained by the material being slightly harder in term 2, and by the
fact that some of the assessments were more explicitly designed to
allow finer discrimination at the top end – marks in the 90s. (A note
for international readers: in the UK system, the pass mark is 40% and
excellent work is typically awarded a mark in the 70% range; marks of
90% should be reserved for exceptional work.) The average case is,
however, only that: there was significant variation from that average
line, and indeed (looking at the quartiles) over 50% of the cohort was
more than half a degree class (5 percentage points) away from their
term 2 mark as “predicted” from their mark for term 1.</p>
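<p>To make that arithmetic concrete (coefficients and quartiles taken
from the R summary above; the example mark of 70 is hypothetical):</p>

```python
# Coefficients from the R summary: the fitted line is
# Term2 ≈ 3.18 + 0.82 * Term1.
intercept, slope = 3.18414, 0.82056

def predict_term2(term1):
    return intercept + slope * term1

# A student with 70% in term 1 is "predicted" about 60.6% in term 2.
print(round(predict_term2(70), 1))

# The residual quartiles (1Q = -6.032, 3Q = 6.107) both exceed 5
# percentage points in magnitude, so at least half the cohort landed
# more than half a degree class away from the line's prediction.
```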
<p>All of this seems reasonable, and it was a privilege to work with this
cohort of students, and to present the sum of their interactions on
this course to the audience I had. I got the largest round of
applause, I think, for revealing that as part of the peer-assessment I
had required that students run each others’ code. I also had to
present some of the context for the work; not only because this was an
international gathering, with people in various university systems and
from industry, but also because of the large-scale disruption caused
by
<a href="https://www.ucu.org.uk/strikeforuss">industrial</a> <a href="https://ussbriefs.com/">action</a> over
the <a href="https://www.uss.co.uk/">Universities Superannuation Scheme</a> (the
collective, defined benefit pension fund for academics at about 68
Universities and ~300 other bodies associated with Higher Education).
Perhaps most gratifyingly, judging by their performance on the various
assessments so far, students were able to continue learning despite
being deprived of their tuition for three consecutive weeks.</p>
<p>And now? The students will sit an exam, after which I and colleagues
will look in detail at those results and the relationship with the
students’ coursework marks (as I did <a href="http://christophe.rhodes.io/notes/blog/posts/2017/analysing_algorithms_and_data_structures_data/">last
year</a>). I will
continue developing this material (my board for this module currently
lists 33 todo items), and adapt it for next year and for new cohorts.
And maybe you will join me?
The <a href="https://www.doc.gold.ac.uk/computing/">Computing department</a>
at <a href="https://www.gold.ac.uk">Goldsmiths</a> is hiring lecturers and senior
lecturers to come and participate in research, scholarship and
teaching in computing:
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-creative-computing-348799.html">lecturer in creative computing</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-computer-games-348630.html">lecturer in computer games</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-data-science-348531.html">lecturer in data science</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-physical-and-creative-computing-348527.html">lecturer in physical and creative computing</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-computer-science-348441.html">lecturer in computer science</a> and
a
<a href="https://jobs.gold.ac.uk/vacancy/senior-lecturer-in-computer-science-348301.html">senior lecturer in computer science</a>.
Anyone reading this is welcome
to <a href="mailto:c.rhodes@gold.ac.uk">contact me</a> to find out more!</p>