<h1>Christophe Rhodes' blog</h1>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2019/holiday_hacking_swankr/">holiday hacking swankr</a> (2019-08-04)</h2>
<p>I'm on holiday! And holidays, as seems to be my usual, involve trains!</p>
<p>I was on a train yesterday, wondering how I could possibly fill two
hours of leisure time (it's more straightforward these days than a few
years ago, when my fellow travellers were less able to occupy their
leisure time with books), when help came from a thoroughly unexpected
quarter: I got a bug report
for <a href="https://www.github.com/csrhodes/swankr/">swankr</a>.</p>
<p>I wrote swankr nine years ago, mostly
at <a href="https://ismir2010.ismir.net/">ISMIR2010</a>. I was about to start
the
<a href="https://web.archive.org/web/20101224183258/http://teclo.net:80/">Teclo</a> phase
of my ongoing adventures, and an academic colleague had recommended
that I learn R; and the moment where it all fell into place was when I
discovered that R supported handlers and restarts: enough that writing
support for slime's SLDB was a lower-effort endeavour than learning
R's default debugger. I implemented support for the bits of slime
that I use, and also for displaying images in the REPL -- so that I
can present <code>lattice</code> objects as thumbnails of the rendered graph, and
have
the
<a href="https://web.archive.org/web/20170630092017/http://www.advogato.org/person/crhodes/diary/149.html">canned demo</a> of
adding two of them together to get a combined graph: generates “oo”
sounds at every outing, guaranteed! (Now that I mostly use <code>ggplot2</code>,
I need to find a new demo: incrementally building and styling a plot
by adding theme, scale, legend objects and so on is actually useful
but lacks a wow factor.)</p>
<p>SLIME works by exchanging messages between the emacs front-end and the
backend lisp; code on the backend lisp is responsible for parsing the
messages, executing whatever is needed to support the request, and
sending a response. The messages are, broadly, sexp-based, and the
message parsing and executing in the backends is mostly portable
Common Lisp, delegating to implementation-specific code only where
such support is needed.</p>
<p>“Mostly portable Common Lisp” doesn't mean that it'll run in R. The R
code is necessarily completely separate, implementing just enough of a
Lisp reader to parse the messages. This works fine, because the
messages themselves are simple; when the front-end sends a message
asking for evaluation for the listener, say, the message is in the
form of a sexp, but the form to evaluate is a string of the
user-provided text: so as long as we have enough of a sexp
reader/evaluator to deal with the SLIME protocol messages, the rest
will be handled by the backend's <code>eval</code> function.</p>
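<p>As an illustration only (this is not swankr's actual reader, and the
function names are invented for this sketch), “enough of a sexp reader”
can be surprisingly small in R:</p>
<pre><code>## Sketch: tokenize parens, strings and atoms, then parse recursively
## into nested R lists.  Atoms are left as strings for simplicity.
tokenize <- function(s) {
  m <- gregexpr('\\(|\\)|"[^"]*"|[^()[:space:]"]+', s)
  regmatches(s, m)[[1]]
}

read.sexp <- function(tokens) {
  pos <- 1
  read1 <- function() {
    tok <- tokens[pos]
    pos <<- pos + 1
    if (tok == "(") {
      elements <- list()
      while (tokens[pos] != ")")
        elements[[length(elements) + 1]] <- read1()
      pos <<- pos + 1                # consume the closing paren
      elements
    } else {
      tok
    }
  }
  read1()
}

form <- read.sexp(tokenize('(:emacs-rex (swank:connection-info) "NIL" t 1)'))
</code></pre>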
<p>... except in some cases. When slime sends a form to be evaluated
which contains an embedded presentation object, the presentation is
replaced by <code>#.(swank:lookup-presented-object-or-lose 57.)</code> in the
string sent for evaluation. This works fine for Common Lisp backends
– at least provided <code>*read-eval*</code> is true – but doesn't work
elsewhere. One regexp allows us to preprocess the string to evaluate
to rewrite this into something else, but what? Enter cunning plan
number 1: (ab)use backquote.</p>
<p>Backquote? Yes, R has backquote <code>bquote</code>. It also has moderately
crazy function call semantics, so it's possible to: rewrite the string
to be evaluated to contain unquotations; parse the string into a form;
funcall the <code>bquote</code> function on that form (effectively performing the
unquotations, simulating the read-time evaluation), and then <code>eval</code>
the result. And this would be marvellous except that Philipp Marek
discovered that his own <code>bquote</code>-using code didn't work. Some
investigation later, and the root cause became apparent:</p>
<pre><code>CL-USER> (progn (set 'a 3) (eval (read-from-string "``,a")))
`,a
</code></pre>
<p>compare</p>
<pre><code>R> a <- 3; bquote(bquote(.(a)))
bquote(3)
</code></pre>
<p>NO NESTED BACKQUOTES?</p>
<p>So, scratch the <code>do.call("bquote", ...)</code> plan. What else is available
to us? It turns out that there's a <code>substitute</code> function, which for
<code>substitute(expr, env)</code></p>
<blockquote><p>returns the parse tree for the (unevaluated) expression ‘expr’,
substituting any variables bound in ‘env’.</p></blockquote>
<p>Great! Because that means we can use the regular expression to
rewrite the presentation lookup function calls to be specific variable
references, then substitute these variable references from the
id-to-object environment that we have <em>anyway</em>, and Bob is your
metaphorical
uncle.
<a href="https://github.com/csrhodes/swankr/commit/e3fe25d720d192563ad078fc78befb5128d8da0f">So I did that</a>.</p>
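<p>A toy version of the mechanism (the variable-naming scheme below is
invented for this sketch, not taken from swankr) might look like this:
the regexp rewrites the <code>#.(swank:lookup-presented-object-or-lose
57.)</code> call into a plain variable reference, and
<code>substitute</code> fills in the value from the presentations
environment:</p>
<pre><code>presentations <- new.env()
assign("presentation.57", c(1, 2, 3), envir = presentations)

## What the regexp rewrite might leave us with:
rewritten <- "summary(presentation.57)"
form <- parse(text = rewritten)[[1]]

## substitute doesn't evaluate its first argument, so to apply it to a
## computed expression we go through do.call -- the very pattern whose
## semantics the documentation below declines to guarantee.
filled <- do.call(substitute, list(form, presentations))
eval(filled)
</code></pre>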
<p>The situation is not perfect, unfortunately. Here's some more of the
documentation for <code>substitute</code>:</p>
<blockquote><p>‘substitute’ works on a purely lexical basis. There is no guarantee
that the resulting expression makes any sense.</p></blockquote>
<p>... and here's some more from <code>do.call</code>:</p>
<blockquote><p>The behavior of some functions, such as ‘substitute’, will not be
the same for functions evaluated using ‘do.call’ as if they were
evaluated from the interpreter. The precise semantics are currently
undefined and subject to change.</p></blockquote>
<p>Well that's not ominous at all.</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2019/sbcl_1_5_0/">sbcl 1.5.0</a> (2019-02-24)</h2>
<p>Today, I
released <a href="http://www.sbcl.org/all-news.html#1.5.0">sbcl-1.5.0</a> – with
no particular reason for the minor version bump except that when the
patch version (we don’t in fact do semantic versioning) gets large
it’s hard to remember what I need to type in the release script. In
the 17 versions (and 17 months) since sbcl-1.4.0, there have been over
2900 commits – almost all by other people – fixing user-reported,
developer-identified, and random-tester-lesser-spotted bugs; providing
enhancements; improving support for various platforms; and making
things faster, smaller, or sometimes both.</p>
<p>It’s sometimes hard for developers to know what their users think of
all of this furious activity. It’s definitely hard for <em>me</em>, in the
case of SBCL: I throw releases over the wall, and sometimes people
tell me I messed up (but usually there is a resounding silence). So I
am running
a
<a href="https://docs.google.com/forms/d/e/1FAIpQLSe6GZTqlQbGZNusJ8W7oIWqpkh_6PuTsNGm4c_P8TfnUjBYxQ/viewform">user survey</a>,
where I invite you to tell me things about your use of SBCL. All
questions are optional: if something is personally or commercially
sensitive, please feel free not to tell me about it! But it’s nine
years since
the
<a href="http://random-state.net/sbcl-survey-2010-results.html">last survey</a>
(that I know of), and much has changed since then – I'd be glad to
hear any information SBCL users would be willing to provide. I hope
to collate the information in late March, and report on any insight I
can glean from the answers.</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2018/a_trace_of_riscv_success/">a trace of riscv success</a> (2018-08-13)</h2>
<p>(Don't get too excited by the “success” in the title of this post.)</p>
<p>A brief <a href="http://www.sbcl.org/">SBCL</a> RISC-V update this time, as there
have been no train journeys since
the <a href="http://christophe.rhodes.io/notes/blog/posts/2018/first_riscy_steps/">last blog post</a>. Most of what has been
achieved is trivial definitions and rearrangements. We've defined
some
<a href="https://github.com/csrhodes/sbcl/commit/3da9509e10a0e106673eb70ddc9c4b5053ec4bce">more of the MOVE Virtual Operations</a> that
the compiler needs to be able to move quantities between different
storage classes (from <code>immediate</code> to <code>descriptor-reg</code>, or between
tagged fixnums in <code>descriptor-reg</code> to untagged fixnums in
<code>signed-reg</code>, for example).</p>
<p>We've
<a href="https://github.com/csrhodes/sbcl/commit/86ea59e9128a0b902b8dbf316e94e205f9cb9fac">defined a debugging printer function</a>,
which wouldn't be needed at this stage except that we're asking the
compiler to print out substantial parts of its internal data
structures. We've reorganized, and not in a desperately good way,
the
<a href="https://github.com/csrhodes/sbcl/commit/84046e772174f8a9cb7ba58ffdc1221eb7836335">definitions of the registers and their storage bases and classes</a>;
this area of code is “smelly”, in that various arguments aren't
evaluated when maybe they should be, which means that some other
structures need to be defined at compile-time so that variables can be
defined at compile-time so that values (in the next form) can be
evaluated at read-time; it's all a bit horrible, and
as <a href="http://pvk.ca/">Paul</a> said, not friendly to interactive
development. And then a few
more
<a href="https://github.com/csrhodes/sbcl/commit/0af43b94c8aa4af7a2c54a7f579e30714955182e">fairly trivial definitions later</a>,
we are at the point where we have a first trace file from compiling
our very
simple
<a href="https://github.com/csrhodes/sbcl/blob/a59598999bf3f7115a59369a1ef0c6fcbe0df923/src/code/scratch.lisp"><code>scratch.lisp</code> file</a>.
Hooray! Here's the assembly code of our <code>foo</code> function:</p>
<pre><code>L10:
VOP XEP-ALLOCATE-FRAME {#<SB!ASSEM:LABEL 1>}
L1:
VOP EMIT-LABEL {#<SB!ASSEM:LABEL 2>}
L2:
VOP EMIT-LABEL {#<SB!ASSEM:LABEL 3>}
L3:
VOP NOTE-ENVIRONMENT-START {#<SB!ASSEM:LABEL 4>}
L4:
L11:
L5:
VOP TYPE-CHECK-ERROR/C #:G0!4[A0] :DEBUG-ENVIRONMENT
{SB!KERNEL:OBJECT-NOT-SIMPLE-ARRAY-FIXNUM-ERROR X}
L12:
byte 50
byte 40
.align 2
L6:
VOP EMIT-LABEL {#<SB!ASSEM:LABEL 7>}
L7:
VOP EMIT-LABEL {#<SB!ASSEM:LABEL 8>}
L8:
VOP NOTE-ENVIRONMENT-START {#<SB!ASSEM:LABEL 9>}
L9:
L13:
L14:
L15:
</code></pre>
<p>I hope that everyone can agree that this is a flawless gem of a function.</p>
<p>The next step is probably to start actually defining and using RISC-V
instructions, so... stay tuned!</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2018/first_riscy_steps/">first riscy steps</a> (2018-08-07)</h2>
<p>In our <a href="http://christophe.rhodes.io/notes/blog/posts/2018/beginning_an_sbcl_port/">first post</a>, I summarized the work of
a few evenings of heat and a few hours on a train, being basically the
boilerplate (parenthesis-infested boilerplate, but boilerplate
nonetheless) necessary to get an <a href="http://sbcl.sourceforge.net/">SBCL</a>
cross-compiler built. At that point, it was possible to start running
<code>make-host-2.sh</code>, attempting to <em>use</em> the cross-compiler to build the
Lisp sources to go into the initial Lisp image for the new,
target <a href="http://www.sbcl.org/">SBCL</a>. Of course, given that all I had
done was create boilerplate, running the cross-compiler was unlikely
to end well: and in fact it barely started well, giving an immediate
error of functions not being defined.</p>
<p>Well, functions not being defined is
a
<a href="http://lambda-the-ultimate.org/node/12">fairly normal state of affairs</a> (fans
of static type systems look away now!), and easily fixed: we simply
define them until they're not undefined any more, ideally with
sensible rather than nonsensical contents. That additional
constraint, though, gives us our first opportunity to indulge in some
actual thought about our target platform.</p>
<p>The routines we have to define include things like functions to return
Temporary Names (registers or stack locations) for such things as the
lisp return address, the old frame pointer, and the number of
arguments to a function: these are things that will be a part of the
lisp calling convention, which may or may not bear much relation to
the “native” C calling convention. On most SBCL platforms, the two
don't have much in common, with Lisp execution effectively being one
large complicated subroutine when viewed through the prism of the C
world; the
Lisp execution uses a separate stack, which allows us to be sure about
what on the stack is a Lisp object and what might not be (making
garbage collectors precise on those platforms).</p>
<p>In the nascent RISC-V port, in common with other 32-register ports,
these values of interest will be stored in registers, so we need to
think about mapping these concepts to the platform registers,
documentation of which can be
found <a href="https://riscv.org/specifications/">at the RISC-V site</a>. In
particular, let's have a look at logical page 109 (physical page 121)
of
the
<a href="https://content.riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf">user-level RISC-V ISA specification v2.2</a>:</p>
<table>
<thead>
<tr>
<th>Register</th>
<th>ABI Name</th>
<th>Description</th>
<th>Saver</th>
</tr>
</thead>
<tbody>
<tr>
<td>x0</td>
<td>zero</td>
<td>Hard-wired zero</td>
<td>-</td>
</tr>
<tr>
<td>x1</td>
<td>ra</td>
<td>Return address</td>
<td>Caller</td>
</tr>
<tr>
<td>x2</td>
<td>sp</td>
<td>Stack pointer</td>
<td>Callee</td>
</tr>
<tr>
<td>x3</td>
<td>gp</td>
<td>Global pointer</td>
<td>-</td>
</tr>
<tr>
<td>x4</td>
<td>tp</td>
<td>Thread pointer</td>
<td>-</td>
</tr>
<tr>
<td>x5</td>
<td>t0</td>
<td>Temporary/Alternate link register</td>
<td>Caller</td>
</tr>
<tr>
<td>x6-7</td>
<td>t1-2</td>
<td>Temporaries</td>
<td>Caller</td>
</tr>
<tr>
<td>x8</td>
<td>s0/fp</td>
<td>Saved register/Frame pointer</td>
<td>Callee</td>
</tr>
<tr>
<td>x9</td>
<td>s1</td>
<td>Saved register</td>
<td>Callee</td>
</tr>
<tr>
<td>x10-11</td>
<td>a0-1</td>
<td>Function arguments/return values</td>
<td>Caller</td>
</tr>
<tr>
<td>x12-17</td>
<td>a2-7</td>
<td>Function arguments</td>
<td>Caller</td>
</tr>
<tr>
<td>x18-27</td>
<td>s2-11</td>
<td>Saved registers</td>
<td>Callee</td>
</tr>
<tr>
<td>x28-31</td>
<td>t3-6</td>
<td>Temporaries</td>
<td>Caller</td>
</tr>
</tbody>
</table>
<p>The table (and the rest of the document) tells us that <code>x0</code> is a wired
zero: no matter what is written to it, reading from it will always
return 0. That's a useful thing to know; we can now define a storage
class for zero specifically, and define the <code>zero</code> register at offset
0.</p>
<p>The platform ABI uses <code>x1</code> for a link register (storing the return
address for a subroutine). We may or may not want to use this for our
own return address; I'm not yet sure if we will be able to. For now,
though, we'll keep it reserved for the platform return address,
defining it as register <code>lr</code> in SBCL nomenclature, and use <code>x5</code> for
our own lisp return address register. Why <code>x5</code>? The ISA definition
for the <code>jalr</code> instruction (Jump And Link Register, logical page 16)
informs us that return address prediction hardware should manipulate
the prediction stack when <em>either</em> <code>x1</code> or <code>x5</code> is used as the link
register with <code>jalr</code>, and not when any other register is used; that's
what “Alternate link register” means in the above table. We'll see if
this choice for our <code>lra</code> register,
for <a href="https://arxiv.org/abs/1807.07940">Spectre</a> or for worse, gives us
any kind of speed boost – or even if it is tenable at all.</p>
<p>We can define the Number Stack Pointer and Number Frame Pointer
registers, even though we don't need them yet: in SBCL parlance, the
“number” stack is the C stack, and our register table says that those
should be <code>x2</code> and <code>x8</code> respectively. Next, for our own purposes we
will need: Lisp stack, frame and old-frame pointer registers; some
function argument registers; and some non-descriptor temporaries.
We'll put those semi-arbitrarily in the temporaries and function
arguments region of the registers; the only bit of forethought here is
to alternate our Lisp arguments and non-descriptor temporaries with
one eye on the “C” RISC-V extension for compressed instructions: if an
instruction uses registers <code>x8</code>–<code>x15</code> and the “C” extension is
implemented on a particular RISC-V board, the instruction might be
encodable in two bytes instead of four, hopefully reducing instruction
loading and decoding time and instruction cache pressure. (This might
also be overthinking things at this stage.) In the spirit of not
overthinking, let's just put nargs on <code>x31</code> for now.</p>
<p>With
<a href="https://github.com/csrhodes/sbcl/commit/506323cf0a5dc25e4decf72474b9e216513699b9">all these decisions implemented</a>,
we hit our first “real” error! Remembering that we are attempting to
compile a very simple file, whose only actual Lisp code involves
returning an element from a specialized array, it's great to hit this
error because it seems to signify that we are almost ready. The
cross-compiler is built knowing that some functions (usually as a
result of code transformations) should never be compiled as calls, but
should instead be <em>translated</em> using one of our Virtual OPerations.
<code>data-vector-ref</code>, which the <a href="http://l1sp.org/cl/aref"><code>aref</code></a> in
our
<a href="https://github.com/csrhodes/sbcl/blob/a59598999bf3f7115a59369a1ef0c6fcbe0df923/src/code/scratch.lisp#L3">scratch.lisp</a> file
translates to because of the type declarations, is one such function,
so
we
<a href="https://github.com/csrhodes/sbcl/commit/76186a40c9627309f311f6919555d29724b17aab">need to define it</a>:
and also define it in such a way that it will be used no matter what
the current compiler policy, hence the <code>:policy :fast-safe</code> addition
to the reffer macro. (That, and many other such <code>:policy</code>
declarations, are dotted around everywhere in the other backends; I
may come to regret having taken them out in the initial boilerplate
exercise.)</p>
<p>And then, finally, because there's always an excuse for a good
cleanup: those float array VOPs, admittedly without <code>:generator</code>
bodies, look too similar: let's have
a
<a href="https://github.com/csrhodes/sbcl/commit/a59598999bf3f7115a59369a1ef0c6fcbe0df923"><code>define-float-vector-frobs</code> macro</a> to
tidy things up. (They may not remain that tidy when we come to
actually <em>implement</em> them.) At this point, we have successfully
compiled one Virtual OPeration! I know this, because the next error
from running the build with all this is about <code>call-named</code>, not
<code>data-vector-set</code>. Onward!</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2018/beginning_an_sbcl_port/">beginning an sbcl port</a> (2018-08-03)</h2>
<p>I'm on a train! (Disclaimer: I'm not on a train any more.)</p>
<p>What better thing to do, during the hot, hot July days, than
something semi-mechanical involving very little brain power? With
student-facing work briefly calming down (we've just about weathered
the storm of appeals and complaints following the publication of exam
results), and the all-too-brief Summer holidays approaching, I caught
myself thinking that it wasn't <em>fair</em> that Doug and his team should
have
<a href="https://sourceforge.net/p/sbcl/sbcl/ci/e8502552f14e3e603d59af9a057cddfff7cea88d/">all the fun</a> of
porting SBCL to a new architecture; I want a go, too! (The fact that
I have no time notwithstanding.)</p>
<p>But, to what? 64-bit powerpc is
definitely
<a href="https://sourceforge.net/p/sbcl/sbcl/ci/8f768b11b549239940982fc1ecff3cd76afa6d58/">on the move</a>;
I've missed that boat. SBCL already supports most current
architectures, and “supports” a fair number of obsolete ones too. One
thought that did catch my brain, though, was prompted by the recent
publication by ARM of a set of claims purporting to demonstrate the
dangers of the RISC-V architecture, in comparison to ARM's own. Well,
since SBCL already runs on 32-bit and 64-bit ARM platforms, how hard
could porting to RISC-V be?</p>
<p>I don't know the answer to that yet! This first post merely covers
the straightforward – some might even say “tedious” –
architecture-neutral changes required to get SBCL to the point that it
could start to consider compiling code for a new backend.</p>
<p>The <a href="http://research.gold.ac.uk/2336/">SBCL build</a> has roughly seven
phases (depending a bit on how you count):</p>
<ol>
<li>build configuration;</li>
<li>build the cross-compiler using the host Lisp, generating
target-specific header files;</li>
<li>build the runtime and garbage collector using a C compiler and
platform assembler, generating an executable;</li>
<li>build the target compiler using the cross-compiler, generating
target-specific fasl files;</li>
<li>build the initial ("cold") image, using the genesis program to
simulate the effect of loading fasl files, generating a target Lisp
memory image;</li>
<li>run all the code that stage 5 delayed because it couldn't simulate
the effects of loading fasls (<em>e.g.</em> many side-effectful top-level
forms);</li>
<li>build everything else, primarily PCL (a full implementation of CLOS)
and save the resulting image.</li>
</ol>
<h3>1. Define a keyword for a new backend architecture</h3>
<p>Probably the most straightforward thing to do, and something that
allows me to (pretty much) say that we're one seventh of the way
there: defining a new keyword for a new backend architecture is
<em>almost</em> as simple as adding a line or two to
<code>make-config.sh</code>.
<a href="https://github.com/csrhodes/sbcl/commit/20e4117b6361c9710109e4dfe21a05d9049d9674">Almost</a>.
We run some devious dastardly consistency checks on our target
<code>*features*</code>, and those need to be updated too. This gets
<code>make-config.sh</code> running, and allows <code>make-host-1.sh</code> to run its
consistency checks.</p>
<h3>2. Construct minimal backend-specific files</h3>
<p>Phase 2, building the cross-compiler, sounds like something
straightforward: after all, if we don't have the constraint that the
cross-compiler will do <em>anything</em> useful, not even run, then surely
just producing minimal files where the build system expects to find
them will do, and Lisp’s usual late-binding nature will allow us to
get away with the rest. No?</p>
<p>Well, that <em>could</em> have been the case, but SBCL itself does impose
some constraints, and the minimal files
are
<a href="https://github.com/csrhodes/sbcl/commit/d2536034a7338cc3937e98eb76b3f12f387caf9f">a lot less minimal</a> than
you might think. The translation of IR2 (second intermediate
representation) to machine code involves calling a number of VOPs
(virtual operations) by name: and those calls by name perform
compile-time checking that the virtual operation template name already
exists. So we have to define stub VOPs for all those virtual
operations called by name, <em>including</em> those in optimizing vector
initialisation, for all of our specialised vector types. These
minimal definitions do nothing other than pacify the safety checks in
the cross-compiler code.</p>
<p>Phase 2 also generates a number of header files used in the
compilation of the C-based runtime, propagating constant and object
layout definitions to the garbage collector and other support
routines. We won't worry about that for now; we just have to ignore
an error about this at first-genesis time for a while. And with this,
<code>make-host-1.sh</code> runs to completion.</p>
<h3>4. Build the target compiler using the cross-compiler</h3>
<p>At this point, we have a compiler backend that compiles. It almost
certainly doesn't run, and even when it does run, we will want to ask
it to compile simple forms of our choosing. For now, we
just
<a href="https://github.com/csrhodes/sbcl/commit/f681d671dd963cce9e8c5c9ceba8f704ab39ad62">add a new file</a> to
the start of the build order, and put in a very simple piece of code,
to see how we get on. (Spoiler alert: not very well.) We've added
the <code>:trace-file</code> flag so that the cross-compiler will print
information about its compilation, which will be useful as and when we
get as far as actually compiling things.</p>
<p>The next steps, moving beyond the start of step 4, will involve making
decisions about how we are going to support sbcl on RISC-V. As yet,
we have not made any decisions about the roles of the registers (some of those,
such as the C stack and frame pointer, will be dictated by the
platform ABI, but many won't, such as exactly how we will partition
the register set), or about how the stack works (again, the C stack
will be largely determined, but Lisp will have its own stack, kept
separate for ease of implementing garbage collection).</p>
<p>(Observant readers will note that phase 3 is completely unstarted.
It's not necessary yet, though it does need to be completed for phase
4 to run to completion, as some of the cross-compilation depends on
information about the runtime memory layout. Some of the details in
phase 3 depend on the decisions made in phase 4, though, so that's why
I'm doing things in this order, and not, say, because I don't have a
RISC-V machine to run anything on -- it was pleasantly straightforward
to bring
up
<a href="https://wiki.qemu.org/Documentation/Platforms/RISCV">Fedora under QEMU</a> which
should be fine as a place to get started.)</p>
<p>And this concludes the initial SBCL porting work that can be done with
very little brain. I do not know where, if anywhere, this effort will
go; I am notionally on holiday, and it is also hot: two factors which
suggest that work on this, if any, will be accomplished slowly. I'd
be very happy to collaborate, if anyone else is interested, or else
plod along in my own slow way, writing things up as I go.</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2018/sbcl_method_tracing/">sbcl method tracing</a> (2018-06-15)</h2>
<p>Since
<a href="https://sourceforge.net/p/sbcl/sbcl/ci/9d36021d86b7db7561b2edc40324c8e5229f88b3">approximately forever</a>,
<a href="http://sbcl.org/">sbcl</a> has advertised the possibility of tracing
individual methods of a generic function by passing <code>:methods t</code> as an
argument to <a href="http://www.xach.com/clhs?q=trace"><code>trace</code></a>. Until
recently, tracing methods was only supported using the <code>:encapsulate
nil</code> style of tracing, modifying the compiled code for function
objects directly.</p>
<p>For a variety of reasons, the alternative <code>:encapsulate t</code>
implementation of tracing, effectively wrapping the function with some
code to run around it, is more robust. One problem with <code>:encapsulate
nil</code> tracing is that if the object being traced is a closure, the
modification of the function's code will affect all of the closures,
not just any single one – closures are distinct objects with distinct
closed-over environments, but they share the same executable code, so
modifying one of them modifies all of them. However, the
implementation of method tracing I wrote in 2005 – essentially,
finding and tracing the method functions and the method fast-functions
(on which more later) – was fundamentally incompatible with
encapsulation; the method functions are essentially never called by
name by CLOS, but by more esoteric means.</p>
<p>What are those esoteric means, I hear you ask?! I'm glad I can hear
you. The Metaobject Protocol defines a method calling convention,
such that method functions receive two arguments: first, the entire
argument list that the method body would expect to handle; and second,
the list of sorted applicable next methods, such that the first
element is the method which should be invoked if the method
uses
<a href="http://www.xach.com/clhs?q=call-next-method"><code>call-next-method</code></a>. So
a method function conforming to this protocol has to:</p>
<ol>
<li>destructure its first argument to bind the method parameters to the
arguments given;</li>
<li>if <code>call-next-method</code> is used, reconstruct an argument list (in
general, because the arguments to the next method need not be the
same as the arguments to the existing method) before calling the
next method’s method-function with the reconstructed argument list
and the rest of the next methods.</li>
</ol>
<p>But! For a given set of actual arguments, for that call, the set of
applicable methods is known; the precedence order is known; and, with
a bit of bookkeeping in the implementation
of <a href="http://www.xach.com/clhs?q=defmethod"><code>defmethod</code></a>, whether any
individual method actually calls <code>call-next-method</code> is known. So it
is possible, at the point of calling a generic-function with a set of
arguments, to know not only the first applicable method, but in fact
all the applicable methods, their ordering, and the combination of
those methods that will actually get called (which is determined by
whether methods invoke <code>call-next-method</code> and also by the generic
function’s <a href="http://christophe.rhodes.io/notes/blog/posts/2018/sbcl_method-combination_fixes/">method combination</a>).</p>
<p>Therefore, a sophisticated (and by “sophisticated” here I mean
“written by the wizards
at <a href="https://en.wikipedia.org/wiki/PARC_(company)">Xerox PARC</a>”)
implementation of CLOS can compile an effective method for a given
call, resolve all the next-method calls, perform
some
<a href="http://www.sbcl.org/sbcl-internals/Slot_002dValue.html#Slot_002dValue">extra optimizations</a> on
<a href="http://www.xach.com/clhs?q=slot-value"><code>slot-value</code></a> and slot
accessors, improve the calling convention (we no longer need the list
of next methods, but only a single next effective-method, so we can
spread the argument list once more), and cache the resulting function
for future use. So the one-time cost for each set of applicable
methods generates an optimized effective method, making use of
fast-method-functions with the improved calling convention.</p>
<p>Here's the trick, then: this effective method is compiled into a chain
of <code>method-call</code> and <code>fast-method-call</code> objects, which call their
embedded functions. This, then, is ripe for encapsulation; to allow
method tracing, all we need to do is arrange at
<code>compute-effective-method</code> time that those embedded functions are
wrapped in code that performs the tracing, and that any attempt to
<code>untrace</code> the generic function (or to modify the tracing parameters)
reinitializes the generic function instance, which clears all the
effective method caches. And then Hey Bob, Your Uncle’s Presto! and
everything works.</p>
<pre><code>(defgeneric foo (x)
(:method (x) 3))
(defmethod foo :around ((x fixnum))
(1+ (call-next-method)))
(defmethod foo ((x integer))
(* 2 (call-next-method)))
(defmethod foo ((x float))
(* 3 (call-next-method)))
(defmethod foo :before ((x single-float))
'single)
(defmethod foo :after ((x double-float))
'double)
</code></pre>
<p>Here's a generic function <code>foo</code> with moderately complex methods. How
can we work out what is going on? Call the method tracer!</p>
<pre><code>CL-USER> (foo 2.0d0)
0: (FOO 2.0d0)
1: ((SB-PCL::COMBINED-METHOD FOO) 2.0d0)
2: ((METHOD FOO (FLOAT)) 2.0d0)
3: ((METHOD FOO (T)) 2.0d0)
3: (METHOD FOO (T)) returned 3
2: (METHOD FOO (FLOAT)) returned 9
2: ((METHOD FOO :AFTER (DOUBLE-FLOAT)) 2.0d0)
2: (METHOD FOO :AFTER (DOUBLE-FLOAT)) returned DOUBLE
1: (SB-PCL::COMBINED-METHOD FOO) returned 9
0: FOO returned 9
9
</code></pre>
<p>This mostly works. It doesn’t <em>quite</em> handle all cases, specifically
when the CLOS user adds a method and implements <code>call-next-method</code> for
themselves:</p>
<pre><code>(add-method #'foo
(make-instance 'standard-method
:qualifiers '()
:specializers (list (find-class 'fixnum))
:lambda-list '(x)
               :function (lambda (args nms)
                           (+ 2 (funcall (sb-mop:method-function (first nms))
                                         args (rest nms))))))
CL-USER> (foo 3)
0: (FOO 3)
1: ((METHOD FOO :AROUND (FIXNUM)) 3)
2: ((METHOD FOO (FIXNUM)) 3)
2: (METHOD FOO (FIXNUM)) returned 8
1: (METHOD FOO :AROUND (FIXNUM)) returned 9
0: FOO returned 9
9
</code></pre>
<p>In this trace, we have lost the method trace from the direct call to
the <code>method-function</code>, <em>and</em> calls that that function makes; this is
the cost of performing the trace in the effective method, though a
mitigating factor is that we have visibility of method combination
(through the <code>(sb-pcl::combined-method foo)</code> line in the trace above).
It would probably be possible to do the encapsulation in the method
object itself, by modifying the function and the fast-function, but
this requires rather more book-keeping and (at least theoretically)
breaks the object identity: we do not have licence to modify the
function stored in a method object. So, for now, sbcl has this
imperfect solution for users to try (expected to be in sbcl-1.4.9,
probably released towards the end of June).</p>
<p>(I can't really believe it’s taken me twelve years to do this. Other
implementations have had this working for years. Sorry!)</p>
sbcl method-combination fixes · http://christophe.rhodes.io/notes/blog/posts/2018/sbcl_method-combination_fixes/ · 2018-05-26
<p>At
the
<a href="https://www.european-lisp-symposium.org/2018/index.html">2018 European Lisp Symposium</a>,
the most obviously actionable feedback
for <a href="http://www.sbcl.org/">SBCL</a> from a presentation was
from
<a href="https://www.european-lisp-symposium.org/static/2018/verna.pdf">Didier's remorseless deconstruction</a> of
SBCL's support for method combinations (along with the lack of
explicitness about behavioural details in
the
<a href="http://www.xach.com/clhs?q=define-method-combination">ANSI CL specification</a> and
the
<a href="http://metamodular.com/CLOS-MOP/method-combinations.html">Art of the Metaobject Protocol</a>).
I don't think that Didier meant to imply that SBCL was particularly
bad at method combinations, compared with other available
implementations – merely that SBCL was a convenient target. And, to
be fair, there
was <a href="https://bugs.launchpad.net/sbcl/+bug/309084">a bug report</a> from a
discussion with Bruno Haible back in SBCL's history – May/June 2004,
according to my search – which had languished largely unfixed for
fourteen years.</p>
<p>I <a href="http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/">said</a> that I found the Symposium energising.
And what better use to put that energy than addressing user feedback?
So, I spent a bit of time earlier this month thinking, fixing and
attempting to work out what behaviours might actually be useful. To
be clear, SBCL’s support for <code>define-method-combination</code> was (probably)
standards-compliant in the usual case, but if you follow the links
from above, or listen to Didier’s talk, you will be aware that that’s
not saying all that much, in that almost nothing is specified about
behaviours under redefinition of method combinations.</p>
<p>So, to start with, I solved the cache invalidation (one of
the
<a href="https://martinfowler.com/bliki/TwoHardThings.html">hardest problems in Computer Science</a>),
making sure that discriminating functions and effective methods are
reset and invalidated for all affected generic functions. This was
slightly complicated by the strategy that SBCL has of distinguishing
short and long <code>method-combination</code>s with distinct classes (and distinct
implementation strategies
for
<a href="http://metamodular.com/CLOS-MOP/compute-effective-method.html"><code>compute-effective-method</code></a>);
but this just needed to be methodical and careful. Famous last words:
I think that all method-combination behaviour in SBCL is now coherent
and should meet user expectations.</p>
<p>More interesting, I think, was coming up with test cases for desired
behaviours. Method combinations are not, I think, widely used in
practice; whether that is because of lack of support, lack of
understanding or lack of need of what they provide, I don't know. (In
fact in conversations at ELS we discussed another possibility, which
is that everyone is more comfortable customising
<code>compute-effective-method</code> instead – both that and
<code>define-method-combination</code> provide ways for inserting arbitrary code
for the effect of a generic function call with particular arguments.)
But what this means is that there isn’t, as far as I know at least, a
large corpus of interesting method combinations to play with.</p>
<p>One interesting one which came up: <code>Bike</code> on <code>#lisp</code> designed
an
<a href="https://gist.github.com/Bike/5ca14ba142f3ca3fc65e4c912f4cde9f">implementation using method-combinations of finite state machines</a>,
which I adapted to add to SBCL’s test suite. My version looks like:</p>
<pre><code>(define-method-combination fsm (default-start)
    ((primary *))
  (:arguments &key start)
  `(let ((state (or ,start ',default-start)))
     (restart-bind
         (,@(mapcar (lambda (m) `(,(first (method-qualifiers m))
                                  (lambda ()
                                    (setq state (call-method ,m))
                                    (if (and (typep state '(and symbol (not null)))
                                             (find-restart state))
                                        (invoke-restart state)
                                        state))))
                    primary))
       (invoke-restart state))))
</code></pre>
<p>and there will be more on this use
of <a href="http://www.xach.com/clhs?q=restart-bind"><code>restart-bind</code></a> in a
later post, I hope. Staying on the topic of method combinations, how
might one use this <code>fsm</code> method combination? A simple example might
be to recognize strings with an even number of <code>#\a</code> characters:</p>
<pre><code>;;; first, define something to help with all string parsing
(defclass info ()
  ((string :initarg :string)
   (index :initform 0)))

;;; then the state machine itself
(defgeneric even-as (info &key &allow-other-keys)
  (:method-combination fsm :yes))
(defmethod even-as :yes (info &key)
  (with-slots ((s string) (i index)) info
    (cond ((= i (length s)) t)
          ((char= (char s i) #\a) (incf i) :no)
          (t (incf i) :yes))))
(defmethod even-as :no (info &key)
  (with-slots ((s string) (i index)) info
    (cond ((= i (length s)) nil)
          ((char= (char s i) #\a) (incf i) :yes)
          (t (incf i) :no))))
</code></pre>
<p>(Exercise for the reader: adapt this to implement a Turing Machine)</p>
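<p>Assuming the definitions above, a quick interactive check (my own
example strings, not from the original) looks like:</p>
<pre><code>CL-USER> (even-as (make-instance 'info :string "abab"))
T
CL-USER> (even-as (make-instance 'info :string "banana"))
NIL
</code></pre>
<p>"abab" contains two <code>#\a</code>s, "banana" three; each method
consumes one character and returns the next state name, and the
<code>restart-bind</code> wrapper in the <code>fsm</code> combination
keeps invoking the correspondingly-named restart until a method returns
something that doesn't name one.</p>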
<p>Another example of (I think) an interesting method combination was one
which I came up with in the context of generalized specializers, for
an ELS a while ago:
the <a href="http://christophe.rhodes.io/notes/blog/posts/2014/http-content-negotiation-and-generalized-specializers/">HTTP Request method-combination</a> to
be used
with <a href="http://research.gold.ac.uk/9924/">HTTP Accept specializers</a>.
I'm interested in more! A github search
found
<a href="https://github.com/sirherrbatka/herrblog/blob/aa53949b984bd8eecb34a046f7d1c2d1060a4f2d/src/common/method-combinations.lisp">some</a> <a href="https://github.com/sellout/method-combination-utilities/blob/9abd3ffd8c10cbaa16664dec05e0db9c87ab51dc/src/method-combinations.lisp">examples</a> before
I ran out of patience; do you have any examples?</p>
<p>And I have one further question. The method combination takes
arguments at generic-function definition time (the <code>:yes</code> in
<code>(:method-combination fsm :yes)</code>). Normally, arguments to things are
evaluated at the appropriate time. At the moment, SBCL (and indeed
all other implementations I tested, but that's not strong evidence
given the shared heritage) does not evaluate the arguments to
<code>:method-combination</code> – treating it more like a macro call than a
function call. I’m not sure that is the most helpful behaviour, but
I’m struggling to come up with an example where the other is
definitely better. Maybe something like</p>
<pre><code>(let ((lock (make-lock)))
  (defgeneric foo (x)
    (:method-combination locked lock)
    (:method (x) ...)))
</code></pre>
<p>which would allow automatic locking around the effective method of
<code>FOO</code> through the method combination. I need to think some
more here.</p>
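<p>To make the question concrete, here is one sketch of what such a
<code>locked</code> combination could look like – every name in it
(<code>locked</code>, <code>make-lock</code>,
<code>with-lock-held</code>) is hypothetical, chosen for illustration
rather than taken from any existing library:</p>
<pre><code>(define-method-combination locked (lock)
    ((primary () :required t))
  `(with-lock-held (,lock)
     (call-method ,(first primary) ,(rest primary))))
</code></pre>
<p>With the current non-evaluating treatment, the <code>lock</code> that
reaches this expansion is the literal symbol from the
<code>defgeneric</code> form, not the lock object bound by the
<code>let</code> – which is precisely the behaviour in question.</p>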
<p>In any case: the <code>method-combination</code> fixes are in the current SBCL
<code>master</code> branch, shortly to be released as sbcl-1.4.8. And there is
still time (though not very much!) to apply for
the
<a href="https://www.jobs.ac.uk/search/?keywords=goldsmiths&salary_from=&salary_to=&category=0600&category=0700&jobtype=&location=01&sector=&x=41&y=5">many jobs</a> advertised
at
<a href="https://www.gold.ac.uk/">Goldsmiths</a> <a href="https://www.gold.ac.uk/computing/">Computing</a> –
what better things to do on a Bank Holiday weekend?</p>
algorithms and data structures term2 · http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/ · 2018-05-08
<p>I presented some of the work on teaching algorithms and data
structures at
the
<a href="https://www.european-lisp-symposium.org/2018/index.html">2018 European Lisp Symposium</a>.</p>
<p>Given that I wanted to go to the symposium (and <a href="http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/">I’m glad I
did!</a>), the most economical method for going was
if I presented research work – because then there was a reasonable
chance that <a href="https://www.gold.ac.uk/">my employer</a> would fund the
expenses (spoiler: they did; thank you!). It might perhaps be
surprising to hear that they don’t usually directly fund attending
events where one is not presenting; on the other hand, it’s perhaps
reasonable on the basis that part of an academic’s job as a scholar
and researcher is to be creating and disseminating new knowledge, and
of course universities, like any organizations, need to prioritise
spending money on things which bring value or further the
organization’s mission.</p>
<p>In any case, I found that I wanted to write about the teaching work
that I have been doing, and in particular I chose to write about a
small, Lisp-related aspect. Specifically, it is now fairly normal in
technical subjects to perform a lot of automated testing of students;
it relieves the burden on staff to assess things which can be
mechanically assessed, and deliver feedback to individual students
which can be delivered automatically; this frees up staff time to
perform targeted interventions, give better feedback on more
qualitative aspects of the curriculum, or work fewer weekends of the
year. A large part of my teaching work for the last 18 months has
been developing material for these automated tests, and working on the
infrastructure underlying them, for my and colleagues’ teaching.</p>
<p>So, the more that we can test automatically <em>and meaningfully</em>, the
more time we have to spend on other things. The main novelty here,
and the lisp-related hook for the paper I submitted to ELS, was being
able to give meaningful feedback on numerical answer questions which
probed whether students were developing a good mental model of the
meaning of pseudocode. That’s a bit vague; let’s be specific and
consider the <code>break</code> and <code>continue</code> keywords:</p>
<pre><code>x ← 0
for 0 ≤ i < 9
    x ← x + i
    if x > 17
        continue
    end if
    x ← x + 1
end for
return x
</code></pre>
<p>The above pseudocode is typical of what a student might see; the
question would be “what does the above block of pseudocode return?”,
which is mildly arithmetically challenging, particularly under time
pressure, but the conceptual aspect that was being tested here was
whether the student understood the effect of <code>continue</code>. Therefore,
it is important to give the student specific feedback; the more
specific, the better. So if a student answered 20 to this question
(as if the <code>continue</code> acted as a <code>break</code>), they would receive a
specific feedback message reminding them about the difference between
the two operators; if they answered 45, they received a message
reminding them that <code>continue</code> has a particular meaning in loops; and
any other answers received generic feedback.</p>
<p>Having just one of these questions does no good, though. Students
will go to almost any lengths to avoid learning things, and it is easy
to communicate answers to multiple-choice and short-answer questions
among a cohort. So, I needed hundreds of these questions: at least
one per student, but in fact by design the students could take these
multiple-choice quizzes multiple times, as they are primarily an aid
for the students themselves, to help them discover what they know.</p>
<p>Now of course I could treat the above pseudocode fragment as a
template, parameterise it (initial value, loop bounds, increment) and
compute the values needing the specific feedback in terms of the
values of the parameters. But this generalizes badly: what happens
when I decide that I want to vary the operators (say to introduce
multiplication) or modify the structure somewhat (<em>e.g.</em> by swapping
the two increments before and after the <code>continue</code>)? The
parametrization gets more and more complicated, the chances of (my)
error increase, and perhaps most importantly it’s not any fun.</p>
<p>Instead, what did I do? With some sense of grim inevitability, I
evolved (or maybe accreted) an interpreter (in emacs lisp) for a
sexp-based representation of this pseudocode. At the start of the
year, it's pretty simple; towards the end it has developed into an
almost reasonable mini-language. Writing the interpreter is
straightforward, though the way it evolved into one gigantic <code>case</code>
statement for supported operators rather than having reasonable
semantics is a bit of a shame; as a bonus, implementing a
pretty-printer for the sexp-based pseudocode, with correct indentation
and keyword highlighting, is straightforward. Then armed with the
pseudocode I will ask the students to interpret, I can mutate it in
ways that I anticipate students might think like (replacing <code>continue</code>
with <code>break</code> or <code>progn</code>) and interpret that form to see which wrong
answer should generate what feedback.</p>
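<p>To make the shape of this concrete, here is a toy sketch of such an
interpreter – written in Common Lisp rather than the author's Emacs
Lisp, with an invented sexp syntax (<code>seq</code>, <code>set!</code>,
<code>for</code> and friends are my names, not the course
infrastructure's). <code>continue</code> and <code>break</code> fall out
naturally as <code>throw</code>s to tags established by the loop:</p>
<pre><code>(defun run (form env)
  "Evaluate pseudocode sexp FORM in the mutable alist environment ENV."
  (etypecase form
    (number form)
    (symbol (cdr (assoc form env)))
    (cons
     (ecase (first form)
       ;; sequencing: evaluate in order, return the last value
       (seq (let (result)
              (dolist (f (rest form) result)
                (setf result (run f env)))))
       ;; assignment mutates the binding's cons cell in place
       (set! (setf (cdr (assoc (second form) env))
                   (run (third form) env)))
       (+ (apply #'+ (mapcar (lambda (f) (run f env)) (rest form))))
       (> (> (run (second form) env) (run (third form) env)))
       (if (if (run (second form) env)
               (run (third form) env)
               (when (fourth form) (run (fourth form) env))))
       ;; CONTINUE and BREAK throw to the tags FOR establishes
       (continue (throw 'continue nil))
       (break (throw 'break nil))
       (for (destructuring-bind (var lo hi &rest body) (rest form)
              (catch 'break
                (loop for i from (run lo env) below (run hi env)
                      do (catch 'continue
                           (run `(seq ,@body) (acons var i env)))))))))))

(run '(seq (set! x 0)
           (for i 0 9
                (set! x (+ x i))
                (if (> x 17) (continue))
                (set! x (+ x 1)))
           x)
     (list (cons 'x nil)))
; => 41; mutating CONTINUE to BREAK and re-running gives 20
</code></pre>
<p>The mutation step is then just tree-replacement on the form, followed
by re-interpretation to find which wrong answer deserves which specific
feedback message.</p>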
<p>Anyway, that was the hook. There's some evidence
in <a href="https://research.gold.ac.uk/23155/">the paper</a> that the general
approach of repeated micro-assessment, and also the consideration
of likely student mistakes and giving specific feedback, actually
works. And now that the (provisional) results are in, how does this
term compare with <a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term1/">last term</a>?
We can look at the relationship between this term’s marks and last
term’s. What should we be looking for? Generally, I would expect
marks in the second term’s coursework to be broadly similar to the
marks in the first term – all else being equal, students who put in a
lot of effort and are confident with the material in term 1 are likely
to have an easier time integrating the slightly more advanced material
in term 2. That’s not a deterministic rule, though; some students
will have been given a wake-up call by the term 1 marks, and equally
some students might decide to coast.</p>
<p><a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/term2-vs-term1.png"><img src="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/200x-term2-vs-term1.png" width="200" height="150" alt="plot of term 2 marks against term 1: a = 0.82, R² = 0.67" class="img" /></a></p>
<p>I’ve asked R to draw the regression line in the above picture; a
straight line fit seems reasonable based on the plot. What are the
statistics of that line?</p>
<pre><code>R> summary(lm(Term2~Term1, data=d))

Call:
lm(formula = Term2 ~ Term1, data = d)

Residuals:
    Min      1Q  Median      3Q     Max
-41.752  -6.032   1.138   6.107  31.155

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.18414    4.09773   0.777    0.439
Term1        0.82056    0.05485  14.961   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 10.46 on 107 degrees of freedom
  (32 observations deleted due to missingness)
Multiple R-squared:  0.6766, Adjusted R-squared:  0.6736
F-statistic: 223.8 on 1 and 107 DF,  p-value: < 2.2e-16
</code></pre>
<p>Looking at the summary above, we have a strong positive relationship
between term 1 and term 2 marks. The intercept is approximately zero
(if you got no marks in term 1, you should expect no marks in term 2),
and the slope is less than one: on average, each mark a student got in
term 1 tended to convert to 0.8 marks in term 2 – this is plausibly
explained by the material being slightly harder in term 2, and by the
fact that some of the assessments were more explicitly designed to
allow finer discrimination at the top end – marks in the 90s. (A note
for international readers: in the UK system, the pass mark is 40%,
excellent work is typically awarded a mark in the 70% range – marks of
90% should be reserved for exceptional work). The average case is,
however, only that: there was significant variation from that average
line, and indeed (looking at the quartiles) over 50% of the cohort was
more than half a degree class (5 percentage points) away from their
term 2 mark as “predicted” from their mark for term 1.</p>
<p>All of this seems reasonable, and it was a privilege to work with this
cohort of students, and to present the sum of their interactions on
this course to the audience I had. I got the largest round of
applause, I think, for revealing that as part of the peer-assessment I
had required that students run each others’ code. I also had to
present some of the context for the work; not only because this was an
international gathering, with people in various university systems and
from industry, but also because of the large-scale disruption caused
by
<a href="https://www.ucu.org.uk/strikeforuss">industrial</a> <a href="https://ussbriefs.com/">action</a> over
the <a href="https://www.uss.co.uk/">Universities Superannuation Scheme</a> (the
collective, defined benefit pension fund for academics at about 68
Universities and ~300 other bodies associated with Higher Education).
Perhaps most gratifyingly, judging by their performance on the various
assessments so far, students were able to continue learning despite
being deprived of their tuition for three consecutive weeks.</p>
<p>And now? The students will sit an exam, after which I and colleagues
will look in detail at those results and the relationship with the
students’ coursework marks (as I did <a href="http://christophe.rhodes.io/notes/blog/posts/2017/analysing_algorithms_and_data_structures_data/">last
year</a>). I will
continue developing this material (my board for this module currently
lists 33 todo items), and adapt it for next year and for new cohorts.
And maybe you will join me?
The <a href="https://www.doc.gold.ac.uk/computing/">Computing department</a>
at <a href="https://www.gold.ac.uk">Goldsmiths</a> is hiring lecturers and senior
lecturers to come and participate in research, scholarship and
teaching in computing:
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-creative-computing-348799.html">lecturer in creative computing</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-computer-games-348630.html">lecturer in computer games</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-data-science-348531.html">lecturer in data science</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-physical-and-creative-computing-348527.html">lecturer in physical and creative computing</a>,
a
<a href="https://jobs.gold.ac.uk/vacancy/lecturer-in-computer-science-348441.html">lecturer in computer science</a> and
a
<a href="https://jobs.gold.ac.uk/vacancy/senior-lecturer-in-computer-science-348301.html">senior lecturer in computer science</a>.
Anyone reading this is welcome
to <a href="mailto:c.rhodes@gold.ac.uk">contact me</a> to find out more!</p>
ifcomp game playing · http://christophe.rhodes.io/notes/blog/posts/2018/ifcomp_game_playing/ · 2018-05-04
<p>I didn't quite get round to voting in
<a href="https://ifcomp.org/comp/2017"><s>this</s> last year's IFComp</a>.
(Oops, I didn't quite get round to posting this when it was topical.)</p>
<p>I <em>did</em> play some of the submissions, though. In particular I played
under competition terms most of the Inform parser games, and I had fun
doing it. My personal favourite, which probably reflects my
background as some flavour of mathematician,
was
<a href="http://ifdb.tads.org/viewgame?id=y9y7jozi0l76bb82"><em>A beauty cold and austere</em></a>:
you have twelve hours to learn enough mathematics to pass your
end-of-term test. The pharmaceutical-induced dream conceit allowed
the somewhat unconnected puzzles to be combined without worrying too
much about a theme – beyond mathematics! – and those puzzles were
generally fair. I played for the competition-mandated time of two
hours without the walkthrough, though with enough formal mathematical
education that using a walkthrough would <em>really</em> have felt like
cheating; I did need some help of the “what to do next” kind to finish
the game.</p>
<p><a href="http://ifdb.tads.org/viewgame?id=2jil5vbxmbv8riv1"><em>The wand</em></a> I
played for a while. It was nice to see a different game mechanic in
play (the only things you can do in this game is wave a magic wand,
and change the colour combinations of the wand), and the game gave
clear direction on how to get started. But I am a person of
insufficient persistence, and my reaction to the discovery that I was
going to have to remember tens of colour combinations in order to do
anything was to write grouchy stuff in my notes about how it would
have been better to unlock verbs rather than mess around with the
colours all the time and writing down all the combinations I'd
discovered...</p>
<p>... which meant that I missed about 90% of the game, I later
discovered. If I had been consciously aware that <em>The wand</em> was
written
by
<a href="http://ifdb.tads.org/search?searchfor=author%3AArthur+DiBianca">Arthur DiBianca</a>,
who
wrote
<a href="http://ifdb.tads.org/viewgame?id=r3ed77yd9te07y5d"><em>Grandma Bethlinda's Variety Box</em></a>,
I might have put in the time investment. Learn from my mistake!</p>
<p>The winner of IFComp
was
<a href="http://ifdb.tads.org/viewgame?id=uq18rw9gt8j58da"><em>The Wizard Sniffer</em></a>,
which like <em>The wand</em> restricts your actions to an almost absurd
degree (maybe not absurd for a pig). Unlike <em>The wand</em>, which is
essentially entirely puzzle-focussed, <em>The Wizard Sniffer</em> has a
surface narrative that is engaging, as well as a more subversive
narrative that unfolds as the puzzles in the game are solved. I can
absolutely see why this appeals to game-players; it was just below <em>A
beauty cold and austere</em> in my personal ratings, and I definitely did
not have the patience to solve some of the puzzles without walkthrough
assistance, but I recognize that the framing (reminiscent of Lloyd
Alexander’s Taran and Hen-Wen cycle, with additional social
commentary) and the quality of the writing in <em>The Wizard Sniffer</em>
make it a worthy competition winner.</p>
<p>Some of the other parser games I found less
successful:
<a href="http://ifdb.tads.org/viewgame?id=fnhon72qmwlvwb8"><em>Rainbow bridge</em></a>
was simple and charming, but without any particular
challenge;
<a href="http://ifdb.tads.org/viewgame?id=dd4psaxxggdxkw66"><em>Ultimate escape room</em></a> didn't
really use the medium for anything, and had one particularly annoying
guess-the-verb and-also-the-noun
puzzle;
<a href="http://ifdb.tads.org/viewgame?id=b0g9m4j2yx89mhsr"><em>8 shoes on the shelves</em></a> I
simply did not understand – quite possibly I missed something. I also
started
<a href="http://ifdb.tads.org/viewgame?id=32u49mceyst7p8ey"><em>The Owl consults</em></a> and
<a href="http://ifdb.tads.org/viewgame?id=yutkd9u0oeog4br1"><em>Eat me</em></a>, but at
this point ran out of time by being thoroughly enmired in developing
teaching materials for my Algorithms & Data Structures class, and in
fact got so <a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term1/">thoroughly</a>
<a href="http://christophe.rhodes.io/notes/blog/posts/2018/algorithms_and_data_structures_term2/">consumed</a> by that that it
is only now, seven months later, that I revisit this blog entry.</p>
<p>What of non-parser games? I find it interesting that the top-placed
games are almost all parser-based, whether Inform, TADS or something
else; given that, I'm intrigued by the very-highly-scoring <em>Harmonia</em>,
which is built in the Liza Daly (the author's) own Windrush game
engine; she also has a post-competition article about
the
<a href="https://medium.com/@liza/interactive-marginalia-f39424877d73">design of interactive marginalia</a> used
in the story. It being past the end of teaching term now, I might
have hoped that I could indulge, except that I need to start work on
next term's materials – and no, I'm not convinced that playing more IF
is sufficiently high on the priority list of preparation for teaching
Narrative & Interactive Fiction in January. I will play vicariously
through my students, and wait for Easter.</p>
els2018 reflections · http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/ · 2018-05-04 (updated 2018-05-26)
<p>A few weeks ago, I attended
the
<a href="https://www.european-lisp-symposium.org/2018/index.html">2018 European Lisp Symposium</a>.</p>
<p>If you were at
the
<a href="https://www.european-lisp-symposium.org/2017/index.html">2017 iteration</a>,
you might (as I do) have been left with an abiding image of greyness.
That was not at all to fault the location (Brussels) or the
organization (co-located with &lt;Programming&gt;) of the symposium;
however,
the <a href="https://twitter.com/ascii19/status/848796581822386176">weather</a>
was <a href="https://twitter.com/ascii19/status/848797424902602752">unkind</a>,
and that didn't help with my own personal low-energy mood at the time.</p>
<p>This year, the event was organized
by <a href="https://www.ravenpack.com/">Ravenpack</a> in Marbella. And the first
thing that I noticed on landing was the warmth. (Perhaps the only
“perk” of low-cost airlines is that it is most likely that one will
disembark on to <em>terra firma</em> rather than a passenger boarding bridge.)
And after quite a miserable winter in the UK, the warmth and the
sunshine at the bus stop, while waiting for the bus from Málaga
airport to Marbella, was very welcome indeed. The sense of community
was strong too; while waiting for the bus, I hopped on to
<code>#lisp</code> IRC and Nic Hafner volunteered to meet me at the Marbella bus
station and walk with me to my hotel; we ended up going for drinks at
a beachside bar that evening with Dimitri Fontaine, Christian
Schafmeister, Mikhail Raskin and others – and while we were sitting
there, enjoying the setting, I saw other faces I recognised walking
past along the beach promenade.</p>
<p>The setting for the conference itself,
the
<a href="http://www.marbella.es/cultura/centros/item/35-c-c-cortijo-de-miraflores.html">Centro Cultural Cortijo de Miraflores</a>,
was charming. We had the use of a room, just about big enough for the
90ish delegates; the centre is the seat of the Marbella Historical
Archives, and our room was next to the olive oil exhibition.</p>
<p><a href="http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/els2018.jpeg"><img src="http://christophe.rhodes.io/notes/blog/posts/2018/els2018_reflections/els2018.jpeg" width="778" height="583" alt="2018 European Lisp Symposium opening" class="img" /></a></p>
<p>We also had an inside courtyard for the breaks; sun, shade, water and
coffee were available in equal measure, and there was space for good
conversations – I spoke with many people, and it was great to catch up
and reminisce with old friends, and to discuss the finer points of
language implementation and release management with new ones. (I
continue to maintain that the SBCL time-boxed monthly release cadence
of <code>master</code>, initiated by Bill Newman way back in the SBCL 0.7 days in
2002 [!], has two distinct advantages, for our situation, compared
with other possible choices, and I said so more than once over
coffee.)</p>
<p>The formal programme, curated by Dave Cooper, was great too. Zach's
written
about
<a href="http://lispblog.xach.com/post/173467550358/thoughts-on-els-2018">his highlights</a>;
I enjoyed hearing Jim Newton's latest
on
<a href="https://www.european-lisp-symposium.org/static/2018/newton.pdf">being smarter about typecase</a>,
and Robert Smith's presentation about quantum computing, including a
walkthrough of a quantum computing simulator. As well as quantum
computing, application areas mentioned in talks this year included
computational chemistry, data parallel processing, pedagogy, fluid
dynamics, architecture, sentiment analysis and enterprise resource
planning – and there were some implementation talks in there too, not
least from R. “it’s part of the brand” Matthew Emerson, who gave a
paean to <a href="https://ccl.clozure.com/">Clozure Common Lisp</a>. Highly
non-scientific SBCL user feedback from ELS presenters: most of the
speakers with CL-related talks or applications at least mentioned
SBCL; most of those mentions by speaker were complimentary, but it's
possible that most by raw count referred to bugs – solely due to
Didier Verna's presentation
about
<a href="https://www.european-lisp-symposium.org/static/2018/verna.pdf">method combinations</a> (more
on this in a future blog post).</p>
<p>Overall, I found this year's event energising. I couldn't be there
any earlier than the Sunday evening, nor could I stay beyond Wednesday
morning, so I missed at least the organized social event, but the two
days I had were full and stress-free (even when chairing sessions and
for my own talk). The local organization was excellent; Andrew
Lawson, Nick Levine and our hosts at the foundation did us proud; we
had a great conference dinner, and the general location was
marvellous.</p>
<p>Next year's event will once again be co-located with &lt;Programming&gt;,
this time in Genova (sunny!) on 1st-2nd April 2019. Put it in your
calendar now!</p>