<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2014/a_year_in_review/">a year in review</a> (2014-12-31)</h2>
<p>A brief retrospective, partly brought to you by <code>grep</code>:</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/Credit_Accumulation_and_Transfer_Scheme">CATS credits</a> earnt: 30 (<a href="http://christophe.rhodes.io/notes/wiki/structured_discussion_and_knowledge_acquisition/">15</a> + <a href="http://christophe.rhodes.io/notes/wiki/plagiarism__3a___is_it_a_rational_response__3f__/">7.5</a> + <a href="http://christophe.rhodes.io/notes/wiki/employability_blogging/">7.5</a>) at level 7</li>
<li>crosswords solved: >=40</li>
<li>words blogged: 75k</li>
<li>words blogged, excluding crosswords: 50k</li>
<li><a href="http://www.sbcl.org/">SBCL</a> releases made: 12 (<a href="http://www.sbcl.org/news.html#1.2.7">latest today!</a>)</li>
<li>functioning ARM boards sitting on my desk: <a href="http://christophe.rhodes.io/notes/tag/hearing_wagner/arms.jpg">3</a> (number doing anything actually useful, beyond SBCL builds: 0 so far, working on it)</li>
<li>emacs packages worked on: 2 (<a href="https://github.com/csrhodes/iplayer-el">iplayer</a> <a href="https://github.com/csrhodes/squeeze-el">squeeze</a>)</li>
<li>public engagement events co-organized: <a href="http://beinghumanfestival.org/event/hearing-wagner/">1</a></li>
<li>ontological inconsistencies resolved: <a href="https://github.com/motools/timelineonology/pull/1">1</a></li>
<li>R packages made: <a href="http://christophe.rhodes.io/notes/tag/hearing_wagner/christophe.rhodes.io/notes/blog/posts/2014/ref2014_data_in_R_package_form/">1</a> (University PR departments offended: not enough)</li>
</ul>
<p>Blogging’s been a broad success, though it’s tailed off slightly
of late, what with</p>
<ul>
<li><a href="http://crawl.develz.org/">Crawl</a> Dungeon Sprint wins: 2 (The Pits, Thunderdome: both GrFi)</li>
</ul>
<p>so there’s an obvious New Year’s Resolution right there.</p>
<p>Happy New Year!</p>
<h2><a href="http://christophe.rhodes.io/notes/blog/posts/2014/hearing_wagner_data_preparations/">hearing wagner data preparations</a> (2014-11-17)</h2>
<p>Last week’s activity – in between the paperwork, the teaching, the
paperwork, the paperwork, the teaching and the paperwork – was mostly
taken up in preparations for the
<a href="http://www.hearingwagner.net/"><em>Hearing Wagner</em></a> event, part of the
<a href="http://www.ahrc.ac.uk/">AHRC</a>’s
<a href="http://beinghumanfestival.org/"><em>Being Human</em></a> festival.</p>
<p>Being a part of the <em>Being Human</em> festival gave us the opportunity to
work to collect data that we wouldn’t otherwise have had access to:
because of the fortuitous timing of the
<a href="http://www.mariinskytrust.org.uk/">Mariinsky Theatre</a>’s
<a href="http://www.mariinskytrust.org.uk/press/the-ring/">production of the <em>Ring</em></a>
at the Birmingham Hippodrome between 5th and 9th November, we were
able to convince funders to allow us to offer free tickets to
<a href="http://www.bcu.ac.uk/conservatoire">Birmingham Conservatoire</a>
students, in exchange for being wired up to equipment measuring their
<a href="http://en.wikipedia.org/wiki/Skin_conductance">electrodermal activity</a>,
<a href="http://en.wikipedia.org/wiki/Photoplethysmogram">blood flow</a>, and
hand motion.</p>
<p>Why collect these data? Well, one of the themes of the <em>Transforming
Musicology</em> project as a whole is to examine the perception of
leitmotive, particularly Wagner’s use of them in the <em>Ring</em>, and the
idea behind gathering these data is to have ecologically-valid (in as
much as that is possible when there’s a device strapped to you)
measurements of participants’ physical responses to the performance,
where those physical responses are believed to correlate with
emotional arousal. Using those measurements, we can then go looking
for signals of responses to leitmotives, or to other musical or
production cues: as well as the students attending the performance,
some of the research team were present backstage, noting down the
times of staging events of (subjectively judged) particular
significance – lighting changes, for example.</p>
<p>And then all of these data come back to base, and we have to go
through the process of looking for signal. And before we can do
anything else, we have to make sure that all of our data are aligned
to a reference timeline. For each of the operas, we ended up with
around 2GB of data files: up to 10 sets of data from the individual
participants, sampled at 120Hz or so; times of page turns in the vocal
score, noted by a musicologist member of the research team (a coarse
approximation to the sound experienced by the participants);
timestamped performance annotations, generated by a second
musicologist and dramaturge. How to get all of this onto a common
timeline?</p>
<p>Well, in the best of all possible worlds, all of the clocks in the
system would have been synchronized by <a href="http://www.ntp.org/">ntp</a>, and
that synchronization would have been stable and constant throughout
the process. In this case, the Panglossians would have been
disappointed: in fact none of the various devices was synchronized
stably enough with any of the others to let us get away without an
alignment step.</p>
<p>Fortunately, the experimental design was carried out by people with a
healthy amount of paranoia: the participants were twice asked to clap
in unison: once in the backstage area, where there was only
speed-of-sound latency to the listeners (effectively negligible), and
once when seated in the auditorium, where there was additional latency
from the audio feed from the auditorium to backstage. Those claps
gave us enough information, on the rather strong assumption that they
were actually simultaneous, to tie everything together: the first clap
could be found on each individual measuring device by looking at the
accelerometer data for its signature, which establishes a common
timeline for the measurement data and the musicologists’ notes; the second
clap gives a measure for the additional latency introduced by the
audio feed. Since the participants’ claps weren’t <em>actually</em>
simultaneous – despite the participants being music students, and the
clap being conducted – we have a small error, but it’s likely to be no
more than about one second.</p>
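<p>In outline, that alignment step can be sketched as follows – a toy
example rather than the project’s actual pipeline, assuming each
device’s trace is available as NumPy arrays of timestamps and
accelerometer magnitudes, and using a crude threshold detector for the
clap spike (the function name, threshold, and data layout here are all
illustrative assumptions):</p>

```python
import numpy as np

def clap_time(times, accel, threshold=3.0):
    """Timestamp of the clap in one device's accelerometer trace.

    times: sample timestamps in seconds, on that device's own clock
    accel: accelerometer magnitude per sample
    Returns the time of the first sample more than `threshold` standard
    deviations above the trace mean -- a crude spike detector.
    """
    z = (accel - accel.mean()) / accel.std()
    return times[np.argmax(z > threshold)]

# Toy data: two devices observing the same clap, clocks 1.5 s apart.
fs = 120.0                       # nominal sampling rate, per the post
t_a = np.arange(0, 10, 1 / fs)
t_b = t_a + 1.5                  # device B's clock runs 1.5 s ahead
rng = np.random.default_rng(0)
accel_a = rng.normal(0.0, 0.1, t_a.size)
accel_b = rng.normal(0.0, 0.1, t_b.size)
accel_a[600] += 5.0              # the clap: same physical instant
accel_b[600] += 5.0              # on both devices

offset = clap_time(t_b, accel_b) - clap_time(t_a, accel_a)
# subtracting `offset` from device B's timestamps puts it on A's timeline
```

<p>With a per-device offset in hand for the first clap, subtracting it
from that device’s timestamps maps its data onto the reference clock;
the second clap’s offset, measured the same way, estimates the
audio-feed latency.</p>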
<p>And this week? This week we’ll actually be looking for interesting
signal; there’s reason to believe that electrodermal activity
(basically, the change in skin conductance due to sweat) is indicative
of emotional arousal, and quite a sensitive measure of music-induced
emotion. This is by its nature an exploratory study: at least to
start with, we’re looking at particular points of interest (specified
by musicologists, in advance) for any correlation with biosignal
response – and we’ll be presenting initial results about anything we
find at the
<a href="http://www.hearingwagner.net/"><em>Hearing Wagner</em></a>
event in Birmingham this weekend. The clock is ticking...</p>
<p><strong>edit</strong>: see also a <a href="http://www.transforming-musicology.org/blog/2014-11-17_does-wagner-do-your-head-in/">similar post on the project blog</a></p>