I sort of hate manuscript stemma. Don’t get me wrong, they have their uses, and they take some incredibly diligent and intelligent work (work that I’m really glad someone else is doing so that I don’t have to). But stemma are also one of those devices that are occasionally put to great evil, in my book. They are sometimes used for recensional editing (the worst of evils), used to abject and even reject certain manuscripts deemed worthless based on their distance from an authoritative text, and they’re just plain mind-numbing to try to consume intellectually.
In addition to being put to evil use, stemma are also sometimes misleading, laden with jargon tending to make them indecipherable to non-experts, and just plain confusing.
Take this A-MSS stemma, reproduced (absolutely without permission) from A.V.C. Schmidt’s Parallel Text Edition.
Material Piers is back after a slight summer holiday. I know you weren’t pining, because you were enjoying your own last opportunities to … whatever it is you do in the summertime.
We last left off with a discussion of data aesthetics, in which I pointed out that the way you present your data is itself imposing interpretations, or at least interpretive structures, onto the “raw” data itself (whatever that means, amirite Lisa Gitelman??).
Equally important to presenting transparent information is defining the parameters of your data. In a science setting, data is only useful insofar as it is replicable by other scientists. They need to know not just the results of your experiment (i.e. the conclusions you draw from your data), but also how you interpreted it, what it looked like pre-interpretation (“raw”), how you built the data-finding apparatus, and the question the apparatus was designed to answer. If, for example, you use a laser for something, you are only asking your experiment a question answerable through optical data collection.
The very way that data is “collected” (i.e. created, but more on that later) creates limitations to the kinds of answers you can get from your data.
THERE IS NOTHING INHERENTLY OBJECTIVE ABOUT DATA.
In a recent talk she gave for the Medieval Forum and the Anglo-Saxon Studies Colloquium, Dorothy Kim discussed the importance of aesthetics in designing and implementing digital architectures that are not only “user-friendly,” but also inviting to the potential consumers of the information that the Archive of Early Middle English was trying to make available.
Kim’s talk got me thinking about something inherent in the visual presentation of data that doesn’t get a lot of discussion. We (i.e. the people doing data visualizations and writing about them) are all so consumed with presenting information that discussion of the way information is presented, and the choices involved, often gets left out of conversations about big data.
In honor of the New Chaucer Society conference going on in Reykjavik right now, I’m going to bring back network graphs to see how Langland and Chaucer might network together.
Ok, well, if you’ve been reading this blog at all (which maybe you haven’t, and that’s ok–here’s what you’ve been missing), you already know that they really don’t. Nevertheless, let’s put some Piers and Chaucer network graphs side by side anyway to compare.
In an earlier blog post, we looked at the Piers Plowman corpus as one big networked graph.
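To make the side-by-side idea concrete, here is a minimal, stdlib-only Python sketch of comparing two small graphs. The edge lists are invented placeholders, not the blog’s actual corpus data, and “degree” is just the crudest possible stand-in for the centrality measures a real comparison would use:

```python
from collections import defaultdict

# Toy edge lists only -- illustrative placeholders, not real corpus data.
piers_edges = [("Will", "Piers"), ("Will", "Holy Church"), ("Piers", "Truth")]
chaucer_edges = [("Knight", "Host"), ("Miller", "Host"), ("Wife of Bath", "Host")]

def degree(edges):
    """Count how many edges touch each node -- a crude centrality measure."""
    counts = defaultdict(int)
    for a, b in edges:
        counts[a] += 1
        counts[b] += 1
    return dict(counts)

# Side by side: the hub of each toy graph is its most-connected node.
for name, edges in [("Piers", piers_edges), ("Chaucer", chaucer_edges)]:
    d = degree(edges)
    print(name, max(d, key=d.get))
```

Even at toy scale this shows the kind of structural contrast a side-by-side comparison is after: where the connections cluster in each corpus.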
Speaking of touching parchment, this week’s code blog is all about how to add touch data to our JSON descriptions of manuscripts right now, before parchment surface experimentation is perfected (watch this space, it might happen).
I’ve chosen to use Oxford, Bodleian Library MS Laud Misc. 656 for this particular endeavor because it has some really fascinating, really feature-rich, really bad parchment. And bad parchment is really the best, because you can see and touch so many features of it. Unlike really high-quality parchment, which has had so many of its distinguishing features eradicated that it makes you forget–however momentarily–that it is skin, bad parchment carries reminders of what it was, where it came from, marks of class and production, and so much more.
So, this blog is about capturing that kind of awesome badness in code. Now, the code to date has a few features that allow me to talk about this thing that fascinates me (the touch of parchment):
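As a sense of the shape this could take, here is a minimal Python sketch of touch data folded into a JSON manuscript description. Every field name here (“touch,” “side,” “texture,” “features”) is my own assumption for illustration, not an established schema or the blog’s actual code:

```python
import json

# Hypothetical sketch: touch observations attached to one folio's description.
# Field names are assumptions for illustration, not an established schema.
folio = {
    "shelfmark": "Oxford, Bodleian Library MS Laud Misc. 656",
    "folio": "1r",
    "touch": {
        "side": "hair",  # hair side vs. flesh side of the skin
        "texture": "rough",  # recorded by the human instrument: a fingertip
        "features": ["follicle marks", "hole", "repair stitching"],
    },
}

encoded = json.dumps(folio, indent=2)
print(encoded)
```

The point of keeping touch in its own nested object is that it can be added to existing descriptions now and refined later, without disturbing the rest of the record.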
This week’s post is about synaesthetic data. Literally, it’s about visualizing the data we (the human instrument) collect by touching. To really show you what I mean, I’m going to jump right into our visualizations, which (as discussed last week) are a different style of data visualization, but visualizations nonetheless.
To put what I mean in perspective, I’m going to use something I know you’ve touched, and something you may have touched (the likelihood of which increases significantly if you’re a medievalist working on manuscripts). I’m going to compare the two materializations of post-calf and post-goat flesh we know as leather and (loosely) parchment.
[Figure: touch comparison of calf (leather) and goat (parchment)]
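The comparison can be sketched as a side-by-side “touch profile.” In this minimal Python example the attributes and their values are illustrative guesses of mine, not measured data about any particular skin:

```python
# Hypothetical touch profiles: attribute values are illustrative guesses,
# not measurements of any actual leather or parchment.
profiles = {
    "calf (leather)": {"flex": "supple", "surface": "grained", "weight": "heavy"},
    "goat (parchment)": {"flex": "stiff", "surface": "smooth", "weight": "light"},
}

attributes = ["flex", "surface", "weight"]
materials = list(profiles)

# Print a header row, then one row per touch attribute.
print(f"{'attribute':<10}" + "".join(f"{m:>18}" for m in materials))
for attr in attributes:
    print(f"{attr:<10}" + "".join(f"{profiles[m][attr]:>18}" for m in materials))
```

Laying the two skins out attribute by attribute is the tabular version of the visualization: the same touch data, just rendered for the eye.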
The visualization post this week was about making a manuscript and the various economies, ecologies, and congealing materialities that bring forth parchment, the material support on which a text is eventually written.
One of the things I want to draw attention to in that post and the upcoming ones is the fact that a manuscript is specifically more than a text, and thus requires us to look at it with different eyes and tools than we use for looking at texts. A manuscript contains, or even embodies texts, but it also is and does myriad other things that all affect the way that it matters–the way it signifies in both a material and symbolic sense.
To highlight what I mean, today we are going to encode and examine Trinity College Cambridge B.15.17, a manuscript with a little more known history than most, and I’m going to draw attention to choices I make in coding that aim to bring the manuscript object itself into focus, rather than simply the texts contained therein.
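As a sketch of what object-first encoding might look like, here is a minimal Python example. The field names and listed features are my own illustrative assumptions, not the post’s actual schema; the point is structural, that the texts are one field among many material ones rather than the whole record:

```python
# Hedged sketch: a description that treats the manuscript as an object first.
# Field names and feature lists are illustrative assumptions, not a real schema.
manuscript = {
    "shelfmark": "Trinity College Cambridge B.15.17",
    "support": "parchment",
    "texts": ["Piers Plowman"],  # contained by, but not identical to, the MS
    "provenance": [],  # the manuscript's known history would be recorded here
    "material_features": ["pricking", "ruling"],  # illustrative examples
}

# The object-first view: everything about the manuscript that is not a text.
object_view = {k: v for k, v in manuscript.items() if k != "texts"}
print(sorted(object_view))
```

Dropping the "texts" key and seeing how much description remains is a quick way to check whether an encoding actually embodies the object, or has quietly collapsed back into being a text wrapper.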