Speaking of touching parchment, this week’s code blog is all about how to add touch data to our JSON descriptions of manuscripts right now, before parchment surface experimentation is perfected (watch this space; it might happen).
I’ve chosen to use Oxford, Bodleian Library MS Laud Misc. 656 for this particular endeavor because it has some really fascinating, really feature-rich, really bad parchment. And bad parchment is really the best, because you can see and touch so many of its features. Unlike really high-quality parchment, which has eradicated so many of its distinguishing features that it makes you forget, however momentarily, that it is skin, bad parchment carries reminders of what it was, where it came from, marks of class and production, and so much more.
So, this blog is about capturing that kind of awesome badness in code. Now, the code to date has a few features that allow me to talk about this thing that fascinates me (the touch of parchment):
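As a sketch of what such touch fields might look like in a JSON description, here is a minimal example. Every field name and value below (`touch`, `texture`, `features`, and so on) is my illustrative invention, not the blog's actual schema:

```python
import json

# A minimal, hypothetical JSON description of a single folio.
# Field names ("touch", "texture", "features") are illustrative only.
folio = {
    "shelfmark": "Oxford, Bodleian Library MS Laud Misc. 656",
    "folio": "1r",
    "support": "parchment",
    "touch": {
        "texture": "suede-like on the flesh side",
        "features": ["follicle marks", "repaired hole", "thin translucent patch"],
        "notes": "Recorded by hand, by a human observer.",
    },
}

# Serialize for storage alongside the rest of the manuscript description.
print(json.dumps(folio, indent=2))
```

The point of nesting touch observations under their own key is that they can be added to existing records without disturbing any of the fields already in use.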
This week’s post is about synaesthetic data. Literally, it’s about visualizing the data we (the human instrument) collect by touching. To really show you what I mean, I’m going to jump right into our visualizations, which (as discussed last week) are a different style of data visualization, but they are data visualizations nonetheless.
To put what I mean in perspective, I’m going to use something I know you’ve touched, and something you may have touched (the likelihood of which increases significantly if you’re a medievalist working on manuscripts). I’m going to compare the two materializations of post-calf and post-goat flesh we know as leather and (loosely) parchment.
The visualization post this week was about making a manuscript and the various economies, ecologies, and congealing materialities that bring forth parchment, the material support on which a text is eventually written.
One of the things I want to draw attention to in that post and the upcoming ones is the fact that a manuscript is more than a text, and thus requires us to look at it with different eyes and tools than we use for looking at texts. A manuscript contains, or even embodies, texts, but it also is and does myriad other things that all affect the way that it matters, the way it signifies in both a material and symbolic sense.
To highlight what I mean, today we are going to encode and examine Trinity College Cambridge B.15.17, a manuscript with a better-known history than most, and I’m going to draw attention to choices I make in coding that aim to bring the manuscript object itself into focus, rather than simply the texts contained therein.
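One way to sketch that object-first choice (all field names and most values here are illustrative placeholders, not the project's actual schema) is a record in which the manuscript's material features sit at the top level, and the texts it contains are just one nested list among many fields:

```python
import json

# An object-first record: codicological features of the manuscript itself
# are top-level fields; the texts are just one field among many.
# Field names and values are illustrative, not an established schema.
manuscript = {
    "shelfmark": "Trinity College Cambridge B.15.17",
    "support": "parchment",
    "object_features": ["ruling", "pricking", "quire signatures"],
    "texts": [
        # The manuscript's best-known content.
        {"title": "Piers Plowman", "language": "Middle English"},
    ],
}

print(json.dumps(manuscript, indent=2))
```

Flipping the hierarchy this way means a query about, say, parchment quality never has to route through a text at all; the object comes first.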
We are going to take a tiny digression from Piers-specific visualizations to get a sneak peek at visual data I’m presenting at the New Chaucer Society Congress in July.
What you are going to see here is a little different from the kind of data visualizations we’ve been looking at to date. So far, the data we’ve been highlighting here has primarily been abstracted from a material phenomenon and then reconfigured into slices meaningful to us. I’m going to talk more next week about what “data” is, so put a pin in those thoughts and we’ll come back to them.
This week, we have a series of images of parchment surfaces (to be featured in NCS panel 5F on Parchment, organized by Bruce Holsinger) that have been translated via a few very intense scientific apparatuses into something that we think we understand intuitively. There is so much at work in that intuitive leap, however, that I think it’s worth breaking it down step by step.
This week’s code post is going to be short and sweet, unlike this week’s manuscript!
Encoded this week is Cambridge University Library Dd.1.17, one of my favorite manuscripts, and the second largest compilation manuscript in the Piers corpus. It is, however, a far cry from the Vernon, with only twenty-some-odd items as opposed to close to four hundred.
What I find so interesting about this manuscript is its tight concentration on a very particular type of text: history and travel literature. Now, I am by no means an expert in all these contents, so if you happen to work on one of the texts named here, I’d welcome any input you have in the comments below!
What I can see, however, is a very distinct trend towards cosmic history and Orientalia. If we take the larger network graph, and make it its own graph, with full articulation, it would look like this:
Over the last two weeks, we’ve been looking at visualizations of Piers in space (with maps and regions) and Piers copying in time. Of course, neither of these slices of data exists exclusive of the other, so this week we are going to put both space and time into animated graphics to help us to visualize the movement of Piers through both space and time.
This week’s Material Piers is dedicated to one of my very favorite manuscripts, the Vernon. That is, Oxford, Bodleian Library MS Eng. poet. a.1, the single most deluxe manuscript of Middle English in existence. Yeah, really, it’s that epic. And if you don’t know about it already, check it out. Odds are, though, if you know me, you already know about my thing for the Vernon.
The reason this week’s blog is all about the Vernon is two-fold.
B. Because the Vernon is so big and so complex, it merited its own über post. What better time to talk about it than when we’re talking about the temporality of Piers Plowman anyway (which both last week’s and next week’s blogs will be about).
GeoJSON is compatible with JSON-LD; both are uses of JSON that aim to link object data (or “feature” data) to other data. In the case of GeoJSON, that means linking data with spatial components, particularly in such a way that the data can be mapped.
In order to write GeoJSON, all data has to exist within a single object, which is something we’ve covered before. A GeoJSON object can represent only a few specific types of data, though: a geometry, a feature, or a collection of features.
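Here is a minimal sketch of those shapes: a geometry nested inside a Feature, and a FeatureCollection wrapping it. The coordinates and the property values are rough illustrative placeholders, not precise data:

```python
import json

# A GeoJSON Feature: one geometry plus a free-form "properties" object.
# Note that GeoJSON coordinates are ordered [longitude, latitude];
# these values are rough placeholders for Oxford, not survey data.
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-1.2544, 51.7548]},
    "properties": {"shelfmark": "Oxford, Bodleian Library MS Eng. poet. a.1"},
}

# A FeatureCollection wraps any number of Features in one object.
collection = {"type": "FeatureCollection", "features": [feature]}

print(json.dumps(collection, indent=2))
```

Because everything lives in that one top-level object, the whole collection can be handed directly to a mapping library as a single file.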