For a more detailed account of this topic, see the article, “Finn’s Hotel and the Joycean Canon”, which appeared in Issue 14 (Spring 2014) of Genetic Joyce Studies.
Ithys Press controversially published Finn’s Hotel in June 2013, describing the collection as “almost certainly the last unpublished title by James Joyce”. Ithys and the book’s editor, Danis Rose, contend that the fragments warrant consideration as a standalone collection, the style in which they are written suggesting that such was Joyce’s intention: “The prose pieces of Finn’s Hotel … are written in a unique diversity of styles, much more so than Ulysses. Taken together, they form the true and hitherto unknown precursor to the multi-modulated voices of the Wake—but these first utterings from Finn’s Hotel are far easier to understand.” This view is not unanimously accepted, with some scholars countering that the writings are merely early drafts for what would later become Finnegans Wake, and thus should not be published as an independent addition to the Joycean canon. Terence Killeen notes, in The Irish Times, that “the pieces scream of Finnegans Wake itself”. He states: “It is true that one or two of them did not end up in the final text, but it is quite normal for a writer to draft and then abandon various passages in the initial stages of a major work.”
Experimental texts pose something of a quandary for electronic textual analysis, in that they tend to abandon the statistical regularities from which an authorial signature is formed. Computational stylistics, for all its analytic diversity, is utterly dependent on the integrity of that signature if it is to be used as an approach to analysis. If you want to see why computational stylistics is the realm of digital humanists and not purebred statisticians or computer scientists, run Finnegans Wake through R. Of course, the nature of experimental texts, while problematic for any analysis based on computational linguistics, also presents the opportunity for textual explorations of a refreshingly unpredictable kind – it is in experimental works that the digital humanist can hope to produce results that are truly unexpected, even if the unexpected is precisely that which is expected. Enter Dave Lordan, and the wonderfully crafted First Book of Frags, his recent collection of experimental short stories. At first I had intended to offer a traditional review of the text, but such reviews will undoubtedly be in plentiful supply, and with time against me and my curiosity piqued at the prospect of running a brand new experimental text through the digital gauntlet, I couldn’t resist taking a computational approach. This decision was of course influenced by the fact that this is a collection of experimental short stories – 16 unique segments – mouth-watering to a cluster fiend such as myself.
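The analyses discussed here were run in R, but purely as an illustration of what an “authorial signature” means in practice, the following is a minimal Python sketch of one standard stylometric distance, Burrows’s Delta, which compares z-scored relative frequencies of the corpus’s most frequent words. The whitespace tokeniser and the tiny demo corpus are assumptions for the sketch, not part of the original analysis.

```python
from collections import Counter
import math

def tokenize(text):
    # A deliberately naive tokeniser; real stylometry would be more careful.
    return text.lower().split()

def relative_freqs(text, vocab):
    counts = Counter(tokenize(text))
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

def burrows_delta(text_a, text_b, corpus, n_mfw=30):
    """Mean absolute difference of z-scored frequencies of the corpus's
    most frequent words -- a common signature-based distance measure."""
    vocab = [w for w, _ in Counter(tokenize(" ".join(corpus))).most_common(n_mfw)]
    profiles = [relative_freqs(t, vocab) for t in corpus]
    cols = list(zip(*profiles))
    means = [sum(c) / len(c) for c in cols]
    # Fall back to a tiny value when a word's frequency never varies.
    sds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1e-9
           for c, m in zip(cols, means)]

    def zscores(text):
        return [(f - m) / s
                for f, m, s in zip(relative_freqs(text, vocab), means, sds)]

    za, zb = zscores(text_a), zscores(text_b)
    return sum(abs(x - y) for x, y in zip(za, zb)) / len(vocab)
```

On a text like Finnegans Wake, the most-frequent-word profile this measure relies on is precisely what breaks down, which is the quandary described above.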
Amongst his many other accomplishments, Ian Fellows will long be remembered as the scholar who gave us empirical word clouds. Using his innovative R package, wordcloud, I generated such a visualization of the 50 most frequently used words in Lordan’s collection, excluding those that would be considered common. Common words are significant only in the development of an authorial signature, and so would have served little purpose in this particular aspect of the analysis.
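The figure itself was produced with Fellows’s R package; the underlying step – counting word frequencies while filtering out common words – can be sketched in Python as follows. The stopword list here is a deliberately tiny, illustrative assumption; a real analysis would use a much fuller one.

```python
from collections import Counter
import re

# A tiny illustrative stopword list -- an assumption for this sketch only.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "is",
             "was", "he", "she", "that", "for", "on", "with", "as", "i"}

def top_uncommon_words(text, n=50):
    """Return the n most frequent words, excluding common (stop) words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)
```

The resulting (word, count) pairs are exactly what a word-cloud package sizes and lays out.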
50 most frequent uncommon words in Dave Lordan’s First Book of Frags
From The Boolean, 2012: http://publish.ucc.ie/boolean/2012/00. The Boolean offers a snapshot of doctoral research at University College Cork, aimed at non-specialist audiences.
A Myriad of Terminology
While not quite a neologism at this point, the term “digital humanities” for some still bears a significant measure of ambiguity. What separates digital humanities from the humanities? Throughout this article, I will attempt to offer some clarity on this separation, outlining what it is that makes digital humanities, digital.
The field of scholarship now recognised as the digital humanities has not always gone by this name. Initially, the emerging discipline was referred to as “humanities computing”, a term that gathered momentum as early as the late ’70s, evidence of which can be found in a quick n-gram search of Google Books. N-grams offer an approach to probabilistic language modelling that can be used for a variety of purposes – in this case, to identify the frequency of a sequence of words across a set of texts.
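The kind of frequency the Google Books viewer reports can be sketched in a few lines of Python: count how often a given word sequence occurs, relative to all sequences of that length in the corpus. The whitespace tokenisation is an assumption for the sketch.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count every contiguous sequence of n tokens."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def phrase_frequency(phrase, texts):
    """Relative frequency of a word sequence across a set of texts,
    in the spirit of the Google Books Ngram Viewer."""
    target = tuple(phrase.lower().split())
    n = len(target)
    hits = total = 0
    for text in texts:
        tokens = text.lower().split()
        hits += ngram_counts(tokens, n)[target]
        total += max(len(tokens) - n + 1, 0)
    return hits / total if total else 0.0
```

Plotting this frequency for “humanities computing” against “digital humanities”, year by year, is what produces a chart like Figure 1.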
Figure 1: Digital Humanities vs. Humanities Computing