The trouble with electronic textual analysis – and, like all interpretive practices, it is not without its flaws – is that it requires specialist expertise. It also requires reliable sources from which literary and textual critics can extract data: data that can be used to form meaning, and to shape and justify interpretations.
Many digital humanists believe that digital humanities will one day be ‘just humanities’, but this will never come to pass unless both groups of scholars and practitioners agree to give something up (and they shouldn’t). One cannot expect all humanists to master logic and programming, and by the same token, digital humanists should not be expected to halt their exploration of technology’s new avenues in order to rethink how we approach and answer age-old questions. The disciplines will remain separate because the people and the processes of discovery will remain separate. That is not to say the disciplines are unrelated, but they are not, nor will they ever be, the same. There will always be a few who possess considerable expertise in both fields, but they will be the exception. Herein lies the first issue with electronic textual analysis: generally, those who are interested in the study of literature are not familiar with the construction of scripts suited to textual analysis. There will always be ‘out-of-the-box’ solutions, but these are limited in that they cannot be adapted to a specific purpose without some familiarity with the language in which they were developed. The flip side is that those who are familiar with programming languages are often too analytically minded for interpretative assessment, or rather, are drawn to more objective pursuits. Mastering one discipline is difficult enough; mastering two is, for many, unachievable, if not undesirable.
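To make the point concrete, here is a minimal sketch of the kind of script meant here: a word-frequency count in Python. Everything in it is illustrative rather than drawn from any particular tool – the file name is a hypothetical placeholder, and the tokeniser is deliberately crude:

```python
# A minimal, hypothetical sketch of an electronic textual analysis script:
# count word frequencies in a plain-text edition of a work.
import re
from collections import Counter

def word_frequencies(path):
    """Return a Counter of lower-cased word tokens in the file at `path`."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # A crude tokeniser: runs of letters, optionally with an internal apostrophe.
    tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())
    return Counter(tokens)

if __name__ == "__main__":
    freqs = word_frequencies("ulysses.txt")  # hypothetical input file
    for word, count in freqs.most_common(20):
        print(f"{count:6d}  {word}")
```

Even a toy like this makes the argument for itself: adapting it to a genuine research question – excluding stop words, say, or counting by episode rather than across the whole text – presupposes precisely the programming fluency that most literary scholars have had no occasion to acquire.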
The first issue can be overcome through collaboration. However, the second issue – the provision of reliable sources – is perhaps more pressing for literary analysis, and it is a weakness that is, curiously, often touted as a strength. Many scholars who have dipped their toes into electronic textual analysis will tell you that it is liberating, in the sense that it frees you from many of the restrictions of traditional scholarship. Accessibility, they say, is one such liberating factor: studying the great texts is no longer reserved for those with access to the libraries in which they are housed, for digitisation and internetworking have made the study of texts geographically independent. The reality is anything but. It was in fact easier for me to acquire a physical copy of Ulysses – the first-edition facsimile offered by Martino Publishing – than it was to prepare a digital edition of the text suited to electronic textual analysis. Accessibility, it would seem, has ironically remained with the print edition, and will remain so until some group with the appropriate funding and expertise decides to provide a scholarly (perhaps TEI-compliant) counterpart to Project Gutenberg. It is unlikely that any such project will emerge.
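To give a sense of why preparing a digital edition suited to analysis is the harder errand, consider just the first chore: stripping the licensing boilerplate that frames a Project Gutenberg e-text before any counting or concordancing can begin. The sketch below is a hypothetical illustration in Python – the patterns follow Gutenberg’s customary ‘*** START OF …’ and ‘*** END OF …’ marker conventions, but their exact wording varies between e-texts, and the file name is an assumption:

```python
# A hypothetical sketch: strip the front and back matter from a Project
# Gutenberg e-text so that only the work itself is analysed. Marker wording
# varies between e-texts, so the patterns below must be checked by hand.
import re

START_RE = re.compile(r"\*\*\* ?START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*",
                      re.IGNORECASE)
END_RE = re.compile(r"\*\*\* ?END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*",
                    re.IGNORECASE)

def strip_gutenberg_boilerplate(raw):
    """Return the body of a Gutenberg e-text, minus its framing boilerplate."""
    start = START_RE.search(raw)
    end = END_RE.search(raw)
    if not (start and end):
        raise ValueError("Gutenberg markers not found; inspect the file by hand.")
    return raw[start.end():end.start()].strip()

if __name__ == "__main__":
    with open("ulysses_gutenberg.txt", encoding="utf-8") as f:  # hypothetical file
        body = strip_gutenberg_boilerplate(f.read())
    print(body[:200])
```

And this is only the beginning: a citable scholarly edition would still need the pagination, lineation and textual variants that a plain e-text discards – which is precisely what a TEI-compliant successor to Project Gutenberg would be expected to supply.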