Position Paper. Digital Humanities (sometimes called Humanities Computing) has long been a walled garden, and it is generally misidentified as the mere use of computer or information technology. This has led many humanists to mistakenly believe they are engaging the discipline and many other humanists to regard it as inaccessible. My definition seeks to cure both ills.
MLA citation: Woodruff, John. "Digital Humanities for the Lone Scholar." www.johnwoodruff.com?page_id=2535. Accessed 25 May 2020.
A quick search through The Chronicle of Higher Education yields an overwhelming number of results containing the phrase “digital humanities” (with and without capital letters and sometimes hyphens) but none really seems to proffer a definition that is relevant and helpful to the lone scholar. And if the casually bantered digital-this and digital-that in reader comments are any indication, many of us amble along without a clear notion of how this new thing operates at the level of the individual scholar. As Kathleen Fitzpatrick explains, “Digital Humanities” does not signify merely “rendering stuff digital” but rather “a nexus of fields within which scholars use computing technologies to investigate the kinds of questions that are traditional to the humanities.” But what does that mean for the analog-trained humanist?
For whatever reason, the concept of Digital Humanities is either popularly misunderstood or has evolved beyond the popular conceptualization. In response, I propose a more fluid, atemporal characterization relevant to the lone scholar: Digital Humanities scholarship is not simply using technology as a vehicle for scholarly work; it is the production of scholarly work that could not realistically be achieved by other means. Under this lens, JSTOR is not part of the Digital Humanities because it is not scholarship. However, Valley of the Shadow aptly belongs to the Digital Humanities because its aggregation, presentation, and deployment could not realistically exist in other forms, even though its underlying data could (and does). It is very easy indeed to recognize certain major innovations as Digital Humanities while at the same time abstracting their architects as specialists whose expertise will always elude us. But if we reduce Digital Humanities in the manner that I have suggested, it becomes much more relevant to our own careers.
Many of us probably assumed that teaching via Blackboard fell under the umbrella of “Digital Humanities,” not to mention our Prezis, faculty web pages, wikis, and the like. Paradoxically, that might have been true at one time. By way of example, in the very early 1990s, protocols such as Gopher, Archie, and Veronica transformed humanities research. Under my characterization, the development of that technology and the scholarship that depended upon it could, for its day, appropriately be considered digital humanities (although the field went by other names then). But one could not use that technology today and claim Digital Humanities status. If Moore’s Law is any guide (transistor counts, and with them computing power, roughly double every two years), then “Digital Humanities” exists as an intensely evolving entity. So it should not be surprising that what seems to be part of Digital Humanities today might not be when we next come up for tenure or promotion. For this reason, projects modest in scope and scaled to the individual are actually much more appropriate for most academics. So how then do we faculty find our scale and stay ahead of the evolution?
To use another metaphor, the strike zone in baseball is not a static aperture in space and time but rather a region of space that opens and closes according to the batter’s stance . . . even after the pitch is thrown. So, if we are working on a William Blake article right now, does that mean we can alter its scope just a bit to, say, discuss the word frequency of gendered pronouns in the Blake Archive and call it Digital Humanities? I think so. So hooray! Our research is not lost! But this might not be the case in twenty years, if for no other reason than that it will have been done by then and thus would violate my proposed definition that we must use technology for research not otherwise possible (and if it has already been done, then emerging technology must be employed in another manner). So, to employ yet another analogy, Heisenberg’s uncertainty principle holds that the more precisely one determines a particle’s position, the less precisely one can know its momentum. In other words, our awareness of the rate of Digital Humanities’ evolution entails a less precise understanding of its coordinates. At the turn of the twenty-first century, the coordinates were CD-ROM. Then the coordinates shifted to HTML. I suggest that by 2012 the coordinates had already begun to drift away from web publishing and have since trended toward stealth research.
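A pronoun count of the sort just described is well within the lone scholar's reach. Here is a minimal sketch in Python; the short sample string stands in for plain-text transcriptions that one would first download from an archive, and the pronoun list is merely illustrative.

```python
import re
from collections import Counter

# Hypothetical sample text; in practice, read downloaded plain-text
# transcriptions from an archive into this string.
text = """And did those feet in ancient time
Walk upon England's mountains green? He spoke; she answered him."""

# An illustrative (not exhaustive) set of gendered pronouns.
GENDERED = {"he", "him", "his", "himself", "she", "her", "hers", "herself"}

# Lowercase the text, split it into word tokens, and count only the pronouns.
tokens = re.findall(r"[a-z']+", text.lower())
counts = Counter(t for t in tokens if t in GENDERED)

for pronoun, n in counts.most_common():
    print(pronoun, n)
```

The point is not the fifteen lines of code but the question they serve: the counting itself is trivial, while deciding which texts, which pronouns, and which comparisons matter remains humanities work.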
For the individual scholar, “Digital Humanities” can seem a walled garden. How do we, as the only specialists in our fields at our respective universities, create something that belongs to the Digital Humanities when all of the celebrated projects seem to require hundreds of thousands of dollars and tens of thousands of hours? It is tempting to piggyback on those projects, but merely utilizing a vetted project for derivative scholarly work does not necessarily confer the same status unless such derivative scholarship implements some sort of novel exploitation (for example, constructing algorithms to extract specific data, analyzing utilization behaviors, and so on). On some subconscious level, we tacitly understand that using the internet to research world periodicals is absolutely not Digital Humanities. However, populating a relational database with content from electronically available world periodicals in order to analyze sociopolitical trends, language morphology, or literacy—well, that would be part of the Digital Humanities (and one would have no obligation to put such a database online, either). By the definition that I proffer, Digital Humanities participation does not require that the final product be digital, only that the final product depend upon the digital. And herein lies a theorem that the lone scholar should keep in mind: Digital Humanities can be as much about investigation as it is about presentation, and it can, but need not, be both.
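To make the periodicals scenario concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table layout and the sample rows are hypothetical stand-ins; a real project would populate the table from harvested periodical text rather than from a hand-typed list.

```python
import sqlite3

# An in-memory database for illustration; a real project would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE articles (
        id INTEGER PRIMARY KEY,
        periodical TEXT,
        year INTEGER,
        country TEXT,
        body TEXT
    )
""")

# Hypothetical rows standing in for harvested periodical content.
rows = [
    ("The Daily Gazette", 1901, "UK", "The suffrage question dominates debate."),
    ("La Prensa", 1901, "AR", "Immigration reshapes the capital."),
    ("The Daily Gazette", 1902, "UK", "Suffrage rallies continue in London."),
]
conn.executemany(
    "INSERT INTO articles (periodical, year, country, body) VALUES (?, ?, ?, ?)",
    rows,
)

# A first sociopolitical query: how often does a keyword appear, by year?
hits = conn.execute(
    "SELECT year, COUNT(*) FROM articles "
    "WHERE body LIKE '%uffrage%' GROUP BY year ORDER BY year"
).fetchall()
print(hits)  # [(1901, 1), (1902, 1)]
```

Note that nothing here needs to be published online for the work to count under my definition; the database is the medium of investigation, and the resulting article can remain entirely analog.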
I confess that populating a relational database would be daunting, but probably not for all of my digitally native students who have been sandboxing these technologies for years. What if we integrate the database construction into our course and capitalize upon our students’ skill sets to collectively build the medium of investigation while also conducting the investigation? Students get an awesome learning experience, we get the database for our ongoing research, and we are all able to stake a claim in the Digital Humanities! Moreover, if we are the student-centered educators that we all claim to be, we should be willing to let students have modest control of the reins (as unsettling as that is for our research). All these considerations lead me to believe that the evolution from mere educational technology to full-blown Digital Humanities will introduce an assessment of a faculty member’s use of technology that is based to some degree upon a portfolio of her or his students’ digital work. Frankly, that is a bit unnerving, but it will happen.
Many individuals and institutions may well be mistaking humanities technology for Digital Humanities, but the two are not the same thing. I would argue that the inflection point is the exploitation of technology to engage a discipline’s theoretical basis, and it matters not whether the discipline is investigative or expressive. One point on which most Chronicle articles do agree is that Digital Humanities work requires collaboration. I suggest that students—particularly graduate students, whose weaker backgrounds in the discipline are offset by demonstrable technical ability—might well be our best collaborators.