Evaluative Criteria for Digital Humanities

Background: This is a summary of the results of a digital humanities workshop. Four graduate students met to determine evaluative criteria for the digital humanities. The result was a philosophical debate and the consensus that follows.


We believe that conventional criteria of assessment can be useful, but they are not entirely germane to the Digital Humanities as a contemporary and emergent practice. Because we are still in the midst of defining what DH even is and what best practices could mean, we think that a focus on process over end product is an essential criterion. We would ask, “does the project advance a unique view on the subject matter, whether through deformance, distant reading, or other means?” In other words, in terms of the SAMR model of technology in education, does the project Redefine how we think of a topic, or even what is under consideration, in a way that uniquely contributes to the field?

We also strongly believe that clarity and transparency of process and result are paramount to well-formed work in DH. Does the project present results that are visually accessible to a scholar unfamiliar with the process used? Are the results also well explained? Is the project visually or rhetorically persuasive, and also accurate in terms of research methodology? Is there awareness and due diligence in the preparation of the corpi*? In many ways, these are traditional guards against bias that must be upheld and not forgotten as we venture into DH practices.

We also believe that any DH project should be evaluated on whether, how, and to what extent it has been open-sourced, including proper adherence to licensing frameworks like Creative Commons and availability on resources like GitHub, or another venue for sharing work freely depending on the format of said work, such as Scribd. Vetted results of the research could also be shared on Wikipedia or a similar platform. If we are concerned about the quality of the work to be shared, a hypothetical account of plans to share the material could be evaluated instead. In most cases, no specific sharing plan should be prescribed as part of the evaluation.

What are the implications of your criteria for your own primary field of study?


The implications of these evaluative criteria for a musicologist are profound. Tools such as Music21 and Humdrum exist to analyze bodies of musical work and can offer new levels of access and transparency across large collections of digitized musical texts, and distant reading of such material has been shown to provide insights into the nature of bodies of music.


For sociology, these two broad axes of evaluation are deceptively subtle. That academic work should ‘redefine the topic’ is an impetus that prohibits strictly deductive research and demands new critical insights. The push to share these insights is also important, as ‘public sociology’ is really only beginning to emerge.


The discipline of history would greatly benefit from a discussion of these evaluative criteria. For instance, many historians gravitate towards obscure topics and some work silently within that obscurity. However, many historians also understand the importance of sharing their research. The criteria would move these implied understandings in history to explicit discussions.

Applied Linguistics

Digital Humanities has had a vast impact on the field of Applied Linguistics and Discourse Studies. The introduction of technological aids such as text-mining tools for both quantitative and qualitative methodologies of analysis is extremely useful, but such tools cannot be entirely interchangeable with more traditional means of analysis. Corpus tools, the Google Ngram Viewer, and other topic-modeling tools benefit the discourses that the field advances; however, they are additional ways of exploring the data, not replacements. For the scope of evaluation criteria, a creative use of these tools is advisable.
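To make the kind of text mining mentioned above concrete, here is a minimal sketch of n-gram counting over a small text using only the Python standard library. The sample sentence and function names are illustrative assumptions, not part of any tool named in this summary; real corpus work would use a proper tokenizer and a much larger corpus.

```python
from collections import Counter

def ngrams(tokens, n):
    # Build successive n-grams (as tuples) from a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Toy "corpus": in practice this would be a large body of digitized text.
text = "the quick brown fox jumps over the lazy dog the quick fox"
tokens = text.lower().split()

# Count bigrams and inspect the most frequent ones.
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts.most_common(2))
```

The same counting logic scales to any n, which is the basic operation behind frequency-based distant reading of a corpus.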

* “Corpi” is my own special plural for the word corpus.