The Textual Optics Lab was created in response to the changing face of literary and textual scholarship in the digital age. With the rise of the Digital Humanities has come the promise of new methods for exploring literature on an unprecedented scale. This project centers on the concepts and practices forming around these scalable reading methods, many of which are imported from the sciences, from data mining and visualization to machine learning and network analysis. For several years, a diverse group of scholars at the University of Chicago has been grappling with the possibilities and challenges that massive digital archives and new computational tools for discovery offer to the field of literary studies. With the Textual Optics Lab, we are building on our existing research and collaborations to become a new center of gravity for digital literary methods at the University. We will do this by coordinating a number of hands-on initiatives: the constitution of a nationally recognized set of highly curated databases that cross a variety of languages and intellectual domains; the production of case studies that model the rigorous humanistic inquiry these new methods can generate; the implementation of technical interfaces to help evaluate the potential of these methods for textual studies; the development of means to make these methods more accessible and reproducible for other scholars; and the exploration of ways to apply these means and methods across multiple linguistic and cultural contexts. The Textual Optics Lab will ultimately position UChicago as a key contributor to the expanding field of digital humanities and cultural analytics.
At the very heart of the Textual Optics Lab lies a question that is dialogical in nature: a dialogue between close and distant reading, but also between scholars who bring different levels of literary and technical expertise and who often work on different literatures and cultures. How, in practice, does one proceed from simple word searches to larger literary systems? How can different scales of reading work together, and with what software? How can they work together across languages? Scholars intuitively shift between such scales all the time, but with the introduction of computational techniques the order of magnitude changes, as does the need to reflect critically on how these techniques affect implicit research behavior. Textual optics puts such reflection in the foreground in order to begin conceiving an interpretive method that moves from word-based, close-reading techniques to progressively larger scales of reading and modeling. It is often said that what is needed is the ability to ‘drill down’ from large-scale distant reading approaches into the texts they treat, and this is undoubtedly true; but it is equally fruitful to take the opposite tack: to start with a word or phrase, a single occurrence, and follow it up through the various scales of literary expression (chapter, work, author, genre, culture, history, language, sociability, modes of production, dissemination, readership, reception, etc.) in order to see how that instance relates to the entire system, and to make that relationship, or those relationships, manifest.
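To make this movement between scales concrete, the sketch below is a minimal, purely illustrative Python example, not a tool of the Lab: it works on an invented three-text toy corpus, starts from a single word, prints each occurrence in its immediate context (the smallest scale of reading), and then aggregates those occurrences at the levels of work and author, the first rungs of the larger scales named above.

```python
import re
from collections import Counter

# Toy corpus standing in for a curated database: (author, work, text) tuples.
# Authors, titles, and excerpts here are placeholders, not the Lab's holdings.
corpus = [
    ("Balzac", "Le Pere Goriot", "Paris is an ocean; sound it and you will never touch bottom."),
    ("Balzac", "Illusions perdues", "Paris devours talent and ambition alike."),
    ("Flaubert", "L'Education sentimentale", "He returned to Paris changed, and the city had changed too."),
]

def concordance(term, text, window=25):
    """Smallest scale: each occurrence of a term with its immediate context."""
    return [text[max(0, m.start() - window): m.end() + window]
            for m in re.finditer(re.escape(term), text, re.IGNORECASE)]

def scale_up(term):
    """Follow a single term outward: occurrence -> work -> author."""
    per_work, per_author = Counter(), Counter()
    for author, work, text in corpus:
        hits = concordance(term, text)
        per_work[(author, work)] += len(hits)
        per_author[author] += len(hits)
        for h in hits:
            print(f"  [{author}, {work}] ...{h}...")
    return per_work, per_author

per_work, per_author = scale_up("Paris")
print("Counts per work:", dict(per_work))
print("Counts per author:", dict(per_author))
```

In practice the same gesture would run against the Lab's curated databases and continue outward to genre, period, and language; the point of the sketch is only that a single occurrence is never left alone, but is re-situated within progressively larger wholes.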