Bethany Nowviskie has written recently about black boxes:
“Nobody lives with conceptual black boxes and the allure of revelation more than the philologist or the scholarly editor. Unless it’s the historian—or the archaeologist—or the interpreter of the aesthetic dimension of arts and letters. Okay, nobody lives with black boxes more than the modern humanities scholar, and not only because of the ever-more-evident algorithmic and proprietary nature of our shared infrastructure for scholarly communication. She lives with black boxes for two further reasons: both because her subjects of inquiry are themselves products of systems obscured by time and loss (opaque or inaccessible, in part or in whole), and because she operates on datasets that, generally, come to her through the multiple, muddy layers of accident, selection, possessiveness, generosity, intellectual honesty, outright deception, and hard-to-parse interoperating subjectivities that we call a library.” (Nowviskie 2015 – her emphases)
Leaving aside the textual emphasis that is frequently the focus of digital humanities, these “multiple, muddy layers” certainly speak to the archaeologist in me. The idea that digital archaeologists (and archaeologists using digital tools, for that matter) work with black boxes has a long history – criticism of the black-boxing of archaeological multivariate quantitative analyses, for instance, was not uncommon in the 1960s and 1970s. During the intervening forty-odd years, however, it has become a topic that we rarely discuss. What are the black boxes we use? Where do they appear? Do we recognise them? What is their effect? Nowviskie talks of black boxes in terms of the subjects of enquiry – which as archaeologists we can certainly understand! – and the datasets about them, but, as she recognises, black boxing extends far beyond this.
Black boxes are present in the technological tools we use to collect those data – cameras, survey instruments, and so on – which introduce their own layers of accident, selection, deception, opacity and the like. Black boxes are also present in our digital tools, through the interfaces we create and the algorithms we use. To what extent do these affect our practice? And to what extent do they become subsumed within others, making it even more difficult to evaluate their effect as the output from one black box becomes the input for another?
For instance, Colleen Morgan, in her work on digital media in archaeology, has examined aspects of the use of digital photography – most recently, at CAA 2015 in Siena, she discussed the ‘post-photographic’ and the way in which photography is moving beyond the traditional through increasing levels of manipulation and modification of the ‘original’ image. Many of these manipulations are, of course, deliberate post-processing actions, ranging from simple cropping through to alterations to brightness, contrast, saturation or sharpness, for instance. Others are undertaken automatically within the camera, its software making adjustments to address issues it perceives with lighting, focus and the like; cameras may allow some degree of access to, and control over, this. Still other elements may be hard-baked into the camera hardware, such as noise reduction to address the different kinds of noise introduced by the sensor (for instance, variations between the individual pixels in the sensor, or signal interference in low-light conditions). As a consequence, pictures of the same subject taken under the same conditions with equivalent cameras from different manufacturers can look very different – as reviews of new cameras frequently demonstrate. Where does that leave the ‘truth’ of an image? And when those images themselves become the input into further processes, how significant are these factors? We can apply the same consideration to the operation and functioning of total stations, laser scanners, proton magnetometers, portable XRF, and a host of other technical means of data collection in archaeology.
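To make the point concrete, here is a minimal sketch of the kind of routine post-processing chain described above, using Python and the Pillow imaging library. The file name and the adjustment factors are illustrative assumptions, not a prescribed workflow:

```python
from PIL import Image, ImageEnhance

# Illustrative only: 'site_photo.jpg' and the factors below are assumptions,
# standing in for the kind of routine adjustments discussed above.
img = Image.open("site_photo.jpg")

# Simple crop: discard the edges of the frame (left, upper, right, lower).
img = img.crop((100, 100, img.width - 100, img.height - 100))

# Each enhancement silently reinterprets the 'original' pixel values.
img = ImageEnhance.Brightness(img).enhance(1.1)   # 10% brighter
img = ImageEnhance.Contrast(img).enhance(1.2)     # 20% more contrast
img = ImageEnhance.Color(img).enhance(1.15)       # boost saturation
img = ImageEnhance.Sharpness(img).enhance(1.3)    # sharpen

# The saved file is what enters the archive -- and any subsequent analysis --
# while the sequence of adjustments is rarely recorded alongside it.
img.save("site_photo_processed.jpg", quality=85)
```

Even this simple chain discards information at every step: the crop, each enhancement and the lossy JPEG re-encoding are all irreversible, yet none of them is ordinarily documented with the image that goes on to be used.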
I’ve commented elsewhere about black boxing in relation to software – for instance, the apparent creation of search filter bubbles or the application of algorithms (which Nowviskie also discusses in her post) – and there is always the suspicion that the accessibility of functions within complex software such as GIS affects how regularly they are applied. We’re becoming more aware that software restricts at the same time as it enables, but it is inevitably in the interests of those who create and design the tools to promote their enabling capabilities rather than their restrictions, to the point where the latter may effectively be disguised. We only have to look at the current controversies surrounding the ways the new Windows 10 operating system ‘phones home’ to Microsoft regardless of the privacy settings used (for example Glance 2015, Bright 2015), limited details of which were embedded in lengthy licence terms which, as the security firm F-Secure demonstrated last year, most people don’t read.
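As a purely hypothetical sketch of how a tool can restrict while appearing to enable, consider a simple search function whose default arguments quietly shape its results (the records and field names here are invented for illustration, not drawn from any real system):

```python
# Hypothetical example: a simple 'search' whose defaults act as the black box.
records = [
    {"id": 1, "period": "Roman",    "georeferenced": True},
    {"id": 2, "period": "Roman",    "georeferenced": False},
    {"id": 3, "period": None,       "georeferenced": True},
    {"id": 4, "period": "Medieval", "georeferenced": True},
]

def search(records, period, georeferenced_only=True, skip_missing=True):
    """Return matching records; the defaults decide what is never seen."""
    results = []
    for r in records:
        if skip_missing and r["period"] is None:
            continue                 # silently dropped, never reported
        if georeferenced_only and not r["georeferenced"]:
            continue                 # silently dropped, never reported
        if r["period"] == period:
            results.append(r)
    return results

# A user asking for 'Roman' records sees only record 1. Record 2 is Roman but
# not georeferenced, so the default filter removes it; record 3's period is
# unrecorded, so it is skipped before it can even be considered. Nothing in
# the output signals that the data were incomplete or filtered.
print(search(records, "Roman"))
```

Each default here is a small design decision made on the user’s behalf; chained together, and hidden behind an interface, they become very hard to see, let alone evaluate.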
Bruno Latour refers to black boxing as a process which makes the productions of actors and artefacts opaque (1999, 183), and the degree of invisibility of that process is a measure of its success. It is only when something stops working or behaves unexpectedly that we become aware of it as constituting one or more black boxes – especially when we realise we are unable to fix it or make it work again. Such success – and hence invisibility – is perhaps why it is all too easy to overlook the need to understand the structures, operation, and outcomes of a technology by opening the box. But one way or another, our use of these tools is mediated by the actions and decisions of those who designed and created them – and may well have done so without our mode of use specifically in mind.
References
Peter Bright 2015 ‘Even when told not to, Windows 10 just can’t stop talking to Microsoft’, Ars Technica.
David Glance 2015 ‘Windows 10 is not really free: you are paying for it with your privacy’, The Conversation.
Bruno Latour 1999 Pandora’s hope: essays on the reality of science studies, Cambridge, Massachusetts: Harvard University Press.
Colleen Morgan 2015 ‘The Death (and Afterlife) of Archaeological Photography’, Middle Savagery.
Bethany Nowviskie 2015 ‘A Game Nonetheless’, nowviskie.org.