Buttons figure large in the world around us. Just in the last year we’ve seen everything from presidents boasting about the size of their nuclear buttons, to Apple facing a class action over the failure of its new ‘improved’ butterfly keys, to Amazon’s Dash buttons being barred in Germany for not providing price information before being pressed. In archaeology, we’ve become accustomed to buttons and button-presses generating data, performing analyses, and presenting results, across the digital instruments we employ and the software tools we rely on. So, to pick a random example, “researchers will be able to compare ceramics across thousands of sites with a click of the button” (Smith et al. 2014, 245).
Rachel Plotnick has recently discussed the place of buttons in our cultural imaginary:
… push a button and something magical begins. A sound erupts that seems never to have existed before. A bomb explodes. A vote registers. A machine animates, whirling and processing. A trivial touch of the finger sets these forces in motion. The user is all powerful, sending the signal that turns on a television, a mobile phone, a microwave. She makes everything go. Whether or not she understands how the machine works, she determines the fate of the universe. (Plotnick 2018, xiv).
To what extent does our use of digital devices to capture and process archaeological data affect our perceptions of what was there? Mark Altaweel (2018) has recently asked a similar question in relation to GPS technologies: how do they affect our understanding and experience of place? He suggests that they diminish our sense of place and the experiences we might otherwise have as we navigate according to their recommendations. Certainly, satnavs are notorious for taking our navigational cognitive load upon themselves and consequently leading drivers who are insufficiently aware of their surroundings into undesirable, even dangerous situations. We might think that the cognitive load freed up by such devices ought to be capable of being diverted into more useful, more extensive areas – that we have the space to think about bigger and deeper things as a consequence of their application. This kind of argument frequently arises in relation to the value of automation, and can be seen, for example, in the discussions surrounding the use of structure-from-motion photogrammetric recording on archaeological excavations. But is this supposed release of cognitive space an unalloyed good? Or is it a case of the technologies distancing us from the physicality of the archaeological material and space in front of us?
Social media have been the focus of much attention in recent weeks over their unwillingness or tardiness in applying their own rules. Whether it’s Twitter refusing to consider Trump’s aggressive verbal threats against North Korea a violation of its harassment policy, YouTube belatedly removing a video by Logan Paul showing a suicide victim (Matsakis 2018, Meyer 2018), or US and UK governments attempting to hold Facebook and Twitter to account over ‘fake news’ (e.g. Hern 2017a, 2017b), there is a growing recognition not only that ‘we’ are the data for these social media behemoths, but that these platforms are optimised for this kind of abuse (Wachter-Boettcher 2017, Bridle 2017).
Nicholas Carr has just pointed to some recently published research suggesting that the presence of smartphones diverts our attention, using up cognitive resources that would otherwise be available for other activities, so that our performance on those non-phone-related activities suffers. In certain respects, this might not seem to be ‘news’ – we’re becoming increasingly accustomed to the problem of technological interruptions to our physical and cognitive activities: the way that visual and aural triggers signal new messages, new emails, new tweets arriving to distract us from the task in hand. However, this particular study was rather different.
In this case, the phones were put into silent mode so that participants would be unaware of any incoming messages, calls etc. (and if the phone was on the desk, rather than in their pocket or bag or in another room altogether, it was placed face-down to avoid any visible indicators) (Ward et al. 2017, 144). Despite this, they found that
“… the mere presence of one’s smartphone may reduce available cognitive capacity and impair cognitive functioning, even when consumers are successful at remaining focused on the task at hand” (Ward et al. 2017, 146).
Quartz, the digital news outlet, recently published an interview by Adrienne Matei with Peter Kahn, a psychology professor at the University of Washington. In it, they discuss how technology is affecting our lives and becoming a means to mediate the real world. The item references some of the research that Kahn and his colleagues at the Human Interaction with Nature and Technological Systems Lab (HINTS) have undertaken, aspects of which have direct relevance for understanding technology within archaeology. They raise issues such as the limitations of technological devices, questions of authenticity, changing perspectives, and what they call the ‘shifting baseline problem’, all of which have their echoes within digital archaeology.
For example, in one study they compared the experience of subjects given natural views through a window with that of subjects shown a real-time feed of the same scene on a large plasma screen (Kahn et al. 2008). The physiological recovery of subjects from low-level stress was faster with the glass window, while there was no difference between the display and a blank wall. Problems identified with the plasma display included the inability of viewers to change their perspective on outside objects by shifting their position (the parallax problem), as well as issues to do with pixelation and depth perception (Kahn et al. 2008, 198). They also report that subjects made judgments about what it means for a view to be ‘real’ as opposed to ‘represented’, and that these judgments fed back into the physiological and psychological system to affect the outcome of the experiment.
We’re accustomed to the fact that much archaeology is collaborative in nature: we work with, and rely on the work of, others all the time to achieve our archaeological ends. What we overlook, however, is the way in which much of what we do as archaeologists depends on invisible collaborators – people who are absent, distanced, even uninterested. These aren’t archaeologists working remotely and accessing the same virtual research environment as us in real time, although some of them may be archaeologists who developed the specialist software we have chosen to use. The majority are people we will never know, cannot know, who will themselves be ignorant of the context in which we have chosen to apply their products and, to compound things, will generally be unaware of each other. They are, quite literally, the ghosts in the machine.
Infrastructures are all around us. They make the modern world work – whether we’re thinking of gas, electricity, or water supply, telephony, fibre networks, road and rail systems, or organisations such as Google and Amazon. Infrastructures are also what we are building in archaeology. Data distribution systems have increasingly become an integral part of the archaeological toolkit, and the creation of a digital infrastructure – or cyberinfrastructure – underpins the set of grand challenges for archaeology laid out by Keith Kintigh and colleagues (2015), for example. But what are the consequences and challenges associated with these kinds of infrastructures? What are we knowingly or unknowingly constructing?
Patrik Svensson (2015) has pointed to a lack of critical work and an absence of systemic awareness surrounding the development of infrastructures within the humanities. While he points to archaeology as one of the more developed fields in infrastructural terms, this isn’t necessarily a ‘good thing’ in the light of his critique. As he says, “Humanists do not … necessarily think of what they do as situated and conditioned in terms of infrastructures” (2015, 337) and consequently:
“A real risk … is that new humanities infrastructures will be based on existing infrastructures, often filtered through the technological side of the humanities or through the predominant models from science and engineering, rather than being based on the core and central needs of the humanities.” (2015, 337).
As the end of 2014 approaches, Facebook has unleashed its new “Year in Review” app, purporting to show the highlights of your year. In my case, it did little other than demonstrate a more or less complete lack of Facebook activity on my part, apart from some conference photos a colleague had posted to my wall; in Eric Meyer’s case, it presented him with a picture of his daughter, who had died earlier in the year. In a thoughtful and thought-provoking piece, he describes this as ‘Inadvertent Algorithmic Cruelty’: it wasn’t deliberate on the part of Facebook (who have now apologised), and for many people it worked well, as evidenced by the numbers who opted to include it on their timelines, but it lacked an opt-in facility and what Meyer calls ‘empathetic design’. Om Malik picks up on this, pointing to the way Facebook now has an ‘Empathy Team’ apparently intended to make designers understand what it is like to be a user (sorry, a person). Yet Facebook’s ability to highlight what people see as important is driven by crude data such as the number of ‘likes’ and comments, without any understanding of the underlying meanings.