You might expect that a book which claims:
“The archaeological record is intrinsically digital, not in the sense that it turns digital once the data have been entered and processed, but, more radically, in the sense that it is by its very nature digital, in its genesis and its structure.” (Buccellati 2017, 232)
would pique the interest of any digital archaeologist. But strangely, that seems not to be the case: Giorgio Buccellati’s book currently appears to be unreviewed and largely unremarked upon. Two exceptions to this generalisation are Gavin Lucas and Bill Caraher. In his latest book, Gavin Lucas suggests that Buccellati’s characterisation of archaeology as natively digital is problematic (2019, 91), but the critique is limited as the book’s focus lies elsewhere, on textuality. In his response to Sara Perry and James Taylor’s ‘Theorising the Digital’ paper (2018), in which they point to the disconnect between the demonstrable impact of digital archaeology on archaeological method and its comparative lack of effect on archaeological theory, Bill Caraher suggests (2018) that Buccellati’s book represents a rare example of the interplay between digital theory and broader archaeological theory. So why does Buccellati argue that archaeology is natively digital? And is his characterisation of digitality useful to digital archaeology, as well as to archaeology more broadly?
This seemed fair enough at the time, if admittedly challenging. What I hadn’t appreciated, though, was the controversial nature of such a claim. For sure, in that piece I referred to Yoshua Bengio’s argument that we don’t understand human experts and yet we trust them, so why should we not extend the same degree of trust to an expert computer (Pearson 2016)? But it transpires that this is quite a common argument against claims that systems should be capable of explaining themselves, not least among senior Google scientists. For example, Geoff Hinton recently suggested in an interview that requiring an explanation of how your AI system works (as, for example, the GDPR does) would be a disaster.
Is everything in archaeology computable? Does digital archaeology reach to the furthest corners of archaeology, or are there limits to what can be addressed digitally or computationally? This was a question that came up at the CAA conference in Tübingen earlier this year: Juan Barceló argued that the term ‘digital’ was fundamentally associated with numbers and formal logics, and that this constituted an unrealised ‘dark side’ to digital archaeology which required us
… to rebuild the way we think about the connection from the past to the present using a new language – mathematics, geometry – that should allow new explanations. The trouble is that most researchers do not know the language nor the tool, and they are still linked to a traditional way of doing things. (Barceló 2018).
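Barceló’s point that the digital is at bottom numerical can be made quite literal. As a minimal sketch (the context description below is invented for illustration), even a ‘qualitative’ piece of free-text recording is, beneath the surface, nothing but a sequence of numbers:

```python
# Even a 'qualitative' piece of archaeological recording is stored numerically.
# The description is a made-up example, not drawn from any real dataset.
description = "Context 1042: dark silty fill of pit"

# Encoding the text exposes the integers the machine actually holds:
print(list(description.encode("utf-8"))[:10])
# -> [67, 111, 110, 116, 101, 120, 116, 32, 49, 48]
```

Whether that underlying numerical character should also shape our analytical language, as Barceló argues, is of course the contested question.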
Juan’s proposal was met with a mixed response: while some (admittedly a few) agreed that the future success of digital archaeology lay in this language of mathematics and its associated analytical methods (see, for example, Iza Romanowska’s (2018) abstract in the same conference session), others clearly did not recognise this view of digital archaeology at all. So why was the response so ambivalent?
Quartz, the digital news outlet, recently published an interview by Adrienne Matei with Peter Kahn, a psychology professor at the University of Washington. In it, they discuss how technology is affecting our lives and becoming a means to mediate the real world. The item references some of the research that Kahn and his colleagues at the Human Interaction with Nature and Technological Systems Lab (HINTS) have undertaken, aspects of which have direct relevance for understanding technology within archaeology. They raise issues such as the limitations of technological devices, questions of authenticity, changing perspectives, and what they call the ‘shifting baseline problem’, all of which have their echoes within digital archaeology.
For example, in one study, they compared the experience of subjects presented with a natural view through a window to that of subjects shown a real-time view of the same scene on a large plasma screen (Kahn et al. 2008). The physiological recovery of subjects from low-level stress was faster with the glass window, while there was no difference between the display and a blank wall. Problems identified with the plasma display included the inability of viewers to change their perspective on outside objects by shifting their position (the parallax problem), as well as issues of pixelation and depth perception (Kahn et al. 2008, 198). They also report that subjects made judgments about what it means for a view to be ‘real’ as opposed to ‘represented’, and that these judgments then fed back into the physiological and psychological system to affect the outcome of the experiment.
Infrastructures are all around us. They make the modern world work – whether we think of infrastructures in terms of gas, electricity or water supply, telephony, fibre networks, road and rail systems, or organisations such as Google and Amazon. Infrastructures are also what we are building in archaeology. Data distribution systems have increasingly become an integral part of the archaeological toolkit, and the creation of a digital infrastructure – or cyberinfrastructure – underpins the set of grand challenges for archaeology laid out by Keith Kintigh and colleagues (2015), for example. But what are the consequences and challenges associated with these kinds of infrastructures? What are we knowingly or unknowingly constructing?
Patrik Svensson (2015) has pointed to a lack of critical work and an absence of systemic awareness surrounding the developments of infrastructures within the humanities. While he points to archaeology as one of the more developed in infrastructural terms, this isn’t necessarily a ‘good thing’ in the light of his critique. As he says, “Humanists do not … necessarily think of what they do as situated and conditioned in terms of infrastructures” (2015, 337) and consequently:
“A real risk … is that new humanities infrastructures will be based on existing infrastructures, often filtered through the technological side of the humanities or through the predominant models from science and engineering, rather than being based on the core and central needs of the humanities.” (2015, 337).
In 2014 the European Union determined that a person’s ‘right to be forgotten’ by search engines such as Google was a basic human right, but it remains the subject of dispute. If requested, Google currently removes links to specific search results about an individual on any Google domain accessed from within Europe, and on any European Google domain wherever it is accessed from. Google is currently appealing against a proposed extension which would require the right to be forgotten to apply to searches across all Google domains regardless of location, so that something which might be perfectly legal in one country would be removed from sight because of the laws of another. Not surprisingly, Google sees this as a fundamental challenge to the accessibility of information.
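The mechanics of the dispute are easier to see when sketched out. The toy function below is my own simplification of the practice described above, not Google’s actual implementation, and the domain list is an illustrative subset:

```python
# Hypothetical sketch of jurisdiction-based delisting (not Google's code).
EU_DOMAINS = {"google.fr", "google.de", "google.co.uk"}  # illustrative subset

def link_visible(domain: str, accessed_from_europe: bool) -> bool:
    """Current practice: hide a delisted link on any European Google domain,
    and on any Google domain when it is accessed from within Europe."""
    return not (domain in EU_DOMAINS or accessed_from_europe)

print(link_visible("google.com", accessed_from_europe=True))   # False: hidden
print(link_visible("google.com", accessed_from_europe=False))  # True: the disputed case
```

The proposed extension would, in effect, remove both parameters: the link would be hidden everywhere, for everyone.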
As if the ‘right to be forgotten’ were not problematic enough, the EU has recently published its General Data Protection Regulation 2016/679, to be introduced from 2018, which places limits on the use of automated processing for decisions taken about individuals and requires explanations to be provided where an adverse effect on an individual can be demonstrated (Goodman and Flaxman 2016). This seems like a good idea on the face of it – shouldn’t a self-driving car be able to explain the circumstances behind a collision? Why wouldn’t we want a computer system to explain its reasoning, whether it concerns access to credit, the acquisition of an insurance policy, or the classification of an archaeological object?
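For contrast, here is a minimal sketch of what an ‘explainable’ classification of archaeological objects might look like, using an interpretable decision tree rather than a black-box model. The measurements, labels and thresholds are entirely invented:

```python
# A toy 'explainable' classifier: unlike a deep network, a fitted decision
# tree can print its own decision rules. All data here is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical object measurements: [length_cm, width_cm, weight_g]
X = [[12.0, 3.1, 45.0], [3.2, 2.9, 15.0], [11.5, 2.8, 50.0], [2.9, 2.5, 12.0]]
y = ["blade", "scraper", "blade", "scraper"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The tree's reasoning is legible as a set of if/then rules:
print(export_text(model, feature_names=["length_cm", "width_cm", "weight_g"]))
```

A deep neural network trained on the same task might well classify more accurately, but it could not articulate its reasoning in this way – which is precisely the tension the GDPR provisions expose.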
[To interrupt the blogging hiatus, here’s the introduction to a recently published paper …]
Since the mid-1990s the development of online access to archaeological information has been revolutionary. Easy availability of data has changed the starting point for archaeological enquiry, and the openness, quantity, range and scope of online digital data have long since passed the tipping point at which online access became useful, even essential. However, this transformative access to archaeological data has not itself been examined in a critical manner. Access is good, exploitation is an essential component of preservation, openness is desirable, comparability is a requirement, but what are the implications for archaeological research of this flow – some would say deluge – of information?
In an earlier post I wrote about the importance of understanding the legibility, agency and negotiability of archaeological data as we increasingly depend on online data delivery as the basis for the archaeologies we write and especially as those archaeologies show signs of being partly written by the delivery systems themselves.
A simple illustration of this is the idea of filter bubbles. The term was coined in 2011 by Eli Pariser to describe the way in which search algorithms selectively return results depending on what they know about the person asking the question. It’s an idea previously flagged by, amongst others, Jaron Lanier, who wrote about ‘agents of alienation’ in 1995, but it came to the fore with the recognition of the personalisation of Google results and Facebook feeds (avoiding such personalisation is a key selling point of the alternative search engine DuckDuckGo, for example). So can we see this happening with archaeological data? Perhaps not to the extent described by Pariser, Lanier and others, but still …
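To make the mechanism concrete: a filter bubble needs nothing more exotic than a re-ranking step keyed to a stored profile. This toy sketch is my own illustration, with invented data, and stands in for no real delivery system:

```python
# Two users issue the same 'query' but see different orderings, because
# results are re-scored against each user's recorded interests.
def personalised_rank(results, interests):
    """Sort results so those matching the user's interests come first."""
    return sorted(results,
                  key=lambda r: sum(tag in interests for tag in r["tags"]),
                  reverse=True)

results = [
    {"title": "Roman villa excavation report", "tags": {"roman", "excavation"}},
    {"title": "Neolithic pottery typology", "tags": {"neolithic", "pottery"}},
]

print([r["title"] for r in personalised_rank(results, {"roman"})])
print([r["title"] for r in personalised_rank(results, {"neolithic"})])
```

Each user’s list looks complete and neutral; nothing signals that someone else, asking the same question, would see it the other way up.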
As the end of 2014 approaches, Facebook has unleashed its new “Year in Review” app, purporting to show the highlights of your year. In my case, it did little other than demonstrate a more or less complete lack of Facebook activity on my part, beyond some conference photos a colleague had posted to my wall; in Eric Meyer’s case, it presented him with a picture of his daughter, who had died earlier in the year. In a thoughtful and thought-provoking piece, he describes this as ‘Inadvertent Algorithmic Cruelty’: it wasn’t deliberate on the part of Facebook (who have now apologised), and for many people it worked well, as evidenced by the numbers who opted to include it on their timelines, but it lacked an opt-in facility and there was an absence of what Meyer calls ‘empathetic design’. Om Malik picks up on this, pointing out that Facebook now has an ‘Empathy Team’ apparently intended to help designers understand what it is like to be a user (sorry, a person), although Facebook’s ability to highlight what people see as important is driven by crude data such as the number of ‘likes’ and comments, without any understanding of the underlying meanings.
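Meyer’s and Malik’s point about crude signals can be caricatured in a few lines. This is my own toy sketch, emphatically not Facebook’s actual algorithm, and the posts are invented:

```python
# 'Highlights' chosen purely by engagement counts, with no sense of meaning.
def year_in_review(posts, top_n=1):
    return sorted(posts, key=lambda p: p["likes"] + p["comments"],
                  reverse=True)[:top_n]

posts = [
    {"text": "Holiday photos!", "likes": 40, "comments": 5},
    {"text": "Some very sad family news.", "likes": 90, "comments": 60},
    {"text": "New job announcement", "likes": 30, "comments": 10},
]

# The saddest post 'wins', because condolences count as engagement:
print(year_in_review(posts))
```

Nothing in the data distinguishes celebration from grief; the algorithm can only see that people responded.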
Emma Bryce (2014) has recently written about her autistic brother’s interest in technology – something that is quite commonly associated with folk on the spectrum. I deliberately wound up a conference audience some years ago by characterising computer usage amongst archaeologists as fetishistic, but I’m not about to claim that digital archaeologists are autistic. However, one phrase at the end of her article jumped out at me: that regardless of where we are, on or off the spectrum, we all use technology as a form of comfort and security.
“By its very structure, technology invites us to practice repetitive behaviours and keep familiar habits alive. It transports us to places we feel comfortable…”