Cory Doctorow recently coined the term ‘enshittification’ in relation to digital platforms, which he defines as the way in which a platform starts by maximising benefits for its users and then, once they are locked in, switches attention to building profit for its shareholders at the expense of the users, before (often) entering a death-spiral (Doctorow 2023). He sees this applying to platforms from Amazon and Facebook to Twitter, TikTok, Reddit, and Steam as they monetise their platforms and become less user-focused in a form of late-stage capitalism (Doctorow 2022; 2023). As he puts it:
… first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. (Doctorow 2023).
For instance, Harford (2023) points to the way that platforms like Amazon run at a loss for years in order to grow as fast as possible and make their users dependent upon the platform. Subsequent monetisation of a platform can be a delicate affair, as currently evidenced by the travails of Musk’s Twitter and the increasing numbers of people overcoming the inertia of the walled garden and moving to other free alternatives such as Mastodon, Bluesky, and, most recently, Threads. The vast amounts of personal data collected by commercial social media platforms strengthen their hold over their users, a key feature of advanced capitalism (e.g., Srnicek 2017), making it difficult for users to move elsewhere and also raising concerns about privacy and the uses to which such data may be put. Harford (2023) emphasises the undesirability of such monopolisation and the importance of building in interoperability between competing systems to allow users to switch away as a means of combatting enshittification.
Sometimes words or phrases are coined that seem very apposite in that they appear to capture the essence of a thing or concept and quickly become a shorthand for the phenomenon. ‘Digital twin’ is one such term, increasingly appearing in both popular and academic use with its meaning seemingly self-evident. The idea of a ‘digital twin’ carries connotations of a replica, a duplicate, a facsimile, the digital equivalent of a material entity, and conveniently summons up the impression of a virtual exact copy of something that exists in the real world.
For example, there was a great deal of publicity surrounding the latest 3D digital scan of the Titanic, created from 16 terabytes of data, 715,000 digital images and 4K video footage, and having a resolution capable of reading the serial number on one of the propellers. The term ‘digital twin’ was bandied around in the news coverage, and you’d be forgiven for thinking it simply means a high-resolution digital model of a physical object, although the Ars Technica article hints at the possibility of using it in simulations to better understand the breakup and sinking of the ship. The impression gained is that a digital twin can simply be seen as a digital duplicate of a real-world object, and the casual use of the term would seem to imply little more than that. By this definition, photogrammetric models of excavated archaeological sections and surfaces would presumably qualify as digital twins of the original material encountered during the excavation, for instance.
Digital tools increasingly permeate our world, supporting, enhancing, or replacing many of our day-to-day activities in archaeology as elsewhere. Many of these devices lay claim to being ‘smart’, even intelligent, though more often than not this has more to do with sleight of hand and invisible software functionality than any actual intelligence. As Ian Bogost has recently observed, the key characteristic of these so-called smart devices is not intelligence so much as online connectivity, the realisation of which brings with it external surveillance and data-gathering (Bogost 2022). Such perceptions of ‘smartness’ might also point to a tendency for us to overestimate the capabilities of digital tools while at the same time minimising their influence.
In this light, I came across an interesting quotation from an anonymous archaeologist cited in Smiljana Antonijević’s book Amongst Digital Humanists: An Ethnographic Study of Digital Knowledge Production who said:
In archaeology, digital technologies such as GIS applications, laser scanning, or databases have been used for decades, and they are as common as a trowel or any other archaeological tool. (Antonijević 2016, 49).
We’ve all experienced that rush of recollection when we uncover some long-hidden or long-lost object from our past in the bottom of a drawer or box, triggering memories of encounters, activities, people, and places. We’re accustomed to the idea that we use evocative things as stored memories, deliberately or inadvertently, and as distributed extensions of our embodied memory (e.g. Heersmink 2018). Is it the same with digital objects? For example, van Dijck asks:
Are analog and digital objects interchangeable in the making, storing, and recalling of memories? Do digital objects change our inscription and remembrance of lived experience, and do they affect the memory process in our brains? (2007, xii).
Perhaps it’s a neurosis brought on by the contemplation of my excavation backlog, but I think there is a difference: that not all analog objects are equally interchangeable with digital equivalents in terms of their functioning as distributed memories, and that this difference is significant when we consider the archaeological narratives we are able to construct from our digital records. It may be that this perspective is coloured by the physical nature of my backlog from the 1980s and 1990s, which for various reasons sits on the cusp of analog/digital recording. Although Ruth Tringham recalls how in the 1980s the digital recording of hitherto paper records was distrusted (Tringham 2010, 87), not least due to concerns about the fragility of the hardware and the impermanence of the product, in my case it was rather more prosaic: as someone working with computers full-time in my day job, I had no desire to turn my excavation experience into a busman’s holiday as the on-site computer technician. The downside was that I subsequently gave myself the monumental task of manually entering the record sheets into the database and scanning/digitising the plans and sections in the off-season. In retrospect, however, this provides the opportunity to consider the different affordances of the two sets of analog and digital records, a perception reinforced by the pre-pandemic experience of packing my office, which involved two days of sorting and moving the physical archive and about five minutes transferring the digital files.
“research data which has not been shared or published by any means and is thus in contravention of the ‘FAIR’ principles which require data to be Findable, Accessible, Interoperable and Reusable”.
Although the DPC jury hopes that this is a small group, I rather suspect that there is an unseen mountain of unpublished research data in archaeology (and in the interest of full disclosure: reader, I have some).
This crossed my screen at the same time as a paper published in the Harvard Data Science Review by Stephen Stigler: ‘Data Have a Limited Shelf Life’, in which he argues that data, unlike wines, do not improve with age. He suggests that old data are “Often … no more than decoration; sometimes they may be misleading in ways that cannot easily be discovered”, while emphasising this is not the same as saying they have no value. Using three examples of old statistical data, he shows how misleading and incomplete they can be if their full background is not known. In each case, the data were selected from a prior source, not always accurately referenced, if at all. In some instances, uncovering the original data flagged problems with the sample that had been taken; in others it revealed a greater breadth and depth of information which had gone unused because the particular research question had stripped them away.
I recently published a paper, ‘Resilient Scholarship in the Digital Age’, which looked at the tensions between digital practice and academic labour (Huggett 2019). My focus was on the nature of academic experience within the modern university and the way in which the professional and personal life of the university academic is influenced by the digital technologies which enable and support the neoliberal commodification and commercialisation of universities (at least in the UK, North America and Australasia). It was a difficult paper to write, not least because of a strong personal interest and involvement, but also because of the way it ranged across digital sociology, the sociality of labour, resilience theory, management theory, feminist and Marxist theory, and so on, most of which was entirely new to me.
The referees were very positive in their comments (thankfully!), but one particular observation they made was that in focussing on university academia, I overlooked the implications for archaeological scholarship more widely, given that much of it occurs within the realms of Cultural Resource Management and related contract work, within governmental departments and non-governmental agencies, as well as within community initiatives. This is certainly true, as is underlined in the periodic surveys of archaeological employment in the UK (e.g. Aitchison 2019). However, in my response to the editors I argued that this was too broad a definition of scholarship for the scope of this particular paper, and, perhaps more importantly, would require a level of knowledge about the scholarly experience outside the university environment that I simply didn’t have – it’s some 30 years since I worked in contract archaeology, for example. Other people are better qualified than I to discuss scholarship in these working contexts.
Discussion of digital ethics is very much on trend: for example, the Proceedings of the IEEE special issue on ‘Ethical Considerations in the Design of Autonomous Systems’ has just been published (Volume 107, Issue 3), and the Philosophical Transactions of the Royal Society A published a special issue on ‘Governing Artificial Intelligence – ethical, legal and technical opportunities and challenges’ late in 2018. In that issue, Corinne Cath (2018, 3) draws attention to the growing body of literature surrounding AI and ethical frameworks and to debates over laws governing AI and robotics across the world, and points to an explosion of activity in 2018, with a dozen national strategies published and billions in government grants allocated. She also notes that many of the leaders in both the debates and the technologies are based in the USA, which itself presents an ethical issue in terms of the extent to which AI systems mirror US culture rather than socio-cultural systems elsewhere around the world (Cath 2018, 4).
Agential devices, whether software or hardware, essentially extend the human mind by scaffolding or supporting our cognition. This broad definition therefore runs the gamut of digital tools and technologies, from digital cameras to survey devices (e.g. Huggett 2017), through software supporting data-driven meta-analyses and their incorporation in machine-learning tools, to remotely controlled terrestrial and aerial drones, remotely operated vehicles, autonomous surface and underwater vehicles, and lab-based robotic devices and semi-autonomous bio-mimetic or anthropomorphic robots. Many of these devices augment archaeological practice, reducing routinised and repetitive work in the office environment and in the field. Others augment work through data-driven methods which represent, store, and manipulate information in order to undertake tasks previously thought to be uncomputable or incapable of being automated. In the process, each raises ethical issues of various kinds. Whether agency can be associated with such devices may be questioned on the basis that they have no intent, responsibility, or liability, but I would simply suggest that anything we ascribe agency to acquires agency, especially bearing in mind the human tendency to anthropomorphise our tools and devices. What I am not suggesting, however, is that these systems have a mind or consciousness themselves, which represents a whole different set of ethical questions.
One might expect that a claim such as:

“The archaeological record is intrinsically digital, not in the sense that it turns digital once the data have been entered and processed, but, more radically, in the sense that it is by its very nature digital, in its genesis and its structure.” (Buccellati 2017, 232)
would pique the interest of any digital archaeologist. But strangely, that seems not to be the case: Giorgio Buccellati’s book appears to be currently unreviewed and largely, it seems, unremarked upon. Two exceptions to this generalisation are Gavin Lucas and Bill Caraher. In his latest book, Gavin Lucas suggests that Buccellati’s characterisation of archaeology as natively digital is problematic (2019, 91), but the critique is limited as the book’s focus lies elsewhere, on textuality. In his response to Sara Perry and James Taylor’s ‘Theorising the Digital’ paper (2018), in which they point to the disconnect between the demonstrable impact of digital archaeology on archaeological method and its comparative lack of effect on archaeological theory, Bill Caraher suggests (2018) that Buccellati’s book represents a rare example of the interplay between digital theory and broader archaeological theory. So why does Buccellati argue that archaeology is natively digital? And is his characterisation of digitality useful to digital archaeology, as well as to archaeology more broadly?
Some time ago, David Berry introduced the term ‘infrasomatization’ (Berry 2016), which he defines as the production of constitutive infrastructures: specifically, the way that digital algorithms are deployed to change existing infrastructures, and how they alter rationalities by introducing computational interdependencies and structural brittleness into our systems (Berry 2018). He has now coined another new term: the Datanthropocene, the data-intensive society. Closely linked to ‘big data’ approaches and data-intensive science, the Datanthropocene, he suggests, “creates new economic structures but also new social realities and data-intensive subjectivities and hence new problems for society to negotiate”.
Of course, debates continue about the Anthropocene, not least whether or not it can even be defined as a specific epoch – does it start with the atomic era, for instance, or maybe even with the introduction of agriculture, or is it primarily associated with human-created climate change, pollution and extinctions?
Is everything in archaeology computable? Does digital archaeology reach to the furthest corners of archaeology, or are there limits to what can be addressed digitally or computationally? This was a question that came up at the CAA conference in Tübingen earlier this year: Juan Barceló argued that the term ‘digital’ was fundamentally associated with numbers and formal logics, and that this constituted an unrealised ‘dark side’ to digital archaeology which required us
… to rebuild the way we think about the connection from the past to the present using a new language – mathematics, geometry – that should allow new explanations. The trouble is that most researchers do not know the language nor the tool, and they are still linked to a traditional way of doing things. (Barceló 2018).
Juan’s proposal was met with a mixed response: while some (a few) agreed that the future success of digital archaeology lay in this language of mathematics and its associated analytical methods (see, for example, Iza Romanowska’s (2018) abstract in the same conference session), others clearly did not recognise this view of digital archaeology at all. So why was the response so ambivalent?