Slow AI

[‘Slow Down’ road sign. Image by Tristan Schmurr (CC 2.0)]

None of us can be unaware of the hype surrounding artificial intelligence at the moment. We see multi-billion-dollar companies being built around generative AI, while older, more established companies place multi-billion-dollar bets on its development to ensure they don’t miss out. Many of the chief executives of those same companies have warned of the imminent danger of artificial general intelligence, when human cognition is surpassed by machines, arguing for greater regulation while at the same time investing in attempts to develop precisely what they warn about. Others are more alarmist still: Yuval Harari, the historian and writer, dramatically claimed last year that “What nukes are to the physical world…AI is to the virtual and symbolic world”, and that artificial intelligence is an alien threat potentially resulting in the end of human history. We’re told we have to take the welfare rights of artificial intelligence seriously because it will become conscious, or at least robustly agentic, within the next ten years (Long et al. 2024), even though a Google engineer was fired in 2022 for claiming its AI technology had become sentient and had a soul. Even the notorious failures – the tendencies of ChatGPT and its ilk to hallucinate and simply make things up, for instance – do little more than introduce a brief pause while such predispositions are inevitably programmed around.

It’s easy for us to get caught up in this drive for artificial intelligence – perhaps through enthusiasm for the exciting potentiality of the tools, or through fear of missing out, or because we simply feel powerless in the face of the inevitable AI onslaught. But a long time ago, the philosopher and media ecologist Marshall McLuhan observed that “there is absolutely no inevitability as long as there is a willingness to contemplate what is happening” (McLuhan 1967, 25). The problem is that contemplating what is happening, thinking through some of the implications of artificial intelligence for archaeology, is not necessarily a simple thing to do amidst the very developments we seek to analyse. McLuhan recognised this, emphasising the importance of considering not only the figure – what appears obvious to us, what jumps out at us – but also the ground, the perceptual and implicit impacts of a technology which tend to be overlooked.

Or, to think of it a different way, Lars Hallnäs and Johan Redström (2001) emphasised the value of a slow rather than a fast approach to technology: one that highlights the importance of understanding not just how something works but the consequences of its use, with slowness serving to expose the technology and encourage reflection. The slow approach has seen some application in archaeology in recent years. Most notable is Bill Caraher’s ‘slow archaeology’ (2019), which he saw as reinstating a focus on craft practice in fieldwork in the face of an emphasis on digital tools as a means of improving efficiency. Linked to this is the characterisation, by myself and others, of ‘slow data’ (Huggett 2022), which emphasises the performativity and creativity of data, offering a mindful and humanistic alternative to the perception of data as scientific givens and – especially relevant in the context of artificial intelligence methods – resisting the idea that data can speak for itself. Similarly, Marko Marila (2019) has written about the importance of decelerating archaeology in order to reveal the intellectual, technological, and social processes that become concealed through methodological streamlining – which he identifies as including an emphasis on computational methods, statistics, and so on – again relevant in relation to the growth of artificial intelligence applications.

It’s important, therefore, to contemplate what a deceleration of artificial intelligence in archaeology might reveal: to adopt a slow approach in the face of fast and furious artificial intelligence technologies. One way of doing so is to focus on the implications of three specific and linked areas: the agency of AI, the affordances introduced by AI, and the accountability of AI.

What is our relationship with these technologies? And what is their relationship with us? This question of the relationships between humans and things, and the ways in which they interact with and act on each other, is something archaeologists have investigated through the concept of agency (e.g. Huggett 2021). How much agency are we prepared to cede to machines? How much control do we want to keep over the tools we employ? How much machine input into the creation of archaeological knowledge do we accept? Indeed, does machine sense-making at that level constitute archaeological knowledge in the first place?

If agency is broadly defined as the capacity to act in an intentional, meaningful way, then affordances are the opportunities for action presented by any particular thing. Particularly relevant in the context of artificial intelligence are the ways in which technologies can extend the human capacity to act and perform. However, thinking only of affordances and embracing the potentialities they represent is a mistake: we also need to consider the disaffordances, the ways in which things can constrain action rather than facilitate it. Such consideration is less common, in part because of the irresistible attraction of the affordances offered by digital technologies, but also because disaffordances may only become apparent some time later – there is frequently a lag between implementing a new technology and becoming aware of the constraints or limitations it imposes. The situation is more complex still, since there is no simple binary division between affordances and disaffordances. There is no straightforward right or wrong, good or bad, advantage or disadvantage: it is more often a question of balance. But understanding the affordances and disaffordances – not simply the practical or technical ones but the social and political ones too – enables us to be clearer about what we might exploit, and at the same time what we might need to resist, when it comes to these technologies.

To a degree, where we stand in relation to the agency of these devices will determine which of the affordances and disaffordances we focus upon, and which we choose to set aside – and the operative word here is ‘choose’: it is something we should do knowledgeably, not by conforming to external pressures and expectations or simply drifting with the flow. This matters because artificial intelligence can influence the way in which we think about and approach archaeology. For instance, several observers in recent years have suggested that we are witnessing a resurgence of empiricism in archaeology, in which data – and big data in particular – are incorporated along with scientific and digital methodologies into what Tim Flohr Sørensen has suggested is a new empiricism that sees archaeological knowledge as constituted from quantitative data and absolute facts (2017, 101). Archaeology is not unique in this respect: in the context of the recent Nobel Prizes in Physics and Chemistry awarded for work on different aspects of artificial intelligence, Xiao-Li Meng (2024, 6) observed that

The rapid rise of machine learning and generative AI has vastly expanded our ability to seek, simulate, and synthesize patterns – often without any deep understanding of their underlying generative mechanisms, assuming such mechanisms even exist. … AI’s pattern-seeking paradigm has effectively broken an epistemic barrier, earning genuine respect from the hard sciences without providing the reasoning and explanations that scientific fields typically demand.

This lays bare the challenge: are we as archaeologists content to allow this epistemic barrier to be broken and to permit reasoning and explanations to be black-boxed, or should they be transparently available? Should our expectations of accountability grow as artificial intelligence assumes (or is allowed) greater agency?

Determining the nature of that accountability, and how it might be operationalised, depends in part on the degree of agency we allocate. Trust is something we increasingly have to resort to in situations where we are unable to evaluate a system (whether through its opacity, our lack of knowledge or expertise, and so on), especially where it is allowed to operate more or less autonomously as a black box. This raises a problem: Meghan Dennis, for example, in her study of archaeological digital ethics, has argued that using a digital tool whose workings or processes cannot be understood by the user is inherently unethical (Dennis 2020, 215). Similarly, Inkeri Koskinen (2023, 7-8) sees the black-boxing of artificial intelligence applications as undermining trust, since:

… the scientists who use them are epistemically dependent on the instruments, but there is no accountable agent who could fully grasp how the instrument works and therefore be able to take full responsibility of it, and thus no one the scientist could trust, in a rationally grounded way, to have all the relevant knowledge.

This harks back to Xiao-Li Meng’s earlier comment about the breaking of the epistemic barrier. Of course, explainable AI is often seen as desirable, if difficult to achieve, but even if an artificial intelligence can be made to explain itself, can its explanation be trusted, or might its self-justification simply be another hallucination?

James Taylor and Nicolò Dell’Unto’s ‘skeuomorphic’ model of archaeological practice (2021) emphasises the importance of negotiating the place of new technologies within existing practice, while recognising that some elements of existing practice may cease to be fit for purpose and will benefit from change. On the basis of their model, artificial intelligence approaches in archaeology are still at an early stage of adoption, sitting in the area of emulation of existing practices (for instance, feature identification from imagery, pottery identification, lithic classification, and so on). Reassuringly, perhaps, this places them within their category of ‘concerns for efficiency and epistemological integrity acting as a constraint on the introduction of new technology’ (Taylor and Dell’Unto 2021, 487). They talk of the need to test a new technology – “can it at least do what we already do as a baseline?” – and to challenge it – “what happens when it breaks?” (2021, 496) – in a cycle of implementation accompanied by reflexive critique if that technology is ultimately to transform our practice. That reflexive critique needs to properly consider the agency, the affordances, and the accountability that we allow and require of our devices as we develop and introduce them into our practice. Along the way, this will improve the chances that we resist the excesses of the hype expressed in the wider world while avoiding the onset of disillusionment and another archaeological AI winter. That is the challenge for artificial intelligence in archaeology as it looks to find its place within our discipline.

[This post is based on part of a keynote presentation given to the Archaeo-Informatics 2024 ‘Use and Challenges of AI in Archaeology’ conference. My thanks to Lutgarde Vandeput and Nurdan Atalan Çayırezmez and colleagues for the invitation to speak.]

References

Caraher, W. (2019). Slow Archaeology, Punk Archaeology, and the ‘Archaeology of Care’. European Journal of Archaeology, 22(3), 372–385. https://doi.org/10.1017/eaa.2019.15

Dennis, L. M. (2020). Digital Archaeological Ethics: Successes and Failures in Disciplinary Attention. Journal of Computer Applications in Archaeology, 3(1), 210–218. https://doi.org/10.5334/jcaa.24

Hallnäs, L., & Redström, J. (2001). Slow Technology – Designing for Reflection. Personal and Ubiquitous Computing, 5(3), 201–212. https://doi.org/10.1007/PL00000019

Huggett, J. (2021). Algorithmic Agency and Autonomy in Archaeological Practice. Open Archaeology, 7(1), 417–434. https://doi.org/10.1515/opar-2020-0136

Huggett, J. (2022). Is Less More? Slow Data and Datafication in Archaeology. In K. Garstki (Ed.), Critical Archaeology in the Digital Age (pp. 97–110). UCLA Cotsen Institute of Archaeology Press. https://escholarship.org/uc/item/0vh9t9jq#page=112

Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 38(4), 458–475. https://doi.org/10.1080/02691728.2023.2286253

Long, R., Sebo, J., Butlin, P., Finlinson, K., Fish, K., Harding, J., Pfau, J., Sims, T., Birch, J., & Chalmers, D. (2024). Taking AI Welfare Seriously (arXiv:2411.00986). arXiv. https://doi.org/10.48550/arXiv.2411.00986

Marila, M. M. (2019). Slow Science for Fast Archaeology. Current Swedish Archaeology, 27, 93–114. https://doi.org/10.37718/CSA.2019.05

McLuhan, M. (1967). The Medium is the Massage: An Inventory of Effects. Gingko Press.

Meng, X.-L. (2024). AI Has Won Nobel Prizes in Hard Science: Can Humans Be Smarter—and Softer on Each Other? Harvard Data Science Review, 6(4). https://doi.org/10.1162/99608f92.71926d9f

Sørensen, T. F. (2017). The Two Cultures and a World Apart: Archaeology and Science at a New Crossroads. Norwegian Archaeological Review, 50(2), 101–115. https://doi.org/10.1080/00293652.2017.1367031

Taylor, J., & Dell’Unto, N. (2021). Skeuomorphism in Digital Archaeological Practice: A Barrier to Progress, or a Vital Cog in the Wheels of Change? Open Archaeology, 7(1), 482–498. https://doi.org/10.1515/opar-2020-0145