AI and its Affordances

Original image by Mike MacKenzie (CC BY 2.0)

Putting aside the hype associated with artificial intelligence (and generative AI in particular), there’s a great deal of talk surrounding the application of AI within archaeology, with numerous examples of AI approaches to automated classification, feature recognition, image analysis, and so on. But much of it seems fairly uncritical – perhaps because it is largely associated with exploratory and experimental approaches seeking to find a place for AI, although there is also something of a publication/presentation bias favouring positive outcomes (e.g., see Sobotkova et al. 2024, 7). AI is typically presented as offering greater efficiencies and productivity gains, and any negative effects, where noted, are treated as capable of being managed out of the equation. In many respects this conforms to the broader context of AI, where it is often seen as possessing an almost mythical, ideological status, sold by large organisations as the solution to everything from overflowing mailboxes to the climate crisis.
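
By way of illustration, the sketch below shows the kind of transfer-learning image classifier that typically underlies such automated classification studies. It is a minimal sketch only: the `sherds` image folder and its class labels are hypothetical, and the details (backbone, training regime, evaluation) vary widely from study to study.

```python
# Minimal sketch of the kind of automated image classification often
# reported in the archaeological AI literature: transfer learning with a
# pretrained CNN. The 'sherds' data directory and its labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: sherds/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("sherds", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a pretrained backbone; retrain only the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single epoch, for brevity
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()
```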

Cobb’s survey of generative AI in archaeology (Cobb 2023) focusses on key application areas where genAI may be of meaningful use: research (e.g., Spennemann 2024), writing, illustration (e.g., Magnani and Clindaniel 2023), teaching, and programming (e.g., Ciccone 2024), although Cobb sees programming as the most useful, with mixed results elsewhere. However, Spennemann (2024, 3602-3) predicts that genAI will affect how cultural heritage is documented, managed, practised, and presented, primarily through its provision of analysis and decision-making tools. Likewise, Magnani and Clindaniel (2023, 459) boldly declare genAI to be a powerful illustrative tool for depicting and interpreting the past, enabling multiple perspectives and reinterpretations, but at the same time admit that

Continue reading

Slow AI

Image by Tristan Schmurr (CC BY 2.0)

None of us can be unaware of the hype surrounding artificial intelligence at the moment. We see multi-billion-dollar companies being built around generative AI, while older, more established companies place multi-billion-dollar bets on its development to ensure they don’t miss out. Many of the chief executives of those same companies have warned of the imminent danger of artificial general intelligence, the point at which human cognition is surpassed by machines, arguing for greater regulation while at the same time investing in attempts to develop precisely what they warn about. Others are more alarmist still: Yuval Harari, the historian and writer, dramatically claimed last year that “What nukes are to the physical world…AI is to the virtual and symbolic world”, and that artificial intelligence is an alien threat potentially resulting in the end of human history. We’re told we have to take the welfare rights of artificial intelligence seriously because it will become conscious, or at least robustly agentic, within the next ten years (Long et al. 2024), even though a Google engineer was fired in 2022 for claiming the company’s AI technology had become sentient and had a soul. Even the notorious failures – the tendencies of ChatGPT and its ilk to hallucinate and simply make things up, for instance – do little more than introduce a brief pause while such predispositions are inevitably programmed around.

Continue reading

The Data Interface

Detail from Woman at a Window by Caspar David Friedrich (1822)

We understand knowledge construction to be social and combinatorial: we build on the knowledge of others, we create knowledge from data collected by ourselves and others, and so on. Although we pay a lot of attention to the processes behind the collection, recording, and archiving of our data, and are concerned about ensuring its findability, accessibility, interoperability, and reusability into the future, we pay much less attention to the technological mediation between ourselves and those same data. How do the search interfaces which we customarily employ in our archaeological data portals influence our use of them, and consequently affect the knowledge we create through them? How do they both enable and constrain us? And what are the implications for future interface designs?
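
As a purely hypothetical illustration of that double-edged mediation, consider a portal whose search interface exposes a fixed set of facets: the facets make some questions easy to ask while rendering others impossible to express at all. The sketch below invents its facet names for the purpose and does not represent any real portal’s API.

```python
# Hypothetical sketch (not any real portal's API) of how a faceted search
# interface both enables and constrains enquiry: only the fields the
# designers chose to expose can be asked about at all.
ALLOWED_FACETS = {"period", "monument_type", "county"}  # fixed by the interface

def build_query(free_text: str, **facets: str) -> dict:
    """Build a portal query from free text plus the permitted facets."""
    unknown = set(facets) - ALLOWED_FACETS
    if unknown:
        # Questions falling outside the designed facets cannot be posed.
        raise ValueError(f"unsupported facet(s): {', '.join(sorted(unknown))}")
    return {"q": free_text, "filters": facets}

print(build_query("enclosure", period="Neolithic", county="Wiltshire"))
# build_query("enclosure", excavation_method="open area")  -> ValueError
```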

As if to underline the lack of attention paid to interfaces, it’s often difficult to trace their history and development. It’s not something that infrastructure providers tend to be particularly interested in, and the Internet Archive’s Wayback Machine doesn’t capture interfaces that use dynamically scripted pages, writing off the visual history of the first ten years or more of the Archaeology Data Service’s ArchSearch interface, for example. The focus is, perhaps inevitably, on maintaining the interfaces we do have and looking forward to developing the next ones, but with relatively little sense of their history. Interfaces are all too often treated as transparent, transient – almost disposable – windows on the data they provide access to.
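
For anyone wanting to check this for themselves, the Wayback Machine’s CDX API can be queried to list what was actually captured for a given address. A minimal sketch follows; the URL pattern is illustrative, and dynamically generated search pages will typically be sparse or absent in the results.

```python
# A minimal sketch using the Internet Archive's CDX API to list what the
# Wayback Machine actually holds for a given address.
import requests

resp = requests.get(
    "https://web.archive.org/cdx/search/cdx",
    params={
        "url": "archaeologydataservice.ac.uk/archsearch*",  # prefix match
        "output": "json",
        "fl": "timestamp,original,statuscode",
        "limit": "25",
    },
    timeout=30,
)
rows = resp.json()
for timestamp, original, status in rows[1:]:  # first row is the field header
    print(timestamp, status, original)
```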

Continue reading

Is Now the Winter of AI Discontent?

Snow-covered road at Kanangra-Boyd National Park, NSW, Australia. Image by Toby Hudson (CC BY-SA 3.0), via Wikimedia Commons

With Google’s introduction of ‘AI Overviews’ beginning to replace its traditional search engine, Apple launching its ‘Apple Intelligence’ system embedded in its latest variants of iOS, Adobe incorporating an AI Photo Editor in Photoshop, and so on, it’s fair to say that artificial intelligence – in the form of generative AI, at least – is infiltrating many of the digital tools and resources we are accustomed to relying upon. While many embrace such developments uncritically, others question whether they are desirable or even useful. Indeed, John Naughton (2023) suggests that we are currently in the euphoric stage of AI development and adoption, which he predicts will soon be followed by a period of profit-taking before the AI bubble bursts.

In many ways, we’ve been here before. Haigh (2023, 35) describes AI as “… born in hype, and its story is usually told as a series of cycles of fervent enthusiasm followed by bitter disappointment”, and similarly,

Continue reading

Faith, Trust, and Pixie Dust

Adapted from original image by Kumar’s Edit (CC BY 2.0)

It’s been some time since I last blogged, largely because my focus has lain elsewhere in recent months, writing long-form pieces for more traditional outlets. The most recent of these considers the question of trust in digital things, a topic spurred by the recent (and ongoing) scandal surrounding the Post Office Horizon computer system here in the UK, which saw hundreds of people falsely convicted of theft, fraud, and false accounting. One of the things that came to the fore as a result of the scandal was the way that English law presumes the reliability of a computer system:

In effect, the ‘word’ of a computational system was considered to be of a higher evidential value than the opinion of legal professionals or the testimony of witnesses. This was not merely therefore a problem with digital evidence per se, but also the response to it. (McGuire and Renaud 2023: 453)

Continue reading

Discovery Machines

Adapted from the original by Brian J. Matis (CC BY-NC-SA 2.0)

Michael Brian Schiffer is perhaps best known (amongst archaeologists of a certain age in the UK, at least) for his development of behavioural archaeology, which examined the changing relationships between people and things in response to the processual archaeology of Binford et al. (Schiffer 1976; 2010), and for his work on the formation processes of the archaeological record (Schiffer 1987). But Schiffer also has an extensive track record of work on archaeological (and behavioural) approaches to modern technologies and technological change (e.g., Schiffer 1992; 2011), which receives little attention in the digital archaeology arena. This is in part because, despite his interest in a host of other electrical devices involved in knowledge creation (e.g., Schiffer 2013, 81ff), he has little to say about computers beyond observing their use in modelling and simulation, or as an example of an aggregate technology constructed from multiple technologies and having a generalised functionality (Schiffer 2011, 167-171).

In his book The Archaeology of Science, Schiffer introduces the idea of the ‘discovery machine’. In applying such an apparatus,

Continue reading

Data Archives as Digital Platforms

From Cory Doctorow’s article, based on a 1936 original drawing by Wanda Gág for ‘Hansel and Gretel’ by the Brothers Grimm.

Cory Doctorow recently coined the term ‘enshittification’ in relation to digital platforms, which he defines as the way in which a platform starts by maximising benefits for its users and then, once they are locked in, switches attention to building profit for its shareholders at the expense of those users, before (often) entering a death-spiral (Doctorow 2023). He sees this applying to Amazon, Facebook, Twitter, TikTok, Reddit, Steam, and others as they monetise their platforms and become less user-focused in a form of late-stage capitalism (Doctorow 2022; 2023). As he puts it:

… first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. (Doctorow 2023).

For instance, Harford (2023) points to the way that platforms like Amazon run at a loss for years in order to grow as fast as possible and make their users dependent upon the platform. Subsequent monetisation of a platform can be a delicate affair, as currently evidenced by the travails of Musk’s Twitter and the growing numbers of people overcoming the inertia of the walled garden and moving to free alternatives such as Mastodon, Bluesky, and, most recently, Threads. The vast amounts of personal data collected by commercial social media platforms strengthen their hold over their users, a key feature of advanced capitalism (e.g., Srnicek 2017), making it difficult for users to move elsewhere and raising concerns about privacy and the uses to which such data may be put. Harford (2023) emphasises the undesirability of such monopolisation and the importance of building interoperability between competing systems, allowing users to switch away, as a means of combating enshittification.

Continue reading

Digital Twins

Adapted from an original by MikeRun (CC BY-SA 4.0)

Sometimes words or phrases are coined that seem very apposite in that they appear to capture the essence of a thing or concept and quickly become a shorthand for the phenomenon. ‘Digital twin’ is one such term, increasingly appearing in both popular and academic use with its meaning seemingly self-evident. The idea of a ‘digital twin’ carries connotations of a replica, a duplicate, a facsimile, the digital equivalent of a material entity, and conveniently summons up the impression of a virtual exact copy of something that exists in the real world.

For example, there was a great deal of publicity surrounding the latest 3D digital scan of the Titanic, created from 16 terabytes of data, 715,000 digital images, and 4K video footage, with a resolution capable of reading the serial number on one of the propellers. The term ‘digital twin’ was bandied around in the news coverage, and you’d be forgiven for thinking it simply means a high-resolution digital model of a physical object, although the Ars Technica article hints at the possibility of using it in simulations to better understand the breakup and sinking of the ship. The impression gained is that a digital twin can simply be seen as a digital duplicate of a real-world object, and the casual use of the term would seem to imply little more than that. By this definition, photogrammetric models of excavated archaeological sections and surfaces would presumably qualify as digital twins of the original material encountered during the excavation, for instance.
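
The distinction at stake can be sketched in a few lines of code: a static model is fixed at the moment of capture, whereas a digital twin is, at least in principle, kept synchronised with its physical counterpart and used to drive simulation. This is purely illustrative; the class and attribute names are invented for the purpose.

```python
# Illustrative sketch of the conceptual distinction between a static 3D
# model (a one-off record) and a digital twin (kept in sync with its
# physical counterpart, and able to drive simulation). Names are invented.
from dataclasses import dataclass, field

@dataclass
class StaticModel:
    mesh_file: str  # e.g. a photogrammetric capture, fixed at creation

@dataclass
class DigitalTwin:
    mesh_file: str
    state: dict = field(default_factory=dict)   # live properties of the original
    history: list = field(default_factory=list)

    def sync(self, sensor_readings: dict) -> None:
        """Update the twin from measurements of the physical object."""
        self.state.update(sensor_readings)
        self.history.append(dict(self.state))

    def simulate(self, scenario: str) -> str:
        # Placeholder: a real twin would feed self.state into a model run.
        return f"simulating '{scenario}' from current state {self.state}"

twin = DigitalTwin("hull_scan.obj")
twin.sync({"corrosion_rate_mm_per_year": 0.1})
print(twin.simulate("structural decay over 50 years"))
```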

Continue reading

Data as Mutable Mobiles

‘How Standards Proliferate’, © Randall Munroe, https://xkcd.com/927/ (CC BY-NC)

As archaeologists, we frequently celebrate the diversity and messiness of archaeological data: its literal fragmentary nature, inevitable incompleteness, variable recovery and capture, multiple temporalities, and so on. However, the tools and technologies we have developed to record, locate, analyse, and reuse those data do their best to disguise or remove that messiness, generally by reducing its complexity. Of course, there is nothing new in this – by definition, we always tend to simplify in order to make data analysable. However, those technical structures assume that data are static things, whereas they are in reality highly volatile as they move from initial creation through to subsequent reuse, and as we select elements from them to address the particular research question in hand. This data instability is something we often lose sight of.

The illusion of data as a fixed, stable resource is a commonplace – or, if not specifically acknowledged, we often treat data as if this were the case. In that sense, we subscribe to Latour’s perspective of data as “immutable mobiles” (Latour 1986; see also Leonelli 2020, 6). Data travels, but it is not essentially changed by those travels. For instance, books are immutable mobiles, their immutability acquired through the printing of multiple identical copies, their mobility through their portability and the availability of copies etc. (Latour 1986, 11). The immutability of data is seen to give it its evidential status, while its mobility enables it to be taken and reused by others elsewhere. This perspective underlies, implicitly at least, much of our approach to digital archaeological data and underpins the data infrastructures that we have created over the years.
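
By way of contrast, a minimal (and entirely hypothetical) sketch of data treated as mutable mobiles might attach a travel history to the data themselves, so that each selection or transformation is carried along rather than hidden:

```python
# A sketch of treating data as 'mutable mobiles': each reuse records the
# transformation applied, so the dataset carries its own travel history
# rather than pretending to be a fixed, immutable resource. Illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TravellingDataset:
    records: list
    provenance: list = field(default_factory=list)

    def transform(self, description: str,
                  fn: Callable[[list], list]) -> "TravellingDataset":
        """Return a new dataset, remembering how and why it changed."""
        return TravellingDataset(
            records=fn(self.records),
            provenance=self.provenance + [description],
        )

sherds = TravellingDataset([{"fabric": "greyware", "weight_g": 12},
                            {"fabric": "samian", "weight_g": 3}])
reused = sherds.transform("selected samian only, for a trade study",
                          lambda rs: [r for r in rs if r["fabric"] == "samian"])
print(reused.records, reused.provenance)
```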

Continue reading

Productive Friction

Mastodon vs Twitter meme (via https://mastodon.nz/@TheAtheistAlien/109331847144353101)

Right now, the great #TwitterMigration to Mastodon is in full flood. The initial trickle of migrants when Elon Musk first indicated he was going to acquire Twitter surged when he finally followed through, sacked a large proportion of staff and contract workers, turned off various microservices including SMS two-factor authentication (accidentally or otherwise), and announced that Twitter might go bankrupt. Growing numbers of archaeologists opened accounts on Mastodon, and even a specific archaeology-focussed instance (server) was created at archaeo.social by Joe Roe.

Something most Twitter migrants experienced on first encounter with Mastodon was that it worked in a manner that was just different enough from Twitter to be somewhat disconcerting. This was nothing to do with tweets being called ‘toots’ (recently changed to posts following the influx of new users), or retweets being called ‘boosts’, or the absence of a direct equivalent to quote tweets. It had a lot to do with the federated model with its host of different instances serving different communities which meant that the first decision for any new user was which server to sign up with, and many struggled with this after the centralised models of Twitter (and Facebook, Instagram etc.) though older hands welcomed it as a reminder of how the internet used to be. It also had a lot to do with the feeds (be they Home, Local, or Federated) no longer being determined by algorithms that automatically promoted tweets but simply presenting posts in reverse chronological order. And it had to do with anti-harassment features that meant you could only find people on Mastodon if you knew their username and server, and the inability to search text other than hashtags. These were deliberately built into Mastodon, together with other, perhaps more obviously, useful features like Content Warnings on text and Sensitive Content on images, and simple alt-text handling for images.
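
Those design decisions are visible in Mastodon’s public REST API: timelines come back in simple reverse-chronological order with no engagement-based ranking, and hashtag timelines are the principal discovery mechanism. A minimal sketch follows, using the archaeo.social instance purely as an example:

```python
# A sketch of Mastodon's non-algorithmic design via its public REST API:
# timelines are returned newest-first, with no engagement-based ranking,
# and (at the time of writing) search was effectively limited to hashtags.
import requests

INSTANCE = "https://archaeo.social"  # illustrative choice of instance

# Local timeline: posts from this instance, in reverse chronological order.
local = requests.get(
    f"{INSTANCE}/api/v1/timelines/public",
    params={"local": "true", "limit": 5},
    timeout=30,
).json()
for post in local:
    print(post["created_at"], post["account"]["acct"])

# Hashtag timeline: the discovery mechanism Mastodon does support.
tagged = requests.get(f"{INSTANCE}/api/v1/timelines/tag/archaeology",
                      params={"limit": 5}, timeout=30).json()
```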

Continue reading