Artificial Archaeologies

[Image: adapted from an original by Adam Purves, CC BY 2.0]

In his book Homo Deus, Yuval Noah Harari rather randomly chooses archaeology as an example of a job area that is ‘safe’ from Artificial Intelligence:

The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits. Hence it is improbable that corporations or government will make the necessary investment to automate archaeology within the next twenty years (Harari 2015, 380; citing Frey and Osborne 2013).

It’s an intriguing proposition, but is he right? Certainly, archaeology is far from a profit-generating machine, but he rather assumes that it’s down to governments or corporations to invest in archaeological automation: a very limited perspective on the origins of much archaeological innovation. However, the idea that archaeology is resistant to artificial intelligence is worth unpicking.

That’s because resistance to artificial intelligence isn’t the impression you get looking around digital archaeology: in reality it appears to be a very active area of research and development. For instance, looking at the conference sessions for CAA 2019 in Krakow (PDF), we find session 38 on Artificial Intelligence and Cultural Heritage (organised by George Pavlidis, Dimitris Kalles, Athos Agapiou, and Chairi Kiourt) which talks of AI used in areas such as element/mineral identification, virtual museums, historical document analysis, natural language processing, semantics and knowledge extraction, automated processes in digitization, recommenders, storytelling and personalization. The session abstract goes on to seek contributions in additional areas such as digitization and on-site documentation, cultural content/object analysis, content-based classification and retrieval, archaeometry and data analysis, computational archaeology, spatial and temporal analysis, simulations, and intelligent crowdsourcing – a pretty comprehensive range of topics which permeate the archaeological discipline.
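
To make one of these topics a little more concrete, here is a minimal sketch of what a first pass at ‘natural language processing, semantics and knowledge extraction’ might look like in practice: pulling named entities out of a fragment of invented report text with the off-the-shelf spaCy library. The example text, the choice of library, and the model are illustrative assumptions only, not drawn from the session or any of the projects it describes.

```python
# A minimal sketch of named-entity extraction from (invented) report text.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # general-purpose model, not domain-trained

text = (
    "Excavations at Vindolanda in 2017 recovered wooden writing tablets "
    "from a waterlogged deposit dated to the late first century AD."
)

doc = nlp(text)
for ent in doc.ents:
    # Labels come from the general-purpose model and may miss or mislabel
    # archaeological terms; a production system would need domain training.
    print(ent.text, ent.label_)
```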

Perhaps the presumption that archaeology is resistant to artificial intelligence is because much of this research flies under the radar: we don’t tend to refer to digital automation across this range of application areas as artificial intelligence. Why is this? Part of this is doubtless to do with the perceived failure of artificial intelligence approaches in the 1980s which found that, beyond very limited application areas, archaeology was insufficiently formalised and standardised to make AI feasible (or desirable, for that matter). It may also be due to suspicion surrounding AI as being associated with the simulation of high-level general purpose intelligence, something akin to a Skynet or Nick Bostrom’s ‘superintelligence’. In fact, the success of artificial intelligence in recent years has been through the identification of specific, restricted, and well-defined application areas, each of which requires its own largely bespoke set of solutions, data, and routines. This is precisely what comes out of the CAA session description: a set of discrete application areas, each potentially solvable, but no sense of any overarching general purpose archaeological intelligence – no artificial digital archaeologist. Individually each area – be it classification, spatial analysis, simulation studies, or whatever – has no overt link with artificial intelligence, and indeed, many if not all are areas of research established long before artificial intelligence was a glint in the Terminator’s eye. In that respect, artificial intelligence techniques are simply the latest tool in the digital (black) box, and their use in restricted application areas detaches them from some of their more undesirable baggage. More sceptically this could be seen as a somewhat covert infiltration of artificial intelligence into archaeological methodologies: camouflaging the AI component by incorporating it within existing frameworks.

But Harari also sees archaeology as problematic for AI approaches because of its reliance on highly sophisticated types of pattern recognition. Arguably, many of the discrete application areas referred to above entail different kinds and degrees of pattern recognition – whether it involves picking out terms and labels from texts or classifying artefacts or features on the basis of their shapes and profiles. In fact, image recognition is reckoned to be one of the strengths of current AI systems: over the past seven years performance has increased from correctly categorising around 70% of images to 98%, higher than the human benchmark of 95% accuracy (now we know what they’ve done with all those Captchas). So it may be that, practically at least (and admittedly extrapolating wildly), pattern recognition isn’t the show-stopper for AI that Harari suggests it might be, even if it does raise a host of more philosophical and methodological issues.
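
As an illustration of what this kind of narrowly-scoped image recognition looks like under the bonnet, the sketch below reuses a network pre-trained on everyday photographs and swaps in a new output layer for a handful of artefact categories. The categories, file paths, and indeed the whole scenario are hypothetical, and the new layer would still need training on labelled finds photographs before it produced meaningful results; this is a minimal transfer-learning sketch using the torchvision library, not a description of any of the systems referred to above.

```python
# A minimal transfer-learning sketch: reuse a network pre-trained on ImageNet
# and replace its final layer to separate a few (hypothetical) artefact
# classes. Assumes torch, torchvision (>= 0.13), and Pillow are installed.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["flint_flake", "pottery_sherd", "bone_fragment", "metal_object"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                            # freeze pre-trained features
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))   # new, as-yet-untrained head
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    """Return the most probable class label for a single finds photograph."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)                  # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[logits.argmax(dim=1).item()]

# Example call (the path is a placeholder; the new head must be trained first):
# print(classify("finds/context_1234/photo_001.jpg"))
```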

Indeed, the key issue may be not so much pattern recognition as the way that much archaeology relies on complex sensorimotor skills. After all, we frequently point to the craft of our discipline, and proudly wield our worn trowels and compare our arthritic knees. Computer scientists working in AI refer to Moravec’s Paradox:

… it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility (Moravec 1988, 15).

David Autor, for example, argues that high-level reasoning uses a set of formal logical tools – counting, mathematics, and logical deduction – which are computable, whereas:

In contrast, sensorimotor skills, physical flexibility, common sense, judgment, intuition, creativity, and spoken language are capabilities that the human species evolved, rather than developed. Formalizing these skills requires reverse-engineering a set of activities that we normally accomplish using only tacit understanding. (Autor 2015, 12).

Indeed, intelligent robotics is noticeably absent from CAA conference sessions, although semi-autonomous drones, for instance, are increasingly a feature of field survey. Again, however, these are a long way from the early conceptions of robotics. As Benedict Evans observes, we’ve ended up with washing machines instead of humanoid robots. Furthermore,

Washing machines are robots, but they’re not ‘intelligent’. They don’t know what water or clothes are. Moreover, they’re not general purpose even in the narrow domain of washing – you can’t put dishes in a washing machine, nor clothes in a dishwasher (or rather, you can, but you won’t get the result you want). (Evans 2018)

Like artificial intelligence, robots are suited to narrow, well-defined applications, and each application will require its own robotic solution. Their success lies in biting off small, manageable chunks of human practice.

Earlier this month, Stanford University launched its Human-Centered Artificial Intelligence Initiative, intended to develop artificial intelligence that understands human language, emotions, intentions, behaviors, and interactions at multiple scales. The Initiative has three underpinning themes (Li and Etchemendy 2018) which we might borrow and slightly re-phrase for archaeological purposes. Hence we might suggest:

  1. That for AI to better serve archaeology, it must incorporate more of the versatility, nuance, and depth of the archaeological intellect;
  2. That the development of AI must be coupled with the ongoing study of its social and ethical implications for archaeology and archaeologists, and guided accordingly; and
  3. That the ultimate purpose of AI must be enhancing archaeology, not diminishing or replacing it.

Worthy objectives, to be sure, but who determines whether archaeology is enhanced or diminished as a consequence?

References

Autor, David (2015) ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’, Journal of Economic Perspectives 29 (3), 3-30. http://dx.doi.org/10.1257/jep.29.3.3

Evans, Benedict (2018) ‘Ways to think about machine learning’, blog post June 22, 2018. https://www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy

Frey, Carl and Osborne, Michael (2013) ‘The Future of Employment: How Susceptible are Jobs to Computerisation?’, Oxford Martin Programme on Technology and Employment, University of Oxford. https://www.oxfordmartin.ox.ac.uk/publications/view/1314

Harari, Yuval Noah (2015) Homo Deus: A Brief History of Tomorrow (Vintage, London).

Li, Fei-Fei and Etchemendy, John (2018) ‘Introducing Stanford’s Human-Centered AI Initiative. A common goal for the brightest minds from Stanford and beyond: putting humanity at the center of AI.’ http://hai.stanford.edu/news/introducing_stanfords_human_centered_ai_initiative/

Moravec, Hans (1988) Mind Children: The Future of Robot and Human Intelligence (Harvard University Press, Cambridge MA).

2 thoughts on “Artificial Archaeologies”

  1. «Worthy objectives, to be sure, but who determines whether archaeology is enhanced or diminished as a consequence?»

    The same people who introduce the innovations, and from their point of view those innovations will obviously be positive.
    Only later, if the innovations spread, may criticism emerge. I suppose it is a case of “first come, first served”. The trends in academic research seem to favor competition between groups rather than collective discussion.

  2. Pattern recognition is a very delicate field of work, and a very useful one. I suppose that as AI algorithms are enhanced, at some point they will find more extensive use in archaeology. Apart from artifacts, it could potentially be interesting to apply machine learning to manuscripts from earlier ages, e.g. the deciphering of the Phaistos Disc.
