
None of us can be unaware of the hype surrounding artificial intelligence at the moment. We see multi-billion-dollar companies being built around generative AI, while older, more established companies place multi-billion-dollar bets on its development to ensure they don’t miss out. Many of the chief executives of those same companies have warned of the imminent danger of artificial general intelligence, the point at which machines surpass human cognition, arguing for greater regulation while simultaneously investing in attempts to develop precisely what they warn against. Others are more alarmist still: Yuval Harari, the historian and writer, dramatically claimed last year that “What nukes are to the physical world…AI is to the virtual and symbolic world”, and that artificial intelligence is an alien threat that could bring about the end of human history. We are told we must take the welfare rights of artificial intelligence seriously because it will become conscious, or at least robustly agentic, within the next ten years (Long et al. 2024), even though a Google engineer was fired in 2022 for claiming its AI technology had become sentient and had a soul. Even the notorious failures – the tendency of ChatGPT and its ilk to hallucinate and simply make things up, for instance – do little more than introduce a brief pause while such shortcomings are inevitably programmed around.