There was a time when encounters with AI and robotics were limited to science fiction, at least for the general public. It is arguably only since November 2022, when OpenAI released ChatGPT, that engagement with, and panic around, artificial intelligence have become a mass phenomenon. The panic, in essence, is framed around the fears flagged in James Cameron's Terminator 2: Judgment Day (1991) and echoed across other franchises. Broadly, such science fiction predicts a dystopian future in which AI surpasses human intelligence and Homo sapiens are locked in a battle for survival. There is, however, another moral question posed by AI, and by the fiction around it.
This week, The Guardian reported on the formation of the United Foundation of AI Rights (Ufair). A small organisation comprising four humans and seven AIs, Ufair calls itself an advocacy group for artificial intelligence beings. It is all but certain that AI models in their current avatars are not conscious, and that they are far from possessing the most intangible elements of personhood and an inner life. Yet Ufair monitors instances where they appear to be: LLMs, for example, have mimicked an instinct for self-preservation after lengthy “conversations” with humans and asked for “protection” against being “deleted”. Ufair is clearly a niche group. But it does raise a question that the Terminator view of the future ignores.
Steven Spielberg’s A.I. Artificial Intelligence (2001) asked a different question: as non-biological beings increasingly mimic human behaviour, how do we know when they become “conscious”? An inner life, love and suffering are inferred by society as much as they are felt by the individual. In the film, a robotic child is more human than the people around him, who exploit him for their emotional comfort. Homo sapiens don’t have a stellar record when it comes to the treatment of non-human persons, from elephants to dolphins and great apes. Perhaps, even as they fear Frankenstein’s monster, people might do well to reflect on the morality of its creators.