On March 28, the Future of Life Institute drafted a letter calling for a six-month halt on “training AI systems more powerful than GPT-4”, signed by more than 2,900 people. Some of the signatories are famous in the worlds of AI, computer science, economics, and policy, such as Steve Wozniak, co-founder of Apple; Yoshua Bengio, Turing Award winner; and Daron Acemoglu, MIT professor of economics. Any intelligent observer of the field will agree with the core of the letter, which calls for caution about AI, but the devil is in the details. By hyping the non-existent and improbable red herring of fantastical AI technology, in tune with the ideology of “longtermism”, the letter misdirects attention from the actual dangers of the AI industry. A shallow reading of the letter registers only the warning, not the clever but cynical misdirection.
AI is the catch-all name for a family of technologies that rely on machine learning to find patterns in large amounts of data: to make decisions when the data comes annotated (classification), to self-organise the data (clustering), or to create new patterns from existing ones (generation). All machine learning relies on pseudorandomness, and these processes are statistical through and through. AI technologies have thus replaced, and have the potential to replace, certain forms of cognitive labour, which has made them economically lucrative. On the flip side, the statistical nature of these processes means that no matter how much training data is thrown at any AI use case, there will be errors; making errors is baked into such a system. It also means that AI technologies are excellent if you want to replicate past decisions, even when replicating the past is not a good idea.
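As a concrete illustration of errors being baked in, consider a minimal sketch, assuming synthetic data and the scikit-learn library (both are my choices for illustration, not anything drawn from the letter): when the classes in the data genuinely overlap, a classifier keeps making errors no matter how much training data it is given.

```python
# Minimal sketch: irreducible error in statistical classification.
# Two classes drawn from overlapping distributions can never be
# separated perfectly, however large the training set grows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n):
    # Class 0 centred at -1, class 1 at +1; the distributions overlap.
    x0 = rng.normal(-1.0, 1.0, size=(n, 1))
    x1 = rng.normal(+1.0, 1.0, size=(n, 1))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_test, y_test = make_data(5000)
for n_train in [100, 1000, 10000, 100000]:
    X, y = make_data(n_train)
    model = LogisticRegression().fit(X, y)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"train size {2 * n_train:>6}: test accuracy {acc:.3f}")
# Accuracy plateaus around 0.84; the remaining errors come from the
# overlap between the classes and cannot be fixed by more data.
```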
No AI system should be used in a domain where arbitrary errors or blind replication of the past by a machine (unlike a human, who can still be held accountable) can wreak havoc, such as medicine, law enforcement, and the justice system. Unfortunately, the state’s policy myopia and the profit motive of private companies have pushed harmful use cases such as facial recognition technology (FRT) into law enforcement, including in India. The statistical aspect also makes AI systems “brittle”: they are acutely sensitive to anomalous situations, breaking down and producing “graceless errors” the moment they face a problem they were not trained on. Sophisticated AI systems try to cope by using ever more complex models and more training data, but this makes the models effectively impossible to audit, as they are by design non-explainable. In short, AI technologies are useful, but also liable to break down if their fundamental mechanism of statistical pattern discovery is confused with real intelligence and knowledge generation. There need to be regulatory red lines around AI use cases that can harm rights, both individual and social; such uses must be banned by law.
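The brittleness is equally easy to demonstrate. In the following sketch, again assuming invented synthetic data and scikit-learn, a model learns a simple periodic rule well within its training window, yet on inputs outside that window, a situation it was never trained on, its answers collapse to a coin flip.

```python
# Minimal sketch: brittleness on inputs outside the training range.
# The label follows a simple periodic rule (think of a daily cycle),
# which the model learns inside its training window but cannot
# extrapolate: on anomalous inputs it fails gracelessly.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n, low, high):
    x = rng.uniform(low, high, size=(n, 1))
    y = (x[:, 0] % 2 < 1).astype(int)  # periodic ground truth
    return x, y

X_train, y_train = make_data(20000, 0, 4)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

X_in, y_in = make_data(5000, 0, 4)      # familiar inputs
X_out, y_out = make_data(5000, 10, 14)  # anomalous inputs
print("in-distribution:", accuracy_score(y_in, model.predict(X_in)))
print("out-of-distribution:", accuracy_score(y_out, model.predict(X_out)))
# Typically ~0.99 on familiar inputs and ~0.5 (a coin flip) on
# inputs unlike anything in the training set.
```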
An ecosystem of AI technologies is by its nature perennially data-hungry. That hunger seeps into society, violating privacy and other constitutional rights and creating the conditions for a surveillance state and economic exploitation. Because data with usable intelligence is needed in vast quantities, and because this kind of work remains unregulated, lakhs of low-paid and exploited workers from economically weak countries, called “ghost workers”, do the job of curating and cleaning it. Social media platforms also sell their users’ data via “data brokers” in an ill-regulated grey market. Here, the AI industry and the platform economy feed into each other: machine learning technologies allow platforms to crunch large amounts of valuable data and to control and exploit remote armies of workers, out-competing rivals and building oligopolies, while the platforms surveil their workers and sell large amounts of workers’ data to the AI industry. This relationship fundamentally expands exploitation, made worse by the fact that platforms do not consider their workers employees and keep them outside labour protections.
Thus, the AI industry poses real dangers and harms. Because it does not operate in a vacuum but is part of the larger market economy, the pressure for profit overrides the caution needed in AI research, design, and deployment. India currently lacks any substantial data protection law to guarantee our fundamental right to privacy, hard won through the Puttaswamy judgment, and so we see harmful facial recognition projects proliferate in law enforcement and other areas. Our laws do not recognise platform/gig work as employment, so as platforms expand, so does undignified work, with gig workers denied the protections afforded to ordinary workers. We see AI systems in telemedicine and the justice system, which should cause alarm.
The letter by the Future of Life Institute ignores all this. It ignores the primary harms caused by the AI industry: the use of statistical, error-prone, non-explainable artefacts in delicate processes; the danger of replicating past societal problems; the continuous erosion of privacy by data hunger, which both requires and instrumentalises surveillance; and expanding platformisation and workers’ exploitation. Naturally, in such an ecosystem, capital concentrates in the hands of those who control AI development: large technology companies. The letter ignores this actual danger and substitutes for it fantastical notions of what “intelligence” in AI is, painting a picture in which the real threat is a hypothetical “artificial general intelligence” and “non-human minds”. It uses dystopian fantasies to distract people from the actual harm the AI industry is doing to their persons and their jobs.
The letter is devious because it replaces existing policy flaws with fears of fantastical technology. “AI safety” cannot be allowed to become a red herring that derails much-needed regulation. The central issue is not the technology, but who owns AI and how society uses it.
(The writer is an assistant professor, working on AI and policy, at the Ashank Desai Centre for Policy Studies at IIT Bombay)