This is an archive article published on July 22, 2023

Opinion Menaka Guruswamy writes: Can AI be communist?

How different nations could legislate AI will be contingent on the underpinnings of their politico-legal systems. Chinese and American regulations suggest diverse trajectories before nations

Different jurisdictions all over the world are making efforts to introduce laws that will regulate AI. (Illustration by C R Sasikumar)
July 22, 2023 09:15 AM IST First published on: Jul 22, 2023 at 07:00 AM IST

Will ideology influence Artificial Intelligence (AI)? Will Marx or Mao or Ayn Rand also shape AI just as they have shaped human minds? Or is state control likely to ensure that the only ideology that AI will have is ‘statism’? You may ask, how in the world can AI have any ideology? After all, is AI able to think? Can you have a socialist or capitalist or feminist AI?

I got thinking about this when I read the notice, released on April 11, by the Cyberspace Administration of China regarding the “Public Solicitation of Comments on the Measures for the Administration of Generative Artificial Intelligence Services”. For those who have not read my many columns on AI, a small explainer. As the Cyberspace Administration of China explains, “generative artificial intelligence refers to technologies that generate text, pictures, sounds, videos, codes and other content based on algorithms, models and rules”. Here’s an example: ChatGPT is a generative artificial intelligence service. It is machine learning trained on vast amounts of data — along with the biases that programmers feed in — which enables ChatGPT to answer our questions. Depending on how it has been programmed, the answers might be true, false, incomplete, inaccurate, mature or immature.


Now, getting back to the Cyberspace Administration of China. Let me lay out the content of the draft measures it has released for comments from the public. The regulations are meant to apply to the research, development and use of generative AI products that provide services to the public within China.

Article 4 provides that generative artificial intelligence products or services shall comply with the requirements of laws and regulations, and respect social morality, public order and good customs. It also mandates that AI-generated content shall “embody core socialist values”, and must not contain anything that contributes to the “subversion of state power” or the “overthrow of the socialist system”, or that incites separatism, undermines national unity, or advocates terrorism, extremism, ethnic hatred, violence or false information, among many other prohibited grounds. Additionally, this content shall be truthful and accurate, and measures shall be taken to prevent the generation of false information.

I laughed when I read the draft regulation mandating that chatbots will embody socialist values. I immediately imagined stressed, bespectacled programmers frantically feeding Marx-Engels speak or the writings of the French socialist Charles Fourier into “Ernie”, the Chinese company Baidu’s chatbot. Ernie is meant to rival ChatGPT, except that the former has to comply with these many regulations, unlike the latter.


In reality, China is a capitalist society, far removed from the socialism of the past. So, the mandate of socialism that Ernie must comply with is simply a mechanism introduced by the state to screen and censor Ernie. Time will tell if AI can be controlled by any state — Chinese or otherwise. Many in the AI world, including Sam Altman of OpenAI, the company behind ChatGPT, have warned that as AI develops, it may become impossible for humans to regulate it once it starts thinking for itself. It might become impossible to mind the machine.

For now, let’s get back to the regulations of the Cyberspace Administration of China. The onus of meeting the mandate of Article 4 is placed on the organisations and individuals that use AI to provide services. These providers “will bear the responsibility of content generated by that product”. Importantly, the draft provisions mandate that all AI products that provide services to the public must first be submitted for a “security assessment”, presumably by the state, before they can be introduced to the market. If providers violate these regulations, punishment shall be given in accordance with the Cybersecurity Law of the People’s Republic of China, the Data Security Law and the Personal Information Protection Law.

China’s initial law-drafting effort to regulate AI reveals a two-pronged approach. First, to make the provider responsible for breaking laws that prohibit hate, violence and falsehood, or that shore up state ideology. The language used is broad enough to give the state wide latitude to control providers through stringent punishments. Second, to mandate that an AI product first be submitted to the regulators, to ensure that it conforms to the expectations of the state before it is opened up to the public. This is screening and censorship in its plainest form.

Both prongs are premised on the belief that all AI can be controlled and will never be independent. To that extent, the laws that will pertain to AI are premised on the way the Chinese state has dealt with speech by its citizens in general — whether on social media or in person. For AI, the penalty for the speech is laid at the doorstep of the provider. There is no clarity on whether this will also fasten liability on the programmer(s). And what if an AI product passes the initial screening by the state, and through constant use by the public eventually transforms into something more?

Different jurisdictions all over the world are making efforts to introduce laws that will regulate AI. The United States White House’s Blueprint for an AI Bill of Rights was released in October 2022. This Bill of Rights epitomises American constitutionalism, rooted in sturdy individual rights and free speech laws that strongly protect expression. For instance, it provides that an individual should not face algorithmic discrimination and must be protected against it. According to the White House blueprint, algorithmic discrimination occurs when automated systems contribute to unjustified different treatment of, or impact on, people based on their race, colour, ethnicity, sex (including gender identity and sexual orientation), religion, age and national origin, among other grounds. I will discuss American and European efforts to regulate AI in future columns.

For now, it’s clear that how different nations legislate AI in due course will be contingent on the underpinnings of their politico-legal systems. In nations that put a premium on state control, you will see more of the Chinese model. In those that prefer individual-centric rights, you will see more traditional constitutional frameworks for AI — product use contingent on non-discrimination laws. What will be intriguing as AI grows is how ideology is infused into these products to shape opinion and control the end users — us humans.

The writer is a Senior Advocate at the Supreme Court.