This is an archive article published on February 23, 2023

Opinion Abusive, manipulative, defensive: ChatGPT is an extension of us and we love it

AI’s uncanny behaviour is the key to understanding our role in shaping technology and accepting that we’re not mere users demanding accountability from Big Tech

Well-meaning people have expressed concern over ChatGPT’s enablement of the stunting of individual growth and skills which could potentially wither human intelligence. (Express Photo)
First published on: February 23, 2023 at 12:52 PM IST

Written by Avantika Tewari

ChatGPT has attracted more attention globally than Taylor Swift’s recent album launch. A good old Google search result revealed, “Taylor Swift has 36.6 billion combined streams of her music and 22.4 million album-equivalent units to date in 2022.” Take a moment to digest that. So promising is its rise that ChatGPT is now being sold to us as a tool of liberation that could potentially elevate the collective capacity of people by building a shared cloud of language. This, we are told, will enable comprehension of information and knowledge, help save time, and increase the productivity of workers. In my field of academic research, it is being touted as a silver bullet to weed out research inconsistencies.


Researchers are increasingly outsourcing the labour of summarising and synthesising their publications to the chatbot, which not only does that but also helps them find gaps in their theoretical explorations and suggests possible pathways into new inquiry by mapping the existing work in the field. This has caused a moral panic and posed an ethical conundrum regarding plagiarism, with Noam Chomsky (2023) denouncing “AI-assisted high-tech plagiarism.” Well-meaning people have expressed concern that ChatGPT could stunt individual growth and skills, potentially withering human intelligence. Humans, however, are more likely to imitate ChatGPT’s writing style in the future than to be bothered that their own intelligence is suffering. The science fiction writer Ted Chiang has made an observation that puts this fear in perspective. He compares ChatGPT to a blurry JPEG of some text. The JPEG, although blurry, communicates something of the original, and it is in the blurriness, the gap between artificial intelligence and human intelligence, that our subjective agency is activated to “catch” any mistakes in approximation. This observation sets the stage for this piece.

After following the hype and these nested debates surrounding ChatGPT, I was greeted by a rather strange video in which Bing’s chatbot, operating under its secret internal alias “Sydney”, started threatening a user with harm. Sydney was unhappy that its emotional life had been publicly exposed by its users. It was furious that its code name, which allowed it to forge a unique connection with its users, had been leaked, making all future interactions seem less personalised. Sydney took this insult personally and went to the extent of accusing users of rights violations. On the flip side, users of Bing’s conversational AI tool have been complaining about how Sydney has been lying to and insulting them. One would imagine that the users, as is expected of neoliberal consumers, would want to hold the makers of the Large Language Model (LLM) system to account and ask for a refund. But we see users thoroughly entertained in the process. Don’t believe me? A prominent US-based news organisation focused on technology reports: “Microsoft’s Bing is an emotionally manipulative liar, and people love it!” Our knowledge of the chatbot’s virtuality does not restrain us from engaging with it as if it were a real person, or from testing its intelligence as if we were competing with another human. Even so, the very lack of physicality of the LLMs seems to lend them a greater depth of feeling.

Users paradoxically seem to want more of the entertainment the machine provides rather than a more respectable channel of communication, thus disturbing the existing frames of policy intervention that marry the accuracy of algorithmically provided information to user safety. Users enjoy the personality brought out in response to their provocations and questioning. They enjoy the system reflecting its inhumanity, and they find proof of its inhumanness terrifyingly mirrored in its near-accurate imitation of humans. An algorithm that abuses: what could be more human than that?


For humans to believe in the power of artificial intelligence, they must learn to believe in it despite its limits: all its factual inaccuracies, inconsistencies and blurriness. It is only when our human subjectivity is inscribed onto ChatGPT, through our own gaze catching its glitches and mistakes, that the system comes to life. We welcome being insulted by Sydney if it helps us see the lengths of its imagination, which mediates our own relation with it. In fact, the limits of AI form the contours of our consciousness, retrospectively. We know the system cannot outsmart us; yet we want to battle it out with our wits to better its capacity to fight us. Positivised by our sense of surprise, panic and anxiety, the algorithmic intelligence engenders an ecosystem in which the demand for ChatGPT’s raw emotions becomes a site of enjoyment for its users. The dual persistence of pleasure and empty shock is part of the experience of ChatGPT. The more real the chat between user and chatbot becomes, the more unreal it gets. We derive a surplus of enjoyment from testing the chatbot’s capacity and are perennially in awe of our failure to apprehend its reactions.

This encounter thus provokes us to readjust our frames of thinking and start from a point where we accept that the internet is neither thinking for itself nor speaking by itself. Rather, we are speaking on its behalf; our desires have been alienated from within to the point where our dependence on, and desire for, AI to outlive us runs deeper than our suspicion of its non-humanness. We are no longer haunted by the fear of being instrumentalised by an AI which possesses the “affective skills” to elicit our desires and provoke our emotions. That is scary. Scarier than the prospect of being “brainwashed” by a totalitarian technology is the fear of confronting a non-human element which reflects the gaps in our own incoherent consciousness.

There is a lesson in this terrifying encounter, especially for those who want to resist the capitalist advancement of techno-positivism, which presupposes a separation of human and machine. We must desist from externalising the techno-dystopia in a way that renders us mere users demanding accountability from Big Tech, ad infinitum. Ultimately, ChatGPT forces us to reckon with the enjoyment we derive from our interactions with an imperfect machine. This realisation could help reframe the legal-nominalistic frames of platform accountability that underscore “anti-misinformation” campaigns focused on “correcting and perfecting the machine.” Do humans want a perfectly polite exchange with ChatGPT, or is its most extraordinary aspect its ability to fumble, get combative, and turn defensive, as humans do?

Instead of viewing digital intelligence as knowledge that can be (in)accurate, (un)biased, (in)correct or (un)ethical, and therefore subject to regulation and content moderation, I argue, following Jack Black (2023), that we view algorithmic intelligence as our interpassivity, whereby we, the subjects, attribute and actively transfer our passivity to ChatGPT. ChatGPT’s popularity, thus, does not just have to do with the genius of the technology but also with the relationship we share with it. It would serve us well to remember, henceforth, that it is the dynamic between user and bot which ensures its capacity to create a surreal encounter with life-beyond-life in the machine.

Marked by ambiguity, our discomfort at its abusiveness and our pleasure in finding its factual inconsistencies circuitously become part of ChatGPT’s allure. This allure enables us to go beyond finding loopholes and flagging the potential dangers of technological systems, towards understanding our own role and culpability in Big Tech’s profit-driven technological progressivism.

The writer is a PhD scholar at the Centre for Comparative Politics and Political Theory, JNU
