According to Reddy, the first set of problems concerns intellectual property rights in content use and ownership. Do these tools, for instance, infringe the rights of the authors whose works they are trained on? And would the creators of these tools be held liable for harmful output such as defamatory or even abusive statements? “Further issues arise when generative AI tools are given the permission to autonomously execute tasks after being given a prompt. Since they can write code on their own and then execute it, this could lead to cybersecurity breaches that could even spill into the physical world and lead to, for instance, drones or connected weapons being hacked,” Reddy explains.
Technological advancements typically undergo scrutiny, with contentions from various stakeholders, before they are embraced by governments. In the last few months, generative AI has been making strides across domains, and many have already equated it to the Internet boom or the frenzy surrounding the introduction of the smartphone.
The ownership quagmire
By popular definition, copyright in content created by an AI system belongs to the person who owns the AI system. But that claim is dubious, as most generative AI tools, including OpenAI’s ChatGPT, create content based on prompts framed by users. And each response is different, making ownership harder to determine, especially since each country has its own definition of copyright.
“Determining authorship and ownership in the realm of AI-generated content presents a significant challenge regarding the AI’s eligibility as the rightful author. ChatGPT, as we can see from its Terms of Use (ToU), assigns ownership of generated content to users. This does not fully align with India’s copyright regime. There are concerns around whether this constitutes valid copyright assignment as it lacks specifics like territorial jurisdiction and remuneration for the author,” Probir Roy Chowdhury, a technology lawyer at legal firm JSA, tells indianexpress.com.
Even as the world debates the ownership of AI-generated content, one faction strongly believes that content generated by such systems should not be owned by any party at all. If this becomes a widely accepted notion, many of the concerns surrounding ownership and potential copyright infringement would likely be laid to rest.
“When it comes to the legal aspect, the ownership lies with the person who gives the prompt – which is also very debatable, and it will take a while to bring some legality to that aspect. There is also the case of similar responses. Whether it is OpenAI claiming ownership of the output or the user, there is a school that says – the output from ChatGPT or equivalent platforms should not be owned by anybody. This is because the user is too distinct from the original creation. Also, it will completely kill originality and art in some form,” says Huzefa Tavawalla, Head of Disruptive Technologies Practice Group, Nishith Desai Associates.
Copyright and plagiarism
Chowdhury feels there is a lack of uniqueness in the content ChatGPT generates across users, making it difficult to substantiate a legitimate copyright assignment. Moreover, copyright can be inadvertently infringed through the use of such content, as the original human creators remain unattributed in ChatGPT’s output.
“However, these are solvable issues that necessitate the introduction of appropriate regulatory measures to ensure fair and compliant use of copyrighted material in AI, while respecting the rights of human creators,” adds Chowdhury.
Tools like ChatGPT can write essays, poems, code, and even novels. This puts the spotlight on what happens to that content, especially essays submitted for school or proposals drafted for business. AI-generated content carries significant pitfalls and risks, not least because it may pass many plagiarism detectors.
“Generative AI needs to be used as a tool (e.g. for research and drafting), with the human lawyer or judge taking ultimate responsibility for the work. Copyright law will have to gradually evolve to clarify both the use of copyrighted works by generative AI tools and the copyrightability of AI-generated content,” says Reddy.
Ethical considerations for ChatGPT’s use in India
There are also concerns about the transparency and accountability of the AI system itself: it mostly operates as a black box, which makes it challenging to determine how it arrives at its conclusions. “This opacity can undermine the principles of fairness and due process. Also, biases embedded in the training data can perpetuate existing societal biases and discrimination, potentially leading to unjust outcomes. Safeguarding against such biases and ensuring equal treatment is crucial,” explains Chowdhury.
Since content produced by ChatGPT may have far-reaching consequences, the question of legal responsibility and liability is inevitable. This calls for establishing clear guidelines for the integration of AI systems in the legal framework such as robust oversight mechanisms, unbiased training data, and other accountability measures to navigate ethical concerns surrounding AI.
According to Chowdhury, the Indian government is considering a regulatory framework for AI including areas related to algorithm bias and copyrights. The proposed Digital India Act, 2023 emphasises the regulation of high-risk AI systems, ethical use of AI tools, and specific rules for AI intermediaries.
“The 2018 National Strategy for Artificial Intelligence, brought out by NITI Aayog, recommends establishing research and development centres. It is also encouraging to see India’s engagement with AI at the international level through participation as a founding member of the Global Partnership on Artificial Intelligence (GPAI), where India has called for the development of a common data governance framework to ensure internet and AI safety and prevent harm to users. Such ongoing efforts indicate the government’s recognition of the need to address the legal and ethical dimensions of AI,” adds Chowdhury.
AI and misinformation
Along with legal constraints, the spate of fake news and misinformation that AI systems may generate is another sensitive subject. While this is a pressing concern, no single law directly addresses it; instead, there are data privacy laws, intellectual property laws, and intermediary laws. Lawyers therefore feel there is a need to understand the various legal aspects at play when such scenarios unfold. At present, there is also no specific law regulating AI in India.
“Many people are aware of the issues, but understanding how the law applies, especially Indian law, is something that very few people grasp. The Digital India Act is still a long way away, at least by a couple of years. There has to be a balance to ensure user safety and interests. The existing provisions involve various legal aspects. How these aspects may unfold is a matter of interpretation. However, the government is looking into regulating AI in some way under the proposed Digital India Act,” says Tavawalla.
Privacy and data protection
When it comes to privacy and data protection laws concerning ChatGPT, lawmakers have expressed varied perspectives: some acknowledge the potential concerns and risks, while others stress the need for robust guardrails around user privacy and data security.
Chowdhury says, “There are some valid concerns around the collection, storage, and utilisation of user data by ChatGPT, underscoring the necessity for stringent regulations to safeguard personal information and avert unauthorised access or misuse. Policymakers are, therefore, emphasising the need to establish effective legal frameworks, encompassing comprehensive data protection laws, transparency requirements, and user consent mechanisms, to ensure the presence of robust privacy safeguards.”