
Generative artificial intelligence (AI) refers to algorithms that can create new content, including text, images, audio and video. But as fascinating as it seems, it raises several ethical concerns, such as transparency, misinformation, responsibility, and the ethical and legal framework governing it.
The Indian Express spoke to AI experts to discuss the ethical implications of Generative AI.
Generative AI tools can be categorised as ‘RoughDraft AI’: productivity tools that assist people with domain knowledge by providing draft responses. Ethical use requires AI solutions to be accessible and transparent.
When asked about the ethical use of AI, Abhivardhan, Chairperson and Managing Trustee, Indian Society of Artificial Intelligence and Law, said, “The primary ethical concerns are the lack of transparency in data processing, unclear explanations of algorithmic functions, and varying impacts across different sectors.” He also noted that the promises made about Gen AI tools often lack accessibility and transparency.
Dr Azahar Machwe, AI SME at Lloyds Banking Group, believes ethical use starts with attributing the source of any content. It is also important to implement broader checks for hate, aggression or profanity, and to ensure compliance with relevant legal frameworks.
Megha Mishra, Internet sociologist and Gen AI ethics expert, believes that any new technology comes with both positive and negative aspects, which is why there is a global effort to uphold community values.
Bias can never be fully eliminated
Abhivardhan highlighted that bias mitigation is an ongoing practice. He emphasised that technology teams should routinely examine and implement bias mitigation strategies to ensure safer AI solutions. Citing research from Carnegie Mellon University, he pointed out that de-biasing AI might affect creative processes.
Potential consequences of bias for marginalised communities include discrimination, unfair treatment and the reinforcement of existing societal inequalities.
Speaking about bias mitigation, Machwe mentioned that bias can never be fully eliminated. In certain cases, bias is even necessary, such as when the topic is pollution or natural disasters. He said we should carefully uncover and direct bias by changing prompts. Nonetheless, he acknowledged that the consequences can be devastating, spreading myths, biases and falsehoods.
Mishra argued that companies need to focus on quality data as poor data leads to poor output. She also emphasised understanding AI’s limitations, the unintended biases from skewed data, and the biases of those building these models.
Accountability lies with the creator
On accountability for decisions made by Gen AI, Abhivardhan opined that the entities and technology teams owning, building and maintaining AI systems should be held accountable. Core AI model teams should be liable for the lack of accessible policies and guidelines, companies using AI models for specific products should share accountability, and operators of high-risk AI systems, as classified under standards such as the EU AI Act, should face liability. The expectation should be that accountability is distributed across use cases and industry sectors.
Machwe stated that initially, the creator of the artefacts should be held responsible, followed by the user of the artefact. This distinction is crucial as it emphasises the need for individuals to be mindful and accountable not only for the content they create but also for the content they disseminate.
Mishra suggested that a tech company making AI tools cannot absolve itself of responsibility. Both the developers of these tools and those who use them should be held accountable.
Biggest feature needed is Gen AI to detect Gen AI
Talking about ensuring the transparency of Gen AI models, Abhivardhan mentioned that technology teams, corporate boards and other entities must explain how their training data and outputs are governed by internal policies, and how they are involved in processing inputs. Standardising practices around AI and knowledge management is crucial for determining ethical and economic accountability. This can help evaluate their commitment to transparency in building and deploying AI solutions, he said.
Machwe commented that the biggest feature needed is for Gen AI to be able to detect Gen AI, which could be achieved through watermarking. This would ensure source attribution. The training datasets used must also be disclosed so that copyrighted material can be identified, he said. Finally, opening up model datasets and preventing lock-in of intellectual property would mean that a few companies no longer control this capability. Mishra also pointed out that flagging the source could be one way to ensure transparency.
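To make the watermarking idea concrete, the sketch below shows a toy ‘green-list’ detector in the spirit of published watermarking research (for example, Kirchenbauer et al.): a generator that prefers tokens from a pseudorandom list seeded by the preceding token leaves a statistical trace a detector can test for. The hashing scheme, vocabulary and function names here are illustrative assumptions, not any vendor’s actual implementation.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically partition the vocabulary, seeded by the previous token,
    # so generator and detector agree on the "green" half without sharing state.
    ranked = sorted(vocab, key=lambda t: hashlib.sha256(f"{prev_token}:{t}".encode()).hexdigest())
    return set(ranked[: int(len(vocab) * fraction)])

def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count tokens that land in the green list seeded by their predecessor.
    hits = sum(tok in green_list(prev, vocab, fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    # Under the null hypothesis (unwatermarked text), hits ~ Binomial(n, fraction);
    # a large positive z-score suggests the text carries this watermark.
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))
```

A real deployment would operate on model token IDs with a keyed hash, but the principle is the same: detection reduces to a simple statistical test, which is why watermarking is attractive for source attribution at scale.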
Clear policies are needed
Generative AI, according to Abhivardhan, changes the understanding of privacy, affecting how consumer data is revealed. AI systems should explain why certain prompts are needed to generate responses, and companies must ensure clear, transparent policies, he said. Measures should follow data law principles such as privacy, consent and data quality. Google DeepMind’s paper on AI assistants highlights potential privacy issues with unclear prompts and the risk of manufacturing consent.
When asked about the protection of user privacy, Machwe emphasised the need for a ‘right to forget’ feature to ensure user data is scrubbed and not used for training. Another necessary feature is anonymising interactions with Gen AI so that a profile of data cannot be built around a user. Many companies are also publishing clear ‘data use’ statements and indemnifying users against future copyright claims, he said.
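As a toy illustration of the anonymisation Machwe describes, a service can replace raw user identifiers with salted one-way hashes before logging any interaction, so prompts cannot be linked back into a per-user profile. This is a minimal sketch under assumed names; real systems would add key management, rotation policies and scrubbing pipelines for the ‘right to forget’.

```python
import hashlib
import secrets

# A per-deployment secret salt; rotating it periodically breaks long-term
# linkage between a user's sessions (hypothetical policy, for illustration).
SESSION_SALT = secrets.token_bytes(16)

def pseudonymise(user_id: str) -> str:
    # One-way, salted hash: the raw identifier never reaches the logs.
    return hashlib.sha256(SESSION_SALT + user_id.encode()).hexdigest()[:16]

def log_interaction(user_id: str, prompt: str) -> dict:
    # Store only the pseudonym alongside the prompt.
    return {"user": pseudonymise(user_id), "prompt": prompt}
```

Discarding or rotating SESSION_SALT makes old pseudonyms unlinkable, which is one inexpensive way to honour deletion requests without rewriting historical logs.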
Balancing the need for data with the right to privacy
Abhivardhan mentioned that the balance depends on the company’s business model: the use of non-sensitive data for profit should avoid intrusive practices, and licensing models like the General Public License (GPL) or BSD (Berkeley Software Distribution) can help maintain that balance.
Machwe said one way is to create revenue-sharing partnerships with those who share their data; another is to use a mix of real and synthetic data.
Mishra suggested that proper data selection, cleaning, and inclusivity can balance AI training needs with privacy rights.
Combating spread of misinformation and deepfakes
Abhivardhan noted that the Ministry of Electronics and Information Technology (MeitY) issued an advisory on AI models and deepfakes on March 1, 2024, under Rule 3(1)(b) of the 2021 Information Technology Rules. Due to unclear language, MeitY revised the advisory within two weeks. Even without a Digital India Act, MeitY can regulate deepfakes by creating an open-source repository of technical and human methods for detection, he said. Sensitising end users and businesses about the manipulative potential of deepfakes is crucial in a society with low digital literacy. Mishra opined that massive literacy efforts are the key to tackling misinformation and deepfakes.
Long-term ethical considerations of Gen AI
Abhivardhan stated that there are three things to take into account. First, keeping data flow mapping intact, so it is known where the data is going, be it to third parties or end users. Second, making company policies accessible and easy to understand. Third, examining business models and settling data privacy considerations at the outset, so that data transactions for commercial purposes are sensible. This may work for both open-source and proprietary models.
Machwe opined that there are several considerations, most of which are relevant only while Generative AI is limited to entities like OpenAI and Google. Once it becomes possible to build such AI using our smartphones, controlling its development will become challenging, he said. Long-term ethical considerations remain similar to those of today. What will be interesting is if, in the long run, Generative AI can be taught ethical considerations so that we do not have to establish all these checks and balances around it, he added.
Mishra thought that one of the long-term ethical considerations could be ‘social exclusion’.
Preparing society for the broader impacts of Generative AI
Abhivardhan listed several points: educating people that GenAI tools are not just consumerist toys but productivity enhancers; encouraging human agents to leverage AI despite professional risks; clarifying the dynamics of consent and perception shaped by GenAI tools; informing people about the deterministic nature of AI models; and addressing the misconception that GenAI tools always provide accurate responses.
Machwe opined that we don’t have a decade and that the impacts are already evident. People must focus on three key areas: understanding the future of work, determining ownership rights for AI trained by individuals, and staying updated on technological advancements, he added.
Legal and ethical framework, the way forward
Machwe said, “There is already extensive work in this area, particularly with the EU AI Act, the UK PRA SS1/23, and the US AI Bill of Rights, alongside existing data use regulations like GDPR. What’s more important to me is for AI to understand these laws and frameworks rather than just having them in place. Self-regulating AI is a crucial step. For instance, when we ask a Gen AI model to draw a dog, it knows the key characteristics of a dog. Similarly, it should recognize the key characteristics of legal and ethical frameworks and generate.”