In a bid to address the ethical and regulatory challenges of Artificial Intelligence (AI), BJP MP Bharti Pardhi introduced a private member's bill in Parliament that proposes penalties of up to Rs 5 crore for the misuse of AI.
Edited excerpts follow:
The bill states that the use of AI in surveillance should be limited to lawful purposes. Who defines lawful purpose? And on what basis?
Grandhi: Section 5(1) of the Bill requires prior approval of the Committee for AI used in surveillance. That means, at least in terms of the Bill, the Committee will decide whether a proposed use is “lawful”. However, “lawfulness” isn’t “created” by an ethics committee alone. It has to be measured against the Constitution (Articles 14, 19, and 21) and general and/or sector-specific statutes (CrPC, telecom/interception laws, IT Act, banking rules).
The bill makes the Committee an ex-ante reviewer of AI deployments by using the term “prior approval of the Committee”. However, the Committee itself cannot be the primary source of legality, so its approvals will be tested against objective legal standards.
Therefore, practically speaking, if this Bill passes into law, the Committee should frame rules that allow it to apply constitutionally sound tests for approvals, including necessity, proportionality and specificity, as well as DPDP-compliant tests such as time limitation and data minimisation.
This ensures that the Committee’s decisions are reviewable and constitutionally defensible (note that this approach is similar to the EU AI Act, which, instead of defining “lawful purpose”, relies on compliance with existing EU and national laws).
Can the proposed law be a shot in the arm for law enforcement agencies and financial institutions such as banks?
Grandhi: Potentially yes. The key is to draft the rules in such a way that the documentation, transparency, human oversight, and audit obligations are extremely clear to financial institutions and/or law enforcement. The bill could legitimise high-risk uses such as credit scoring, fraud detection, and investigative analytics, while reducing legal uncertainty. This approach is very similar to the EU AI Act’s approval pathway for “high-risk” systems.
But here is my concern. If the upfront reviews rely on opaque processes, if the Committee is slow or under-resourced, or if what constitutes a “lawful purpose” cannot be properly ascertained because bad “tests” are applied for approvals, the entire system will be bogged down by administrative delays and, potentially, judicial interference.
To be enabling, the rules framed under this bill must provide risk-based, time-bound pathways (fast-track reviews, sandboxes), formal integration with sector regulators (RBI/SEBI/telecom) and secure audit channels that protect trade secrets while allowing oversight.
Do you think there need to be special courts to deal with cases of AI misuse and alleviate the burden on regular courts, since the enactment of such laws will increase the number of cases filed?
Grandhi: Not at the moment. AI-based crimes and civil offences can currently be handled through existing laws: the IPC/BNS, the IT Act, specialised statutes such as the Copyright Act, and sector-specific regulations such as those of the RBI and SEBI. Any AI offence that violates these laws is handled through the judicial mechanism that already governs them.
Creating separate AI courts risks fragmentation and jurisdictional conflicts, as AI issues will inevitably intersect with data protection (DPDPA), consumer law, competition law, IP, and sectoral regulations.
Specialised regulators like the proposed Data Protection Board under India’s DPDPA, coupled with trained benches in existing courts, will be more workable than entirely new court systems. The Bill already contemplates complaints to the Committee and remedial action (Section 7). That model mirrors the DPDP Act and other regulator-led frameworks, in which specialist statutory bodies conduct first-line enforcement, while courts serve as the appellate forum.
However, the statutory right of appeal to ordinary courts or to designated technical benches of High Courts should be preserved. If caseloads actually surge, the state can create specialised benches within existing courts (not separate courts) or a tribunal with clear appellate routes, to respect Article 323B limits and preserve judicial review.
What suggestions do you feel could further refine this bill in achieving its objective?
Grandhi: Here are my suggestions:
- A statutory definition of “lawful purpose”, plus tests to determine what qualifies as a lawful purpose (e.g., necessity, proportionality).
- Risk-based classification of AI systems like the EU model (prohibited/high/limited/minimal) with tailored obligations for each tier.
- Mandatory Algorithmic/Data Impact Assessments.
- Accredited independent audits with secure, confidential disclosure channels for IP protection.
- Human-in-the-loop for critical decisions; clear notice, explanation and contestation rights.
- Clear remedies, interim stop orders and a compensation pathway.
- Formal coordination with sector regulators; statutory safeguards for Committee independence (appointments, tenures, conflict rules).
- Judicial review of Committee decisions.
The Artificial Intelligence (Ethics and Accountability) Bill, 2025, aims to establish an ethics and accountability framework for the use of AI in decision-making, surveillance, and algorithmic systems to prevent misuse and ensure fairness, transparency, and accountability. It also makes some key proposals.
Constitution of ethics committee
The bill proposes the constitution of an ethics committee for AI, headed by a chairperson with expertise in ethics and technology. The committee will also include representatives from academia, industry, civil society and government, as well as experts in law, data science and human rights, to be appointed by the central government.
Functions
The bill proposes that the committee shall develop and recommend ethical guidelines for AI technologies and monitor compliance with ethical standards in AI systems.
It will also review cases of misuse, bias or violations of the provisions of this Act and promote awareness and capacity-building among stakeholders.
Restrictions on surveillance, decision-making
The bill also provides that AI systems involved in critical decision-making, including in law enforcement, financial credit and employment, shall not discriminate on the basis of race, religion, gender, or any of them, and shall be subjected to stringent ethical reviews by the committee.
Responsibility of developers
The bill also places responsibility on developers of AI. It puts the onus on developers to ensure transparency in AI systems by disclosing the intended purpose and limitations of the AI system, the data sources and methodologies used for training algorithms, and the reasons for any decisions made by AI systems that impact individuals.
The bill also requires developers to prevent algorithmic bias by conducting regular audits to identify and mitigate biases in AI systems. Additionally, developers must maintain records of compliance with the ethical standards under the Act.
Penalties for non-compliance
The bill proposes a fine of up to Rs 5 crore, depending on the severity of the violation, and the suspension or revocation of licences for deploying AI systems. It further provides that the offender may face criminal liability in the case of repeat violations.
Grievance redressal mechanism
The bill also provides for a grievance redressal mechanism. Any affected individual or group may file complaints with the committee regarding misuse or harm caused by AI technologies.