AI in Indian classrooms: Inside a cafe in North Delhi’s Kamla Nagar, two Delhi University second-year philosophy students lean across a small plastic table, their cold coffee glasses sweating onto paper coasters, a shared plate of Maggi going cold.
“Just tell it to make the conclusion sound deeper,” one says, thumb scrolling on his laptop. “Throw in some quotes from Nehru or Ambedkar, the teacher would like it.”
The other, in a faded Nirvana T-shirt, smirks. “Do you think she’ll find out?”
“How is it cheating? We are just saving time,” the first replies.
One is generating a summary for a paper titled ‘Dr Ambedkar and His Philosophy’ without having read the course material. The other is trying to compress a Hindi-language text into a digestible summary but has been getting the same output from GPT even after 10 prompts. Each of these prompts ends with a nudge: “Write it like a human being.”
When OpenAI launched GPT-5, the latest model of its generative artificial intelligence tool, CEO Sam Altman described it as “having a team of PhD-level experts in your pocket”. Not to be outdone, Elon Musk, unveiling xAI’s Grok 4, claimed it was “better than PhD level in every subject, no exceptions”.
AI and its generative models have caught on faster than anyone could have imagined, reshaping not just the way we work, but the way we learn. In higher education classrooms across India, educators are scrambling to redraw the boundaries of originality, integrity, and learning itself, as students increasingly turn to generative AI tools for everything from assignments to examinations.
Even as higher education institutions acknowledge the dizzying changes AI will bring to curriculum and pedagogy, questions – and suspicions – linger in the vast majority of classrooms, where faculty members and students go through the grind of assignments and project submissions: how much AI is acceptable? Are students gaming the system by getting AI to do their assessments and project work?
In April last year, IIT Delhi formed a committee to decide how generative AI should be used in classrooms, labs and exams without eroding academic integrity. Over months, it surveyed 427 students and 88 faculty members. The results weren’t unexpected: four out of five students use AI, often several times a week, and one in 10 pays for premium subscriptions to bypass the quirks and errors of free versions.
Students told the surveyors that they use AI to simplify concepts, create mind maps, summarise material, and simulate scenarios. But they also catalogued its shortcomings: wrong answers, weak context handling, bad maths, flawed code debugging.
Nearly 77 per cent of the faculty had used AI, the survey said, to summarise research papers, make slides, and draft official communication. They valued the speed but feared grading distortions, the loss of critical thinking, and the temptation to let a slick answer replace the rough edges of an original work.
The committee recommended integrating AI and machine learning into all core curricula, mandating disclosure of AI use, running workshops on responsible use, and buying campus-wide premium licences to ensure equitable access to AI. Plagiarism policies, it said, must be rewritten to reward “honesty” and punish “submitting AI-generated work without meaningful personal input”.
“Some of us PhD students avoid using AI because, at some level, it feels like cheating. The tedium and struggle are central to learning, regardless of discipline. This tediousness disappears when interacting with AI chatbots, which generate responses within seconds. The alarming part is that we often treat these responses as capital-T Truth. Moreover, with AI tools such as ChatGPT and Google Gemini, much of the nuance in understanding concepts is lost,” says Tarun A, a fifth-year PhD student in the Humanities Department at IIT Delhi.
While students have been quick to adapt to AI, using it both as a tool to fill gaps in classroom learning and relying on it for assignments and exams, it’s teachers who have had to work much harder to catch up.
Chetan Arora, Professor in the Department of Computer Science and Engineering at IIT Delhi and a joint faculty at the Yardi School of AI and ANSK School of Information Technology, admits he uses ChatGPT in class, but always after the basics have been taught.
In a class on computer vision, Arora explains, generating the code is only the preliminary step. Rather than spend too much time on code generation, he now hands that task to AI tools and focuses on how the code can be used to solve the actual problem.
“Use AI, but don’t outsource judgment. I believe there is nothing wrong in using AI to generate code by a student if it saves a lot of time. But in my course, a student should be able to understand the code and be able to debug it if a GPT model is making a mistake,” Arora says.
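What catching such a mistake looks like is easiest to see in code. Here is a minimal, hypothetical sketch – not drawn from Arora’s course – of the kind of quiet error a chatbot can produce and a student who understands the code should spot: a grayscale conversion that silently assumes the wrong channel order.

```python
# Hypothetical illustration (not from Arora's course): a subtle bug of the
# kind students are expected to catch in AI-generated vision code.
import numpy as np

def to_grayscale(image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 image to grayscale using luminance weights."""
    # Chatbots often emit the standard BT.601 weights below, which are
    # correct for RGB input -- but OpenCV loads frames in BGR order,
    # so for such frames the weights must be reversed (weights[::-1]).
    weights = np.array([0.299, 0.587, 0.114])
    return image @ weights

# A pure-blue pixel in BGR order: the buggy version scores it 0.299
# instead of 0.114 -- a mistake casual testing misses but a viva won't.
frame = np.zeros((1, 1, 3))
frame[0, 0, 0] = 1.0  # channel 0 is blue in BGR
print(to_grayscale(frame))  # [[0.299]] -- wrong for BGR input
```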
Professor Siddharth Savyasachi Malu, Director of Shiv Nadar University’s AI Centre of Excellence, recalls an email he got from a student when he was teaching at IIT Indore. “The email mentioned the word ‘rubric’. It stood out. I had never heard a student use that term before in an email. When I asked him what the word meant, he didn’t know,” he says. The student had asked his friends for help to draft the email and they had turned to ChatGPT.
“It’s not malicious,” Malu says. “It’s just convenient. But it changes the way we need to engage with students.”
Given the inevitability of AI, teachers have had to go back to the drawing board on their assessment methods.
At Delhi’s Indraprastha Institute of Information Technology, Gautam Shroff, Professor of Computer Science and Engineering, changed the weightage of assignments and classroom tests to ensure AI was not being misused in his classroom. “We have shifted from 50% assignments and 50% exams to 90% exams. You can cheat on assignments, not on proctored tests,” he says.
Yet, he admits, these are often desperate measures to ensure classroom integrity. “It’s not about teachers outsmarting students, it’s about making sure students don’t outsmart themselves. Students use AI to do their work. That’s fine, but they must understand it and explain it. That’s the test. We haven’t yet taught students how to use AI well. Most teachers are either learning it themselves, ignoring it, or resisting it,” he says.
Shroff also uses AI as a teaching tool, creating conversational, entertaining course content. Instead of lectures, the material is presented through dialogues. “Students now explain their work to AI. It asks follow-up questions, records the interaction, and we grade the recording. AI helps evaluate projects through voice-based viva. It checks understanding, not just output,” he says.
Beyond his research, Shroff has been quietly nurturing what he calls a “retirement hobby” — building an online platform where companies and faculty can post projects, students can apply, after which AI tools conduct interviews and help with the selection process. While the platform is designed to bridge the gap between students and employment opportunities, Shroff has begun integrating it into his teaching. His students take their exams entirely on the platform. AI tools monitor attendance, enforce anti-cheating safeguards, and even grade assignments — but always with faculty supervision.
To ensure academic rigour, IIM Ranchi has a detailed rubric for faculty to assess how well students integrate AI into problem-solving, graded at four levels – Excellent, Proficient, Developing and Beginning.
Director Deepak Srivastava says, “We recognised that the traditional model of isolated theoretical learning was becoming increasingly disconnected. Our students need to develop competencies that go beyond textbook knowledge – they need to work collaboratively with emerging technologies to solve complex business challenges.”
In April 2025, Shiv Nadar University in Greater Noida officially moved away from blanket bans and reactive policies around AI. Instead, it accepted that generative AI is here to stay and it must be taught, not feared.
“Gen AI is a powerful, pedagogical tool,” the policy states. “Learning how to use Gen AI appropriately is a skill users at the University need to develop to leverage its immense potential.”
This is woven through the university’s five-level Gen AI Assessment Scale — a framework that determines what kind of AI usage is permitted in different academic contexts. Each course defines its own rules, but the scale ranges from Level 1: ‘Prohibited’, to Level 5: ‘Responsible Autonomy’, where students are allowed full use of Gen AI, as long as they disclose prompts and take full responsibility for accuracy and ethics.
At Ashoka University, Vice Chancellor Somak Raychaudhury had his moment of reckoning when he came across student emails with a telltale sign at the bottom: “Generated by ChatGPT”. The students had forgotten to delete the tag.
He now cites those examples as the university works to integrate AI. Ashoka University is building AI literacy into its curriculum, with foundation courses that teach students how to ask better questions and assess the credibility of responses. The university’s philosophy department offers a course on ‘The Ethics of AI’.
“We cannot ban AI. But we can redesign assignments so students must show their thought process, not just give an answer,” says Raychaudhury.
While that’s a work in progress, on most days, Raychaudhury simply falls back on his instinct as a teacher. “When I see students’ faces, I know whether they understood. That live feedback, AI can’t replicate it.”
Despite these measures, the overwhelming sentiment among teachers is that they have to constantly remind students of the AI rule book.
“Misuse,” says IIT Kanpur’s Agrawal, when asked about the challenges that AI poses in the classroom. “Increasingly, students are using AI to write assignments or theses. That impacts genuine learning. The line between ‘help’ and ‘cheating’ has blurred.”
As teachers engage with AI or learn to live with it, somewhere on this spectrum lies a steadfast resistance to the idea.
In a classroom at Delhi University’s Education Department, the chalk squeaks against the board as Assistant Professor Latika Gupta writes in large, block letters: “NO AI”. She underlines it twice and lets the words sit there for the rest of the lecture.
“This whole idea that AI is a good thing… it needs to be dismantled,” she tells her students. All her assignments are now handwritten, except those of visually impaired students. In her gender studies course, she asks students to analyse gender concepts through a curated set of songs; in another, she collects newspaper cuttings for discussion and to build her coursework around. “Education is not about easy solutions. The desire to work hard and look forward to a teacher’s feedback is getting lost in classrooms these days. Everybody is using it. It’s disappointing.”
In a world reshaped by both the potential and the uncertainties of AI, classrooms abroad aren’t immune to some of these challenges. But with an early-mover advantage, many leading universities have moved past the distrust to embrace AI.
Sayash Kapoor, a computer science PhD scholar at Princeton and co-author of AI Snake Oil, says his department leaned in early. “We provided students with licences to ChatGPT Plus, and they were encouraged to use it as long as they found it helpful. Our only requirement was that students disclose how they use AI,” he explains.
To make that work, assignments had to change: instead of merely testing knowledge, they asked students to build things. For Kapoor, that shift turned AI into a skill. “Since we made AI use a part of the overall curriculum, we didn’t have to worry about distinguishing AI use from original work.”
Kathryn Yurecko, a Master’s student of Social Science of the Internet at the University of Oxford, recalls a math professor during her undergraduate years in Washington who gave students ChatGPT-written proofs riddled with errors. Their task was to find and fix the mistakes — a hands-on lesson that large language models can be slick but wrong.
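What might a planted error look like? The course’s actual proofs aren’t described, but a classic fallacy of the sort an LLM can reproduce with full confidence is the sham proof that 2 = 1, which hides a division by zero:

```latex
% Hypothetical example, not from the course itself: a "proof" that 2 = 1.
% The planted error: the fifth step divides both sides by (a - b),
% which is zero, since a = b by assumption.
\begin{align*}
  a &= b                 && \text{assumption} \\
  a^2 &= ab              && \text{multiply both sides by } a \\
  a^2 - b^2 &= ab - b^2  && \text{subtract } b^2 \\
  (a+b)(a-b) &= b(a-b)   && \text{factor both sides} \\
  a + b &= b             && \text{invalid: divides by } a - b = 0 \\
  2b &= b \;\Rightarrow\; 2 = 1 && \text{since } a = b
\end{align*}
```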
“Institutions have no reliable means of determining whether or not students used AI to complete a given assessment. Instead, they are holding meetings to discuss changes to assessments, either to integrate AI or to make assignments AI-proof,” she says.
At Oxford, every student has to disclose AI use when submitting work on Inspera, the university’s assignment platform, but how far they can go with it depends on the professor.
William Burke, vice president of the Oxford AI Society, describes the university’s approach as pragmatic. “University officials know AI has already arrived. They are not pretending students aren’t using it. Instead, they are adjusting assessments,” he says.
In some courses, therefore, AI is built in – one computer science professor used AI to further break down his lecture notes and encouraged students to openly use the altered notes – while in others, particularly philosophy, even brainstorming with ChatGPT is frowned upon because idea-generation is precisely what the course is supposed to teach.
The Oxford AI Society itself has become a hub for this experimentation. With 4,000 members, it runs hackathons, workshops, even black-tie galas with companies such as Anthropic and DeepMind.
Still, students feel the push and pull: whispers of stricter in-person exams circulate while some professors assign projects that explicitly require AI.
In Australia, guidelines issued by the higher-education regulator, TEQSA, make AI use legitimate — but disclosure is mandatory. Every assignment must say how AI was used. Universities are also shifting to oral exams, viva voce defences, and more in-class work — the kinds of assessments AI can’t do.
Umme Hani, a lecturer at the University of Notre Dame Australia, echoed that approach but stressed equity. “At UNDA, our stance isn’t about banning AI. It is about using it ethically and responsibly,” she says, before adding a caveat. “No student should be disadvantaged because they can’t afford subscriptions or personal technology.”
In the UK, universities are collaborating to find solutions. Swansea University is one of eight institutions that are part of a nationwide pilot project to test TeacherMatic, an AI platform for grading and personalised feedback. The aim is to pool experience into a best-practice toolkit.
Denis Dennehy, Professor of Information Systems and Sustainability at Swansea, says, “We are moving away from a fragmented approach — where each lecturer experiments on their own — towards a sector-wide learning model. That way, teachers and students don’t have to repeat the same mistakes.”
Dennehy says the danger is when students lean on AI as a shortcut: “You can often see generic responses across a class when people are using AI but not disclosing it. The real challenge is to design assessments that AI cannot answer at the depth you’d expect from a Master’s student or PhD scholar.”