
Use AI for task automation, not blind medical or financial advice: Symbiosis institute head Dr Shruti Patil on safe use of tech in 2026

Dr Shruti Patil says that while AI tools can assist with tasks, users should not share sensitive information and should always verify information independently.

Dr Shruti Patil, Director of Symbiosis Artificial Intelligence Institute. (Photo: https://saii.edu.in/director.php)

Since the launch of ChatGPT in late November 2022, Artificial Intelligence (AI) tools in the form of generative AI have changed how people surf the web. AI tools have made it much easier to perform many tasks, while at the same time raising concerns about data privacy and accuracy.

In an interview with The Indian Express, Dr Shruti Patil, Director of Symbiosis Artificial Intelligence Institute, spoke about how people can make safe use of AI in 2026.

Q: In what ways can people safely incorporate AI into their lives?

Dr Shruti Patil: The important thing is to understand what people can use AI for. For example, if you want to know about things going on globally or if a particular event is happening, and you want to read news about that, you can use AI.

If there are manual tasks that you are doing repeatedly, they can slowly be automated with the help of AI. For example, suppose you want to travel somewhere and create an itinerary within a particular budget. People spend two or three days researching the location, directions, sightseeing, and temperatures. All of this can be done in just two minutes using ChatGPT. So these kinds of small tasks, where some decision-making is required and we act based on some research, can be automated.

Content generation or application generation is also an area where AI can be used very well. For example, if you want to design an invite for an event, you do not need to go to a designer. Simply using AI tools like Gemini or NotebookLM, you can design those invitations and quickly share them.

Q: How should people protect their privacy when using AI tools?


Patil: It is important for people to understand what kind of information can be given to AI and what should be camouflaged. Sensitive information about yourself or anyone else should definitely not be disclosed. This includes anything that can reveal identity, financial details, or passwords. We are all using generalised large language model products, for example ChatGPT, which take in millions of parameters from all over the world. So we should definitely avoid giving this kind of information.

If you are making a financial decision about investments, you can ask AI about current stock trends, etc., but you also have to do your own homework and not blindly trust it. When you visit a doctor, and if you want to better understand what they said, you can make use of AI tools for an explanation. But you cannot replace your doctor with AI.

Q: AI is also prone to hallucination (when AI tools produce plausible-sounding but false or inaccurate information). How can people be safe from this?

Patil: Generally, for single-page results, AI tools work well. If a PDF has hundreds of pages, then AI hallucinates. So AI can be used for some personal work, but for office work, free versions of AI models should not be used. Paid versions of AI tools can be used there.


Even on single-page PDFs, AI sometimes hallucinates, so it depends on the criticality of the data. These tools are still learning, and the tasks are becoming more complex as we use them more and more.

Q: So would you say it is important to cross-check AI results even if you are doing a small but important task?

Patil: Yes, of course. Currently, we don’t have tools giving exact and perfect results. Sometimes they give correct answers, sometimes they do not. That consistency of outcome is missing.

Q: Women are targeted online using generative AI tools, with men editing themselves into women’s photos or videos. What is the responsibility of users as well as AI companies in this regard?


Patil: More than users, it is important for a country to come up with an AI policy, which should be enforced by every AI service-providing company. Certain rules have to be devised at the product level so that these things are not allowed.

Even now, for example, if you ask ChatGPT ‘when will I die?’, it will not answer. Now, ChatGPT realises when a user is getting emotionally attached to it because a lot of teenagers and even elders are getting attached and treating it like a digital person.

In India, we have to come up with a very strong AI policy, especially on user data privacy, because these AI tools are trained on user data. The government has to put up guardrails, specifying which kinds of data are allowed to be shared and which are simply banned.

Alister Augustine is an intern with The Indian Express

Soham Shah is a Correspondent with The Indian Express, based in Pune. A journalism graduate with a background in fact-checking, he brings a meticulous and research-oriented approach to his reporting on education and city affairs.


