
Hyper experimenting for a successful AI program

Artificial Intelligence (AI) has promised much, but has yet to deliver on its full potential. While many organizations have run a few successful proofs of concept (POCs), very few have been able to implement AI at scale and realize the promised benefits.

August 11, 2020 6:28:48 pm

Written by Ravi Mehta, Partner; Sushant Kumaraswamy, Director; and Akshay Kumar and Prashant Kumar, both Senior Consultants, Deloitte India

One key reason for this gap is that most organizations have not yet discovered the right implementation rhythm for their AI programs. AI is neither a pure-play ‘language’-based technology (for example, Java) nor a ‘function/module’-based technology platform (for example, an HR or Finance ERP), so organizations cannot simply reuse the implementation rhythms of those technologies. AI is a ‘purpose’-based technology (for example, voice recognition or document processing), and running the right experiments at the right stage can help organizations discover their own unique AI implementation rhythm and achieve three critical success factors: (1) create the right AI launch pad, (2) maximize scalability and adoption, and (3) optimize maintainability and maximize Return on Investment (ROI).
Create the right AI launch pad

A good launch pad gives an AI program the right booster energy and acceleration at the start, and creating one requires answering several important questions. For example, should the organization use a single tool or multiple tools for a specific type of application (say, one AI tool or several to read different document types such as invoices, resumes and contracts)? Organizations also need to decide the funding mechanism for creating the launch pad and the chargeback mechanism for the overall AI program (for example, should the full cost of the launch pad be recovered from the early adopters of AI, or should early adopters be subsidized to increase their ROI and promote wider adoption across the organization?). Similarly, organizations need to decide whether to deliver AI programs in a ‘centralized’, ‘decentralized’ or ‘hybrid’ manner. There is no single best answer to these pivotal questions, so organizations need to run multiple experiments to identify their own unique answers.

Maximize scalability and adoption

While a good launch pad provides the right acceleration at the start, a successful AI program needs more fuel to build momentum and achieve the desired scale within a defined time frame. Specifically, organizations need to create a healthy pipeline of qualified AI opportunities. Running the right experiments can help build that pipeline and reduce resistance to change. For example, organizations can experiment with innovative ways to rapidly increase education and awareness about AI (such as short, immersive videos on AI applicability in diverse areas such as ‘demand sensing’, ‘employee recruitment and retention’ and ‘risk forecasting’). They can also run engaging adoption campaigns to accelerate AI infusion in the organization (for example, gamifying AI adoption to attract more interest from millennials). Similarly, organizations can experiment with new ways of engaging key stakeholders (for example, customized programs that connect senior leaders with industry peers and AI experts to help them better understand AI success stories, implementation challenges and leading practices) to secure the right sponsorship and reduce potential resistance to change.
Optimize maintainability and maximize ROI

Many AI programs suffer from the rising costs of creating the right algorithms and of sourcing the right volume and type of data at the right time to test them. Running the right experiments to increase reusability across algorithms can reduce costs and improve ROI (for example, can part of the logic used to identify a potentially fraudulent payment also be used to flag a potentially fraudulent resume from a job applicant?). Organizations can also benefit from experimenting with the right data strategy to increase cross-leverage of data across multiple AI initiatives. Finally, creating an ‘Internal AI Marketplace’ platform can encourage greater reusability and diffuse key learnings across the organization faster (for example, on the best ways to handle employee or customer data in compliance with regulations and security policies, or on leveraging GPT-3 to accelerate adoption), leading to wider AI adoption, reduced costs and increased ROI.

The poet William Blake wrote that ‘the true method of knowledge is experiment’. While experimentation matters in any organizational initiative, it is especially important in an AI program because of AI’s novelty and inherent complexity. Running the right experiments at the right stage helps organizations answer the right questions at the right time, in the right way. This, in turn, will create more success stories and accelerate the adoption of AI across the organization.
