Technology

India firms rapidly scaling AI amid need for stronger governance and security: Report

Published On Wed, 03 Dec 2025
Asian Horizon Network
New Delhi, Dec 3 (AHN) India Inc. is rapidly scaling AI, fueled by global tailwinds, competition and advances in GenAI technologies, a report said on Wednesday, noting that AI now cuts across customer engagement, operational optimisation and mission-critical processes in multiple sectors.
Yet adoption remains fragmented, with only 15 per cent of organisations having extensive enterprise-wide AI deployment.
"While AI will continue growing, oversight is not keeping pace with it. In many organisations, AI infrastructure is expanding faster than the governance, security and ethical safeguards needed, creating widening gaps in accountability and risk management," Alvarez & Marsal (A&M), a global professional services firm, said in its report.
Meanwhile, governance maturity remains limited despite rising usage.
While 60 per cent of organisations have introduced basic governance or acceptable-use policies, only 19 per cent have carried out detailed risk assessments, and 81 per cent still lack full visibility into how their AI systems are monitored or governed, the report noted.
With many AI initiatives developed in silos, accountability and standards vary widely, especially when third-party and in-house models coexist.
The report highlighted the need for integrated, organisation-wide governance frameworks that embed transparency, oversight and clear role ownership.
“AI is now embedded deeper into business processes and decision systems than ever before. India’s AI opportunity is substantial, but its long-term gains depend on how effectively organisations govern and secure the systems they deploy,” said Dhruv Phophalia, MD and India Lead - Disputes and Investigations, Alvarez & Marsal.
Those who invest early in these foundations will be best placed to unlock the full economic and competitive potential of AI, he added.
According to the report, responsible AI principles are widely acknowledged; however, their implementation remains limited.
Fewer than 20 per cent of organisations have deployed mechanisms for explainability, bias detection or fairness, and 60 per cent lack any formal process to validate model integrity.
Data governance shows similar gaps: only 26 per cent have integrated data masking and PII-scanning within AI workflows, and 60 per cent perform no structured dataset validation.
The report also highlighted that as more complex AI models go into production, security across the AI lifecycle will be imperative.
While 52 per cent of enterprises have secure development environments with basic controls, fewer than 30 per cent conduct penetration testing or red-teaming, and only 19 per cent have safeguards to detect data poisoning during model training.