Lens on Algorithms: Auditing India’s AI Revolution Credibly
India’s digital governance future will be judged not just by how boldly it uses AI, but by how bravely it audits it.
By P SESH KUMAR
New Delhi, October 10, 2025 — The Comptroller and Auditor General (CAG) of India sits at a historic inflection point. After years of chairing the INTOSAI Working Group on IT Audit and declaring ambitious plans to harness AI, big data, and analytics in its own audit functions, India’s top auditor has yet to showcase a game-changing technological leap.
Most deployments remain confined to pilot projects — GIS overlays, Aadhaar-linked verifications, and sporadic database triangulations. Against this backdrop, the question looms large: is the CAG institutionally and professionally ready to audit the government’s sprawling new “Next-Leap” AI initiatives?
Few constitutional institutions in the world have both the authority and the responsibility to scrutinize technology-driven governance like India’s CAG. For decades, it has been at the vanguard of INTOSAI’s IT audit movement, influencing global standards for information systems assurance. It has repeatedly announced its own digital metamorphosis — promising audits powered by machine learning, predictive analytics, and natural language models that could slice through terabytes of government data with surgical precision.
Yet, the track record so far has been modest. Satellite imagery has been used in select environmental and infrastructure audits; Aadhaar validation has occasionally aided beneficiary verification; and cross-database analytics have exposed leakages in welfare schemes. But these remain isolated bursts of innovation, not an integrated transformation. The gap between rhetoric and readiness persists. The promise of “AI-enabled auditing” still feels aspirational, not operational.
Now the government is gearing up for a massive AI push: digitising decision-making across welfare, health, policing, taxation, and urban governance. In theory, the CAG should be the natural guardian of this new digital State. But credibility demands competence. When the auditor’s own house is still under digital renovation, can it credibly certify the algorithmic fortresses of others? Professional integrity in auditing rests not only on independence but also on demonstrated capability.
If the CAG embarks on a full-scale audit of AI systems without a robust internal AI audit framework or proof-of-concept pilots, it risks being accused of form over substance: a paper-heavy audit of paperless systems.
There are deeper methodological dilemmas too. Auditing AI systems is not just about reviewing expenditure or IT controls. It means probing how algorithms learn, what biases their training data carry, whether their predictions are explainable, and whether their deployment infringes privacy or fairness.
These are not mere compliance checklists; they are forensic investigations into the State’s digital conscience. To do this credibly, auditors need data scientists, coders, behavioural experts, and ethicists, roles that remain peripheral in the CAG’s traditional hierarchy. Without that multidisciplinary infusion, any “AI audit” risks becoming a ceremonial review rather than a clinical evaluation.
Furthermore, a CAG-led audit of government AI initiatives would attract intense scrutiny from both policymakers and technologists. Every shortcoming in its own digital evolution—lack of interoperable databases, uneven IT training, legacy audit management systems—will be magnified.
To question the opacity of others’ algorithms, CAG’s own use of AI and data analytics must be transparent, explainable, and demonstrably ethical. In short, it must practise what it preaches.
Still, the opportunity is enormous. If the CAG invests sincerely in its digital core, perhaps through an in-house AI and Data Analytics Lab staffed with domain specialists, coders, and public-policy analysts, it could lead the world in auditing algorithmic governance.
No other Supreme Audit Institution has yet mastered this frontier. India could. The foundation is already there: the CAG’s IS Audit Manual, its chairmanship of INTOSAI’s IT committee, and its proximity to MeitY and NIC ecosystems. What it lacks is the second gear: converting isolated digital experiments into an institutionalised, replicable audit methodology with measurable outcomes.
For instance, a credible AI audit would go beyond checking procurement and licensing. It would test training data provenance, model validation protocols, accuracy thresholds, error handling, citizen grievance pathways, and real-world outcomes versus promised efficiencies. It would assess whether government AI deployments respect the guardrails of the Digital Personal Data Protection Act and align with ethical frameworks like the OECD AI Principles or the EU AI Act. Most importantly, it would measure, with numbers, whether these technologies actually improve service delivery, inclusion, and fiscal efficiency.
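To make that concrete, here is a minimal, purely illustrative sketch of the kind of automated check an audit team might run: testing a deployed model’s accuracy against a declared floor and comparing error rates across groups for disparity. The data, field names, and thresholds are assumptions, not drawn from any actual government system.

```python
# Illustrative sketch only: a hypothetical audit check comparing a model's
# logged predictions against verified outcomes, testing an accuracy floor and
# a simple group-wise error-rate disparity. All names and thresholds are assumed.

from collections import defaultdict

def audit_accuracy_and_disparity(records, accuracy_floor=0.90, max_disparity=0.05):
    """records: list of dicts with 'prediction', 'actual', and 'group' keys."""
    correct = sum(1 for r in records if r["prediction"] == r["actual"])
    accuracy = correct / len(records)

    # Error rate per demographic or administrative group
    group_totals, group_errors = defaultdict(int), defaultdict(int)
    for r in records:
        group_totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            group_errors[r["group"]] += 1
    error_rates = {g: group_errors[g] / group_totals[g] for g in group_totals}
    disparity = max(error_rates.values()) - min(error_rates.values())

    return {
        "accuracy": accuracy,
        "meets_accuracy_floor": accuracy >= accuracy_floor,
        "group_error_rates": error_rates,
        "error_rate_disparity": disparity,
        "within_disparity_limit": disparity <= max_disparity,
    }

# Tiny synthetic sample of welfare-eligibility decisions (hypothetical)
sample = [
    {"prediction": 1, "actual": 1, "group": "rural"},
    {"prediction": 0, "actual": 1, "group": "rural"},
    {"prediction": 1, "actual": 1, "group": "urban"},
    {"prediction": 0, "actual": 0, "group": "urban"},
]
print(audit_accuracy_and_disparity(sample))
```

The point of such a check is not the code itself but the discipline: every threshold the auditor tests against must be one the deploying department has publicly committed to.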
The CAG’s entry into this arena must therefore be staged, not sudden. Begin with a handful of pilot audits: say, AI-based grievance portals in welfare ministries, machine-learning tax-risk engines in CBDT, or predictive analytics in agriculture.
Build audit tools that can trace data flows and simulate model outputs. Publish results openly—what worked, what failed, and what was learned. That transparency will build credibility faster than any seminar or slogan.
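As one illustration of such a tool, the sketch below (assuming a hypothetical log format and a stand-in scoring function) replays logged inputs through a model and flags cases where the simulated output diverges from the decision on record. It is a sketch of the replay idea, not an implementation of any department’s actual system.

```python
# Minimal sketch, under assumed interfaces: replay logged case inputs through a
# scoring function and flag divergences from the recorded decision.
# 'score_fn' and the log structure are hypothetical placeholders.

def replay_audit(logged_cases, score_fn, tolerance=0.0):
    """logged_cases: iterable of dicts with 'case_id', 'inputs', 'recorded_output'."""
    findings = []
    for case in logged_cases:
        simulated = score_fn(case["inputs"])
        if abs(simulated - case["recorded_output"]) > tolerance:
            findings.append({
                "case_id": case["case_id"],
                "recorded": case["recorded_output"],
                "simulated": simulated,
            })
    return findings

# Stand-in risk-scoring rule and two hypothetical logged cases
def score_fn(inputs):
    return 1.0 if inputs["declared_income"] < inputs["observed_spend"] else 0.0

logs = [
    {"case_id": "A-101", "inputs": {"declared_income": 5.0, "observed_spend": 9.0}, "recorded_output": 1.0},
    {"case_id": "A-102", "inputs": {"declared_income": 8.0, "observed_spend": 3.0}, "recorded_output": 1.0},
]
print(replay_audit(logs, score_fn))  # flags A-102 as a divergence
```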
CAG Must Be AI-Ready
For the CAG to credibly audit India’s AI revolution, it must first become AI-ready itself. This means establishing a dedicated Centre for AI and Algorithmic Accountability Audits, staffed with hybrid teams: auditors, technologists, ethicists, and statisticians. It must create and publish a National AI Audit Framework based on INTOSAI standards, NIST’s AI Risk Management Framework, and ISO/IEC 42001. It should pilot real audits with measurable performance baselines and outcome metrics.
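What might such a framework look like in practice? The fragment below is a hypothetical, machine-readable checklist organised around the NIST AI RMF’s four functions (Govern, Map, Measure, Manage); the individual checks are illustrative assumptions, not an official or exhaustive mapping.

```python
# Hypothetical, illustrative fragment of a machine-readable AI audit checklist.
# The grouping under NIST AI RMF functions is an assumed structure for illustration.

AI_AUDIT_CHECKLIST = {
    "Govern": [
        "Accountable owner identified for each deployed model",
        "Grievance and appeal pathway documented for affected citizens",
    ],
    "Map": [
        "Training data provenance and consent basis recorded",
        "Intended use and known limitations stated by the department",
    ],
    "Measure": [
        "Accuracy and error-rate disparity tested against declared thresholds",
        "Explainability evidence available for high-impact decisions",
    ],
    "Manage": [
        "Model retraining, rollback, and incident-response procedures in place",
        "Post-deployment outcomes compared against promised efficiencies",
    ],
}

for function, checks in AI_AUDIT_CHECKLIST.items():
    print(f"{function}: {len(checks)} checks")
```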
Equally important, the CAG must demonstrate its own transformation: real-time audit dashboards, NLP-based analytics for audit trails, and AI-driven fraud detection that shows tangible fiscal savings. Only when the auditor’s use of AI yields verifiable, peer-reviewed success can it claim moral and professional authority to audit the government’s AI systems.
India’s digital governance future will be judged not just by how boldly it uses AI, but by how bravely it audits it. The CAG stands at that moral intersection between innovation and integrity. Its next leap must not just be into technology, but into trust built on demonstrable data-driven competence. Only then will its audit of the AI revolution be professionally defensible, institutionally credible, and globally admired.
(This is an opinion piece, and views expressed are those of the author only)