NEET Rankings in a Black Box: Algorithms Without Auditors
The NEET algorithmic black box now decides merit, rank, and medical futures—without audits, transparency, or accountability
By P. SESH KUMAR
New Delhi, January 17, 2026 — India’s largest and most consequential examinations, NEET-UG and NEET-PG, are now mediated almost entirely by algorithm-driven systems that calculate percentiles, ranks, eligibility, counselling order, and seat allocation. Yet these algorithms operate in a regulatory vacuum. Unlike financial algorithms, which are subject to model validation and supervisory audits, exam and ranking algorithms remain opaque, unaudited, and immune from systematic scrutiny. The recent relaxation of the NEET-PG cut-off to the zero percentile, which qualified even candidates with negative raw scores, has brought this opacity into sharp public focus, raising uncomfortable questions about fairness, transparency, and accountability.
India likes to believe that its national examinations are temples of merit: cold, objective, incorruptible. Once the paper is written, we assume the mathematics will take over and truth will emerge, neatly ranked and unquestionable. But the uncomfortable reality is this: today, merit is not just tested, it is computed. And the computation happens inside algorithmic systems that no one audits, no one independently verifies, and no one is legally required to explain.
The recent NEET-PG controversy—where the qualifying cut-off was lowered to the zero percentile, allowing even candidates with negative raw scores to qualify—did not merely expose policy desperation to fill seats. It exposed something deeper and far more dangerous: how completely India has outsourced trust to black-box ranking algorithms without building any corresponding audit architecture.
Percentiles, ranks, tie-breakers, normalisation across shifts, eligibility thresholds, and counselling order are all outcomes of algorithmic decisions. Yet candidates are told to accept these outcomes on faith. When anomalies arise, the response is procedural reassurance, not forensic explanation. The system asks millions of aspirants to trust the math while refusing to open the math to scrutiny.
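Stripped to its essentials, the arithmetic is simple. Here is a minimal sketch, assuming the percentile rule commonly set out in exam brochures (NTA’s NEET-UG documents describe it this way): a candidate’s percentile is the percentage of all test-takers who scored at or below them. The scores below are made up for illustration, but the logic shows why a zero-percentile cut-off excludes no one who sat the exam, negative marks included.

```python
# A minimal sketch, assuming the commonly published percentile rule
# (e.g., NTA's NEET-UG brochure): a candidate's percentile is the
# percentage of all test-takers who scored at or below them.
# The raw scores below are hypothetical, not real NEET data.

def percentile(candidate_score: float, all_scores: list[float]) -> float:
    """100 * (candidates scoring <= this score) / (total candidates)."""
    at_or_below = sum(1 for s in all_scores if s <= candidate_score)
    return 100.0 * at_or_below / len(all_scores)

scores = [-4, 12, 12, 55, 310, 455, 455, 602]   # hypothetical cohort
for s in sorted(set(scores)):
    print(f"raw score {s:>4} -> percentile {percentile(s, scores):6.2f}")

# Even the candidate with -4 marks lands at percentile 12.50 here,
# so a qualifying cut-off of "zero percentile" turns away no one.
```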
This is where the problem begins.
In traditional governance, any system that allocates scarce public goods—jobs, licences, admissions—must be auditable. Accounts are audited. Procurement decisions are audited. Even exam conduct is audited at the centre level. But the algorithm that converts answers into destiny remains sacrosanct, shielded by technical jargon and institutional authority.
In India, there is no statutory requirement for algorithm audits in examination systems. Bodies conducting or supervising exams may publish information brochures and formula outlines, but the actual implementation—the code, the logic paths, the edge-case handling—remains invisible. When outcomes are challenged, courts often defer to “expert bodies,” and the loop closes without transparency.
This regulatory gap becomes more troubling when viewed alongside how algorithm audits function elsewhere.
In the financial sector, for instance, lending and credit-scoring algorithms used by banks and fintechs are subject to oversight by the Reserve Bank of India. Models must be validated, bias risks assessed, and outcomes explainable. If an algorithm systematically disadvantages a group or produces unexplained distortions, regulators can intervene. The principle is simple: when algorithms affect livelihoods, trust cannot be blind.
In contrast, examination algorithms affect futures—careers, income trajectories, social mobility—yet face no comparable scrutiny.
Competition and consumer law offer only partial remedies. The Competition Commission of India can examine algorithmic conduct when it leads to abuse of dominance or unfair trade practices. But this is ex-post and market-oriented. It is not designed to audit ranking logic or fairness in public examinations. Similarly, digital governance under the Ministry of Electronics and Information Technology focuses on content moderation, cybersecurity, and data protection, not on algorithmic fairness in public decision-making systems.
As a result, exam algorithms fall between regulatory stools. They are not markets. They are not financial products. They are not consumer interfaces in the traditional sense. They are sovereign tools that operate with the force of law but without the discipline of audit.
The NEET-UG and NEET-PG ranking imbroglio makes this painfully visible. Candidates see wild variations in percentile outcomes for small score differences. They see negative scores translate into eligibility. They see ranks reshuffled by normalisation formulas that are explained in theory but never audited in practice. When challenged, authorities insist that “the algorithm is standard” or “the process is as per notification.” But standardisation without verification is not transparency—it is ritual.
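A toy simulation makes the first complaint concrete. Where raw scores cluster densely, as they do around the middle of any large cohort, one extra mark can move a candidate roughly a full percentile point, which in a field of lakhs translates into thousands of rank places; the same mark in the sparse top band barely registers. The distribution below is synthetic, not actual NEET data, but the distributional effect is the point.

```python
# Hypothetical illustration: clustered scores make percentiles jump.
# 100,000 synthetic candidates drawn from a bell curve; not NEET data.
import random

random.seed(0)
scores = [round(random.gauss(300, 40)) for _ in range(100_000)]

def percentile(x: float, pool: list[int]) -> float:
    return 100.0 * sum(1 for s in pool if s <= x) / len(pool)

# One extra mark in the dense middle band vs the sparse top band
for a in (300, 430):
    gain = percentile(a + 1, scores) - percentile(a, scores)
    print(f"{a} -> {a + 1}: +{gain:.3f} percentile points for one mark")
```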
The danger here is not merely academic confusion. It is systemic unfairness that scales silently. Algorithms can embed biases without intent. They can amplify edge-case distortions. They can privilege certain score distributions over others. And once embedded, they affect lakhs of candidates simultaneously. Unlike a human examiner, an algorithm does not explain itself unless forced to.
This is why the argument that “algorithms are neutral” no longer holds. Neutrality is not a property of code; it is a property of oversight.
The irony is that India already understands this logic in other domains. Electronic voting machines (EVMs) are subject to verification protocols. Financial statements undergo statutory audits. Even government schemes are audited for outcomes and leakages. Yet the ranking engines that decide who becomes a doctor, engineer, or specialist operate on trust alone.
The recent cut-off relaxation exposes another uncomfortable truth. Algorithms are now being used not just to rank merit, but to manage embarrassment. When seats remain vacant, instead of questioning whether seat capacity, college quality, or fee structures are flawed, the system tweaks algorithmic thresholds. The math is bent to serve policy optics. Without audit trails, this bending leaves no fingerprints.
This is where algorithm opacity becomes dangerous. It allows policy failure to masquerade as technical adjustment. It converts governance problems into statistical footnotes.
Way Forward: Auditing the Invisible State
India does not need to fear algorithms. It needs to fear unaudited algorithms.
The first step is to recognise exam algorithms as public decision systems, not mere technical tools. Any algorithm that determines eligibility, rank, or allocation in national examinations should be subject to independent audit, much like financial models or procurement systems.
Second, India must move from disclosure to verifiability. Publishing formulas is not enough. Independent experts should be able to test algorithms using anonymised data to check for distortions, bias, and edge-case unfairness.
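What would such verification look like in practice? The sketch below shows two invariant checks an independent auditor could run on an anonymised result dump: that a strictly higher raw score never receives a worse rank, and that published percentiles can be reproduced from raw scores. The field names (raw_score, published_rank, published_percentile) are hypothetical; a real audit would work against whatever schema the conducting body exports.

```python
# Sketch of invariant checks an independent auditor might run on an
# anonymised result dump. Field names are hypothetical placeholders.

def audit(records: list[dict]) -> list[str]:
    findings = []

    # Check 1: monotonicity. A strictly higher raw score should never
    # carry a strictly worse (numerically larger) rank. Ties are sorted
    # by rank so group-level inversions surface at adjacent pairs.
    by_score = sorted(records,
                      key=lambda r: (-r["raw_score"], r["published_rank"]))
    for better, worse in zip(by_score, by_score[1:]):
        if (better["raw_score"] > worse["raw_score"]
                and better["published_rank"] > worse["published_rank"]):
            findings.append(
                f"rank inversion between scores {better['raw_score']} "
                f"and {worse['raw_score']}")

    # Check 2: reproducibility. Recompute each percentile from raw
    # scores and flag records that diverge from the published figure.
    scores = [r["raw_score"] for r in records]
    for r in records:
        expected = 100.0 * sum(s <= r["raw_score"] for s in scores) / len(scores)
        if abs(expected - r["published_percentile"]) > 0.01:
            findings.append(
                f"percentile mismatch at rank {r['published_rank']}: "
                f"published {r['published_percentile']}, "
                f"recomputed {expected:.2f}")

    return findings
```

Run over a full anonymised dump, checks like these either come back empty, which strengthens trust, or produce exactly the forensic explanation candidates are currently denied.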
Third, there must be a clear legal mandate for algorithm accountability in public examinations. Without statutory backing, transparency will remain discretionary and defensive.
Finally, algorithm audits should be seen not as adversarial, but as legitimacy-enhancing. In an era of intense competition and fragile trust, sunlight strengthens institutions; secrecy weakens them.
If algorithms now decide merit, then merit deserves an audit.
(This is an opinion piece. Views expressed are the author’s own.)