Zero Marks, Zero Sense? How Percentiles Qualify Failure
From zero percentile to negative scores, a former CAG officer explains why ranking tools are being mistaken for standards—and why merit is quietly collapsing
By P. SESH KUMAR
New Delhi, January 16, 2026 — The recent spectacle of “zero percentile” and even “negative-score” candidates being declared eligible in national competitive examinations has left many parents, students, and professionals scratching their heads. How can someone who scores below zero still qualify? What exactly is a percentile—and how is it different from marks or grades? And how does this modern statistical vocabulary compare with the old, familiar Indian system of “first class,” “second class,” or “distinction”?
At first glance, the phrase “zero percentile” sounds terminal. To the untrained ear, it suggests total failure, the academic equivalent of being bowled out for a duck. Add to that the idea of negative marks still allowing participation, and the system begins to look not merely confusing but absurd.
Yet the truth is less mysterious and more revealing: the problem lies not in the mathematics, but in how statistical tools are being used, and misused, in high-stakes examinations.
To understand this, one must first grasp what a percentile really is. A percentile is not a measure of how much one knows; it is a measure of how one performed relative to others. If one is at the 60th percentile, it simply means he or she has performed better than 60 percent of the test-takers. It says nothing, by itself, about whether one’s absolute score was good, mediocre, or poor. In a very difficult exam where most candidates struggle, even a low raw score can translate into a respectable percentile. Conversely, in an easy exam, a decent-looking score may place one surprisingly low in percentile terms.
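The relative-standing arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration, assuming the common “share of test-takers scoring strictly below you” convention; actual exam boards publish their own exact normalisation formulas, and the scores below are invented for the example.

```python
def percentile(my_score, all_scores):
    """Percentile as relative standing: the percentage of test-takers
    whose score falls strictly below mine (one common convention)."""
    below = sum(1 for s in all_scores if s < my_score)
    return 100 * below / len(all_scores)

# The same raw score of 40 lands at very different percentiles
# depending on how the rest of the cohort performed.
hard_exam = [5, 8, 12, 15, 20, 25, 30, 35, 40, 45]   # most candidates struggle
easy_exam = [55, 60, 65, 70, 75, 80, 85, 90, 95, 99]  # most candidates do well

print(percentile(40, hard_exam))  # 80.0: a respectable standing
print(percentile(40, easy_exam))  # 0.0: bottom of the pack
```

The point the sketch makes is exactly the one in the text: the function never looks at the syllabus or at any fixed standard, only at the crowd.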
Now, let us consider what zero percentile means. It does not mean zero marks. It means that a candidate is at the bottom of the ranking list—that everyone else performed better. If the entire cohort performed poorly, the lowest scorer might still have answered some questions correctly, or even scored in positive territory. Zero percentile merely marks the statistical tail, not intellectual emptiness.
The real shock comes when negative marks enter the picture. Competitive exams often use negative marking to discourage guesswork. In such systems, a candidate who answers many questions incorrectly can end up with a negative total score. If enough candidates perform poorly, as can happen in very tough exams, then even those with negative scores can fall somewhere along the percentile distribution. When authorities declare that the qualifying percentile is zero, they are effectively saying: we are not excluding anyone based on relative performance. Eligibility becomes universal, regardless of how low the raw score dips.
This is where confusion erupts. Percentiles were never designed to certify competence. They are ranking tools, not quality stamps. Using them as qualifying thresholds, especially when set at zero, creates the illusion that failure has been rebranded as success.
To appreciate how different this is from older systems, let us recall the traditional Indian classification model. A student scoring 75 percent or more was said to have passed with distinction. Sixty percent earned a first class. Fifty percent meant second class. Forty percent was a bare pass. These labels were absolute. They told us, at a glance, how much of the syllabus a student had mastered. A student who scored 38 percent did not pass simply because others performed worse. Standards were fixed; outcomes were transparent.
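The fixed thresholds of the traditional model can be written down directly. The bands below are the ones named in the text; note that the function depends only on the candidate's own marks, never on the cohort.

```python
def classify(marks):
    """Absolute classification under the traditional Indian model
    (marks out of 100). The label is fixed by the marks alone."""
    if marks >= 75:
        return "Distinction"
    if marks >= 60:
        return "First class"
    if marks >= 50:
        return "Second class"
    if marks >= 40:
        return "Pass"
    return "Fail"

print(classify(75))  # Distinction, even if everyone else failed
print(classify(38))  # Fail, even if everyone else did worse
```

There is no second argument to pass in: peer performance simply has no way to enter the calculation, which is the transparency the article describes.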
Grades, which came later, softened this rigidity but retained the same core logic. An “A” grade corresponded to a defined band of performance, a “B” to another, and so on. While grading systems sometimes adjust boundaries based on cohort performance, they still aim to reflect absolute achievement, not merely relative standing.
Percentiles broke decisively from this philosophy. They emerged from large-scale competitive testing, where the goal was not to certify learning but to rank millions of candidates for a limited number of seats. In exams like the engineering entrance (JEE) and the medical entrances (NEET-UG and NEET-PG), percentiles helped manage scale. When two million students compete for a few thousand seats, ranking matters more than whether everyone crossed a common learning threshold.
Problems arise when this ranking logic is confused with qualification logic. In entrance exams for undergraduate courses, percentiles are used primarily to order candidates. Eligibility usually still requires crossing a basic marks threshold. In postgraduate and professional contexts, however, lowering percentile cut-offs to extreme levels blurs this distinction. The exam continues to rank, but it no longer filters.
The result is a semantic sleight of hand. A candidate is told they are “qualified,” not because they demonstrated sufficient mastery, but because the gate has been left open to avoid vacant seats. The exam remains national, the counselling remains centralized, but the idea of minimum competence quietly slips out the back door.
This is why comparisons with the old “first class” system feel jarring. Under that regime, no amount of peer underperformance could rescue a weak score. Under a pure percentile regime, performance is always relative, and in extreme cases, relative failure can still look like formal eligibility.
The deeper issue, therefore, is not whether percentiles are bad. They are powerful statistical tools when used for ranking. The issue is what happens when ranking tools are mistaken for standards. A thermometer can tell us who is colder than whom, but it cannot decide who is healthy. Similarly, a percentile can tell us where a candidate stands in a crowd, but it cannot, by itself, assure readiness for professional responsibility.
Percentiles, grades, and class divisions are not interchangeable languages; each was designed for a different purpose. The old class system measured mastery, grades softened that measurement, and percentiles ranked competitors at scale. Trouble begins when percentiles are asked to do what they were never meant to do: define minimum competence. A zero percentile does not mean zero intelligence, but it certainly signals zero filtering. When eligibility thresholds collapse into statistical technicalities, examinations risk losing their moral authority, even if their mathematics remains sound. The challenge before policymakers is not to abandon percentiles, but to remember what they can, and cannot, credibly represent.
Let us attempt a simple, story-driven explanation, using everyday illustrations, to show how percentiles work, how they differ from the percentage, class, and division systems, and where they help and where they dangerously mislead.
Let us imagine two classrooms.
In the first classroom, the teacher uses the old Indian system. The paper is out of 100 marks. If you score 75, you get distinction. If you score 60, you get first class. At 50, it is second class. At 40, you pass. Below that, you fail. What matters here is how much of the syllabus you actually mastered, not what others did. Even if everyone else failed, a student scoring 75 would still get distinction. The standard is fixed. Performance is absolute. This is how most of us grew up understanding merit.
Now shift to the second classroom, where the teacher announces something different. “I will not tell you whether you passed or failed based on marks. I will only tell you where you stand compared to others.”
This is the percentile system.
Suppose 100 students write a very tough paper. Most struggle. The highest scorer gets just 45 marks. The lowest gets minus 20. You score 10 marks. In percentage terms, 10 out of 100 looks terrible. Under the old system, you would fail without discussion.
But under the percentile system, the teacher lines everyone up from best to worst. If your 10 marks are better than what 60 students scored, you are placed at the 60th percentile. Suddenly, you look “above average” despite knowing very little of the syllabus. Nothing about your knowledge changed; only the reference point did.
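The classroom arithmetic above can be checked with an illustrative cohort. The individual scores below are invented; only the counts from the story are preserved: 100 students, 60 of them below your 10 marks, a topper at 45, and a bottom scorer at minus 20.

```python
def percentile(my_score, all_scores):
    # Share of test-takers scoring strictly below me (one common convention).
    below = sum(1 for s in all_scores if s < my_score)
    return 100 * below / len(all_scores)

# Illustrative cohort of 100 for the tough paper:
# 60 students below 10 marks, me at 10, 38 above me, topper at 45.
cohort = [-20] * 10 + [0] * 50 + [10] + [20] * 38 + [45]

print(percentile(10, cohort))  # 60.0: "above average" on 10 out of 100 marks
```

Ten marks out of a hundred, and yet the output reads 60.0, because sixty others did worse.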
This is the first and most important insight:
Percentile does not measure knowledge. It measures position in a crowd.
Now let us consider an even stranger case.
Suppose the paper is brutally difficult. Many students guess and lose marks. The bottom scorer gets minus 40. You get minus 10. Under common sense, minus 10 is failure. But in percentile terms, if 30 students scored worse than you, you are at the 30th percentile. If the authority declares that the qualifying percentile is zero, then even the last-ranked candidate, minus 40 included, is “eligible.”
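The same sketch shows how a negative score clears a zero-percentile bar. Again, the individual scores are invented; only the counts from the story are kept: 30 students below your minus 10, and a bottom scorer at minus 40.

```python
def percentile(my_score, all_scores):
    # Share of test-takers scoring strictly below me (one common convention).
    below = sum(1 for s in all_scores if s < my_score)
    return 100 * below / len(all_scores)

QUALIFYING_PERCENTILE = 0  # the contested policy choice

# Illustrative cohort of 100: 30 students score worse than my -10.
cohort = [-40] * 5 + [-25] * 25 + [-10] + [5] * 69

mine = percentile(-10, cohort)   # 30.0
worst = percentile(-40, cohort)  # 0.0: someone has to come last

print(mine >= QUALIFYING_PERCENTILE)   # True: minus 10 "qualifies"
print(worst >= QUALIFYING_PERCENTILE)  # True: even minus 40 "qualifies"
```

With the threshold at zero, the comparison can never fail for anyone: the gate is open by construction, which is precisely the “zero filtering” the article describes.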
This is how a negative score can still qualify. It is not magic. It is mathematics applied without judgment.
Now contrast this with the grade system, which sits somewhere between marks and percentiles. Grades group performance into bands (A, B, C, D), often mapped loosely to percentages. Grades soften precision but still aim to reflect absolute achievement. If the entire class performs poorly, the grades may look bad across the board, but no one pretends that low achievement equals competence. Grades still whisper, “You need to know this much.”
Percentiles whisper something very different: “You just need to be better than someone else.”
That difference matters enormously.
Percentiles are powerful when the goal is ranking, not qualification. In massive exams with millions of candidates and limited seats, like the engineering and medical entrances, percentiles help answer one question efficiently: who should be called first? They are good for tie-breaking, ordering, and large-scale comparisons across shifts and papers of varying difficulty.
But percentiles are deeply flawed when used to answer a different question: Is this candidate ready?
Here lies the biggest pitfall.
In the old class or percentage system, standards are anchored to the subject. Forty percent means you understood roughly forty percent of what was taught. That may be low, but it has meaning. In the percentile system, zero percentile does not mean zero knowledge; it only means “someone had to come last.” If standards are not layered on top of percentiles, the system loses its moral compass.
Another pitfall is false comfort. A student at the 70th percentile may believe they are strong, even if the entire cohort performed poorly. Conversely, a genuinely competent student may feel like a failure if placed at the 40th percentile in a very strong cohort. Percentiles distort self-assessment because they replace learning with competition.
Yet percentiles do have advantages.
They are fairer across papers of unequal difficulty. They reduce the tyranny of one tricky question ruining a career. They handle scale elegantly. They prevent obsession over one or two marks. When used correctly, on top of minimum qualifying standards, they combine fairness with competition.
The danger begins when percentiles replace standards instead of sitting above them.
We may think of it this way. Marks and grades ask: How much do you know?
Percentiles ask: Where do you stand?
A healthy system asks both questions. A broken one asks only the second.
When eligibility is reduced to a percentile alone, especially to zero, the exam stops being a gatekeeper and becomes a crowd-management tool. Everyone gets through the gate; the sorting happens later, quietly, and often painfully.
In professional education, especially in fields that deal with human lives, this distinction is not academic. It is existential.
Percentiles are rulers. They can measure distance between people. They cannot tell us whether the bridge ahead is safe to cross.
(This is an opinion piece. Views expressed are the author’s own.)