If you test positive do you have the disease?
The honest answer is: maybe. No diagnostic test is 100 per cent accurate. But if you are not showing any symptoms, and have no reason to believe you have been exposed to the virus, getting yourself tested just to check your status may not be advisable. You might be surprised by how large the probability of being incorrectly identified as positive can be in such a case. The key, as you will see later, is the prevalence of the disease, that is, how widespread the disease is in the community.
How accurate are tests?
No test is 100 per cent accurate. Diagnostic tests, including those used to detect COVID-19, have a margin of error and often generate results that are false positives or false negatives. A false positive means the test incorrectly labels a healthy person as diseased; a false negative means the test incorrectly marks a diseased person as healthy.
Any test has to accomplish two goals: correctly identify the diseased individuals, and correctly identify the healthy ones. The two goals are distinct, and one can be achieved without the other. To illustrate the point, consider an extreme example: imagine a test that labels everyone as diseased. This test would correctly identify every diseased person, but it would fail to clear a single healthy one. It would look remarkably good if a large majority of those taking it were actually sick, and fail miserably if most of them were healthy.
The accuracy of a test is therefore measured by two parameters, aligned with the two goals. Sensitivity is the ability of the test to correctly identify diseased individuals. A highly sensitive test correctly picks up a large percentage of the diseased people. It is calculated as the number of positives detected out of the number of diseased individuals tested. So, if out of 100 people who take the test, 10 are sick, and nine of them are correctly identified by the test, the sensitivity is 90 per cent.
The second measure, the ability to correctly identify healthy individuals, is called specificity. It is calculated analogously: the number of negatives detected out of the number of healthy individuals tested. Most tests have different sensitivity and specificity values.
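The two definitions can be written down in a few lines of Python (an illustrative sketch; the function names are my own, not from any testing standard):

```python
def sensitivity(true_positives, diseased_tested):
    """Fraction of diseased people the test correctly flags as positive."""
    return true_positives / diseased_tested

def specificity(true_negatives, healthy_tested):
    """Fraction of healthy people the test correctly clears as negative."""
    return true_negatives / healthy_tested

# The worked example above: 10 sick people tested, nine flagged.
print(sensitivity(9, 10))   # 0.9, i.e. 90 per cent
```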
Do I believe my test result?
Let us imagine a test with 90 per cent sensitivity and 95 per cent specificity. That is, the test correctly identifies a diseased person 90 per cent of the time, and correctly identifies a healthy person 95 per cent of the time.
The test is conducted in a population that has an overall disease prevalence of 5 per cent (meaning, 5 per cent of the population is infected). This is a crucial parameter. We often do not know the true spread of the disease in a community or population group, so an estimate is typically used for prevalence.
Now let us test 1,000 people from this group. By our assumptions, 950 of these people would be healthy and only 50 (5 per cent disease prevalence) would be infected. The table below shows the test results for these 1,000.

                 Diseased   Healthy    Total
Test positive        45        47        92
Test negative         5       903       908
Total                50       950     1,000

Out of the 50 diseased people, this test correctly identifies 45, and marks five as negative. And among the 950 healthy people, 903 (95 per cent) are correctly identified, while 47 are incorrectly flagged as infected.
Although the actual number of diseased people is only 50, this test marks 92 people (45 true positives plus 47 false positives) as infected. A person who receives a positive result is therefore only 49 per cent likely to have the disease, because only 45 of the 92 positive results (49 per cent) are correct. This ratio is called the positive predictive value, or post-test probability of disease, and indicates how well a positive test predicts actual sickness. It should be as high as possible to inspire confidence in a positive result.
Similarly, there is a negative predictive value and it indicates confidence in a negative result. In this example, a negative result has a 99.4 per cent probability of being accurate (903 out of 908 negative results are actually healthy).
One insightful way to interpret these results is to compare the probability of being sick before and after the test. Before the test is taken, the probability of a person being sick (called pre-test disease probability) is 5 per cent – same as the disease prevalence. A positive test increases this probability to 49 per cent (post-test disease probability).
Interestingly, the probability of being healthy also increases with this test. Before the test, the probability of being healthy is 95 per cent. A negative result increases this probability to 99.4 per cent.
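The whole worked example can be reproduced with a short Python sketch (the function and parameter names are illustrative). Because it carries the fractional counts through without rounding, its results differ from the rounded table figures only in the decimals:

```python
def predictive_values(prevalence, sens, spec, n=1000):
    """Return (positive predictive value, negative predictive value)."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sens           # true positives
    fn = diseased - tp             # false negatives
    tn = healthy * spec            # true negatives
    fp = healthy - tn              # false positives
    ppv = tp / (tp + fp)           # post-test probability of disease
    npv = tn / (tn + fn)           # confidence in a negative result
    return ppv, npv

ppv, npv = predictive_values(prevalence=0.05, sens=0.90, spec=0.95)
print(f"PPV: {ppv:.0%}, NPV: {npv:.1%}")   # roughly 49% and 99.4%
```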
So, this test increases the confidence in either diagnosis – though the confidence level in a positive result from this test might not be satisfactory. A good test is one that raises this confidence to very high levels.
How can false results be reduced?
In this scenario, the high number of false positives, 47, resulted from the combination of 95 per cent specificity and 950 healthy individuals; in other words, the disease prevalence was low. In most contexts, 95 per cent accuracy would be considered good enough, but in a situation with low disease prevalence, a test of much higher specificity is required. To push the false positives down to, say, 10, a test with 99 per cent specificity is needed.
Increasing specificity is one way to reduce false positives. The other lever, mathematically, is to reduce the relative number of healthy people tested. This can be achieved by testing a population group in which the disease is much more widespread. That does not mean waiting for the disease to spread, but limiting testing to a sub-group that is more likely to have the disease: the primary contacts of infected people, for example, or health workers. This has the effect of increasing the disease prevalence in the tested group.
Repeating the above calculations with a population of 500 that includes 50 diseased people (a prevalence of 10 per cent) shows that the false positives come down to 22, and the positive predictive value (confidence in a positive result) rises from 49 per cent to 67 per cent.
In summary, false positives are reduced with two levers: a high specificity test, and a high disease prevalence setting (high pre-test disease probability).
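The two levers can be compared directly with the same arithmetic as before (a sketch; the figures are those used in the article):

```python
def false_positives_and_ppv(prevalence, sens, spec, n):
    """False positives and positive predictive value for a tested group."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sens
    fp = healthy * (1 - spec)
    return fp, tp / (tp + fp)

# Baseline: 1,000 people, 5% prevalence, 90% sensitivity, 95% specificity.
fp_base, ppv_base = false_positives_and_ppv(0.05, 0.90, 0.95, 1000)

# Lever 1: a 99% specificity test in the same population (~10 false positives).
fp_spec, ppv_spec = false_positives_and_ppv(0.05, 0.90, 0.99, 1000)

# Lever 2: same test, but a group of 500 where prevalence is 10%
# (~22 false positives, PPV rises to about 67%).
fp_prev, ppv_prev = false_positives_and_ppv(0.10, 0.90, 0.95, 500)
```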
Similar logic indicates that false negatives are reduced with a high sensitivity test used in a situation with low disease prevalence.
How accurate are the tests currently in use?
In the COVID-19 setting, two kinds of tests are being used: RT-PCR tests to detect infections, and antibody tests to estimate disease prevalence (antigen and other diagnostic tests are not considered here). Accuracy data for these tests is not widely known. A recent study evaluated RT-PCR kits from 22 companies in laboratory settings and found sensitivity upwards of 90 per cent and specificity of at least 95 per cent (data from finddx.org). Laboratory results are usually better than clinical results (tests on samples collected from individuals); sample collection methodology, among other factors, affects these numbers.
The antibody kits developed in India have 92.4 per cent sensitivity and 97.9 per cent specificity. The Indian Council of Medical Research completed a seroprevalence study in May, presumably using these kits. This study estimated the disease spread to be 0.73 per cent. As we have seen above, at such low prevalence, false positives will be a problem. Calculations using these test parameters with one per cent disease prevalence show that 70 per cent of all positive results would be false. Since the objective of a seroprevalence study is to estimate the overall disease spread, options such as limiting the size of the population group are not available. Therefore, further analysis and interpretation of the calculated prevalence rate are required.
What is the takeaway for individuals who want to know their status?
For an individual, there can be an understandable urge to "test and just check", to reduce uncertainty by seeking the comfort of an anticipated negative result (via the RT-PCR test for active infection). The prudent decision is to counter this impulse with the knowledge that testing will not provide certainty unless it is conducted in the correct context. For asymptomatic individuals without any known risk of exposure, the pre-test probability of disease is low, and the chance of a false positive in such a setting is high. Asymptomatic individuals should therefore opt for a test only if they have solid reason to fear exposure. If a person is required to undergo testing, the individual should, if possible, ask for the testing kit with the highest available specificity.
The reasoning is not limited to COVID-19 tests and applies to any diagnostic test: testing asymptomatic individuals "just to check" leads to false positives when disease prevalence is low, and is not recommended. Adherence to this principle by the overall health care community will result in a higher proportion of true positives among all positive results.
Dr Tushar Gore’s focus area is pharmaceuticals. He studied at IIT-Bombay and the University of Minnesota, and has worked at McKinsey and Novo Nordisk. He is former MD/CEO at Resonance Laboratories, a niche pharmaceuticals manufacturer.