Reference: Deeks JJ, et al. Antibody tests for identification of current and past infection with SARS-CoV-2. Cochrane Database Syst Rev. 2020 Jun 25;6:CD013652
RT-PCR testing for COVID-19 has variable cost, sometimes lengthy turnaround times, and disappointing predictive values, leading to high hopes for testing that is cheaper, faster, and can more accurately diagnose symptomatic and previously exposed individuals. A recent systematic review and meta-analysis from the Cochrane group provides plenty of detail about the diagnostic accuracy of various COVID-19 antibody tests compared to RT-PCR as a reference standard, but sifting through the 310-page document to figure out what it means for clinical practice is challenging. So here are the five most important takeaways about COVID-19 antibody testing for clinical practice:
1. Antibody testing is not useful for diagnosis of active infection.
Sensitivities for the IgA, IgM, IgG, total antibody, and combined IgM/IgG tests studied were all less than 30% in the first week after symptom onset. As an EBM concept refresher, sensitivity describes how good a test is at detecting a condition when the condition is truly present; only a highly sensitive test can be used to rule out a condition, because only then does a negative result make the condition unlikely. A 30% sensitivity is not good, and translates to many false negative results. Antibody testing can’t be used to rule out active infection.
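To make that concrete, here is a minimal sketch in Python (our illustration using the review’s numbers, not something from the review itself) of what 30% sensitivity means for a cohort of truly infected patients:

```python
# With 30% sensitivity, the test detects only 30% of true infections.
sensitivity = 0.30   # upper bound reported in the first week after symptom onset
infected = 1000      # hypothetical cohort, all truly infected

detected = sensitivity * infected       # 300 true positives
missed = (1 - sensitivity) * infected   # 700 false negatives

print(f"detected: {detected:.0f}, missed: {missed:.0f}")
# detected: 300, missed: 700 -> a negative result cannot rule out infection
```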
2. Antibody testing is at best moderately useful for diagnosing prior infection, but pre-test probability (prevalence and symptoms) is critical to test accuracy.
The best-case scenario for IgG antibody testing in this analysis was a sensitivity of 88.2% and a specificity of 99.1% when patients were tested 15-21 days after symptom onset. Assuming a prevalence of 50%, these test characteristics translate to a positive predictive value (PPV) of 99% and a negative predictive value (NPV) of 89%, which sound pretty great. At the time of writing, the case rate was 2.7% in Miami-Dade County, Florida, a place that recently got a lot of media attention for its high prevalence. (Whether case rate really equates to prevalence here is a separate conversation.) At a prevalence of 2.7%, the PPV drops to 73%, which leaves about 9 false positive results per 1,000 people tested (not as great). PPV and NPV take pre-test probability (often approximated by prevalence) into account, which is critically important when considering test reliability. This pre-test probability, however, can reflect more than just geographic prevalence. For example, a nursing home or school with a COVID-19 outbreak may well have a prevalence of 50% or greater, even if the state or city prevalence is low. Likewise, an asymptomatic population has a lower pre-test probability, so testing it will yield more false positives than testing a symptomatic population from the same geographic area.
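For readers who want to check the arithmetic, here is a minimal Python sketch using the standard Bayes relationships between sensitivity, specificity, and prevalence; the numbers are the point estimates quoted above, and the function names are ours:

```python
def ppv(sens, spec, prev):
    """Positive predictive value: true positives / all positives."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prev):
    """Negative predictive value: true negatives / all negatives."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

sens, spec = 0.882, 0.991  # IgG, 15-21 days after symptom onset

# 50% prevalence (e.g., an outbreak setting)
print(f"PPV {ppv(sens, spec, 0.50):.0%}, NPV {npv(sens, spec, 0.50):.0%}")
# PPV 99%, NPV 89%

# 2.7% prevalence (the Miami-Dade case rate above)
print(f"PPV {ppv(sens, spec, 0.027):.0%}")  # PPV 73%
print(f"false positives per 1,000 tested: {(1 - spec) * (1 - 0.027) * 1000:.0f}")
# about 9
```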
3. We don’t really know what to do with positive test results, especially when prevalence is low.
For a low-prevalence region like Charlottesville, Virginia, where the case rate was 661/100,000 (about 0.66%) at the time of writing, the same positive test result would have a PPV of 40%. To state the obvious, that means a positive antibody test result would be a false positive more often than a true positive. Falsely diagnosing COVID-19 might not raise as many flags at first pass as the idea of missing cases that do exist, but it is certainly not without important consequences. Furthermore, when test results really are positive, does it mean that the person is still contagious and needs to isolate for 10 days, after which their family needs to quarantine for 14 more days? Or does it mean that they have immunity? There are no clear answers to those questions.
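Plugging the Charlottesville case rate into the same formula (again a sketch, treating the case rate as prevalence):

```python
sens, spec, prev = 0.882, 0.991, 661 / 100_000  # case rate as prevalence

true_pos = sens * prev
false_pos = (1 - spec) * (1 - prev)
print(f"PPV {true_pos / (true_pos + false_pos):.1%}")
# PPV 39.5% -- about 40%: a positive is false more often than it is true
```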
4. We can probably trust a negative COVID-19 antibody result if prevalence is low.
COVID-19 antibody tests perform best when the result is negative and pre-test probability is low. For a prevalence of 2.7%, we would expect an NPV of 99%. We can be pretty sure that a patient with a negative test has not been exposed, as long as the sample was drawn at least a week (and ideally 15-21 days) after symptom onset.
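The corresponding check for the NPV claim, under the same assumptions as the sketches above:

```python
sens, spec, prev = 0.882, 0.991, 0.027  # 2.7% prevalence

true_neg = spec * (1 - prev)
false_neg = (1 - sens) * prev
print(f"NPV {true_neg / (true_neg + false_neg):.1%}")
# NPV 99.7% -- a negative result is very likely a true negative
```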
5. Antibody testing outside of the 15-21 day interval after symptom onset yields less reliable results.
Testing was assessed during various intervals from day 1 to day 35 after symptom onset. For every antibody test analyzed in this systematic review, all test characteristics were worse than the best-case scenario described above when patients were sampled outside the 15-21-day window. The bottom line: if you aren’t testing in the 15-21-day window after symptom onset, the results are even less reliable.
As recent discoverers of the musical Hamilton (we know, we know, but it’s hard to keep up with all the medical literature AND pop culture!), we feel compelled to ask, “If you stand for nothing, what’ll you fall for?” Strong test characteristics are the foundation of the diagnostic efficacy of any test; without reliable predictive values, a test cannot diagnose a condition or lead to changes in management or outcomes. The currently available COVID-19 antibody tests shouldn’t be used to diagnose active infection and at best have moderate positive predictive value for diagnosing prior infection, but only in symptomatic patients with a high pre-test probability of disease, and possibly only in a small window after symptom onset (15-21 days). You can probably trust a negative result in a patient with low pre-test probability, but otherwise antibody tests have limited usefulness at the point of care and may be most useful for public health purposes.
For more information, see the topic COVID-19 (Novel Coronavirus) in DynaMed.
DynaMed EBM Focus Editorial Team
This EBM Focus was written by Katharine DeGeorge, MD, MS, Associate Professor of Family Medicine at the University of Virginia and Clinical Editor at DynaMed. Edited by Alan Ehrlich, MD, Executive Editor at DynaMed and Associate Professor in Family Medicine at the University of Massachusetts Medical School, and Dan Randall, MD, Deputy Editor for Internal Medicine at DynaMed.