
13 January 2022 Blog Post: On Antigen Testing (Our Best Tool)
Two interesting, and diametrically opposed, articles came up on my Doximity feed (Doximity is a networking platform for health care professionals) today. The first, from STAT News, is entitled “Study Raises Doubts About Rapid Covid Tests’ Reliability in Early Days After Infection”; the second, published in JAMA, is “False-Positive Results in Rapid Antigen Tests for SARS-CoV-2.”
The STAT News article (link: https://www.statnews.com/2022/01/05/study-raises-doubts-about-rapid-covid-tests-reliability-in-early-days-after-infection/) highlights an issue that has become popularized lately, namely that rapid antigen tests are ‘not good’ or ‘not as good’ at detecting the Omicron variant. The study in question evaluated an underwhelming 30 individuals from theaters and offices who were being tested daily with both rapid antigen tests and PCR. On Days 0 and 1 following a positive PCR test, all of the antigen tests produced negative results. Further, in four cases researchers showed that infected individuals transmitted the virus to others before returning a positive rapid antigen test.
The second article, published in JAMA (link: https://jamanetwork.com/journals/jama/fullarticle/2788067), evaluated 903,408 rapid antigen tests conducted across 537 workplaces throughout Canada. Of these, only 0.05% were erroneously (falsely) positive. More than half of the false-positive results occurred with a single batch of Abbott’s Panbio COVID-19 Ag Rapid Test Device – suggesting a manufacturing error.
Test performance, like many things, is a classic case of taking the good with the bad. But rapid antigen tests have been used inappropriately throughout the pandemic, and when they underperform they are written off as ‘not good enough.’ Take, for instance, the Trump Administration’s misuse of the Abbott ID NOW rapid testing platform (a rapid molecular, rather than antigen, test), which it began using in the spring of 2020 in lieu of masking, contact tracing, or any other protocols. The blind acceptance of rapid testing as a stop-gap measure for bad behavior ultimately led to the October 2020 outbreak and President Trump’s infection. At the time, Dr. Susan Butler-Wu, a clinical microbiologist at USC, noted that even if Abbott’s tests had performed within their limits, “it’s statistically impossible it wouldn’t have missed some infections.” The Abbott system was then only authorized for use among symptomatic individuals within 7 days of the start of symptoms.
Rapid antigen tests today carry an expanded indication for detecting the virus in those without symptoms, but there is an optimal testing window: within 5-7 days of exposure to a known case, or at any point once a patient is symptomatic. The presence of symptoms alone makes the testing more accurate. Part of what makes the spread of SARS-CoV-2 so difficult to manage is that patients can spread the virus before becoming symptomatic. This distinguishes COVID-19 from other infections, like influenza, where transmissibility largely begins once a patient is ill. It isn’t surprising, then, that rapid antigen testing is less reliable at this interface. Less surprising still is that rapid antigen tests “fail” when the population relying on testing instead engages in activities that leave them at higher risk of becoming infected.
But here is what is really galling about the 30-person study STAT News covered – researchers note that all antigen tests produced negative results on Days 0 and 1 following a positive PCR test. But guess what, in reality, takes 2, 3, 4 and sometimes 5 days to return? A PCR result! So a fair comparison of rapid antigen performance really only begins after Day 2, because a PCR is simply not actionable until it returns.
The JAMA study highlights the other side of the accuracy coin, namely the very low false-positive rate. Only 0.05% of all tests returned a false positive, which means a positive result is overwhelmingly likely to be a true positive – actionable information permitting the test-taker to appropriately isolate and prevent further spread of the infection. The sooner that information is delivered (in 20 minutes, as opposed to 3 days), the more impactful it becomes from a public health perspective.
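The arithmetic behind that claim is worth making explicit. A minimal sketch in Python, using the test count and false-positive rate from the JAMA summary above; the 80% sensitivity and 2% prevalence figures are purely illustrative assumptions, not values from the study:

```python
# Back-of-the-envelope check of the JAMA numbers: 903,408 rapid antigen
# tests with a 0.05% false-positive rate.
total_tests = 903_408
false_positive_rate = 0.0005  # 0.05%

expected_false_positives = total_tests * false_positive_rate
print(f"~{expected_false_positives:.0f} false positives expected across all tests")

# Why a low false-positive rate matters: even at modest prevalence,
# a positive result is very likely real (high positive predictive value).
# Sensitivity and prevalence below are illustrative assumptions only.
def positive_predictive_value(sensitivity, fp_rate, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = fp_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.80, fp_rate=0.0005, prevalence=0.02)
print(f"PPV at 2% prevalence: {ppv:.1%}")  # roughly 97%
```

In other words, out of nearly a million tests, only a few hundred false alarms would be expected, and under plausible conditions the vast majority of positive results reflect true infections.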
In our own clinic, as well as in large testing efforts for school reopening, I have had no difficulty with rapid antigen test accuracy. It is important, however, to highlight some of the subtlety in reading results. The picture below shows a side-by-side comparison of two positive tests, one obvious and the other less so:

The positive test line on the right is admittedly very faint, but the tip-off is that it extends all the way across the lateral flow strip. In addition to visual inspection, the BD Veritor system I use in clinic comes with an optical reader, which shows definitively that the test is positive.

A mobile phone application could also be developed to aid in detection (and even quantification) of rapid antigen results, as has been done for CRISPR-based SHERLOCK assays (link: https://www.science.org/doi/10.1126/sciadv.abh2944). This would improve the accuracy of rapid antigen test interpretation, particularly among home tests.
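To make the idea concrete, here is a minimal sketch of the kind of logic such an app might use: scan pixel intensities across the strip and flag a line when there is a real dip below the background. Everything here is hypothetical – the pixel values, the background estimate, and the 10% dip threshold are illustrative, not calibrated to any real assay:

```python
# Illustrative sketch (not a validated assay reader): given a 1-D pixel
# intensity profile sampled across the test-line region of a lateral flow
# strip, flag a line when its darkest point dips sufficiently below the
# surrounding background intensity.

def detect_line(profile, background, threshold=0.10):
    """Return True if the trough dips more than `threshold` (fractional)
    below the estimated background intensity."""
    trough = min(profile)
    dip = (background - trough) / background
    return dip >= threshold

# Hypothetical 8-bit grayscale samples across the test-line region:
faint_line = [200, 198, 185, 178, 186, 199, 201]  # shallow dip: faint positive
no_line    = [201, 200, 199, 200, 198, 200, 201]  # flat: negative

print(detect_line(faint_line, background=200))  # True (faint but real line)
print(detect_line(no_line, background=200))     # False
```

A production reader would of course need calibration, lighting correction, and clinical validation, but even this simple thresholding captures why a faint-but-continuous line like the one pictured above can be called positive objectively rather than by eye.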
At the end of the day, no test is perfect. PCR suffers from a time delay, is inherently more expensive and can stay positive long after a patient is no longer infectious. Rapid antigen tests may not detect early infection as well and have an optimal window of performance post-exposure. But combined with safe practices, these tests are an amazing tool and work perhaps even better than advertised – bad press aside.