

Usability testing of coronavirus antibody kits: From a Zoom call with the public to a study of 14,400 people


  • 28/08/2020
  • Health & wellbeing, Health & Social Care Research, Ageing, Big Data & Digital Health, Blood & Immune System, Infectious diseases
  • Philippa Pristera

When Imperial College was approached to run a programme of antibody testing that could estimate how many people in England had already had COVID-19, the question of ‘usability’ (i.e. how easy it would be to perform an antibody test correctly) was at the heart of our thinking. This testing would require adults from across the country to perform a finger-prick blood test at home to see if they had antibodies against the virus that causes COVID-19. They would be asked to share their test results and then continue to follow current Government advice around social distancing and self-isolation. But at the time, we didn’t know how accurate the antibody tests were, we didn’t know whether having antibodies would mean someone is protected from getting COVID-19 again, and the test kits had never previously been used by members of the public. Are they safe? Will people be able to perform the test correctly? Will the results change people’s perceptions about their own risk of infection? These were some of the questions we had to answer before any large-scale testing could begin. But we didn’t have long. At the time we were approached, England was a month into lockdown, the true number of infections was unknown and there was an urgent need to understand how the virus was spreading through the country.

An embedded design

This antibody testing programme is part of the REACT (REal-time Assessment of Community Transmission) study, which is taking two approaches to track the spread of SARS-CoV-2 (the virus that causes COVID-19) in the community:

Illustration outlining the testing approach for REACT-1 and REACT-2

To answer our questions, a series of sub-studies was embedded into the design of the antibody testing programme (REACT-2). These aimed to explore not only how accurate the kits were (Study 1), but also how easily people could use them at home (Studies 2 & 3), before testing began on larger groups of people (Study 5).

With this embedded design, we then began our rapid usability journey, integrating further public involvement and guidance at every step.

Day 1: A call with the public

Friday 1st May 2020 marks the day our public involvement began. The COVID-19 outbreak is bringing us all face-to-face with things that were not normal practice or part of everyday conversations a year ago, including talking about antibodies. But as researchers working on the COVID-19 response, we ran the risk of becoming too familiar with the terms being used in the outbreak, as well as the latest research on the virus. So, we invited VOICE members, local London residents and a range of public partners from across the country to join a Zoom call to talk about antibody testing. The session gave the public the chance to ask questions about our plans, raise any key concerns or suggestions, and give us a better idea of what people were thinking and feeling in those early days. Overall, the REACT study was well received by those on the call, and people viewed at-home antibody testing as appropriate and acceptable. But they felt our data sharing arrangements needed to be clearer, there needed to be more communication about the study, and many wanted to ensure that people from ethnic minorities were fairly represented. This input was used to develop a set of frequently asked questions and an Insight Report that directed the usability testing and ongoing public involvement that followed. We also established a Public Advisory Group formed of eight members of the public who continue to review and give feedback on our study plans, results and materials.

Day 6–12: Real world testing

Many factors can influence how easy it is for someone to perform a home-based antibody test successfully. We couldn’t make any changes to the testing stick itself, but we had some control over the other items people would need and the instruction booklet and video we produced. So, the next step in our journey was to test these in the real world (Study 2) – first amongst ourselves (for safety), then with a small selection of public volunteers who had signed up to support COVID-19 research and involvement at Imperial. 300 were invited to take part, 234 shared their feedback via an online survey, 9 were observed performing the test via video call and 25 were selected for an in-depth phone interview about their experience.

Here’s what we’ve learnt...

Understanding the instructions

Something that appears easy to do on paper isn’t always easy to do in practice. Throughout our user testing, the instructions were found to be well-illustrated, accessible and generally easy to understand. However, feedback from public volunteers revealed that performing the finger-prick, getting enough blood from the finger and transferring the blood to the testing stick were more difficult to carry out than expected.

Some of these issues were overcome by changing parts of the testing kit itself. But after interviews with participants and the video observations, we were able to identify how the instruction booklet was also contributing to the issue. One problem was the use of technical terms, such as ‘lancet’ (the finger-prick device), ‘cassette’ (the testing stick), ‘anticlockwise’ and ‘pulp’ (the part of the finger where the finger-prick should be performed). The other was the labelling and explanation of the kit – it needed to be clearer how to remove the safety cap from the lancet, how much blood should be added to the testing stick, and importantly, what the test results actually mean – more on this later. Due to limited time and logistical barriers, we were not able to translate the booklet into other languages. But, working with our public panel members, we edited the language and checked that the images alone illustrated the process clearly, supported by the instructional video.

Figure 1. Illustrations from the instruction booklet showing how to remove the safety cap from the lancet during (A) study 2 usability testing with public volunteers; and (B) study 3 nationwide usability testing.

Performing the finger-prick

The finger-prick is performed using a device called a lancet. It contains a small needle which comes out a set distance when the device is activated and then immediately returns inside the case. Of those who took part and shared their feedback, 39% found it difficult to do the finger-prick using the first lancet provided. Some of these issues were caused by lack of clarity in the instruction booklet, but the lancet itself was ultimately not easy to use, or intuitive.

“I was holding the trigger down. You know, I don't know it, it was unfamiliar. It wasn't something I'd seen before, so it was hard to know what to do.” – participant quote

Many found the cap hard to remove and the device too easy to activate accidentally due to the subtle release button on the side. It also wasn’t clear which end of the lancet had to be pressed against the finger, with many people holding it upside down thinking that the safety cap was covering the activation button, not the needle.

Because lancets are single-use, if people failed to perform the finger-prick correctly first time, they were unable to continue with the rest of the test. Pilot testing among staff had raised similar issues, so a new lancet was sourced that worked in a more intuitive way. An additional lancet was also included in the testing kit for subsequent testing rounds, in case the first didn’t work or didn’t draw enough blood.

Figure 2. Illustrations from the instruction booklet showing how to perform the finger-prick during (A) study 2 usability testing with public volunteers; and (B) study 3 nationwide usability testing.

Collecting and transferring the blood to the testing stick

After performing the finger-prick and creating a drop of blood, participants had to transfer the blood to a well on the testing stick. During the early usability testing, participants had to use a pipette (imagine a small plastic tube with a squeezy end) to collect and transfer the blood. This was meant to work by “capillary action”, where contact with the blood would cause it to naturally flow into the pipette, without needing to squeeze the end. However, 44% found this difficult to perform.

“Why is it bubbling when I press the pipette?” “There was air in it” – Participant quotes

The instructions said not to squeeze the pipette to collect the blood but, without further explanation that the blood should flow in by itself, people’s instinct to squeeze the pipette was too strong to overcome. This meant blood often got sucked up into the pipette and then sprayed and/or bubbled out when squeezed onto the testing stick. The instructions were therefore changed, and a new pipette was sourced for the next phase of testing.

Figure 3. Illustration from the instruction booklet showing how to collect and transfer the blood to the testing stick during (A) Study 2; and (B) Study 3, after issues with using the pipette were repeatedly raised during user testing.

Reading the test result

As well as performing the test correctly, we had to make sure people could correctly interpret their results too (i.e. whether the result was positive, negative or invalid). Everyone was invited to share their test result with us online as part of a questionnaire and, if they wished, upload a photo of their result too. A sample of these photos was then checked by two separate clinicians to confirm whether they had been interpreted correctly. Those who we observed taking the test were also asked to interpret their result. The majority of participants appeared to read their test result correctly, but not all. Several participants were unsure what the letters on the test meant and had difficulty reading and interpreting the lines on the testing stick, especially if their test result lines were very faint. Combining how to read the result with what the result meant also proved confusing, so these were separated into two distinct steps in the instructions for the next phase of testing.

Figure 4. Explanation of the possible test results for the kits tested, A) version prior to user testing of Kit 1; and B) version for Kit 2 after significant public input, feedback and user testing.

Understanding what the test result means

The speed at which people have signed up to take part in the antibody testing programme, and the number of enquiries we continue to receive from people hoping to take part, shows how much people want to understand whether or not they’ve already had COVID-19. But there were some important messages we had to get right before we could start testing on a larger scale.

Current finger-prick antibody tests aren’t 100% reliable. When testing large numbers of people, the results can help researchers estimate how many people in a population have likely been infected, by taking into account the number of incorrect results they’d expect (i.e. false positives and false negatives). But for an individual, it’s not accurate enough to confirm whether they’ve had COVID-19, or not. And even if they had, we still don’t know whether antibodies can protect people from getting COVID-19 again. Therefore, it was important that people didn’t change their behaviour, or their perceptions of risk, based on this test result – especially at the time when the country was still getting used to coronavirus restrictions.
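The adjustment described above – correcting a raw positivity rate for expected false positives and false negatives – is commonly done with the Rogan-Gladen estimator. A minimal sketch, using illustrative sensitivity and specificity values that are assumptions for this example, not figures from the REACT study:

```python
def adjusted_prevalence(observed_positive_rate, sensitivity, specificity):
    """Rogan-Gladen correction: estimate true population prevalence
    from the observed positive rate of an imperfect test."""
    estimate = (observed_positive_rate + specificity - 1) / (sensitivity + specificity - 1)
    # Clamp to a valid proportion, since sampling noise can push the
    # raw estimate below 0 or above 1.
    return min(max(estimate, 0.0), 1.0)

# Illustrative values only: 6% of tests come back positive, with a test
# that is 84% sensitive and 98% specific.
print(round(adjusted_prevalence(0.06, 0.84, 0.98), 3))  # prints 0.049
```

Note how the corrected estimate (about 4.9%) differs from the raw 6% positivity rate: with an imperfect test, some of the observed positives are expected to be false positives, which is exactly why an individual result cannot be taken as confirmation.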

Our interviews found that most people understood the test results in the context of the study. However, there was one case in which someone thought the test was checking for current infection, not past infection. And another person had a strong emotional response to the result because they were shielding. We therefore tried to make it clearer in the instructions what antibodies are, the purpose of this testing, and what it means for the person taking part.

Figure 5. Instruction booklet extract showing the meaning of the test results, A) version prior to user testing of Kit 1; and B) version for Kit 2 after significant public input, feedback and user testing.

Day 15–25: The largest test of usability

After changing the testing kit items, improving the instructions and simplifying the steps involved, we moved to the largest test of usability ever carried out for home-based antibody testing. This was REACT-2 Study 3 and involved over 14,000 members of the public in England, selected at random.

The findings were published as a peer-reviewed manuscript on 12 August and an infographic summary was produced to illustrate the process and the key findings.

Overall, fewer people experienced the issues faced by our earlier volunteers. However, some final points had to be noted for future work:

  • The majority of participants correctly interpreted their test results, as confirmed by a clinician. However, having test results checked by trained professionals is not feasible in ongoing large-scale studies. Computer analysis of uploaded photographs could allow for checking the accuracy of participant-reported results in the future, as well as determine reasons for invalid results
  • And, people can still be affected by the results of antibody tests, even if told the results aren’t reliable, or perhaps because they aren’t reliable. Further research is now underway to understand the meaning that people place on these tests, including how they may affect people’s behaviour and/or health and mental well-being

Ready to start nationwide testing

Thanks to the input, involvement, participation and feedback of the public during the month of May 2020, we were able to start nationwide antibody testing across England (Study 5) in June with the first results of over 100,000 people released in August. This valuable data is informing the Government’s response to the COVID-19 outbreak and is being repeated in rounds of testing over the course of the year. We hope to continue to engage and involve the public in this programme of testing, and ensure adequate support is provided when the tests become used more widely.
