Is My Data Valid?

How much do I trust this data?

This question has kept me awake many a night, both in the lab and during self-tracking experiments. Researchers do validation tests even when using expensive and widely trusted laboratory equipment, and these tests often expose unexpected problems. Commercial self-tracking devices present similar challenges, especially because company-sponsored validation tests may not be independently verified and may be difficult to understand and replicate individually. Though this problem won’t be solved overnight, there are several steps you can take to better understand the constraints of your new technology.

In this post, I’m going to take you through some lessons I’ve learned from a recent attempt to validate a widely used lipid test kit. Some of these lessons are generally applicable, and I hope they will be useful to you as you do your own tests.

I’ve been working on a Quantified Self project to support people doing unusually high-frequency home lipid testing. For the project to succeed, we need to determine the accuracy and precision of the CardioChek® Plus from PTS Diagnostics. (Accuracy refers to how close a device’s output is to the true value; precision refers to how consistent the output is across identical trials.) We chose this device from among almost a dozen options and approaches because it was in common use, easily accessible, and approved by the FDA for home use. But in order to ask sensible questions about our data, we needed to know how well it would do under real conditions. With some work, I was able to find the reported accuracy and precision of the device, verify both in my own hands, and alleviate considerable anxiety about generating believable data.
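Both ideas are easy to put numbers on once you have repeated readings of the same sample alongside a trusted reference value. Here is a minimal sketch; the readings, the reference value, and the units are invented for illustration, not real CardioChek data:

```python
# Hypothetical sketch: quantifying accuracy and precision from repeated
# readings of one sample. All numbers below are made up for illustration.
from statistics import mean, stdev

readings = [192, 188, 195, 190, 191]  # e.g. repeated total-cholesterol readings (mg/dL)
reference = 185                       # lab-measured value for the same sample

bias = mean(readings) - reference            # accuracy: systematic offset from the reference
cv = stdev(readings) / mean(readings) * 100  # precision: coefficient of variation (%)

print(f"mean = {mean(readings):.1f} mg/dL")
print(f"bias = {bias:+.1f} mg/dL")
print(f"CV   = {cv:.1f}%")
```

A device can score well on one measure and poorly on the other: a large bias with a tiny CV means consistently wrong, which is often still workable once you know the offset.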

Here are some tips based on my experience. Of course success is not guaranteed; it will depend on the device you have, the time you’re willing to invest, and a bit of luck. But these general suggestions should get you on your way.

  1. Look carefully through the wearable’s website and read the fine print. Some manufacturers report their in-house testing online, but tucked in a corner where it’s hard to find. Although companies typically won’t report bad results, it’s at least a place to start.
  2. Search PubMed. This is the watering hole for finding scientific literature. Try searching for the name of your device there. Abstracts are generally available.
  3. Check the QS forum. Someone there may have the details of your device.
  4. Contact the company, and frame your questions about getting the most accurate and valid data as positively as possible. Many of these companies are small. Yes, they might brush you off, but they might also be willing to give you insider tips on how best to use your device, or even raw data from their own trials to compare with yours. If you can explain a personal experiment that requires a particular degree of accuracy or precision, a one-on-one conversation is more likely to get you a relevant and honest answer than hours of googling. There are often hidden factors (lighting and humidity in my case) that make a huge difference in your data quality.
  5. Find a medical/industry standard to compare your device to. It’s important not only to read reports of a device’s accuracy and precision, but to test it in your own hands. For me, this meant making a doctor’s appointment for a fasting lipid panel, taken at the same time as my own finger-prick test with the CardioChek. This is not always possible (most of us are unlikely to have access to polysomnography), but do your best.
  6. Replicate your results under similar conditions. This one is often easier. If you’re measuring, say, temperature, do so many times in a row to see the amount of variability. To see how accurate your step or distance tracker is, walk from your house to the park several times and compare results. In my case, I pricked my fingers a few times in a row (ouch, but necessary).
  7. Take time of day into account when you are doing any measurement. Circadian rhythms are prominent in pretty much every system in your body, which means you should expect variability in any output by time of day. Let’s say that you’re recording your basal body temperature (BBT) upon waking as part of tracking your ovulatory cycle. Sleeping until 11am on Saturday when you usually record at 6am on weekdays will confound your cycle prediction for sure! A perfectly accurate device can’t be a stand-in for good controls in your personal experiment.
  8. Once you know the constraints of your device, work within them. This may seem obvious, but it’s common to put too much faith in unverified data. Numbers aren’t magic. They are the outputs of sensors with strengths and weaknesses, and of calculations programmed by humans. Even a device with imperfect accuracy but good consistency can give useful information: you just have to figure out the right questions to ask.
  9. Don’t give up. This process takes time, but pays off in the long run. The trust gained from putting an honest effort into validation will save you hours, days or even weeks of confusion from trying to explain results that are just noise in the system, or from having to re-do an entire experiment. Save that time now.
  10. Embrace uncertainty. One of the toughest parts of validating a new device is getting comfortable with uncertainty. A peek under the hood often reveals a lot we might wish we didn’t know. Sure, it would be nicer if the world delivered perfect data with every wearable purchase, but it isn’t so. Like all learning endeavors, validation is a continually evolving process that will not guarantee perfection. Questioning one’s potentially false sense of certainty, and leaning into the tricky process of confronting unknowns, is a good practice that keeps us honest.
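To make the replication tip (step 6) concrete, here is a small sketch of what “repeat under similar conditions” might look like for a step tracker, along with the bias correction that step 8 hints at. All the numbers, including the “known bias,” are invented for illustration:

```python
# Hypothetical example: five walks of the same house-to-park route,
# recorded by a step tracker. All numbers are invented.
from statistics import mean, stdev

route_steps = [1480, 1523, 1465, 1510, 1492]

m = mean(route_steps)
sd = stdev(route_steps)
cv = sd / m * 100  # precision of the tracker on this specific route
print(f"mean = {m:.0f} steps, sd = {sd:.0f}, CV = {cv:.1f}%")

# Rough rule of thumb: a later reading within about two standard deviations
# of the mean is indistinguishable from device noise on this route.
noise_band = (m - 2 * sd, m + 2 * sd)

# Step 8 in practice: if a comparison against a trusted reference (step 5)
# revealed a consistent offset, a steady device is still useful once you
# subtract that bias. The offset here is invented.
known_bias = 40  # hypothetical: tracker overcounts by ~40 steps on this route
corrected = [s - known_bias for s in route_steps]
```

The point is not the arithmetic but the habit: a handful of repeats gives you a noise band, and any later “change” inside that band should be treated with suspicion.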

If you have done a validation test of a self-tracking tool, we’d like to hear about it.

About Azure Grant

QS Labs Associate Editor
This entry was posted in Lab Notes.
