Tag Archives: access

Access Matters

Someday, you will have a question about yourself that impels you to take a look at some of your own data. It may be data about your activity, your spending at the grocery store, what medicines you’ve taken, where you’ve driven your car. And when you go to access your data, to analyze it or share it with somebody who can help you think about it, you’ll discover…

You can’t.

Your data, which you may have been collecting for months or years using some app or service that you found affordable, appealing, and useful, will be locked up inside this service and inaccessible to any further questions you want to ask of it. You have no legal right to this data. Nor is there even an informal ethical consensus in favor of offering ordinary users access to their data. In many cases, commercial tools for self-tracking and self-measurement show almost no interest in access: they lack data export capabilities, hide or bury the methods for obtaining access, or make no mention of data access rights or opportunities in their terms of service and privacy policies.

Now is the time to work hard to ensure that the data we collect about ourselves using any kind of commercial, noncommercial, medical, or social service is accessible to us, as well as to our families, caregivers, and collaborators, in common formats using convenient protocols. In service of this aim, we’ve decided to work on a campaign for access, dedicated to helping people who are seeking access to their data by telling their stories and organizing in their support. Although QS Labs is a very small organization, we hope that our contribution, combined with the work of many others, will eventually make data access an acknowledged right.

The inspiration for this work comes from the pioneering self-trackers and access advocates who joined us last April in San Diego for a “QS Public Health Symposium.” Thanks to funding support from the Robert Wood Johnson Foundation, and program support from the US Department of Health and Human Services, Office of the CTO, and the Qualcomm Institute at Calit2, we convened 100 researchers, QS toolmakers, policy makers, and science leaders to discuss how to improve access to self-collected data for personal and public benefit. During our year-long investigation leading up to the meeting, we learned to see the connection between data access and public health research in a new light.

If yesterday’s research subjects were production factors in a scientist’s workshop; and if today’s participants are – ideally – fully informed volunteers with interests worthy of protection; then, the spread of self-tracking tools and practices opens the possibility of a new type of relationship in which research participants contribute valuable craft knowledge, vital personal questions, and intellectual leadership along with their data.

We have shared what we learned in a full, in-depth report from the symposium, including links to videos of all the talks and a list of attendees. We hope you find it useful. In particular, we hope you will share your own access story. Have you tried to use your personal data for personal reasons and faced access barriers? We want to hear about it.

You can tweet using the hashtag #qsaccess, send an email to labs@quantifiedself.com, or post to your own blog and send us a link. We want to hear from you.

The key finding in our report is that improving access to self-collected data for personal and public benefit hinges on individual access to our own data. The ability to download, copy, transfer, and store our own data allows us to initiate collaboration with peers, caregivers, and researchers on a voluntary and equitable basis. We recognize that access means more than merely “having a copy” of our data. Skills, resources, and access to knowledge are also important. But without individual access, we can’t even begin. Let’s get started now.

An extract from the QSPH symposium report

[A]ccess means more than simply being able to acquire a copy of relevant data sets. The purpose of access to data is to learn. When researchers and self-trackers think about self-collected data, they interpret access to mean “Can the data be used in my own context?” Self-collected data will change public health research because it ties science to the personal context in which the data originates. Public health research will change self-tracking practices by connecting personal questions to civic concerns and by offering novel techniques of analysis and understanding. Researchers using self-collected data, and self-trackers collaborating with researchers, are engaged in a new kind of skillful practice that blurs the line between scientists and participants… and improving access to self-collected data for personal and public benefit means broadly advancing this practice.

Download the QSPH Report here.


QS | Public Health Symposium: Jason Bobe on Participant Centered Research

As part of the Quantified Self Public Health Symposium, we invited a variety of individuals from the research and academic community, including visionaries and new investigators in public health, human-computer interaction, and medicine. One of these was Jason Bobe, the Executive Director of the Personal Genome Project. When we think of the intersection of self-tracking and health, it’s hard to find anything more definitive and personal than one’s own genetic code. The Personal Genome Project has operated since 2005 as a large-scale research project that aims to “bring together genomic, environmental and human trait data.”

We asked Jason to talk about his experience leading a remarkably different research agenda than what is commonly observed in health and medical research. From the outset, the Personal Genome Project was designed to fully involve and respect the autonomy, skills, and knowledge of its participants. This is manifested most clearly in one of its defining characteristics: each participant receives a full copy of their genomic data upon participation. It may be surprising to learn that this is an anomaly in most, if not all, health research. As Jason noted at the symposium, we live in an investigator-centered research environment where participants are called on to give up their data for the greater good. In his talk below, Jason explores these issues and offers examples and insights into how the research community can move toward a more participant-centered design as it begins to address the large amounts of personal self-tracking data being gathered around the world.

I found myself returning to this talk recently when the NIH released a new Genomic Data Sharing Policy that will be applied to all NIH-funded research proposals that generate genomic data. I spent the day attempting to read through some of the policy documents and was struck by the lack of mention of participant access to research data. After digging a bit, I found the only mention was in the “NIH Points to Consider for IRBs and Institutions”:

[...] the return of individual research results to participants from secondary GWAS is expected to be a rare occurrence. Nevertheless, as in all research, the return of individual research results to participants must be carefully considered because the information can have a psychological impact (e.g., stress and anxiety) and implications for the participant’s health and well-being.

It will come as no surprise to learn that the Personal Genome Project submitted public comments during the comment period. Among these was a recommendation to require “researchers to give these participants access to their personal data that is shared with other researchers.” Unfortunately, this recommendation appears not to have been implemented. As Jason mentioned, we still have a long way to go.


We Need a Personal Data Task Force

Earlier today John Wilbanks sent out this tweet:


John was lamenting the fact that he couldn’t export and store the genome interpretations that 23andMe provides (the company does provide a full export of a user’s genotype). By the afternoon, two developers, Beau Gunderson and Eric Jain, had submitted their projects. (You can view them here and here.)

We’ve been doing some exploration and research on QS APIs over the last two years, and we’ve come to understand that data export is a key function of personal data tools. Being able to download and retain an easily decipherable copy of your personal data is important for a variety of reasons. One only needs to spend some time in our popular Zeo Shutting Down: Export Your Data thread to understand how vital this function is.

We know that some toolmakers already include data export as part of their user experience, but many do not, or provide only partial support. I’m proposing that we, as a community of people who support and value the ability to find personal meaning through personal data, work together to provide the tools and knowledge to help people access their data.

Would you help and be a part of our Personal Data Task Force*? We can work together to build a common set of resources, tools, how-to’s and guides to help people access their personal data. I’m listening for ideas and insights. Please let me know what you think and how you might want to help.

Replies on our forum or via email are welcome.

*We’re inspired by Sina Khanifar’s work on the Rapid Response Internet Task Force.


APIs: What Are The Common Obstacles?


Today’s guest post comes to us from Eric Jain, the lead developer behind Zenobase and a wonderful contributor to our community.

At last month’s QS Europe 2013 conference, developers gathered at a breakout session to compile a list of common obstacles encountered when using the APIs of popular, QS-related services. We hope that this list of obstacles will be useful to toolmakers who have developed APIs for their tools or are planning to provide such APIs.

  1. No API, or an incomplete API that exposes only aggregate data rather than the actual data that was recorded.
  2. Custom authentication mechanisms (instead of e.g. OAuth), or custom extensions (e.g. for refreshing tokens with OAuth 1.0a).
  3. OAuth tokens that expire.
  4. Timestamps that lack time zone offsets: some applications need to know how much time has elapsed between two data points (not possible if all times are local), or what the local hour of the day was (not possible if all times are converted to UTC).
  5. Can’t retrieve data points going back more than a few days or weeks, because at least one separate request has to be made for each day, instead of being able to use a begin/end timestamp and offset/limit parameters.
  6. Numbers that don’t retain their precision (1 != 1.0 != 1.00), or are changed due to unit conversion (71kg = 156.528lbs = 70.9999kg?).
  7. No SSL, or SSL with a certificate that is not widely supported.
  8. Data that lacks unique identifiers (for traceability), or that doesn’t include its provenance (if obtained from another service).
  9. No sandbox with test data for APIs that expose data from hardware devices.
  10. No dedicated channel for advance notifications of API changes.
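Two of the obstacles above, missing time zone offsets (4) and lost numeric precision (6), are easy to see in a few lines of Python. This is only an illustrative sketch; the timestamps and values are invented, and the conversion factor is the usual 2.20462 lbs/kg:

```python
from datetime import datetime
from decimal import Decimal

# Obstacle 4: offset-aware timestamps preserve both the true elapsed
# time between two data points and the local hour at which each was
# recorded. A bare local time can answer only one of these questions;
# a UTC-converted time only the other.
a = datetime.fromisoformat("2013-05-11T23:30:00-07:00")
b = datetime.fromisoformat("2013-05-12T10:30:00+02:00")
print(b - a)   # 2:00:00 -- unambiguous elapsed time across zones
print(a.hour)  # 23 -- local context (late evening) preserved

# Obstacle 6: decimal arithmetic keeps the precision a device reported,
# whereas round-tripping through binary floats and unit conversions is
# what produces values like the 70.9999kg in the list above.
kg = Decimal("71")
lbs = kg * Decimal("2.20462")
print((lbs / Decimal("2.20462")).quantize(Decimal("1")))  # 71
```

Serving ISO 8601 timestamps with explicit offsets, and numbers as decimal strings rather than floats, costs an API almost nothing and avoids both problems.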

This list is by no means complete, but rather a starting point that we hope will kick off a discussion around best practices.
