Tag Archives: access
This morning President Barack Obama announced a new Precision Medicine Initiative, a key $215 million piece of the proposed 2016 budget. Much has been written since last week’s State of the Union, when this initiative was first mentioned by President Obama. In brief, the initiative is an investment in new programs and funding initiatives at major government bodies that influence the current and future health of all Americans, including the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and the Office of the National Coordinator for Health Information Technology (ONC). These programs will focus on developing “a new model of patient-powered research that promises to accelerate biomedical discoveries and provide clinicians with new tools, knowledge, and therapies to select which treatments will work best for which patients.”
There is a lot of information circulating about this new initiative, and we’ve collected some links below, but we’d like to highlight something directly related to our interests in self-tracking data, personal data access, and new models of participatory research. In this morning’s announcement President Obama mentioned a long-term goal of creating a participatory research cohort of 1 million volunteers who will be called upon to share personal medical record data, genetic and biological samples, and diet and lifestyle information. This is a truly ambitious goal, and we are happy to see the President take care to mention the importance of including patients, and the individuals who collect this data, in the decision-making and research process. For example, here is the description of this specific program from the NIH Precision Medicine Infographic.
Here at QS Labs, we’re dedicated to helping create and grow a culture that enables everyone to generate personal meaning from their personal data. Sharing, participation, and exploring new models of discovery are core themes of our QS Access work. We’ll be following this initiative as it moves from today’s announcement to tomorrow’s reality. Be sure to stay tuned to our QS Access Channel for updates as we learn more.
Learn more about the Precision Medicine Initiative
NIH mini site describing the initiative
White House Blog: The Precision Medicine Initiative: Data-Driven Treatments as Unique as Your Own Body
FACT SHEET: President Obama’s Precision Medicine Initiative
A New Initiative on Precision Medicine by Francis Collins and Harold Varmus (New England Journal of Medicine).
New sensors are peering into previously invisible or hard-to-understand human behaviors and information. This has led many researchers and organizations to develop an interest in exploring and learning from the increasing amount of personal self-tracking data being produced by self-trackers. Even though individuals are producing more and more personal data that could provide insights into health and wellness, access to that data remains a hurdle. Over the last few years, a number of projects, companies, and research studies have launched to tackle this data access issue. As an introduction to this area, we’ve put together a short list of three interesting projects that involve donating personal data for broader use.
Developed and administered by the WikiLife Foundation, the DataDonors platform allows individuals to upload and donate various forms of self-report and Quantified Self data. Data is currently available to the public at no cost in an aggregated format (JSON/CSV). Data types include physical activity, diet, sleep, mood, and many others.
OpenSNP is an online community of over 1,600 individuals who’ve chosen to upload and publicly share their direct-to-consumer genetic testing results (23andMe, deCODEme, or FamilyTreeDNA). Genotype and phenotype data is freely available to the public.
Open Paths is an Android and iOS geolocation data collection tool developed by the New York Times R&D Lab. It periodically collects, transmits, and stores your geolocation in a secure database. The data is available to users via an API and data export functions. Additionally, users can grant access to their data to researchers who have submitted projects.
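Aggregated donations like the JSON data DataDonors publishes can be explored with just a few lines of code. Here is a minimal sketch assuming a hypothetical payload shape — the field names below are invented for illustration and are not the platform’s actual schema:

```python
import json

# Hypothetical aggregate payload; the field names ("data_type", "unit",
# "donors", "mean") are illustrative only, not DataDonors' real schema.
payload = """
[
  {"data_type": "sleep", "unit": "hours", "donors": 42, "mean": 7.1},
  {"data_type": "steps", "unit": "steps/day", "donors": 57, "mean": 8450}
]
"""

records = json.loads(payload)

# Index the aggregate records by data type for easy lookup.
by_type = {r["data_type"]: r for r in records}
sleep_mean = by_type["sleep"]["mean"]
```

Even a toy example like this shows why a machine-readable aggregate format matters: no scraping or PDF wrangling stands between a donor community and a quick analysis.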
We’ll be expanding this list in the coming weeks with additional companies, projects, and research studies that involve personal self-tracking data donation. If you have one to share, comment here or get in touch.
As part of our new Access channel we’re going to highlight interesting stories, ideas, and research related to self-tracking data and data access issues and the role they take in personal and public health. We recently found this expert report, published in the International Journal of Obesity, that tackles issues with the data researchers rely on for understanding diet and physical activity behaviors, and ultimately concludes that the data is fundamentally flawed.
Researchers have known for a long time that relying on individuals to understand, recall, and accurately report what they eat and how much they exercise isn’t the best way to understand the realities of everyday life. Unfortunately, for many years this was the only way to track this information – interviews, surveys, and research measures. Only recently have tools, devices, and methods matured to a point where objective information can be captured and analyzed.
The authors of this article make the case that obesity and weight management research fundamentally relies on getting these numbers right, and that unfortunately most research hasn’t. Reading the background on self-report data and the authors’ call to action for developing and using more objective measures, we can’t help but wonder about the role of commercial personal self-tracking tools. How can we, as a community of users, toolmakers, and researchers, work together to open up access pathways so that the millions of people taking pictures of their meals and uploading their step data can have a positive impact on personal and public health? This is an open question, one that we’re excited to be working on.
If you’re interested in these types of questions, or are working on projects related to data access, we invite you to get in touch and keep following along here with us.
MyFitnessPal is one of the leading dietary tracking tools, currently used by tens of millions of people all around the world to better track and understand the foods they consume every day. Their mobile apps and online tools allow individuals to enter foods and keep track of their micro- and macro-nutrient consumption, connect additional devices such as fitness trackers, and connect with their community – all in the name of weight management. However, there is no natively available method for easily accessing your dietary data for personal analysis, visualization, or storage.
With a bit of digging in the MyFitnessPal help section we can see that they have no official support for data export. However, they mention the ability to print reports and save PDF files that contain your historical data. While better than some services, a PDF document is far from easy to use when you’re trying to make your own charts or take a deeper look into your data.
We spent some time combing the web for examples of MyFitnessPal data export solutions over the last few days. We hope that some of these are useful to you in your ongoing self-tracking experiences.
MyFitnessPal Data Downloader: This extension allows you to directly download a CSV report from your Food Report page. (Chrome only)
MyFitnessPal Data Export: This extension is tied to another website, FoodFastFit.com. If you install the extension, it will redirect you back to that site where your data is displayed and you can download the CSV file. (Chrome only)
ExportMFP: A simple bookmark that will open a text area with comma-separated values for weight and calories, which you can copy/paste into your data editor of choice.
MyFitnessPal Reports: A bookmarklet that generates more detailed graphs and reports.
MyFitnessPal Analyser: Accesses your diet and weight data. Note that it requires you to enter your password, so be careful.
Export MyFitnessPal Data to CSV: Simple web tool for exporting your data.
FreeMyDiary: A recently developed tool for exporting your food diary data.
MyFitnessPal Data Access via Python: If you’re comfortable working with the Python language, this might be for you. Developed by Adam Coddington, it provides programmatic access to your MyFitnessPal data.
MFP Extractor and Trend Watcher: An Excel Macro, developed by a MyFitnessPal user, that exports your dietary and weight data into Excel. This will only work for Windows users.
Access MyFitnessPal Data in R: If you’re familiar with R, then this might work for you.
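Several of the tools above produce simple comma-separated output covering weight and calories. As a hedged sketch of what you can do once you have such an export — the column headers here are assumptions for illustration, not any particular tool’s actual output — here’s how pasted CSV could be analyzed in Python:

```python
import csv
import io

# Illustrative sample of bookmarklet-style CSV output; the headers
# ("date", "weight", "calories") are assumed, not a tool's real schema.
pasted = """date,weight,calories
2015-01-01,71.0,1850
2015-01-02,70.8,2100
2015-01-03,70.6,1920
"""

rows = list(csv.DictReader(io.StringIO(pasted)))

# Average daily calorie intake across the exported period.
avg_calories = sum(int(r["calories"]) for r in rows) / len(rows)

# Net weight change from the first to the last entry.
weight_change = float(rows[-1]["weight"]) - float(rows[0]["weight"])
```

Once the data is out of the app and into rows like these, every downstream question — trends, correlations, custom charts — becomes a few lines of ordinary code.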
QS Access + Apple HealthKit
If you’re an iPhone user, you can connect MyFitnessPal to Apple’s Health app to view your MyFitnessPal data alongside other data you’re collecting. You can also easily export the data from your Health app using our QS Access app. Data is available in hourly and daily breakdowns, and you should be able to export any data type MyFitnessPal is writing to HealthKit.
Someday, you will have a question about yourself that impels you to take a look at some of your own data. It may be data about your activity, your spending at the grocery store, what medicines you’ve taken, where you’ve driven your car. And when you go to access your data, to analyze it or share it with somebody who can help you think about it, you’ll discover…
Now is the time to work hard to ensure that the data we collect about ourselves using any kind of commercial, noncommercial, medical, or social service is accessible to us, as well as to our families, caregivers, and collaborators, in common formats using convenient protocols. In service of this aim, we’ve decided to work on a campaign for access, dedicated to helping people who are seeking access to their data by telling their stories and organizing in their support. Although QS Labs is a very small organization, we hope that our contribution, combined with the work of many others, will eventually make data access an acknowledged right.
The inspiration for this work comes from the pioneering self-trackers and access advocates who joined us last April in San Diego for a “QS Public Health Symposium.” Thanks to funding support from the Robert Wood Johnson Foundation, and program support from the US Department of Health and Human Services, Office of the CTO, and The Qualcomm Institute at Calit2, we convened 100 researchers, QS toolmakers, policy makers, and science leaders to discuss how to improve access to self-collected data for personal and public benefit. During our year-long investigation leading up to the meeting, we learned to see the connection between data access and public health research in a new light.
If yesterday’s research subjects were production factors in a scientist’s workshop; and if today’s participants are – ideally – fully informed volunteers with interests worthy of protection; then, the spread of self-tracking tools and practices opens the possibility of a new type of relationship in which research participants contribute valuable craft knowledge, vital personal questions, and intellectual leadership along with their data.
We have shared our lessons in a full, in-depth report from the symposium, including links to videos of all the talks and a list of attendees. We hope you find it useful. In particular, we hope you will share your own access story. Have you tried to use your personal data for personal reasons and faced access barriers? We want to hear about it.
You can tweet using the hashtag #qsaccess, send an email to firstname.lastname@example.org, or post to your own blog and send us a link. We want to hear from you.
The key finding in our report is that improving access to self-collected data for personal and public benefit hinges on individual access to our own data. The ability to download, copy, transfer, and store our own data allows us to initiate collaboration with peers, caregivers, and researchers on a voluntary and equitable basis. We recognize that access means more than merely “having a copy” of our data. Skills, resources, and access to knowledge are also important. But without individual access, we can’t even begin. Let’s get started now.
An extract from the QSPH symposium report:
[A]ccess means more than simply being able to acquire a copy of relevant data sets. The purpose of access to data is to learn. When researchers and self-trackers think about self-collected data, they interpret access to mean “Can the data be used in my own context?” Self-collected data will change public health research because it ties science to the personal context in which the data originates. Public health research will change self-tracking practices by connecting personal questions to civic concerns and by offering novel techniques of analysis and understanding. Researchers using self-collected data, and self-trackers collaborating with researchers, are engaged in a new kind of skillful practice that blurs the line between scientists and participants… and improving access to self-collected data for personal and public benefit means broadly advancing this practice.
As part of the Quantified Self Public Health Symposium, we invited a variety of individuals from the research and academic community, including visionaries and new investigators in public health, human-computer interaction, and medicine. One of these was Jason Bobe, the Executive Director of the Personal Genome Project. When we think of the intersection of self-tracking and health, it’s hard to find something more definitive and personal than one’s own genetic code. The Personal Genome Project has operated since 2005 as a large-scale research project that aims to “bring together genomic, environmental and human trait data.”
We asked Jason to talk about his experience leading a remarkably different research agenda than what is commonly observed in health and medical research. From the outset, the Personal Genome Project was designed to fully involve and respect the autonomy, skills, and knowledge of its participants. This is manifested most clearly in one of its defining characteristics: each participant receives a full copy of their genomic data upon participation. It may be surprising to learn that this is an anomaly in most, if not all, health research. As Jason noted at the symposium, we live in an investigator-centered research environment where participants are called on to give up their data for the greater good. In Jason’s talk below, these truths are exposed, along with a few examples and insights into how the research community can move toward a more participant-centered design as it begins to address the large amounts of personal self-tracking data being gathered around the world.
I found myself returning to this talk recently when the NIH released a new Genomic Data Sharing Policy that will be applied to all NIH-funded research proposals that generate genomic data. I spent the day attempting to read through some of the policy documents and was struck by the lack of mention of participant access to research data. After digging a bit, I found that the only mention was in the “NIH Points to Consider for IRBs and Institutions”:
[...] the return of individual research results to participants from secondary GWAS is expected to be a rare occurrence. Nevertheless, as in all research, the return of individual research results to participants must be carefully considered because the information can have a psychological impact (e.g., stress and anxiety) and implications for the participant’s health and well-being.
It will come as no surprise to learn that the Personal Genome Project submitted public comments during the comment period. Among these was a recommendation to require “researchers to give these participants access to their personal data that is shared with other researchers.” Unfortunately, this recommendation appears not to have been implemented. As Jason mentioned, we still have a long way to go.
Earlier today John Wilbanks sent out this tweet:
— John Wilbanks (@wilbanks) December 11, 2013
John was lamenting the fact that he couldn’t export and store the genome interpretations that 23andMe provides (they do provide a full export of a user’s genotype). By the afternoon two developers, Beau Gunderson and Eric Jain, had submitted their projects. (You can view them here and here.)
We’ve been doing some exploration and research on QS APIs over the last two years, and we’ve come to understand that data export is a key function of personal data tools. Being able to download and retain an easily decipherable copy of your personal data is important for a variety of reasons. One just needs to spend some time in our popular Zeo Shutting Down: Export Your Data thread to understand how vital this function is.
We know that some toolmakers already include data export as part of their user experience, but many do not, or provide only partial support. I’m proposing that we, as a community of people who support and value the ability to find personal meaning through personal data, work together to provide the tools and knowledge to help people access their data.
Would you help and be a part of our Personal Data Task Force*? We can work together to build a common set of resources, tools, how-to’s and guides to help people access their personal data. I’m listening for ideas and insights. Please let me know what you think and how you might want to help.
*We’re inspired by Sina Khanifar’s work on the Rapid Response Internet Task Force.
At last month’s QS Europe 2013 conference, developers gathered at a breakout session to compile a list of common obstacles encountered when using the APIs of popular, QS-related services. We hope that this list of obstacles will be useful to toolmakers who have developed APIs for their tools or are planning to provide such APIs.
- No API, or an incomplete API that exposes only aggregate data, and not the actual data that was recorded.
- Custom authentication mechanisms (instead of e.g. OAuth), or custom extensions (e.g. for refreshing tokens with OAuth 1.0a).
- OAuth tokens that expire.
- Timestamps that lack time zone offsets: Some applications need to know how much time has elapsed between two data points (not possible if all times are local), or what e.g. the hour of the day was (not possible if all times are converted to UTC).
- Can’t retrieve data points going back more than a few days or weeks, because at least one separate request has to be made for each day, instead of being able to use a begin/end timestamp and offset/limit parameters.
- Numbers that don’t retain their precision (1 != 1.0 != 1.00), or are changed due to unit conversion (71kg = 156.528lbs = 70.9999kg?).
- No SSL, or SSL with a certificate that is not widely supported.
- Data that lacks unique identifiers (for trackability), or doesn’t include its provenance (if obtained from another service).
- No sandbox with test data for APIs that expose data from hardware devices.
- No dedicated channel for advance notifications of API changes.
This list is by no means complete, but rather a starting point that we hope will kick off a discussion around best practices.
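Two of the obstacles above — timestamps without time zone offsets, and numbers that lose their precision — are straightforward to avoid at the serialization layer. A minimal Python sketch (the offset and values here are invented for illustration):

```python
from datetime import datetime, timedelta, timezone
from decimal import Decimal

# Timestamps: keep the UTC offset so consumers can compute elapsed time
# between points *and* recover the local hour of day.
local = timezone(timedelta(hours=-8))          # e.g. the device's local offset
t = datetime(2013, 12, 11, 7, 30, tzinfo=local)
iso = t.isoformat()                            # ISO 8601 with explicit offset
utc_hour = t.astimezone(timezone.utc).hour     # unambiguous for elapsed-time math

# Precision: serialize measurements as decimal strings so 1, 1.0, and
# 1.00 stay distinguishable, instead of collapsing them in binary floats.
weight = Decimal("71.0")
serialized = str(weight)                       # keeps the recorded precision
```

An ISO 8601 timestamp with an offset and a decimal string cost nothing extra to produce, but spare every API consumer a round of guesswork about local time and rounding.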