Tag Archives: data access
We’re posting a quick note today to let you know that we’ve updated our “How To Download Your Fitbit Data” post. It now includes separate instructions for both the old and new versions of Google Spreadsheets. This is just the first in a series of planned updates. We hope to post additional updates that give you deeper access to your Fitbit data, including heart rate, blood pressure, and daily goal data.
If you’re using this how-to we’d love to hear from you! Are you learning something new? Making interesting data visualizations? Discussing the data with your health care team? Let us know. You can email us or post here in the comments.
As part of the Quantified Self Public Health Symposium, we invited a variety of individuals from the research and academic community. These included visionaries and new investigators in public health, human-computer interaction, and medicine. One of these was Jason Bobe, the Executive Director of the Personal Genome Project. When we think of the intersection of self-tracking and health, it’s hard to find anything more definitive and personal than one’s own genetic code. The Personal Genome Project has operated since 2005 as a large-scale research project that aims to “bring together genomic, environmental and human trait data.”
We asked Jason to talk about his experience leading a remarkably different research agenda than what is commonly observed in health and medical research. From the outset, the Personal Genome Project was designed to fully involve and respect the autonomy, skills, and knowledge of its participants. This is manifested most clearly in one of its defining characteristics: each participant receives a full copy of their genomic data upon participation. It may be surprising to learn that this is an anomaly in most, if not all, health research. As Jason noted at the symposium, we live in an investigator-centered research environment where participants are called on to give up their data for the greater good. In Jason’s talk below, these truths are exposed, along with a few examples and insights into how the research community can move toward a more participant-centered design as it begins to address the large amounts of personal self-tracking data being gathered around the world.
I found myself returning to this talk recently when the NIH released a new Genomic Data Sharing Policy that will be applied to all NIH-funded research proposals that generate genomic data. I spent the day attempting to read through some of the policy documents and was struck by the lack of mention of participant access to research data. After digging a bit I found the only mention was in the “NIH Points to Consider for IRBs and Institutions”:
[...] the return of individual research results to participants from secondary GWAS is expected to be a rare occurrence. Nevertheless, as in all research, the return of individual research results to participants must be carefully considered because the information can have a psychological impact (e.g., stress and anxiety) and implications for the participant’s health and well-being.
It will come as no surprise that the Personal Genome Project submitted public comments during the comment period. Among these was a recommendation to require “researchers to give these participants access to their personal data that is shared with other researchers.” Unfortunately, this recommendation appears not to have been implemented. As Jason mentioned, we still have a long way to go.
Today’s post comes to us from Laurie Frick. Laurie led a breakout session at the 2014 Quantified Self Europe Conference that opened up a discussion about what it would mean to access all the data being gathered about yourself and then open it up for full transparency. In the summary below, Laurie describes that discussion and her thinking about what it might mean to live an open and transparent life. If these ideas interest you, we invite you to join the conversation on our forum.
by Laurie Frick
Fear of surveillance is high, but what if societies with the most openness develop faster culturally, creatively and technically?
Open-privacy turns out to be an incredibly loaded term; something closer to “data transparency” seems to create less consternation. We opened the discussion with the idea: “What if in the future we had access to all the data collected about us, and sharing that data openly was the norm?”
Would that level of transparency give that society or country an advantage? What would it take to get there? Personally, I want access to ALL the data gathered about me, and would be willing to share much of it, especially to enable new apps, new insights, new research, and new ideas.
In our breakout, with an international group of about 21 progressive self-trackers in the Quantified Self community, I was curious to hear how this conversation would go. In the US, data privacy always gets hung up on fears of being denied health-care coverage; with a heavily EU group, all covered by socialized medicine, would the health issue fall away?
It turned out that in our discussion, health coverage was barely mentioned, but paranoia over ‘big brother’ remained. The fear simply shifted toward not-to-be-trusted corporations instead of government. The conversation ran about 18 against and 3 for transparency. An attorney from Denmark suggested that the only way to manage that amount of personal data was to open everything and strictly penalize misuse; all the schemes for authorizing use of data one at a time are non-starters.
“Wasn’t it time for fear of privacy to flip?” I asked everyone, recalling the famous Warren Buffett line “…be fearful when others are greedy and greedy when others are fearful.” It’s just about to tip the other way, I suggested. Some very progressive scientists, like John Wilbanks at the non-profit Sage Bionetworks, are activists for open sharing of health data for research. Respected researchers like danah boyd and the smart folks at the Berkman Center for Internet and Society at Harvard are pushing on this topic, and the Futures Company consultancy writes that “it’s time to rebalance the one-sided handshake,” describing the risk that public attitudes will harden as a result of the imbalance.
Once you start listing the types of personal data that are realistically gathered and known about each of us TODAY, the topic of open transparency gets very tricky.
- Time online
- Online clicks, search
- Physical location, where have you been
- Money spent on anything, anywhere
- Credit history
- Whether you exercise
- What you eat
- Sex partners
- Bio markers, biometrics
- Health history
- School grades/IQ
- Driving patterns, citations
- Criminal behavior
For those at the forefront of open privacy and data transparency, it’s better to frame it as a social construct rather than a ‘right.’ It’s not something that can be legislated, but rather an exchange between people and organizations with agreed-upon rules. It’s also not the raw data that’s valuable, but the analysis of patterns in human data.
I’m imagining one country or society will lead the way, and it will become evident that an ecosystem of researchers and apps can innovate given access to pools of cheap data. I don’t expect this will lessen the value of data to the big corporate gatherers, and companies will continue to invest. A place to start is to give individuals the right to access, download, view, correct, and delete data about themselves. In the meantime I’m sticking with my motto: “Don’t hide, get more”.
If you’re interested in the idea of open privacy, data access, and transparency please join the conversation on our forum or here in the comments.
Today’s post comes to us from Dawn Nafus and Robin Barooah. Together they led an amazing breakout session at the 2014 Quantified Self Europe Conference on the topic of understanding and mapping data access. We have a longstanding interest in observing and communicating how data moves in and out of the self-tracking systems we use every day. That interest, and support from partners like Intel and the Robert Wood Johnson Foundation, has helped us start to explore different methods of describing how data flows. We’re grateful to Dawn and Robin for taking on this important topic at the conference, and to all the breakout attendees who contributed their thoughts and ideas. If mapping data access is of interest to you, we suggest you join the conversation on the forum or get in touch with us directly.
Mapping Data Access
By Dawn Nafus and Robin Barooah
One of the great pleasures of the QS community is that there is no shortage of smart, engaged self-trackers who have plenty to say. The Mapping Data Access session was no different, but before we can tell you what actually happened, we need to explain a little about how the session came into being.
Within QS, there has been a longstanding conversation about open data. Self-trackers have not been shy about raising complaints about closed systems! Some conversations take the form of “how can I get a download of my own data?” while other conversations ask us to imagine what could be done with more data interoperability, and clear ownership over one’s own data, so that people (and not just companies) can make use of it. One of the things we noticed about these conversations is that when they start from a notion of openness as a Generally Good Thing, they sometimes become constrained by their own generality. It becomes impossible not to imagine a big pot of data in the sky. It becomes impossible not to wonder where the one single unifying standard is going to come from that would glue all this data together in a sensible way. If only the world looked something like this…
We don’t have a big pot of data in the sky, and yet data does, more or less, move around one way or another. If you ask where data comes from, the answer is “it depends.” Some data reach us just a few noise-reducing hops from the sensors that produced them, while others are shipped around through multiple services, making their provenance more difficult to track. Some points of data access come with terms and conditions attached, others less so. The system we have looks less like that and more like this…
… a heterogeneous system where some things connect, but others don’t. Before the breakout session, QS Labs had already begun a project to map the current system of data access through APIs and data downloads. It was an experiment to see whether having a more concrete sense of where data actually comes from could help improve data flows. These maps were drawn from publicly available information, and from our own sense of the systems that self-trackers are likely to encounter.
Any map has to make choices about what to represent and what to leave out, and ours was no different. The more we pursued these maps, the more it became clear that one map was not going to answer every single question about the data ecosystem, and that the choices about what to keep in and what to edit out would have to reflect how people in the community wanted to use the map. Hence the breakout session: what we wanted to know was, what questions did self-trackers and toolmakers have that could be answered with a map of data access points? And given those questions, what kind of a map should it be?
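To make the idea of a data-access map concrete, one simple approach is to treat it as a small directed graph: services as nodes, data flows as edges, with attributes describing how the data moves and whether users can export it. The sketch below is illustrative only; the service names, transports, and export flags are invented placeholders, not claims about any real product.

```python
# A minimal sketch of a data-access map as a directed graph.
# All service names and attributes here are hypothetical examples.

# Each edge: (source, destination) -> how the data moves and under what terms.
data_flows = {
    ("StepSensorCo", "StepSensorCo Cloud"): {"via": "firmware sync", "user_export": False},
    ("StepSensorCo Cloud", "AggregatorApp"): {"via": "REST API", "user_export": True},
    ("SleepTrackerX", "AggregatorApp"): {"via": "CSV upload", "user_export": True},
}

def downstream_of(source):
    """List every service that directly receives data from `source`."""
    return sorted(dst for (src, dst) in data_flows if src == source)

def export_friendly():
    """List flows where the user can also get the data out themselves."""
    return sorted(f"{src} -> {dst}" for (src, dst), attrs in data_flows.items()
                  if attrs["user_export"])

print(downstream_of("StepSensorCo Cloud"))  # which apps rely on this source?
print(export_friendly())                    # where is self-export possible?
```

Even a toy structure like this supports the kinds of queries discussed below, such as finding every third-party app that relies on a given data source, or flagging services that offer no export path at all.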
Participants in the breakout session were very clear about the questions they needed answers to. Here are some of the main issues that participants thought a mapping exercise could tackle:
Tool development: If a tool developer is planning to build an app, and that app cannot generate all the data it needs on its own, it is a non-trivial task to find out where to get what kind of data, whether the frequency of data collection suits the purpose, whether the API is stable enough, and so on. A map can ease this process.
Making good choices as consumers: Many people thought they could use a map to better understand whether the services they currently used cohered with their own sense of ‘fair dealings.’ This took a variety of forms. Some people wanted to know the difference between what a company might be capable of knowing about them versus the data they actually get back from the service. Others wanted a map that would explicitly highlight where companies were charging for data export, or the differences between what you can get as a developer working through an API and what you can get as an end user downloading his or her own data. Still others wanted the map clustered by which services are easy or difficult to get data out of at all, for the reason that (to paraphrase one participant) “you don’t want to end up in a data roach motel. People often don’t know beforehand whether they can export their own data, or even that that’s something they should care about, and then they commit to a service. Then they find they need the export function, but can’t leave.” People also wanted the ability to see clearly the business relationships in the ecosystem so they could identify the opposite of the ‘roach motel’: “I want a list of all the third party apps that rely on a particular data source, because I want to see the range of possible places it could go.”
Locating where data is processed: Many participants care deeply about the quality of the data they rely on, and need a way of interpreting the kinds of signals they are actually getting. What does the data look like when it comes off the sensor, as opposed to what you see on the service’s dashboard, as opposed to what you see when you access it through an API or export feature? Some participants have had frustrating conversations with companies about what data could fairly be treated as ‘raw’ versus where the company had cleaned it, filtered it, or even created its own metric that they found difficult to interpret without knowing what, exactly, goes into it. While some participants did indeed want a universally applicable ‘quality assessment,’ as conveners, we would point out that ‘quality’ is never absolute: noisy data at a high sample rate can be more useful for some purposes than less noisy but infrequently collected data. We interpreted the discussion as, at minimum, a call for greater transparency in how data is processed, so that self-trackers have a basis on which to draw their own conclusions about what it means.
Supporting policymaking: Some participants had a sense that maps which highlighted the legal terms of data access, including the privacy policies of service use, could support the analysis of how the technology industry is handling digital rights in practice, and that such an analysis could have public policy implications. Sometimes this idea didn’t take the form of a map, but rather a chart that would make the various features of the terms of service comparable. The list mentioned earlier of which devices and services rely on which other services was important not just for assessing the extent of data portability, but also for assessing which systems pose more risk of data leaking from one company to another without the person’s knowledge or consent. As part of the breakout, the group drew their own maps: maps they would like to exist in the world, even without all the details, or maps of what they thought happened to their own data. One person, who drew a map of where she thought her own data goes, commented (again, a paraphrase) “All I found on this map was question marks, as I tried to imagine how data moves from one place to the next. And each of those question marks appeared to me to be an opportunity for surveillance.”
What next for mapping?
If you are a participant, and you drew a map, it would help continue the discussion if you talked a little more about what you drew on the breakout forum page. If you would like to get involved in the effort, please do chime in on the forum, too.
Clearly, these ecosystems are liable to change more rapidly than they can be mapped. But given the decentralized nature of the current system (which many of us see as a good thing) we left the breakout with the sense that some significant social and commercial challenges could in fact be solved with a better sense of the contours and tendencies of the data ecosystem as it works in practice.
This work was supported by Intel Labs and the Robert Wood Johnson Foundation. One of us (Dawn) was involved in organizing support for this work, and the other (Robin) worked on the project. We are biased accordingly.