We hope you enjoy this week’s list!
Are Google making money from your exercise data?: Exercise activity as digital labour by Christopher Till. Christopher describes his recent paper, Exercise as Labour: Quantified Self and the Transformation of Exercise into Labour, which lays out a compelling argument for considering what happens when all of our exercise and activity data become comparable. Are we destined to become laborers, producing data that fuels an expanding commercialization of our physical activity?
How Big is the Human Genome? by Reid J. Robinson. Prompted by a recent conversation at QS Labs, I went looking for information about the size of the human genome. This post was one of the most clear descriptions I was able to find.
Visualizing Summer Travels by Geoff Boeing. A mix of Show&Tell and visualization here. Geoff is a graduate student and as part of his current studies he’s exploring mapping and visualization techniques. If you’re interested in mapping your personal GPS data, especially OpenPaths data, Geoff has posted a variety of tutorials you can use.
Symptom Portraits by Virgil Wong. For 30 weeks Virgil met with patients and helped them turn their symptoms into pieces of artwork and data visualization.
Data Visualization Rules, 1915 by Ben Schmidt. In 1915, the US Bureau of the Census published a set of rules for graphic presentation. A great find by Ben here.
Someday, you will have a question about yourself that impels you to take a look at some of your own data. It may be data about your activity, your spending at the grocery store, what medicines you’ve taken, where you’ve driven your car. And when you go to access your data, to analyze it or share it with somebody who can help you think about it, you’ll discover…
Now is the time to work hard to ensure that the data we collect about ourselves using any kind of commercial, noncommercial, medical, or social service is accessible to ourselves, as well as to our families, caregivers, and collaborators, in common formats using convenient protocols. In service to this aim, we’ve decided to work on a campaign for access, dedicated to helping people who are seeking access to their data by telling their stories and organizing in their support. Although QS Labs is a very small organization, we hope that our contribution, combined with the work of many others, will eventually make data access an acknowledged right.
The inspiration for this work comes from the pioneering self-trackers and access advocates who joined us last April in San Diego for a “QS Public Health Symposium.” Thanks to funding support from the Robert Wood Johnson Foundation, and program support from the US Department of Health and Human Services, Office of the CTO, and The Qualcomm Institute at Calit2, we convened 100 researchers, QS toolmakers, policy makers, and science leaders to discuss how to improve access to self-collected data for personal and public benefit. During our year-long investigation leading up to the meeting, we learned to see the connection between data access and public health research in a new light.
If yesterday’s research subjects were production factors in a scientist’s workshop; and if today’s participants are – ideally – fully informed volunteers with interests worthy of protection; then, the spread of self-tracking tools and practices opens the possibility of a new type of relationship in which research participants contribute valuable craft knowledge, vital personal questions, and intellectual leadership along with their data.
We have shared our lessons from this symposium in a full, in-depth report, including links to videos of all the talks and a list of attendees. We hope you find it useful. In particular, we hope you will share your own access story. Have you tried to use your personal data for personal reasons and faced access barriers? We want to hear about it.
You can tweet using the hashtag #qsaccess, send an email to email@example.com, or post to your own blog and send us a link. We want to hear from you.
The key finding in our report is that the solution to access to self-collected data for personal and public benefit hinges on individual access to our own data. The ability to download, copy, transfer, and store our own data allows us to initiate collaboration with peers, caregivers, and researchers on a voluntary and equitable basis. We recognize that access means more than merely “having a copy” of our data. Skills, resources, and access to knowledge are also important. But without individual access, we can’t even begin. Let’s get started now.
An extract from the QSPH symposium report:
[A]ccess means more than simply being able to acquire a copy of relevant data sets. The purpose of access to data is to learn. When researchers and self-trackers think about self-collected data, they interpret access to mean “Can the data be used in my own context?” Self-collected data will change public health research because it ties science to the personal context in which the data originates. Public health research will change self-tracking practices by connecting personal questions to civic concerns and by offering novel techniques of analysis and understanding. Researchers using self-collected data, and self-trackers collaborating with researchers, are engaged in a new kind of skillful practice that blurs the line between scientists and participants… and improving access to self-collected data for personal and public benefit means broadly advancing this practice.
Today’s post comes to us from Rain Ashford. Rain is a PhD student, researcher, and hardware tinkerer who is interested in how personal data can be conveyed in new and meaningful ways. She’s been exploring ideas around wearable data and the hardware that can support it. At the 2014 Quantified Self Europe Conference, Rain led a breakout session on Emotive Wearables during which she introduced her EEG Visualizing Pendant and engaged attendees in a discussion around wearing data and devices.
By Rain Ashford
It was great to visit Amsterdam again and see friends at the 3rd Quantified Self Europe Conference. I have previously spoken at the conference on Sensing Wearables, in 2011, and Visualising Physiological Data, in 2013.
There were two very prominent topics being discussed at Quantified Self Europe 2014: firstly, the quantifying of grief, and secondly, privacy and surveillance. These are two contrasting and provocative areas for attendees to contemplate, but also important to all of us, for they’re deeply personal areas we can’t avoid having a viewpoint on. My contribution to the conference was to lead a Breakout Session on Emotive Wearables and to demonstrate my EEG Visualising Pendant. Breakout Sessions are intended for audience participation, and I wanted to use this one-hour session to get feedback on my pendant for its next iteration and also to find out what people’s opinions were on emotive wearables generally.
I’ve been making wearable technology for six years and have been a PhD student investigating wearables for three years. During this time I’ve found that wearable technology is such a massive field that I have needed to coin my own terms to describe the areas I work in and focus on in my research. Two subsets I have defined terms for are responsive wearables: garments, jewellery and accessories that respond to the wearer’s environment, to interactivity with technology, or to physiological signals taken from sensor data worn on or around the body; and emotive wearables: garments, jewellery and accessories that amplify, broadcast and visualise physiological data associated with non-verbal communication, for example the emotions and moods of the wearer. In my PhD research I am looking at whether such wearable devices can be used to express non-verbal communication, and I wanted to find out what Quantified Self Europe attendees’ opinions and attitudes would be about such technology, as many attendees are super-users of personal tracking technology and are also developing it.
My EEG Visualising Pendant is an example of my practice that I would describe as an emotive wearable, because it amplifies and broadcasts physiological data of the wearer and may provoke a response from those around the wearer. The pendant visualises the brainwave attention and meditation data of the wearer simultaneously (using data from a Bluetooth NeuroSky MindWave headset), via an LED (Light Emitting Diode) matrix, allowing others to make assumptions and interpretations from the visualizations. For example, whether the person wearing the pendant is paying attention or concentrating on what is going on around them, or is relaxed and not concentrating.
After I demonstrated the EEG Visualising Pendant, I invited attendees of my breakout session to participate in a discussion and paper survey about attitudes to emotive wearables, and in particular to give feedback on the pendant. We had a mixed-gender session of various ages and a great discussion, which covered questions such as who would wear this device, or other devices that amplify one’s physiological data. We discussed the appropriateness of such personal technology and also thought in depth about privacy and the ramifications of devices that upload such data to cloud services for processing, plus the positive and possible negative aspects of data collection. Other issues we discussed included the design and aesthetics of prominent devices on the body, and where we would be comfortable wearing them.
I am still transcribing the audio from the session and analysing the paper surveys that were completed; overall, the feedback was very positive. The data I have gathered will feed into the next iteration of the EEG Visualising Pendant prototype and future devices. It will also feed into my PhD research. Since the Quantified Self Europe Conference, I have run the same focus group three more times with women interested in wearable technology, in London. I will update my blog with my findings from the focus groups and surveys in due course, plus of course information on the EEG Visualising Pendant’s next iteration as it progresses.
As much as we talk about self-tracking being about health or fitness. . . I think it’s about identity. I think it’s about us. It’s about seeing something meaningful in who we are.
Laurie Frick is a self-tracker and visual artist. It is this unique combination that has led her down a path of learning about herself while using the data she collects to inform her artistic work. What started with time and sleep tracking rapidly expanded to include other types of data. In this short talk, presented at the 2014 Quantified Self Europe Conference, Laurie explains how her past experiences have informed her new way of thinking about data: “Don’t hide. Get more.”
If you’re interested in Laurie’s artistic work I highly recommend spending some time browsing the gallery on her website.
Today’s post comes to us from Anne Wright and Eric Blue. Both Anne and Eric are longtime contributors to many different QS projects; most recently Anne has been involved with Fluxtream and Eric with Traqs.me. In our work we’ve constantly run into technical questions, and both Anne and Eric have proven to be invaluable resources of knowledge and information about how data flows in and out of the self-tracking systems we all enjoy using. We were happy to have them both at the 2014 Quantified Self Europe Conference, where they co-led a breakout session on Best Practices in QS APIs. This discussion is highly important to us and the wider QS community, and we invite you to participate on the QS Forum.
Best Practices in QS APIs
Before the breakout Eric and I sorted through the existing API forum discussion threads for what issues we should highlight. We found the following three major issues:
- Account binding/Authorization: OAuth2
- Time handling: unambiguous, UTC or localtime + TZ for each point
- Incremental sync support
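To make the incremental-sync point concrete, here is a minimal Python sketch of the pattern; all names and data in it are hypothetical, not from any particular QS API:

```python
from datetime import datetime, timezone

# A hypothetical store of data points, each stamped with the time
# it was last modified (kept in UTC to stay unambiguous).
RECORDS = [
    {"id": 1, "steps": 4200, "updated_at": datetime(2014, 5, 1, tzinfo=timezone.utc)},
    {"id": 2, "steps": 6100, "updated_at": datetime(2014, 5, 3, tzinfo=timezone.utc)},
    {"id": 3, "steps": 800,  "updated_at": datetime(2014, 5, 5, tzinfo=timezone.utc)},
]

def fetch_updates(since):
    """Return only records modified at or after `since`, so a client
    never has to re-download its entire history on every sync."""
    return [r for r in RECORDS if r["updated_at"] >= since]

# A client that last synced on May 2nd receives only records 2 and 3.
new_points = fetch_updates(datetime(2014, 5, 2, tzinfo=timezone.utc))
```

The key design choice is that the server exposes a modification timestamp per record, letting each client keep a single “last synced” cursor instead of diffing full exports.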
We started the session by introducing ourselves and having everyone introduce themselves briefly and say if their interest was as an API consumer, producer, or both. We had a good mix of people with interests in each sphere.
After introductions, Eric and I talked a bit about the three main topics: why they’re important, and where we see the current situation. Then we started taking questions and comments from the group. During the discussion we added two more things to the board:
- The suggestion of encouraging the use of the ISO 8601 time format with TZ offsets
- The importance of API producers having a good way to notify partners about API changes, and being transparent and consistent in its use
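To illustrate the ISO 8601 suggestion, here is a small Python sketch (the example timestamp is hypothetical) showing how a timezone-aware timestamp preserves the original local time while remaining unambiguously convertible to UTC:

```python
from datetime import datetime, timezone, timedelta

# A measurement taken in Amsterdam during summer time (UTC+2).
amsterdam = timezone(timedelta(hours=2))
reading_local = datetime(2014, 5, 10, 14, 30, tzinfo=amsterdam)

# Serialized as ISO 8601, the offset travels with the timestamp,
# so each point is unambiguous and the local context is preserved.
wire_format = reading_local.isoformat()   # '2014-05-10T14:30:00+02:00'

# Any consumer can still normalize to UTC to compare across sources.
as_utc = reading_local.astimezone(timezone.utc)
```

This is exactly the “UTC or localtime + TZ for each point” rule from the list above: either representation works, as long as the offset is never dropped.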
One attendee expressed the desire that the same type of measure from different sources, such as steps, should be comparable via some scaling factor, and that we should be told enough to compute that scaling factor. This topic always seems to come up in discussions of APIs and multiple data sources. Eric and I expressed the opinion that this type of expectation is a trap: there are too many qualitative differences in the behavior of different implementations to pretend they’re comparable. Eric gave the example of a site letting people compare and compete for who walks more in a given group: if this site wants to pretend different data sources are comparable, it would need to consider its own value system in deciding how to weight measures from different devices. I also stressed the importance of maintaining the provenance of where and when data came from when it’s moved from place to place or compared.
On the topic of maintaining data provenance, which I’d also mentioned in the aggregation breakout: a participant from DLR, the German space agency, came up afterwards and told me that there’s actually a formal community, with conferences, that cares about these issues. It might be good to build better connections between them and our QS API community.
The topic of background logging on smartphones came up. An attendee from SenseOS said that they’d figured out how to get an app that logs ambient sound levels and other sensor data on iOS through the App Store on the second try.
At some point, after it seemed there weren’t any major objections to the main topics written on the board, I asked everyone to raise their right hand, put their left over their heart, and vow that if they’re involved in creating APIs that they’d try hard to do those right, as discussed during the session. They did so vow.
After the conference, one of the attendees even contacted me, saying he went right to his development team to “spread the religion about UTC, OAuth2 and syncing.” He said they were OK with most of it, but that there was some pushback about OAuth2 based on this post. I told him what I saw happening with OAuth2 and sent him a link to a good rebuttal I found to that post. So our efforts are already yielding fruit with at least one of the attendees.
We are thankful to Anne and Eric for leading such a great session at the conference. If you’re interested in taking part in and advancing our discussion around QS APIs and Data Flows we invite you to participate:
This is a visualization of one month of my blood sugar readings from October 2012. I see that my control was generally good, with high blood sugars happening most often around midnight (at the top of the circle). -Doug Kanter
Richard Bernstein, an engineer with diabetes, pioneered home blood glucose monitoring. What he learned about himself contradicted the medical doctrine of his day, but Bernstein went on to become an MD himself, and established a thriving practice completely devoted to helping others with diabetes. We think of Dr. Bernstein as a hero because he used self-measurement to support his own learning, and shared what he learned for general benefit.
Tracking personal metabolism is a necessity for diabetics, and it is also something that will become increasingly common for many people who want to understand and improve their metabolism. Diabetics are also leading the fight for personal access to personal data, and we’re looking forward to meeting inspiring activists and toolmakers today at the DiabetesMine D-Data Exchange meeting in San Francisco. In honor of this meeting, we’ve put together an anthology of sorts: QS Show&Tell talks about diabetes and metabolism data.
Jana is a Type 1 diabetic and data visualization practitioner who has been working on creating new techniques for understanding the data from her Dexcom continuous blood glucose monitor. In this talk, she described some of her newest techniques and her ongoing work with Tidepool.org. You can also view her original QS show&tell talk here.
Doug has been featured here on the QS website many times. We first learned about Doug through his amazing visualizations of his own data (like the image above). At the 2013 QS Global Conference, Doug shared what he learned from tracking his diabetes, diet, activity, and other personal data and his ongoing work with the Databetes project.
We spoke with Doug about his experience with tracking, visualizing and understanding his diabetes data. You can listen to that below.
James is a graduate student, professional cyclist, and a Type 1 diabetic. In this talk at the QS San Diego meetup group he talked a bit about how he manages his diabetes alongside his near-superhuman exercise schedule, and how he uses his experience to inspire others. (Check out this great article he wrote for Ride Magazine.)
Brooks, a Type 1 diabetic, was tracking his blood glucose manually for years before switching to a continuous blood glucose meter. In this talk he describes what he’s learned from his data and why he prefers a modal day view.
Bob tracked his fasting blood glucose, diet, and activity to find out what could help him lower his risk of developing type 2 diabetes.
Vivienne’s son was diagnosed with Type 1 Diabetes two years ago and she’s applied her scientific and data analysis background to understand her son’s life.
Today’s post comes to us from Alberto Frigo who led the Data Future: Possibilities and Dream breakout session at the 2014 Quantified Self Europe Conference. Alberto started the discussion by asking a few questions: As we passionately gather our data, it is striking to reflect about its destiny. Is it going to end up in an attic? Will there be an institution interested in hosting it? Will it make any sense to future generations? Or are we going to build our own mausoleum in our backyards, or on a website with no expiration? You’re invited to read his description of the session and then join the discussion on the QS Forum.
Data Future: Possibilities and Dream
by Alberto Frigo
This breakout discussion commenced by analyzing the contemporary focus on “Big Data” as a cultural artifact. Picking up a point made by Gary Wolf in the conference welcome, we focused instead on “Our Data,” meaning the data generated via our self-quantification. To begin with, the example of Janina Turek, a Polish housewife, was given. For over fifty years she tracked very detailed facts of her life in hundreds of diaries, which were only found inside a closet after her death. That closet recalls the one used by the Russian experimental filmmaker Dziga Vertov to collect fragments of reality in the form of film clips. The introduction ended with speculation on whether “Our Data” could hypothetically serve as source material for a montage, so that future generations might make sense of it.
In response to the introduction, several interesting points were made. On one hand, there were personal accounts from people whose friends only start worrying about them when their tweets stop arriving; one participant was in fact seriously sick. Other participants in this breakout session brought up several art projects in which artists, designers, and amateurs have dealt with postmortem data. In one instance, a participant talked about an Austrian climber who died, and how his family decided to keep his Twitter account alive. Alongside this, participants raised issues of privacy, even after death. At this point a proposition emerged that the data does not necessarily need to be explicit but could instead require active interpretation, as in the autobiographical projects of the French photographer Sophie Calle. Another proposition was that “misunderstanding” of the data could actually be an interesting factor: QS data could work as a trigger in the mind of an audience scavenging through the data of a dead man or woman, not necessarily leading to a truthful recollection of the reality he or she tracked, but generating a dream-like narrative.
If you’re interested in keeping this conversation going about what should happen to our data after we’re gone you’re invited to join the discussion on the QS Forum.
While not part of this breakout session, it may be worthwhile for those interested in the longevity of personal data to see this show&tell from Mark Krynsky, presented at the Los Angeles QS Meetup group. In his talk, Mark explains why data preservation is important and how we can preserve our personal data for future generations. Mark’s great Lifestream Blog also has a more in-depth list of tools and services you can use to create your own digital legacy system.
Today’s post comes to us from Kitty Ireland, who co-led the Telling Stories With Data breakout session at the 2014 Quantified Self Europe Conference. You’re invited to read her description of the session and then join the discussion on the QS Forum. You can also view and download their session presentation here.
Telling Stories With Data
by Kitty Ireland
One theme at this year’s QS Europe conference was how we connect the practices and calculations of the Quantified Self to the emotional side of the human experience. Adrienne Andrew Slaughter and I hosted a breakout discussion on telling stories with data. The conversation evolved from how we tease out stories from personal data to why we do so. What makes a story interesting? Many of the stories we can tell with data don’t really tell us much of anything new. Sometimes it takes some exploration to draw out something revelatory. And in fact sometimes there may not be anything revelatory at all, but there’s always a story.
In the case of QS, the story is often character-driven, meaning the narrative follows some kind of personal transformation. On the other hand, when you start to look at data in aggregate (either multiple data types from an individual, or data from multiple individuals) a plot-driven narrative is more likely to emerge. You begin to see correlations, themes, and distinct plot points.
Last year, I went through an exercise with my grandmother’s diary from 1942 to see if the story in the “data” of the diary reflected her emotional truth. Diaries are by nature messy collections of data. They are inconsistent and incomplete, and in the case of my grandmother’s, nearly illegible. In order to pull out a data set, I had to look at what she tracked most consistently. This turned out to be her relationships with boys.
Counting mentions of boys’ names allowed me to create a data visualization of my grandmother’s relationships over the course of 1942. This gave me a better understanding of her emotional connections. Each boy followed a similar pattern — a steep peak followed by a slow fade — until she fell in love with Zip in August. He blew the other boys out of the water, and his numbers kept rising even after he went off to war.
Adrienne extracted her location data from Saga and created a visualization of her past 12 months. She explained how Saga adds contextual layers on top of the raw location traces, including named places, categories of places, whether this place is home or a workplace, and how much time you’ve spent there. With this extra data it is easy to see Adrienne’s typical routine, and breaks from routine appear as visual anomalies. Seasonal changes show up as subtle shifts — such as earlier commute times during the school year — which become more obvious when she extracts her daily departure time.
One question that came up is whether this kind of day-to-day routine data makes an interesting story, or are only the anomalies worth exploring? It really depends on what question you’re trying to answer, and sometimes you have to look at the data from a few angles to dig up the right question.
In looking at my grandmother’s diary, the daily details of her life give context to the wartime story of love and loss that emerges. To understand Adrienne’s story, it helps to visualize her routine to understand what plot points cause breaks in routine. In order to build a more complete story, we look for patterns and also deviations from those patterns.
Ultimately, telling stories is how we connect to each other, learn from each other, and transmit our culture. As the tsunami of personal data continues to expand, we need the right tools to understand what stories the data tells. Our data can be a rich repository of stories for ourselves, our descendants, and the archives of human history, if we can extract and preserve meaning.
If you’re interested in joining the discussion about how we can tell better stories about our lives with and through data make sure to join the discussion on the forum!
Earlier today John Wilbanks sent out this tweet:
— John Wilbanks (@wilbanks) December 11, 2013
John was lamenting the fact that he couldn’t export and store the genome interpretations that 23andMe provides (they do provide a full export of a user’s genotype). By the afternoon two developers, Beau Gunderson and Eric Jain, had submitted their projects. (You can view them here and here.)
We’ve been doing some exploration and research into QS APIs over the last two years, and we’ve come to understand that data export is a key function of personal data tools. Being able to download and retain an easily decipherable copy of your personal data is important for a variety of reasons. One need only spend some time in our popular Zeo Shutting Down: Export Your Data thread to understand how vital this function is.
We know that some toolmakers already include data export as part of their user experience, but many have not or only provide partial support. I’m proposing that we, as a community of people who support and value the ability to find personal meaning through personal data, work together to provide the tools and knowledge to help people access their data.
Would you help and be a part of our Personal Data Task Force*? We can work together to build a common set of resources, tools, how-to’s and guides to help people access their personal data. I’m listening for ideas and insights. Please let me know what you think and how you might want to help.
*We’re inspired by Sina Khanifar’s work on the Rapid Response Internet Task Force.