Topic Archives: Discussions
Earlier today John Wilbanks sent out this tweet:
— John Wilbanks (@wilbanks) December 11, 2013
John was lamenting the fact that he couldn’t export and store the genome interpretations that 23andMe provides (they do provide a full export of a user’s genotype). By the afternoon two developers, Beau Gunderson and Eric Jain, had submitted their projects. (You can view them here and here).
We’ve been doing some exploration and research on QS APIs over the last two years, and we’ve come to understand that data export is a key function of personal data tools. Being able to download and retain an easily decipherable copy of your personal data is important for a variety of reasons. One needs only to spend some time in our popular Zeo Shutting Down: Export Your Data thread to understand how vital this function is.
We know that some toolmakers already include data export as part of their user experience, but many have not, or provide only partial support. I’m proposing that we, as a community of people who support and value the ability to find personal meaning through personal data, work together to provide the tools and knowledge to help people access their data.
Would you help and be a part of our Personal Data Task Force*? We can work together to build a common set of resources, tools, how-to’s and guides to help people access their personal data. I’m listening for ideas and insights. Please let me know what you think and how you might want to help.
*We’re inspired by Sina Khanifar’s work on the Rapid Response Internet Task Force.
Earlier this summer we found out that the Knight Foundation was launching a challenge centered on funding “innovative ideas to harness information and data for the health of communities.” We decided that this would be a great opportunity to propose a program idea we’ve wanted to work on for a long time: A Quantified Self Civic Festival. The idea of the festival is that the highest value in personal data lies in its usefulness for self-discovery, both individually and in our communities.
Traditionally, research questions about health and wellness are addressed from the top down. Professionals choose which health measures are important, while citizens are seen mainly as sources of data and recipients of expert advice. We’d like to help turn this world upside down, inspiring individuals, families, and communities to define what they’d like to track, and why, while enlisting experts as servants to a broadly popular adventure in making knowledge. (A guiding principle of the festival would be that participants have maximum control over their own data.)
We’d love your feedback. You can comment here, but it would be very helpful if you commented on the challenge website. While you’re there, take a look at some of the other wonderful entries. There is a wealth of inspiration and we’re excited to see what comes out of this work.
Earlier this year we discussed some very interesting research from the Pew Research Center’s Internet & American Life Project about the role of technology and the Internet in health and healthcare. We were lucky to have Susannah Fox, Associate Director at Pew, talk to us a bit about what it means when 21% of people who track are using some form of technology. Of course, that conversation and that research spawned a few more questions and some interesting insights.
Today we’re looking at some brand new research results coming from Pew that are derived from that same research data set. This time Susannah and her team have focused on a particularly important set of individuals in the health and healthcare space: caregivers. In their recently released report, Family Caregivers are Wired for Health, they found that 39% of adults in the U.S. are caring for a child or an adult. So why talk about this here? What does that have to do with Quantified Self? Well, it turns out that the people who spend their time and energy caring for the health and wellbeing of others may actually be more engaged in tracking than their non-caregiving counterparts:
- 72% of caregivers track their health (weight, diet, exercise, blood pressure, sleep, etc.) while 63% of non-caregivers track their health.
- 44% of caregivers who track say they track their most important indicator “in their heads” (non-caregivers = 53%).
- 43% of caregivers who track say they track their most important indicator using paper (non-caregivers = 28%).
- 31% of caregivers track the health of someone other than themselves.
“When controlling for age, income, education, ethnicity, and good overall health, being a caregiver increases the probability that someone will track a health indicator.”
- 41% of caregivers who track share their data with someone else (non-caregivers = 29%).
- 52% of caregivers who track say it has changed their overall approach to maintaining their health or the health of someone for whom they provide care (non-caregivers = 41%).
- 50% of caregivers who track say it has led them to ask a doctor new questions or to seek a second opinion (non-caregivers = 32%).
- 44% of caregivers who track say it has affected a decision about how to treat an illness or condition (non-caregivers = 26%).
We asked our friend and fellow QS organizer, Rajiv Mehta, to comment on this report. When he’s not helping organize our Bay Area QS Meetup, Rajiv has been working on exploring and understanding caregiving.
“Given the prevalence of caregiving (40% of adults) and that 30% of caregivers track something about the person they’re caring for, there’s a lot of opportunity for appropriate tracking and analysis tools. However, caregiving often involves tracking a wide variety of medications, biometrics, symptoms, etc., and designing and developing appropriate tools is not easy. I recently wrote about my own experiences in “Self-Care and Caregiving Apps Development.” After all these years of QS meetups and conferences, I can only recall one talk on caregiver tracking (a mother tracking the progress of her baby). Hopefully we’ll see much more over time.”
Please take some time to read the full report, and for the data-savvy, take a look at the preliminary survey data and see what you can find. We would love to hear your thoughts on this new report here in our comments or on our forum.
At last month’s QS Europe 2013 conference, developers gathered at a breakout session to compile a list of common obstacles encountered when using the APIs of popular, QS-related services. We hope that this list of obstacles will be useful to toolmakers who have developed APIs for their tools or are planning to provide such APIs.
- No API, or incomplete APIs that expose only aggregate data rather than the actual data that was recorded.
- Custom authentication mechanisms (instead of e.g. OAuth), or custom extensions (e.g. for refreshing tokens with OAuth 1.0a).
- OAuth tokens that expire.
- Timestamps that lack time zone offsets: Some applications need to know how much time has elapsed between two data points (not possible if all times are local), or what e.g. the hour of the day was (not possible if all times are converted to UTC).
- Can’t retrieve data points going back more than a few days or weeks, because at least one separate request has to be made for each day, instead of being able to use a begin/end timestamp and offset/limit parameters.
- Numbers that don’t retain their precision (1 != 1.0 != 1.00), or are changed due to unit conversion (71kg = 156.528lbs = 70.9999kg?).
- No SSL, or SSL with a certificate that is not widely supported.
- Data that lacks unique identifiers (for trackability), or that doesn’t include its provenance (if obtained from another service).
- No sandbox with test data for APIs that expose data from hardware devices.
- No dedicated channel for advance notifications of API changes.
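To illustrate the time-zone point above: an ISO 8601 timestamp with an explicit offset preserves both the local hour of day and the absolute instant, so API consumers can still compute elapsed time between data points. A minimal Python (3.7+) sketch, with made-up timestamps:

```python
from datetime import datetime

# Two hypothetical data points, each stamped with its local UTC offset.
a = datetime.fromisoformat("2013-05-02T23:30:00-07:00")
b = datetime.fromisoformat("2013-05-03T08:30:00+01:00")

# The local hour of day survives (useful for time-of-day analyses)...
assert a.hour == 23

# ...and the absolute timeline survives too, so elapsed time between the
# two points remains computable even across different zones.
elapsed = (b - a).total_seconds()  # 3600.0 seconds
```

If the API returned only local times, the subtraction above would be wrong; if it returned only UTC, the hour-of-day information would be lost.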
This list is by no means complete, but rather a starting point that we hope will kick off a discussion around best practices.
This slide from Mary Meeker’s Internet Trends slide deck (link is to full deck on Slideshare) puts some numbers around what we’ve been noticing among QS Toolmakers: everybody wants to talk APIs.
At Quantified Self we’ve come to appreciate the interest in QS from scholars, researchers, and scientists. The essay below, which originally appeared on the Society Pages blog Cyborgology, was written by the thoughtful QS participant and scholar, Whitney Erin Boesel (we have collaboratively made minor edits for this posting). We learned quite a bit from it and are honored that Whitney allowed us to repost it here. Essays such as these help us think critically about QS and our growing community. We hope that posting it here will spur discussion and we invite you to add your voice in the comments or email us with essays of your own.
When people ask me what it is that I’m studying for my PhD research, my answer usually begins with, “Have you ever heard of the group Quantified Self?” I ask this question because, if the person says yes, it’s a lot easier for me to explain my project (which is looking at different forms of mood tracking, primarily within the context of Quantified Self). But sometimes asking this return question makes my explanation more difficult, too, because a lot of people have heard the word “quantified” cozy up to the word “self” in ways that make them feel angry, uncomfortable, or threatened. They don’t at all like what those four syllables sometimes seem to represent, and with good reason: the idea of a “quantified self” can stir images of big data, data mining, surveillance, loss of privacy, loss of agency, mindless fetishization of technology, even utter dehumanization.
But this is not the Quantified Self that I have come to know.
If you’re a loyal, or even infrequent, user of the Zeo sleep tracking device then you’ve probably heard the sad news that the company has shut down. This opens up a lot of questions about what it means to make consumer devices in this day and age, but rather than focus on those issues we’d like to talk a bit about data.
Zeo has unfortunately been a little quiet on the communication front, and there are quite a few users out there wondering what will happen to all those restless nights and sound sleeps captured by their device. This has been compounded by the fact that the Zeo website went down for a short time (it is up as of this writing), closing off access to user accounts and the data therein. Luckily, quite a few enterprising and enthusiastic individuals have taken the time to create or highlight ways to capture and store your Zeo data.
Use The Zeo Website
You can’t fault Zeo for making it hard to access your own data. As long as their website is up you can easily download your sleep data by logging into your user account at mysleep.myzeo.com. After logging in you will see a link on the right-hand side labeled “Export Data.” Click that link and you’ll be able to download a CSV file containing all your sleep data. They’ve even provided a description of the data and formats that you can download here.
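Once downloaded, the CSV is easy to work with in any language that can read delimited text. Here is a small Python sketch using only the standard library; the column names below are hypothetical stand-ins, so check the header row of your actual export before adapting it:

```python
import csv
import io

# A two-night sample shaped like a Zeo export.  The column names here are
# hypothetical stand-ins; check the header row of your actual download.
sample = """Start of Night,Total Sleep (min),Awakenings
2013-04-01 23:10,412,3
2013-04-02 22:55,388,1
"""

# DictReader maps each row to a dict keyed by the header names.
nights = list(csv.DictReader(io.StringIO(sample)))
avg = sum(int(n["Total Sleep (min)"]) for n in nights) / len(nights)
# avg == 400.0 minutes of sleep per night
```

For a real export you would open the downloaded file with `open(...)` instead of the inline `io.StringIO` sample.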
Eric Blue’s FreeMyZeo Data Exporter
QS Los Angeles Meetup organizer and hacker extraordinaire Eric Blue whipped up a simple data export tool using the Zeo API. The great thing about Eric’s tool is that even if the myZeo web portal goes down it should continue to work.
Download Data Directly From the Device
If you’re using a Zeo bedside device then you can continue to use it and download the data directly from the memory card without relying on uploading it to the Zeo website. In order to do this you’ll have to read the documentation and use the Data Decoder Library. These files are hard to find, as they’ve been removed from the Zeo developer website, but you can access them from our Forum thanks to our friend Dan Dascalescu. Zeo also created a viewer using this library that you can use via this Sourceforge page.
If you’ve found another way to download Zeo data please let us know. You can also participate in the great forum discussion that inspired this post.
It’s no secret we love data here at Quantified Self, but we also love seeing how people interact with data. We’ve explored many of those interactions here and we’re always on the lookout for new and different ways people communicate their data and the insights therein. A few weeks ago we wrote up a short “how to” post describing a recent phenomenon on Twitter – sparktweets. It didn’t take too long before we started seeing the Quantified Self community using these new “data words.”
— P.G. Holder (@pat_holder) April 16, 2013
— Benny Wong (@bdotdub) April 14, 2013
We couldn’t stop thinking about sparktweets. What kind of data could you communicate in 140 characters? What would people do if it was easier to make a sparktweet? So we asked our friend Stan James to help us out, and our Sparktweet Tool was born. Since then we’ve seen some great tweets roll through our feed, and we would love to see more. Need some inspiration? Here are a few we really enjoyed:
▄▃▄▃█▁█▁█▁█ My heart when I walked up to her door, 13 years ago today. (quantifiedself.com/sparktweet-too…)
— Gary Wolf (@agaricus) April 30, 2013
— Robby Macdonell (@robby1066) May 1, 2013
— Martin Putniorz (@sputnikus) May 2, 2013
— BuildingIoT (@BuildingIoT) May 2, 2013
A quick post here to highlight some interesting developments in the heart rate tracking space. Tracking and understanding heart rate has been a cornerstone of self-tracking since, well, since someone put two fingers on their neck and decided to write down how many pulses they felt. We’ve come a long way from that point. If you’re like me, tracking heart rate popped up on your radar when you started training for a sporting event like a marathon or long-distance cycling. Like many who used the pioneering devices from Polar, I felt a bit odd strapping that hard piece of plastic around my chest. Over time, after seeing the benefits of tracking heart rate, it became part of my daily ritual. Yet for all the great things heart rate monitoring can do for physical training, there have been very few advances to provide people with a noninvasive method. That is, until now.
Thearn, an enterprising Github user and developer, has released an open source tool that uses your webcam to detect your pulse. The Webcam Pulse Detector is a Python application that uses a variety of tools such as OpenCV (an open source computer vision tool) to “find the location of the user’s face, then isolate the forehead region. Data is collected from this location over time to estimate the user’s heartbeat frequency. This is done by measuring average optical intensity in the forehead location, in the subimage’s green channel alone.” If you’re interested in the research that made this work possible, check out the amazing work on Eulerian Video Magnification being conducted at MIT. Getting it to work is a bit of a hurdle, but it does appear to be working for those who have the technical expertise. If you get it working please let us know in the comments. Hopefully someone will come along and provide an easier installation solution for those of us who shy away from working in the terminal. Until then, there are actually quite a few mobile applications that use similar technology to detect and track heart rate:
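The core signal-processing idea is simple enough to sketch without a webcam: treat the mean green-channel brightness of the forehead region as a time series and find its dominant frequency. Below is a hedged, self-contained Python illustration (not Thearn’s actual code) that synthesizes a 72 bpm signal instead of reading real video frames:

```python
import math

FPS = 30.0        # assumed webcam frame rate
N = 300           # ten seconds of frames

# Stand-in for the real measurement: the mean green-channel brightness of
# the forehead region in each frame.  Here we synthesize a 72 bpm (1.2 Hz)
# pulse on a constant baseline instead of capturing actual video.
signal = [100.0 + 0.5 * math.sin(2 * math.pi * 1.2 * n / FPS) for n in range(N)]
mean = sum(signal) / N
detrended = [s - mean for s in signal]

def power_at(freq_hz):
    """Spectral power of the detrended signal at one frequency (direct DFT)."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / FPS)
             for n, s in enumerate(detrended))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / FPS)
             for n, s in enumerate(detrended))
    return re * re + im * im

# Scan the plausible human heart-rate band (40-180 bpm) in 1 bpm steps and
# keep the frequency with the most power.
best_bpm = max(range(40, 181), key=lambda bpm: power_at(bpm / 60.0))
# best_bpm == 72
```

A real implementation (like Thearn’s) adds face/forehead detection via OpenCV and filters out motion artifacts, but the frequency estimation step looks essentially like this.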
Let us know if you’ve been tracking your heart rate and what you’ve found out. We would love to explore this space together.
Update: Want to make your own Sparktweet? We made a simple tool that you can use. Check it out here!
I was stumbling around Twitter the other day when I was confronted with something new and different:
— Steve Cavendish (@scavendish) April 5, 2013
Apparently that little data representation is not all that new and different. Way back in 2010 Alex Kerin figured out that Twitter accepted Unicode and decided to play around and see if it could represent data. Lo and behold it could, and the SparkTweet was born:
▁▁▂▂▃▄▄█▁▁▂ ▃▄▄▅▆▁▁▂▂▃▄▄▅▆▁▁▂▂▃▄▄▅▆ Can you guess what I’m coding in Excel? Eh? Eh?
— Alex Kerin (@AlexKerin) June 9, 2010
Before we get into how you too can start populating your Twitter feed and Facebook (I checked and it worked there as well) with representations of your own Quantified Self data let’s dive into some history.
Edward Tufte, who coined the term, describes a sparkline in his book Beautiful Evidence as:
a small, intense, simple, word-sized graphic with typographic resolution. Sparklines mean that graphics are no longer cartoonish special occasions with captions and boxes, but rather sparkline graphics can be everywhere a word or number can be: embedded in a sentence, table, headline, map, spreadsheet, graphic.
In another wonderful book, The Visual Display of Quantitative Information, Tufte describes sparklines as “datawords: data-intense, design-simple, word-sized graphics.“ Of course, those of us in the QS community are deeply interested not only in data, but also in how data operates in society, what it means as a cultural artifact that is discussed and exchanged in language both written and verbal. This interest is what initially piqued my curiosity: the movement of data as a dataword, distributed among text and publicly expressed in a tweet. I can’t help but wonder, what does this mean for how we think about and express data about our world?*
If you want to display quantitative data in your Twitter stream, it shouldn’t take you all that long to get started. Luckily, Alex Kerin has provided a nifty little Excel workbook that will generate the Unicode characters that can be pasted into your tweet. Just download this workbook and follow the simple instructions! Soon you’ll be able to send out tweets just like this:
My 30-day step history: ▄ ▄ ▄ ▅ ▅ ▅ ▄ ▆ ▄ █ █ ▅ ▁ ▃ ▆ ▅ ▁ ▄ ▇ ▃ ▅ ▆ ▂ ▂ ▅ ▃ ▄ ▄ ▅ ▄ #QuantifiedSelf
— Ernesto Ramirez (@eramirez) April 11, 2013
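Under the hood, any sparktweet generator just maps each value onto one of the eight Unicode block elements (U+2581 through U+2588). Here is a minimal Python version of the same idea, as a sketch rather than the workbook’s exact formula:

```python
# The eight Unicode block elements, U+2581 (lowest) through U+2588 (full).
BARS = "▁▂▃▄▅▆▇█"

def sparktweet(values):
    """Scale each value into the range 0-7 and map it onto a block character."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1          # avoid dividing by zero on flat data
    return "".join(BARS[int((v - lo) / span * 7 + 0.5)] for v in values)

print(sparktweet([1, 2, 3, 4, 8]))  # ▁▂▃▄█
```

Paste the resulting string into a tweet and you have a sparktweet, no spreadsheet required.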
Now you’re ready and able to go forth and tweet your data! If you use a sparktweet to express your Quantified Self data be sure to let us know in the comments or tweet at us with #sparktweet and/or #quantifiedself.
*Of course the use of sparktweets is not without controversy in the world of data visualization. For more discussion on sparktweets and their utility I suggest you start here.