Tag Archives: API
Philip Thomas is a software engineer at OpenDNS. He’s been collecting a lot of personal data since college, starting with his custom-built beer-tracking system. He then moved on to slightly more sophisticated personal data. As the data piled up across services and systems, he began to explore what it would take to create his own custom personal dashboard. In this talk, presented at the Bay Area QS meetup group, Philip explains how he built his dashboard and why it’s so valuable to him as he tracks his life.
Today’s post comes to us from Anne Wright and Eric Blue. Both Anne and Eric are longtime contributors to many different QS projects; most recently Anne has been involved with Fluxtream and Eric with Traqs.me. In our work we constantly run into technical questions, and both Anne and Eric have proven to be invaluable resources of knowledge and information about how data flows in and out of the self-tracking systems we all enjoy using. We were happy to have them both at the 2014 Quantified Self Europe Conference, where they co-led a breakout session on Best Practices in QS APIs. This discussion is highly important to us and the wider QS community, and we invite you to participate on the QS Forum.
Best Practices in QS APIs
Before the breakout, Eric and I sorted through the existing API forum discussion threads to decide which issues we should highlight. Three major issues stood out:
- Account binding/Authorization: OAuth2
- Time handling: unambiguous, UTC or localtime + TZ for each point
- Incremental sync support
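To make the incremental-sync point concrete, here is a minimal consumer-side sketch. The record shape and the `modified_at` field are hypothetical, not any specific vendor’s API; the idea is simply that each record carries a last-modified timestamp so a client can ask for only what changed since its last sync instead of re-downloading everything.

```python
from datetime import datetime, timezone

# Hypothetical store: each record carries a modified_at timestamp so
# consumers can ask for "everything changed since my last sync".
RECORDS = [
    {"id": 1, "steps": 4200, "modified_at": datetime(2014, 5, 1, 12, 0, tzinfo=timezone.utc)},
    {"id": 2, "steps": 8800, "modified_at": datetime(2014, 5, 2, 9, 30, tzinfo=timezone.utc)},
    {"id": 3, "steps": 1500, "modified_at": datetime(2014, 5, 3, 18, 45, tzinfo=timezone.utc)},
]

def fetch_modified_since(records, since):
    """Return only records changed after `since` -- the incremental-sync query."""
    return [r for r in records if r["modified_at"] > since]

# First sync: fetch everything, remember the newest modified_at we saw.
last_sync = max(r["modified_at"] for r in RECORDS)

# Later, record 2 is corrected upstream; only it should come back.
RECORDS[1]["modified_at"] = datetime(2014, 5, 4, 8, 0, tzinfo=timezone.utc)
changed = fetch_modified_since(RECORDS, last_sync)
print([r["id"] for r in changed])  # [2]
```

Without something like this, a consumer is stuck re-fetching its entire history on every poll, which is exactly the pain point raised in the session.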
We started the session by introducing ourselves and having everyone introduce themselves briefly and say if their interest was as an API consumer, producer, or both. We had a good mix of people with interests in each sphere.
After introductions, Eric and I talked a bit about the three main topics: why they’re important, and where we see the current situation. Then we started taking questions and comments from the group. During the discussion we added two more things to the board:
- The suggestion of encouraging the use of the ISO 8601 with TZ time format
- The importance of API producers having a good way to notify partners about API changes, and being transparent and consistent in how they use it
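A small Python sketch of why the “ISO 8601 with TZ” suggestion matters: a local timestamp with an explicit offset is unambiguous, so a consumer can convert it to UTC for elapsed-time arithmetic while still recovering the wearer’s local hour of day, which a bare UTC timestamp would have thrown away. The example timestamp is made up.

```python
from datetime import datetime, timezone

# An ISO 8601 timestamp with a timezone offset, as a producer might emit it.
stamp = "2014-05-11T09:30:00+02:00"

# Python 3.7+ parses the offset directly.
local = datetime.fromisoformat(stamp)

# The offset makes the instant unambiguous: we can convert to UTC for
# elapsed-time math across points from different zones...
utc = local.astimezone(timezone.utc)
print(utc.isoformat())   # 2014-05-11T07:30:00+00:00

# ...while still knowing the local hour of day (9 AM for the wearer).
print(local.hour)        # 9
```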
One attendee expressed the desire that the same type of measure from different sources, such as steps, should be comparable via some scaling factor, and that we should be told enough to compute that scaling factor. This topic always seems to come up in discussions of APIs and multiple data sources. Eric and I expressed the opinion that this type of expectation is a trap: there are too many qualitative differences in the behavior of different implementations to pretend they’re comparable. Eric gave the example of a site letting people compare and compete over who walks more in a given group; if that site wants to pretend different data sources are comparable, it would need to consider its own value system in deciding how to weight measures from different devices. I also stressed the importance of maintaining the provenance of where and when data came from when it’s moved from place to place or compared.
On the topic of maintaining data provenance, which I’d also mentioned in the aggregation breakout: a participant from DLR, the German space agency, came up afterwards and told me that there’s actually a formal community with conferences that cares about these issues. It might be good to build better connections between them and our QS API community.
The topic of background logging on smartphones came up. An attendee from SenseOS said that they’d figured out how to get an app that logs ambient sound levels and other sensor data on iOS through the app store on the second try.
At some point, after it seemed there weren’t any major objections to the main topics written on the board, I asked everyone to raise their right hand, put their left over their heart, and vow that if they’re involved in creating APIs that they’d try hard to do those right, as discussed during the session. They did so vow.
After the conference, one of the attendees even contacted me and said he went straight to his development team to “spread the religion about UTC, OAuth2 and syncing.” He said they were OK with most of it, but there was some pushback about OAuth2 based on this post. I told him what I saw happening with OAuth2 and sent him a link to a good rebuttal I found to that post. So our efforts are yielding fruit with at least one of the attendees.
We are thankful to Anne and Eric for leading such a great session at the conference. If you’re interested in taking part in and advancing our discussion around QS APIs and Data Flows we invite you to participate:
At last month’s QS Europe 2013 conference, developers gathered at a breakout session to compile a list of common obstacles encountered when using the APIs of popular, QS-related services. We hope that this list of obstacles will be useful to toolmakers who have developed APIs for their tools or are planning to provide such APIs.
- No API, or incomplete APIs that expose only aggregate data, not the actual data that was recorded.
- Custom authentication mechanisms (instead of e.g. OAuth), or custom extensions (e.g. for refreshing tokens with OAuth 1.0a).
- OAuth tokens that expire.
- Timestamps that lack time zone offsets: Some applications need to know how much time has elapsed between two data points (not possible if all times are local), or what e.g. the hour of the day was (not possible if all times are converted to UTC).
- Can’t retrieve data points going back more than a few days or weeks, because at least one separate request has to be made for each day, instead of being able to use begin/end timestamps and offset/limit parameters.
- Numbers that don’t retain their precision (1 != 1.0 != 1.00), or are changed due to unit conversion (71kg = 156.528lbs = 70.9999kg?).
- No SSL, or SSL with a certificate that is not widely supported.
- Data that lacks unique identifiers (for trackability), or that doesn’t include its provenance (if obtained from another service).
- No sandbox with test data for APIs that expose data from hardware devices.
- No dedicated channel for advance notifications of API changes.
This list is by no means complete, but rather a starting point that we hope will kick off a discussion around best practices.
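As one illustration of the precision obstacle above, Python’s `decimal` module shows the difference between a measurement kept as a string-backed decimal, which preserves its significant figures, and one collapsed through a float. The record layout at the end is a hypothetical sketch of the “keep the original value and unit alongside any conversion” idea, not any particular API’s schema.

```python
from decimal import Decimal

# Decimal preserves the precision the device reported: "1.00" stays "1.00".
print(Decimal("1.00"))   # 1.00

# A float collapses it to "1.0", silently discarding a significant figure.
print(float("1.00"))     # 1.0

# Unit conversion through floats can also drift. Keeping the original value
# and unit alongside any converted one avoids the 70.9999 kg problem:
kg = Decimal("71")
lbs_per_kg = Decimal("2.20462")
reading = {"original": {"value": kg, "unit": "kg"},
           "display": {"value": kg * lbs_per_kg, "unit": "lbs"}}
print(reading["display"]["value"])  # 156.52802
```

A consumer that needs kilograms back can then read the original field instead of reversing a lossy conversion.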
Editor’s Note: We’ve updated this post to reflect Google’s move to a new version of their spreadsheet application. The newest version no longer supports the Script Gallery mentioned here. We have included a link in the instruction steps below that allows you to use the old version of Google Spreadsheets. We’ll keep an eye out and update this post again if the old version is taken down. We’ve also included a new set of instructions if you’d like to use the new Google Spreadsheets; this involves making slight edits to a simple script (updated 09/22/14).
Interested in downloading your minute-by-minute Fitbit data? Check out our new how-to here! (updated 09/26/14)
We’ve updated the script to reflect Fitbit’s move to only accept HTTPS requests to access their API. Make sure to update your own scripts if you’ve modified the one linked below in Step 4 (updated 10/15/14).
If you’re like me, then you’re always looking for new ways to learn about yourself through the data you collect. As a long-time Fitbit user, I’m always drawn back to my data in order to understand my own physical activity patterns. Last year we showed you how to access your Fitbit data in a Google spreadsheet. This was by far the easiest method for people who want to use the Fitbit API but don’t have the programming skills to write their own code. As luck would have it, one of our very own QS meetup organizers, Mark Leavitt from QS Portland, decided to make some modifications to that script to make it even easier to get your data. In the video below I walk you through the steps necessary to set up your very own Fitbit data Google spreadsheet.
Step-by-step instructions after the jump.
In this talk, Beau Gunderson shares a way to bring all of your disparate data sets, from Facebook to Twitter to Foursquare to Zeo to Fitbit to Runkeeper, together in one collection to be accessed through simple APIs. It’s part of an open source development effort called The Locker Project. The hope is to be able to see new patterns and correlations by bringing these sources of data together. Beau learned some interesting things about himself, and had fun playing with different questions he had about his data. (Filmed by the Seattle QS Show&Tell meetup group.)