Tag Archives: discussion
Today’s post comes to us from Anne Wright and Eric Blue. Both Anne and Eric are longtime contributors to many different QS projects; most recently Anne has been involved with Fluxtream and Eric with Traqs.me. In our work we constantly run into technical questions, and both Anne and Eric have proven to be invaluable resources of knowledge and information about how data flows in and out of the self-tracking systems we all enjoy using. We were happy to have them both at the 2014 Quantified Self Europe Conference, where they co-led a breakout session on Best Practices in QS APIs. This discussion is highly important to us and the wider QS community, and we invite you to participate on the QS Forum.
Best Practices in QS APIs
Before the breakout, Eric and I sorted through the existing API forum discussion threads for the issues we should highlight. We found the following three major issues:
- Account binding/Authorization: OAuth2
- Time handling: unambiguous timestamps (UTC, or local time plus time zone, for each point)
- Incremental sync support
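The third point, incremental sync, can be made concrete with a small sketch: instead of re-downloading everything on each pull, a consumer asks only for records modified since its last sync mark. Here is a minimal Python loop; `fetch_page`, its `modified_since` parameter, and the cursor shape are all hypothetical, not from any real QS API:

```python
from datetime import datetime, timezone

def sync_incrementally(fetch_page, last_sync):
    """Pull only records modified since last_sync (a tz-aware datetime).

    fetch_page is a hypothetical API call: it accepts a modified_since
    cursor and returns (records, next_cursor); next_cursor is None on
    the final page."""
    cursor = last_sync
    all_records = []
    while True:
        records, next_cursor = fetch_page(modified_since=cursor)
        all_records.extend(records)
        if next_cursor is None:
            break
        cursor = next_cursor
    # Record the new high-water mark in UTC for the next sync.
    return all_records, datetime.now(timezone.utc)
```

The key design point is that the server, not the client, decides what "modified since" means, so edits and deletions made after the original recording date still flow downstream.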
We started the session by introducing ourselves and having everyone introduce themselves briefly and say if their interest was as an API consumer, producer, or both. We had a good mix of people with interests in each sphere.
After introductions, Eric and I talked a bit about the three main topics: why they’re important, and where we see the current situation. Then we started taking questions and comments from the group. During the discussion we added two more things to the board:
- The suggestion of encouraging the use of the ISO 8601 time format with an explicit time zone offset
- The importance of API producers having a good way to notify partners about API changes, and being transparent and consistent in their use of it
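The ISO 8601 suggestion is easy to illustrate. In Python, a timestamp carrying either a UTC marker or an explicit offset serializes to an unambiguous string, and two stamps with different offsets can still be compared as instants (the dates below are just example values):

```python
from datetime import datetime, timezone, timedelta

# Store UTC, or local time plus an explicit offset -- never a bare local time.
utc_stamp = datetime(2014, 5, 10, 14, 30, tzinfo=timezone.utc).isoformat()
# e.g. "2014-05-10T14:30:00+00:00"

local = datetime(2014, 5, 10, 16, 30, tzinfo=timezone(timedelta(hours=2)))
local_stamp = local.isoformat()
# e.g. "2014-05-10T16:30:00+02:00"

# Both stamps identify the same instant and parse back unambiguously:
assert datetime.fromisoformat(utc_stamp) == datetime.fromisoformat(local_stamp)
```

Keeping the local offset (rather than converting everything to UTC) preserves information consumers often want, such as whether steps were taken in the morning or at night in the user's own day.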
One attendee expressed the desire that the same type of measure from different sources, such as steps, should be comparable via some scaling factor, and that producers should tell us enough to compute that scaling factor. This topic always seems to come up in discussions of APIs and multiple data sources. Eric and I expressed the opinion that this expectation is a trap: there are too many qualitative differences in the behavior of different implementations to pretend they're comparable. Eric gave the example of a site letting people in a group compare and compete over who walks more; if this site wants to pretend different data sources are comparable, it would need to consider its own value system in deciding how to weight measures from different devices. I also stressed the importance of maintaining the provenance of where and when data came from when it's moved from place to place or compared.
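One way to keep that provenance attached as data moves between systems is to carry it in the record itself. A minimal sketch follows; the field names and device strings are hypothetical, not from any real API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StepSample:
    """A measurement that keeps its provenance when copied or compared."""
    value: int
    recorded_at: str     # ISO 8601 timestamp with offset, as reported by the source
    source_device: str   # e.g. "pedometer-model-x" (hypothetical)
    source_api: str      # the service this copy was fetched from
    fetched_at: str      # when this particular copy was made

def annotate(value, recorded_at, device, api):
    """Wrap a raw value with its provenance at fetch time."""
    return StepSample(value, recorded_at, device, api,
                      datetime.now(timezone.utc).isoformat())
```

With provenance riding along, an aggregator can later decide how (or whether) to weight samples from different devices, rather than silently pretending they are interchangeable.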
On the topic of maintaining data provenance, which I'd also mentioned in the aggregation breakout: a participant from DLR, the German space agency, came up afterwards and told me that there's actually a formal community, with its own conferences, that cares about these issues. It might be good to build better connections between them and our QS API community.
The topic of background logging on smartphones came up. An attendee from SenseOS said that they'd figured out how to get an app that logs ambient sound levels and other sensor data on iOS through the app store on the second try.
At some point, after it seemed there weren’t any major objections to the main topics written on the board, I asked everyone to raise their right hand, put their left over their heart, and vow that if they’re involved in creating APIs that they’d try hard to do those right, as discussed during the session. They did so vow.
After the conference, one of the attendees even contacted me and said he went right to his development team to "spread the religion about UTC, oAuth2 and syncing." He said they were OK with most of it, but that there was some pushback about OAuth2 based on this post. I told him what I saw happening with OAuth2 and sent him a link to a good rebuttal I found to that post. So our efforts are already bearing fruit with at least one of the attendees.
We are thankful to Anne and Eric for leading such a great session at the conference. If you’re interested in taking part in and advancing our discussion around QS APIs and Data Flows we invite you to participate:
The QS Meetup on March 13th in Mountain View was great fun, and covered a variety of topics: nutrient tracking, classifying large archives of footage, quantified-mind.com, pH tracking, and newly disclosed interventions for mitigating the emotional knots associated with stressful events.
The meeting began with a round of introductions in which people described some of their areas of expertise, research, and curiosity. It ended with smiles on everyone’s face.
The discussion quickly moved to interest in a recent Wired article, which suggested it was possible to mitigate the emotional impacts of traumatic memories. Some discussion pondered to what extent this intervention actually erased the memories themselves, rather than just the emotional knots associated with them. In the procedure, participants were actively encouraged to recall a debilitating stressful event, such as a wartime tragedy or familial abuse, while under the influence of a chemical concoction.
Ryan B said that a number of important new chemical interventions were being developed showing promise in reducing the brain plaques commonly associated with Alzheimer's disease. Ensuing discussion also considered the possibility that other chemicals such as MDMA may have similar promise in releasing the emotional knots associated with trauma. Alex Grey pointed out that long-term effects of down-regulating brain chemical receptors have to be considered, as some chemical interventions like MDMA can lead to a deficit in the neurochemical receptors for serotonin or oxytocin that promote happiness.
It was also suggested that the use of humor may be a non-chemical intervention capable of the same effect. One viewpoint is that any given set of facts can be written as a drama, tragedy, mystery, or a comedy. Perhaps communications-oriented processes that help to see the humor in terrible tragedies may also help to relieve the emotional knots associated with trauma. However, the use of these sorts of techniques in American culture may be challenging to adopt, owing to a propensity to take ourselves too seriously.
Yoni Donner discussed his new research with http://www.quantified-mind.com/, which is a free web app for measuring a number of aspects of intelligence. He said the app takes about fifteen minutes to establish a baseline, and allows participants to track how different aspects of their cognitive performance change over time. He said that care was taken to minimize training effects, so that the tool is better able to track other interventions, training, nutrition, or environmental factors on various types of intelligence.
He noted that they decided to keep the interface as simple as possible so that it could be easier to use. Although the use of sophisticated brain tracking was considered, it was not included owing to the relative paucity of deployment and the logistical challenges associated with setting up and using existing brain training equipment.
Phil von Stade discussed his interest in deriving some meaning, doing research on, or creating art from the 20,000 feet of archived family movies, and thousands of hours of video recordings he has, dating back to the 1950s. In addition, he has also shot several gigabytes of his own picture and video logs. He pointed out that the sheer size of the dataset makes it difficult to find or organize content in a useful way. Some efforts have been made to create slideshows and stories from this archive.
Tools which are automatically able to tag content with metadata such as participants, date, emotional patterns, behavioral data, or even physiological data might be useful in helping to better index such an archive.
There was also some discussion about the use of pH data from urine and other bodily fluids as a marker for other changes in the body. Steven Fowkes said that tracking pH was good for identifying various physiological states associated with inflammation, leaky-gut syndrome, and other health effects. Some practical considerations were noted with existing measuring techniques, which require one to hold a strip in front of a stream of urine and then dispose of it without creating a mess. A pH sensor mounted in the toilet with a Wi-Fi connection could solve these challenges, and provide a potentially reliable, no-touch measurement system for health research.
This led to a discussion on some of the challenges associated with current limitations in describing food. As it stands, many tracking systems might consider wheat from different sources, production processes, and species as identical, while each may have wildly different cooking and health effects, noted Raj. For example, early research in the late 1960s suggested that high-meat diets were leading to high levels of cholesterol and fatty deposits. Follow-up research suggested that it was in fact the growth in consumption of grain-fed cattle that was causing these effects.
There are also curious discrepancies and changes in the nutrient database compiled by the USDA, which is often used by tracking programs to estimate the nutritional composition of various foods. For example, the average level of calcium for the same portion of kale dropped 10-fold between 1980 and 1990. Perhaps improvements in the farm-to-fork databases now being mandated in the US could help to rectify these discrepancies.
We see a lot of cool things here that people are experimenting with, such as health (sleep, water intake, mood) or productivity (interruptions, hours/day, attention), but we are also trying odder things. My interest is in widening the definition of what could be considered an experiment, so I thought I’d ask, what off-the-wall things have you tracked? I’m also curious to know what kind of support or push back you got from those around you, if they were social experiments. While maybe not terribly odd, here are some of the things I’ve tried:
- Experimented with ways to keep my feet warm while mountain biking in winter (tracked left/right foot comfort).
- Tried changing my thinking around positive events (tracked the event and whether it helped me feel happier to relive it later).
- Played with different ways to prevent “wintry mix” ice buildup on sidewalks (tracked likelihood of falling – with careful testing). (Are you detecting a northern climate?)
- Tested different kinds of one-day contact lenses (tracked ease of insertion, visibility, and comfort).
- Dressed better in public (normally I’m very casual), including wearing a hat (tracked psychological and physical comfort, reactions of others, including – surprise! – special treatment at businesses).
[Image: Office Board by John F. Peto]
Some time ago I was asked for the ultimate productivity tip, and instead of giving a straightforward take-away, I said that in the end the answer is "it depends." That wasn't a cop-out, because what works for you might not work for the next guy, and vice versa. Sound familiar? It's the same case for medications, meditation, and most anything else we humans do. That's why it's best to experiment, examine your results, and decide based on the data. In other words, quantify!
But there's a complication. Coming up with metrics that reflect the value of what we do, rather than the individual efforts, can be a challenge. While the latter are simpler to measure (there's a reason that some jobs require you to clock in – "seat time" is an easy metric), the real test is how effective we are, not just how efficient. I may be cranking widgets at a fast pace, but what if I'm making the wrong ones?
Until we have a general-purpose, quantified framework for measuring value ("accomplishment units?"), we have to keep being creative. In this long post I want to seed some discussion by sharing two things: some specific productivity experiments I've tried, with their results, and a recap of the cool productivity experiments found here on Quantified Self. Please share techniques that you've found helpful.
Productivity experiments I’ve tried
Adopt a system. The single biggest productivity change I made was trying a system for organizing my work. In my case I got the GTD fever (Getting Things Done), and my results were clear, including getting far more done more efficiently, feeling more in control, and freeing up brainpower for the big picture. At the time (five years ago) I wasn’t thinking of it in terms of an experiment, but it certainly qualified. From a QS perspective it can function as a kind of tracking platform because it has you keep a comprehensive and current list of tasks (Allen calls them “actions”). I have used them for various tracking activities, mainly by characterizing or counting them.
Two-by-two charting. I’ve plotted 2D graphs of various task dimensions to analyze my state of affairs, such as importance vs. fun (a sample is here). These are a kind of concrete snapshot that I analyze over time. In the above example I decided that the upper right quadrant (vital + fun) was still a little sparse.
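The two-by-two analysis above can be done with a few lines of code. Here is a minimal sketch that buckets tasks into the four quadrants; the task names, the 1–10 rating scales, and the midpoint of 5 are all illustrative assumptions, not from the original post:

```python
# Hypothetical task ratings on 1-10 scales: (name, importance, fun)
tasks = [
    ("write report", 9, 3),
    ("refactor scripts", 6, 8),
    ("plan offsite", 8, 7),
    ("expense filing", 4, 2),
]

def quadrant(importance, fun, midpoint=5):
    """Place a task in one of the four two-by-two cells."""
    vital = importance > midpoint
    enjoyable = fun > midpoint
    if vital and enjoyable:
        return "vital + fun"
    if vital:
        return "vital + not fun"
    if enjoyable:
        return "not vital + fun"
    return "not vital + not fun"

# Group tasks by quadrant to see which cells are sparse.
by_quadrant = {}
for name, importance, fun in tasks:
    by_quadrant.setdefault(quadrant(importance, fun), []).append(name)
```

Counting how many tasks land in each cell over successive snapshots gives a simple longitudinal measure of whether the "vital + fun" quadrant is filling in.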
Seth’s post on Personal Science (especially about “data exhaust” ) got me thinking about big data and the implications for the self-tracking work we do. What evidence is there that big data will infiltrate self-experimenting? Under what conditions will self-tracking move from “small data”, or “data poor” (a few hundred or a few thousand data points) to “big data” or “data rich” (terminology from The Coming Data Deluge)? Let me share some thoughts and get yours.
Big data are datasets that grow so large that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analytics, and visualization.
This identifies an important problem. While it is natural to throw all our personal data into one big database, there are costs associated with doing so. I don’t mean those associated with capture (clearly we will solve the technical and cultural challenges), but the costs in sensemaking – turning data into actionable wisdom. Let’s put the problem into context and assume the future for personal science looks something like this (help me here):
While talking recently with my QS fellows (thanks Alex, Eri, Seth, and Rajiv) I realized I’ve been using the term “citizen science” rather loosely. Expanding on my short section in Wandering minds, self-tracking, and citizen science, I’d like to use this post to explore how the expression is used, sketch a little vision of where it could go, and get your thoughts on what it means to you.
Current usage: Citizen-as-helper
In looking around the net I've found that the general meaning of "citizen science" is that of individuals who help with scientific research by contributing time and resources to projects organized and run by professional scientists. Here's how it's defined at Citizen scientist: Helping scientists help themselves:
Citizen science is a form of organisation design for collaborative scientific research involving scientists and volunteers, for which internet-based modes of participation enable massive virtual collaboration by thousands of members of the public.
Some cool examples include:
While much of our work here is focused on individual development, there are plenty of circumstances in our professional lives where we can apply the ideas of experimentation. Let me set the stage with some background and ideas, and then I’d love to hear from you on how you widen self-tracking to apply to your occupation.
First, experimentation at work is not new. Frederick Taylor‘s Scientific management popularized applying metrics to factory worker performance in the late 1800s. Later came W. Edwards Deming, who influenced the Japanese Lean manufacturing movement in the 50s, which integrated experimentation, measurement, and continuous improvement. A more contemporary thinker is Thomas Davenport and his ideas on How to Design Smart Business Experiments (an excerpt of a paid article).
“After spending some time playing around with the idea of what it meant to have a ‘primary’ eye, I did the following experiment: I covered it with an eye patch for a day, to see if the ‘secondary’ eye would get stronger. Here’s what happened: I temporarily went blind!”
“I think Stefan raised an interesting point concerning the potential of self-tracking/experimentation to harm the subject. It might be interesting to discuss what negative experiences self-tracking has personally wrought and what we would recommend to make the experience less negative.”