The Reluctantly Quantified Parent by Erin Kissane. As a new mother, Erin was hesitant to use what she deemed “anxious technology.” After some hard nights of little sleep she began to slowly incorporate some self-tracking technology into her routine with her newborn daughter. A great read about using tools then putting them away once they’ve served their purpose. (Reminded me of this great talk by Yasmin Lucero.)
Show&Tell
Returns to Leisure by Tom VanAntwerp. Tom was interested in the return on investment from his leisure-time activities. He tracked his time spent in different non-work activities for two weeks and calculated the cost of participating in those activities.
The Quantified Microbiome Self by Carl Zimmer. The great science writer Carl Zimmer covers a recent experiment and journal article by two MIT researchers who tracked their microbiomes every day for a year. Fascinating findings, including a successful self-diagnosis of salmonella poisoning. You can also read the original research paper here.
A Personal Analysis of 1 Year of Using Citibike by Miles Grimshaw. Miles was interested in understanding more about his use of the Citibike bike share system in New York City. Using some ingenious methods he was able to download, visualize, and analyze his 268 total trips. I especially appreciate his addition of a simple “how-to” so other Citibike users can make the same visualizations.
Visualizing Runkeeper Data in R by Dan Goldin. In 2013 Dan ran 1000 miles and tracked them using the popular Runkeeper app. Runkeeper has a quick and easy data export function and Dan was able to download his data and use R to visualize and analyze his runs. (Bonus Link: If you’re a Runkeeper user you might be interested in this fantastic how-to for making a heatmap of your runs.)
Vanessa Sabino was curious about how well she was sleeping. By using the Sleep as Android app, she was able to track a year of sleep data. Before she was able to dig into the data she ran into a problem with the data export format and had to write her own custom data parser to create usable CSV files. Vanessa was then able to use the data to explore her question, “When do I get the most amount of deep sleep?” In this talk, presented at the Toronto QS meetup group, Vanessa explains her process and what she learned from analyzing 340 days of sleep data.
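Vanessa's actual parser isn't shown in the talk, so the export layout and field names below are invented assumptions purely for illustration; a minimal sketch of turning an irregular two-line-per-record export into a tidy CSV might look like:

```python
import csv
import io

# Hypothetical raw export: each sleep record spans two lines --
# a header line with date and total duration, then a comma-separated
# list of deep-sleep minutes per hour. This layout is an invented
# stand-in for the app's real export format.
RAW = """\
2014-01-01,07:30
55,40,60,35,50,45,30
2014-01-02,06:45
30,25,45,50,40,35
"""

def parse_export(raw):
    """Pair each header line with its data line and emit tidy rows."""
    lines = [ln for ln in raw.splitlines() if ln.strip()]
    rows = []
    for header, data in zip(lines[0::2], lines[1::2]):
        date, duration = header.split(",")
        deep_minutes = [int(v) for v in data.split(",")]
        rows.append({"date": date,
                     "duration": duration,
                     "deep_sleep_min": sum(deep_minutes)})
    return rows

def to_csv(rows):
    """Write the tidy rows out as a standard CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["date", "duration", "deep_sleep_min"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = parse_export(RAW)
print(to_csv(rows))
```

Once the data is in flat CSV rows like this, questions such as "when do I get the most deep sleep?" become a simple group-and-sort over the table.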
Paul LaFontaine was interested in understanding his anxiety and negative emotional states. What was causing them? When were they happening? What could he do to combat them? Using TapLog, a simple Android-based tracking app (with easy data export), Paul tracked these mental events for six months, as well as the triggers associated with each one. In this talk, presented at the 2014 Quantified Self Europe Conference, Paul dives deep into the data to show how different triggers related to his anxiety and stress. While exploring his data, he also discovered a few surprising and profound insights. Watch his great talk below to learn more!
Jenny Tillotson is a researcher and fashion designer who is currently exploring how scent plays a role in emotion and psychological states. As someone living with bipolar disorder, she’s been acutely aware of what affects her own emotional states and has been exploring different methods to track them. In this talk, presented at the 2014 Quantified Self Europe Conference, Jenny discusses her new project, Sensory Fashion, that uses wearable tracking technology and scent and sensory science to improve wellbeing. Be sure to read her description below when you finish watching her excellent talk.
What did you do?
I established a new QS project called ‘SENSORY FASHION’, funded by a Winston Churchill Fellowship, that combines biology with wearable technology to benefit people with chronic mental health conditions. This allowed me to travel to the USA, meet leading psychiatrists, psychologists and mindfulness experts, and find new ways to build monitoring tools that SENSE and balance physiological, psychological and emotional states through the sense of smell. My objective was to manage stress and sleep disturbance using olfactory diagnostic biosensing tools and micro delivery systems that dispense aromas on demand. The purpose was to tap into the limbic system (the emotional centre of our brain) with aromas that reduce sleep-disturbance and stress triggers and therefore prevent a major relapse for people like myself who live with bipolar disorder on a day-to-day basis. I designed my own personalized mood-enhancing ‘aroma rainbow’ that dispenses a spectrum of wellbeing fragrances to complement orthodox medication regimes such as taking mood stabilizers.
How did you do it?
Initially by experimenting with different evidence-based essential oils with accessible clinical data, such as inhaling lavender to aid relaxation and help sleep, sweet orange to reduce anxiety and peppermint to stimulate the brain. I developed a technology platform called ‘eScent’, which is a wearable device that distributes scent directly into the immediate vicinity of the wearer upon biometrically sensed stimuli (body odor, ECG, cognitive response, skin conductivity, etc.). The scent forms a localized and personalized ‘scent bubble’ around the user which is unique to the invention, creating real-time biofeedback scent interventions. The result promotes sleep hygiene and can treat a range of mood disorders with counter-active calming aromas when high stress levels reach a pre-set threshold.
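The trigger behavior described above (dispense a calming aroma when a stress signal crosses a pre-set threshold) can be sketched in a few lines. Every name and number here is an invented assumption; eScent's actual sensing and actuation are not public.

```python
def dispense_events(readings, threshold=0.7, alpha=0.5):
    """Return indices where the smoothed stress signal crosses the threshold upward."""
    smoothed = readings[0]
    above = smoothed > threshold
    events = []
    for i, r in enumerate(readings[1:], start=1):
        # exponential smoothing damps momentary sensor spikes
        smoothed = alpha * r + (1 - alpha) * smoothed
        if smoothed > threshold and not above:
            events.append(i)  # fire once per excursion above the threshold
        above = smoothed > threshold
    return events

# Simulated skin-conductivity-style readings that rise past the threshold twice
readings = [0.2, 0.3, 0.9, 0.95, 0.9, 0.4, 0.3, 0.9, 0.95]
print(dispense_events(readings))
```

Firing only on the upward crossing, rather than on every high reading, keeps the device from dispensing continuously while the wearer remains stressed.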
What did you learn?
I learnt it is possible to track emotional states through body smells, for example by detecting scent signals that are specific to individual humans. In my case this was body odor caused by chronic social anxiety and increased cortisol levels found in sweat, which could be treated with anxiolytic aromas such as sweet orange that create an immediate calming effect. In addition, building olfactory tools can boost self-confidence and communication skills, or identify ‘prodromal symptoms’ in mood disorders; they learn your typical patterns and act as a warning signal by monitoring minor cognitive shifts before the bigger shifts appear. This can easily be integrated into ‘Sensory Fashion’ and jewelry in a ‘de-stigmatizing’ manner, offering the user further control of their emotional state through smell, whether by conscious control or biofeedback. The next step is to miniaturize the eScent technology and further explore the untapped research data on the science of body (emotional) odor.
Today’s post comes to us from Freek van Polen. Freek works at Sense Observations Systems, where they develop passive sensing applications and tools for smartphones. At the 2014 Quantified Self Europe Conference Freek led a breakout session where attendees discussed the opportunities, pitfalls, and ethical challenges associated with the increasing amount of passive data collection that is possible through the many different sensors we’re already carrying around in our pockets. We invite you to read his short description of the breakout below and continue the conversation on our forum.
Passive Sensing with Smartphones
by Freek van Polen
The session started out by using Google Now as an example of what passive sensing is, and finding out what people think about usage of sensor data in such a way. It quickly became apparent that people tend to be creeped out when Google Now suddenly appears to know where they live and where their work is, and especially dislike it when it starts giving them unsolicited advice. Following this discussion we arrived at a distinction between explicit and implicit sensing, where it is not so much about whether the user has to actively switch on sensing or enter information, but rather about whether the user is aware that sensing is going on.
From there the group discussed the “uncanny valley” with respect to sensing on smartphones, as well as what people would be willing to allow an app to sense. An idea for a BBC app that would keep track of how much attention you pay to what you’re watching on television, and that would subsequently try to get you more engaged, was met with a lot of frowning. It was furthermore pointed out that passive sensing might be risky in the vicinity of children, as they are easily impressionable, are not capable of assessing whether it is desirable to have passive sensing going on, and can be tricked into giving up a lot of information.
Natty Hoffman was interested in learning more about how she spent her money. Not satisfied with just categorizing expenses, she dove deeper into two years of transaction data to understand where her money was going and how well her spending habits reflected her ideals. In this talk, presented at the Boston QS Meetup group, Natty explains how she examined her spending data to see if she was supporting ethical, healthy, and local businesses.
This week there are five QS meetups planned all over the world. Follow the links below to learn more. You can also find the full list of over 100 QS meetup groups in the right sidebar. Don’t see one near you? Why not start your own?
Enjoy this week’s list of articles, links, show&tells, and visualizations.
Articles
Personal Health Data: Five Key Lessons for Better Health by Patti Brennan and Stephen J. Downs. A fantastic post by two great thinkers in the world of personal health and data. They outline five key challenges that must be addressed in order to make meaningful use of personal health data.
It’s Time for Open Data on Open Data by Luke Fretwell. A short but meaningful post here. With all the clamor for more government open data portals it’s time to start exploring how they’re actually being used and what can be done to improve them.
The NFL Gets Quantified Intelligence, Courtesy Of Shoulder Pad-Mounted Motion Trackers by Darrell Etherington. As a sports fan and spouse of someone who works in sports media production, I am fascinated by how the world of personal data is quickly colliding with professional athletics. We’ve long looked towards athletes for inspiration and examples of how data can be used to understand and improve performance, and I’m very interested to see how the NFL will make use of this data. Maybe we’ll see more sabermetric-like player and team analysis?
Show&Tell
Heart Rate Variability While Giving a Public Speech by Paul LaFontaine. Paul tracked his heart rate variability while giving a show&tell talk at a recent Bay Area QS meetup. This post explains his data and what he learned about the stress involved with public speaking. Be on the lookout soon for his show&tell talk video.
Chronic Disease and Self-Tracking – Part 1 by Sara Riggare. Sara is a longtime contributor to the Quantified Self community, having spoken at each of our three QS Europe Conferences. In this post she explains her new exploration of her resting heart rate and poses some interesting questions. We’d love to have you help her out!
Raspberry Pi Sleep Lab How-To by Nick Alexander. Nick was bothered by a common nightly occurrence, kicking off his covers in the middle of the night. Like any enterprising technologist, he enlisted his technical expertise to help examine this problem. This post is an amazingly detailed “How To” for building and setting up your own personal sleep monitoring tool complete with video, environmental information, sound, and sleep data.
This week I’ve been exploring how people are making use of physical data visualizations. During some research I found a great resource, the List of Physical Visualizations. A few images below are from that great list; be sure to spend some time exploring the many different examples and then reading the excellent research paper linked below.
Evaluating the Efficiency of Physical Visualizations by Yvonne Jansen, Pierre Dragicevic, and Jean-Daniel Fekete. The first empirical study of the effectiveness of physical visualizations for conveying information. Using 3D bar charts as a primary example, the authors were able to show that physical visualizations are more effective than their digital on-screen counterparts for some information retrieval tasks.
Stefan Hoevenaar’s father had Type 1 diabetes. A chemist by training, his father was already quite meticulous about using data, and those habits informed how he tracked and made sense of his blood sugar and insulin data. In this talk, presented at the 2014 Quantified Self Europe Conference, Stefan describes how his father kept notes and hand-drawn graphs in order to understand himself and his disease.
Today’s post comes to us from Floris van Eck. At the 2014 Quantified Self Europe Conference Floris led a breakout session on a project he’s been working on, The Imaging Mind. As imaging data becomes more prevalent, it is becoming increasingly important to discuss the social and ethical considerations that arise when your image is stored and used, sometimes without your permission. As Floris described the session,
The amount of data is growing and with it we’re trying to find context. Every attempt to gain more context seems to generate even more imagery and thus data. How can we combine surveillance and sousveillance to improve our personal and collective well-being and safety?
We invite you to read Floris’ great description of the session and the conversation that occurred around this topic, then join the discussion on our forum.
Imaging Mind QSEU Breakout Session
by Floris Van Eck
Imaging Mind Introduction
Imaging is becoming ubiquitous, pervasive, and augmented. This artificial way of seeing things is quickly becoming our ‘third eye’. Just as our own eyes view and build an image and its context through our minds, so too does this ‘third eye’ create additional context while building an augmented view through an external mind powered by an intelligent grid of sensors and data. This forms an imaging mind. And it is also what we are chasing at Imaging Mind: all the roads, all the routes, all the shortcuts (and the marshes, bogs and sandpits) that lead to finding this imaging mind. To understand the imaging mind is to understand the future. And to get there we need to do a lot of exploring.
The amount of available imagery is growing and alongside that growth we try to find context. Every attempt to gain more context seems to generate even more imagery and thus data. We are watching each other while being watched. How can we combine surveillance and sousveillance to improve our personal and collective wellbeing and safety? And what consequences will this generate for privacy?
Our break-out session, with about 15 people, started with a brief presentation about the first findings of the Imaging Mind project (see slides below). As an introduction, everyone in the group was then asked to take a selfie and use it to quickly introduce themselves. One person didn’t take a selfie as he absolutely loathed them. Funnily enough, the person next to him included him in his selfie anyway. It neatly illustrated the challenge for people who want to keep tabs on pictures shared online; it will become increasingly difficult to keep yourself offline. This led us to the first question: what information can be derived from your pictures now (i.e. from the selfies we started with)? If combined and analyzed, what knowledge could be discovered about our group? This was the starting point for our group discussion.
Who owns the data?
Images carry a lot of metadata, and additional metadata can be derived by intelligent imaging algorithms. As those algorithms get better in the future, new context can be derived from them. Will we be haunted by our pictures as they document more than intended? This led to the question “who uses this data?” People in the group were most afraid of abuse by governments and less so by corporations, although that was still a concern for many.
People carrying a wearable camera gather data of other people without their consent. Someone remarked that this is the first time that the outside world is affected. Wearable cameras that are used in public are not about the Quantified Self, but about the ‘Quantified Us’. They are therefore not only about self-surveillance, but they can be part of a larger surveillance system. The PRISM revelations by Edward Snowden are an example of how this data can be mined by governments and corporations.
How are wearable cameras different from omnipresent surveillance cameras? The general consensus here was that security cameras are mostly sandboxed and controlled by one organisation. The chance that its imagery ends up on Facebook is very small. With wearable devices, people are more afraid that people will publish pictures on which they appear without their consent. This can be very confronting if combined with face recognition and tagging.
One of the things that everyone agreed on, is that pictures often give a limited or skewed context. Let’s say you point at something and that moment is captured by a wearable device. Depending on the angle and perspective, it could look like you were physically touching someone which could look very compromising when not placed in the right context. Devices that take 2,000 pictures a day greatly increase the odds that this will happen.
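The odds argument above can be put in rough numbers. The per-photo probability below is an invented assumption for illustration, not a measured figure; the point is only how quickly a tiny per-frame risk compounds at lifelogging volumes.

```python
# Illustrative arithmetic only -- p is an invented assumption.
p = 0.0005  # assumed chance that any single frame looks compromising
n = 2000    # photos per day, as mentioned in the discussion

# Probability that at least one of the day's n frames is compromising
p_at_least_one_daily = 1 - (1 - p) ** n
print(f"Chance of at least one such frame per day: {p_at_least_one_daily:.1%}")
```

Even at one-in-two-thousand per frame, a day of continuous capture makes such a frame more likely than not, which is why the group saw volume itself as the risk.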
New social norms
One of the participants asked me about my Narrative camera. I wasn’t the only one wearing it, as the Narrative team was also in the break-out session. Did we ask the group for permission to take pictures of them? In a public space this wouldn’t be an issue, but we were in a private conference setting, and some people were bothered by it. I mentioned that I could take it off if people asked me, as stated by Gary in the opening of the Quantified Self Conference. This led to a discussion of social norms. Everyone agreed that the advent of wearable cameras calls for new social norms. But which social norms do we need? This is a topic we would like to discuss further with the Quantified Self community in the online forum and at meetups.
Capturing vs. Experiencing
We briefly talked about events like music concerts. A lot of people in the group said that they were personally annoyed by the fact that a lot of people are occupied by ‘capturing the moment’ with low quality imaging devices like smartphones and pocket cameras instead of dancing and ‘experiencing the moment’. Could wearable imaging devices be the perfect solution for this problem? The group thought some people enjoy taking pictures as an action itself, so for them nothing will change.
Wearable cameras create a sort of ‘visual memory’ that can be very helpful for people with memory problems like Alzheimer’s disease or dementia. An image or piece of music often triggers a memory that could otherwise not be retrieved. This is one of the positive applications of wearable imaging technology. The Narrative team has received some customer feedback that seems to confirm this.
Combining Imaging Data Sets
How can multiple imaging data sets be combined without hurting the privacy of any of the subjects? We talked for a long time about this question. Most people have big problems with mass surveillance and agree that permanently combining imaging data sets is not desirable. But what about temporarily? Someone in the group mentioned that the Boston Marathon bombers were identified using footage submitted by people on the street. Are we willing to sacrifice some privacy for the greater good? More debate is needed here, and I hope the Quantified Self community can tune in and share their vision.
One interesting project I mentioned at the end of the session is called “Gorillas in the Cloud”, by the Dutch research institute TNO. The project is a first step toward bringing people into richer and closer contact with the astonishing world of wildlife. The Apenheul Zoo wants to create a richer visitors’ experience, but the project also offers unprecedented possibilities for international behaviour ecology research by providing online, non-intrusive monitoring of the Apenheul gorilla community in a contemporary, innovative way. “Gorillas in the Cloud” provides an exciting environment to innovate with sensor network technology (electronic eyes, ears and nose) in a practical way. Are these gorillas the first primates to experience the internet of things, surveillance, and the quantified self in full force?
We invite you to continue the discussion on our forum.
We recently started a program to invite QS Toolmakers to contribute directly to funding our events. We call this program Friends of QS. If you would like to participate, we invite you to email us to learn more.