Topic Archives: Conference

Sara Riggare: How Not to Fall

We’re always interested in the way individuals with chronic conditions use self-tracking to better understand themselves. A great example of this is our good friend, Sara Riggare. Sara has Parkinson’s Disease and we’ve featured some of her amazing self-tracking work here before. At the 2014 Quantified Self Conference, Sara gave a short talk on what she feels is her most troublesome symptom: freezing of gait. In this talk, she explains why it’s such a big part of her daily life and how she’s using new tools and techniques to track and improve her gait.

Posted in Conference, Videos | 1 Comment

Paul LaFontaine: We Never Fight on Wednesday

Paul LaFontaine was interested in understanding his anxiety and negative emotional states. What was causing them? When were they happening? What could he do to combat them? Using TapLog, a simple Android-based tracking app (with easy data export), Paul tracked these mental events for six months as well as the triggers associated with each one. In this talk, presented at the 2014 Quantified Self Europe Conference, Paul dives deep into the data to show how he was able to learn how different triggers were related to his anxiety and stress. While exploring his data, he also discovered a few surprising and profound insights. Watch his great talk below to learn more!
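Paul's approach lends itself to simple analysis once the data is exported. As a rough sketch in Python (the CSV layout and the labels below are invented for illustration, not TapLog's actual export format), counting which triggers show up shortly before each logged anxiety event might look like this:

```python
from collections import Counter
import csv
from datetime import datetime, timedelta
from io import StringIO

# Hypothetical export: timestamp, kind ("trigger" or "anxiety"), label
SAMPLE = """\
2014-05-01 09:12,trigger,deadline
2014-05-01 09:40,anxiety,high
2014-05-01 14:05,trigger,email
2014-05-02 08:55,trigger,deadline
2014-05-02 09:30,anxiety,high
"""

def triggers_preceding_anxiety(rows, window=timedelta(hours=1)):
    """Count how often each trigger label appears within `window`
    before a logged anxiety event."""
    events = sorted(
        (datetime.strptime(ts, "%Y-%m-%d %H:%M"), kind, label)
        for ts, kind, label in rows
    )
    counts = Counter()
    for when, kind, _ in events:
        if kind != "anxiety":
            continue
        for t_when, t_kind, t_label in events:
            if t_kind == "trigger" and timedelta(0) <= when - t_when <= window:
                counts[t_label] += 1
    return counts

rows = list(csv.reader(StringIO(SAMPLE)))
print(triggers_preceding_anxiety(rows))
```

On this toy data, "deadline" precedes an anxiety event twice within the one-hour window, while "email" never does.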


Jenny Tillotson: Science, Smell, and Fashion

Jenny Tillotson is a researcher and fashion designer who is currently exploring how scent plays a role in emotion and psychological states. As someone living with bipolar disorder, she’s been acutely aware of what affects her own emotional states and has been exploring different methods to track them. In this talk, presented at the 2014 Quantified Self Europe Conference, Jenny discusses her new project, Sensory Fashion, which combines wearable tracking technology with scent and sensory science to improve wellbeing. Be sure to read her description below when you finish watching her excellent talk.


You can also view the slides here.

What did you do?
I established a new QS project called ‘SENSORY FASHION’, funded by a Winston Churchill Fellowship that combines biology with wearable technology to benefit people with chronic mental health conditions. This allowed me to travel to the USA and meet leading psychiatrists, psychologists and mindfulness experts and find new ways to build monitoring tools that SENSE and balance the physiological, psychological and emotional states through the sense of smell. My objective was to manage stress and sleep disturbance using olfactory diagnostic biosensing tools and micro delivery systems that dispense aromas on-demand. The purpose was to tap into the limbic system (the emotional centre of our brain) with aromas that reduce sleep and stress triggers and therefore prevent a major relapse for people like myself who live with bipolar disorder on a day to day basis. I designed my own personalized mood-enhancing ‘aroma rainbow’ that dispenses a spectrum of wellbeing fragrances to complement orthodox medication regimes such as taking mood stabilizers.

How did you do it?
Initially by experimenting with different evidence-based essential oils with accessible clinical data, such as inhaling lavender to aid relaxation and help sleep, sweet orange to reduce anxiety, and peppermint to stimulate the brain. I developed a technology platform called ‘eScent’, a wearable device that distributes scent directly into the immediate vicinity of the wearer in response to a sensed biometric stimulus (body odor, ECG, cognitive response, skin conductivity, etc.). The scent forms a localized and personalized ‘scent bubble’ around the user which is unique to the invention, creating real-time biofeedback scent interventions. The result promotes sleep hygiene and can treat a range of mood disorders with counteractive calming aromas when high stress levels reach a pre-set threshold.
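The trigger logic described above can be sketched in a few lines of Python. Everything here is an assumption for illustration (the class name, the numeric thresholds, and the dispense hook are invented, not details of the actual eScent device); the point is the hysteresis: one stress spike triggers one release, and the signal must settle before the device re-arms.

```python
# Illustrative only: names, thresholds, and the dispense() hook are
# assumptions, not details of the actual eScent device.
class ScentTrigger:
    def __init__(self, threshold=7.0, reset=5.0, cooldown=3):
        self.threshold = threshold   # stress level that triggers a release
        self.reset = reset           # level the signal must fall below to re-arm
        self.cooldown = cooldown     # samples to wait after a release
        self.armed = True
        self.wait = 0
        self.released = []

    def dispense(self, t):
        self.released.append(t)      # stand-in for activating the micro-dispenser

    def update(self, t, level):
        if self.wait > 0:
            self.wait -= 1
        if self.armed and self.wait == 0 and level >= self.threshold:
            self.dispense(t)
            self.armed = False
            self.wait = self.cooldown
        elif not self.armed and level < self.reset:
            self.armed = True        # signal has settled; re-arm the trigger

trigger = ScentTrigger()
signal = [4.0, 6.5, 7.2, 7.8, 7.5, 6.0, 4.8, 7.1]
for t, level in enumerate(signal):
    trigger.update(t, level)
print(trigger.released)
```

In this trace the device releases scent twice: once when the signal first crosses the threshold, and again only after it has dropped below the reset level and climbed back up.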

What did you learn?
I learnt it is possible to track emotional states through body smells, for example by detecting scent signals that are specific to individual humans. In my case this was body odor caused by chronic social anxiety, from increased cortisol levels found in sweat, and it could be treated with anxiolytic aromas such as sweet orange that create an immediate calming effect. In addition, building olfactory tools can boost self-confidence and communication skills, or identify ‘prodromal symptoms’ in mood disorders; they learn your typical patterns and act as a warning signal by monitoring minor cognitive shifts before the bigger shifts appear. This can easily be integrated into ‘Sensory Fashion’ and jewelry in a ‘de-stigmatizing’ manner, offering the user some further control of their emotional state through smell, whether by conscious control or biofeedback. The next step is to miniaturize the eScent technology and further explore the untapped research data on the science of body (emotional) odor.


QSEU14 Breakout: Passive Sensing with Smartphones

Today’s post comes to us from Freek Van Polen. Freek works at Sense Observations Systems, where they develop passive sensing applications and tools for smartphones. At the 2014 Quantified Self Europe Conference Freek led a breakout session where attendees discussed the opportunities, pitfalls, and ethical challenges associated with the increasing amount of passive data collection that is possible through the many different sensors we’re already carrying around in our pockets. We invite you to read his short description of the breakout below and continue the conversation on our forum.

Passive Sensing with Smartphones
by Freek van Polen

The session started out by using Google Now as an example of what passive sensing is, and finding out what people think about usage of sensor data in such a way. It quickly became apparent that people tend to be creeped out when Google Now suddenly appears to know where they live and where their work is, and especially dislike it when it starts giving them unsolicited advice. Following this discussion we arrived at a distinction between explicit and implicit sensing, where it is not so much about whether the user has to actively switch on sensing or enter information, but rather about whether the user is aware that sensing is going on.

From there the “uncanny valley” with respect to sensing on smartphones was discussed, as well as what people would be willing to allow an app to sense. An idea for a BBC app that would keep track of how much attention you pay to what you’re watching on television, and that would subsequently try to get you more engaged, was met with a lot of frowning. It was furthermore pointed out that passive sensing might be risky in the vicinity of children, as they are easily impressionable, are not capable of assessing whether it is desirable to have passive sensing going on, and can be tricked into giving up a lot of information.


Stefan Hoevenaar: My Father, A Quantified Diabetic

Stefan Hoevenaar’s father had Type 1 Diabetes. As a chemist, he was already quite meticulous about using data and those habits informed how he tracked and made sense of his blood sugar and insulin data. In this talk, presented at the 2014 Quantified Self Europe Conference, Stefan describes how his father kept notes and hand-drawn graphs in order to understand himself and his disease.


QSEU14 Breakout: An Imaging Mind

Today’s post comes to us from Floris van Eck. At the 2014 Quantified Self Europe Conference Floris led a breakout session on a project he’s been working on, The Imaging Mind. As imaging data becomes more prevalent, it is increasingly important to discuss the social and ethical considerations that arise when your image is stored and used, sometimes without your permission. As Floris described the session,

The amount of data is growing and with it we’re trying to find context. Every attempt to gain more context seems to generate even more imagery and thus data. How can we combine surveillance and sousveillance to improve our personal and collective well-being and safety?

We invite you to read Floris’ great description of the session and the conversation that occurred around this topic, then join the discussion on our forum.


Imaging Mind QSEU Breakout Session
by Floris Van Eck

Imaging Mind Introduction
Imaging is becoming ubiquitous and pervasive, as well as augmented. This artificial way of seeing things is quickly becoming our ‘third eye’. Just like our own eyes view and build an image and its context through our minds, so too does this ‘third eye’ create additional context while building an augmented view through an external mind powered by an intelligent grid of sensors and data. This forms an imaging mind. And it is also what we are chasing at Imaging Mind: all the roads, all the routes, all the shortcuts (and the marshes, bogs and sandpits) that lead to finding this imaging mind. To understand the imaging mind is to understand the future. And to get there we need to do a lot of exploring.

The amount of available imagery is growing and alongside that growth we try to find context. Every attempt to gain more context seems to generate even more imagery and thus data. We are watching each other while being watched. How can we combine surveillance and sousveillance to improve our personal and collective wellbeing and safety? And what consequences will this generate for privacy?

Quantified Selfie
Our break-out session of about 15 people started with a brief presentation about the first findings of the Imaging Mind project (see slides below). As an introduction, everyone in the group was then asked to take a selfie and use it to quickly introduce themselves. One person didn’t take a selfie as he absolutely loathed them. Funnily enough, the person next to him included him in his selfie anyway. It neatly illustrated the challenge for people who want to keep tabs on pictures shared online; it will become increasingly difficult to keep yourself offline. This led us to the first question: what information can be derived from your pictures now (i.e. from the selfies we started with)? If combined and analyzed, what knowledge could be discovered about our group? This was the starting point for our group discussion.

Who owns the data
Images carry a lot of metadata, and additional metadata can be derived by intelligent imaging algorithms. As those algorithms get better in the future, new context can be derived from the images. Will we be haunted by our pictures as they document more than intended? This led to the question “who uses this data?” People in the group were most afraid of abuse by governments and less so by corporations, although that was still a concern for many.

People carrying a wearable camera gather data about other people without their consent. Someone remarked that this is the first time that the outside world is affected. Wearable cameras used in public are not about the Quantified Self, but about the ‘Quantified Us’. They are therefore not only about self-surveillance, but they can be part of a larger surveillance system. The PRISM revelations by Edward Snowden are an example of how this data can be mined by governments and corporations.

Context
How are wearable cameras different from omnipresent surveillance cameras? The general consensus here was that security cameras are mostly sandboxed and controlled by one organisation. The chance that its imagery ends up on Facebook is very small. With wearable devices, people are more afraid that people will publish pictures on which they appear without their consent. This can be very confronting if combined with face recognition and tagging.

One of the things that everyone agreed on, is that pictures often give a limited or skewed context. Let’s say you point at something and that moment is captured by a wearable device. Depending on the angle and perspective, it could look like you were physically touching someone which could look very compromising when not placed in the right context. Devices that take 2,000 pictures a day greatly increase the odds that this will happen.

New social norms
One of the participants asked me about my Narrative camera. I wasn’t the only one wearing it, as the Narrative team was also in the break-out session. Did we ask the group for permission to take pictures of them? In public spaces this wouldn’t be an issue, but we were in a private conference setting. Some people were bothered by it. I mentioned that I could take it off if people asked me, as stated by Gary in the opening of the Quantified Self Conference. This led to discussing social norms. Everyone agreed that the advent of wearable cameras calls for new social norms. But which social norms do we need? This is a topic we would like to discuss further with the Quantified Self community in the online forum and at meetups.

Capturing vs. Experiencing
We briefly talked about events like music concerts. A lot of people in the group said that they were personally annoyed by the fact that a lot of people are occupied by ‘capturing the moment’ with low quality imaging devices like smartphones and pocket cameras instead of dancing and ‘experiencing the moment’. Could wearable imaging devices be the perfect solution for this problem? The group thought some people enjoy taking pictures as an action itself, so for them nothing will change.

Visual Memory
Wearable cameras create a sort of ‘visual memory’ that can be very helpful for people with memory problems like Alzheimer’s disease or dementia. An image or piece of music often triggers a memory that could otherwise not be retrieved. This is one of the positive applications of wearable imaging technology. The Narrative team has received some customer feedback that seems to confirm this.

Combining Imaging Data Sets
How can multiple imaging data sets be combined without hurting the privacy of the subjects? We talked for a long time about this question. Most people have big problems with mass surveillance and agree that permanently combining imaging data sets is not desirable. But what about temporarily? Someone in the group mentioned that the Boston marathon bombers were identified using footage submitted by people on the street. Are we willing to sacrifice some privacy for the greater good? More debate is needed here and I hope the Quantified Self community can tune in and share their vision.

Quantified Future
One interesting project I mentioned at the end of the session is called “Gorillas in the Cloud”, by the Dutch research institute TNO. The goal of “Gorillas in the Cloud” is to take a first step in bringing people into richer and closer contact with the astonishing world of wildlife. The Apenheul Zoo wants to create a richer visitors’ experience. But the project also offers unprecedented possibilities for international behavioural ecology research by providing online and non-intrusive monitoring of the Apenheul gorilla community in a contemporary, innovative way. “Gorillas in the Cloud” provides an exciting environment to innovate with sensor network technology (electronic eyes, ears and nose) in a practical way. Are these gorillas the first primates to experience the internet of things, surveillance and the quantified self in full force?

We invite you to continue the discussion on our forum.


Steven Jonas: Memorizing My Daybook

Memory, cognition, and learning are of high interest here at QS Labs. Ever since Gary Wolf published his seminal piece on SuperMemo, and its founder Piotr Wozniak, in 2008, we’ve been delighted to see how people are using spaced repetition software. Our friend and colleague, Steven Jonas, has been using SuperMemo since he read Gary’s article and slowly transitioned to daily use in 2010. Steven has been quite active in sharing with his local Portland QS meetup group how he’s used it to track his different memorization and learning projects. At the 2014 Quantified Self Europe Conference, Steven introduced a new project he’s working on: memorizing his daybook, a daily log he keeps of interesting things that happened during the day. Watch his fascinating talk below to hear him explain how he’s attempting to recall every day of his life. If you’re interested in learning more about spaced repetition we suggest this excellent primer by Gary.


You can also download the slides here.

What did you do?
I used a spaced repetition system to help me remember when an entry in my daybook occurred.

How did you do it?
Using SuperMemo, I created a flashcard each morning. On the question side, I typed what I did the previous day. On the answer side, I typed the date. SuperMemo would then schedule the review of these cards. I also played around with adding pictures and short videos from that day to the card.
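For readers curious how the scheduling works: SuperMemo’s current scheduler is more elaborate (and version-specific), but the classic published SM-2 algorithm captures the core idea of pushing each successful review further into the future. A minimal sketch, not Steven’s actual setup:

```python
# Sketch of the classic SM-2 spaced repetition algorithm: each successful
# review multiplies the interval by an "ease" factor that adapts to how
# well the card is recalled.
def sm2(quality, reps, interval, ease):
    """One review. quality: 0-5 self-grade; returns (reps, interval_days, ease)."""
    if quality < 3:                       # failed recall: start the card over
        return 0, 1, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    return reps + 1, interval, ease

# A card recalled well (grade 4) on every review drifts out over months:
reps, interval, ease = 0, 0, 2.5
schedule = []
for _ in range(5):
    reps, interval, ease = sm2(4, reps, interval, ease)
    schedule.append(interval)
print(schedule)
```

With a steady self-grade of 4, the intervals stretch from one day out to roughly three months over five reviews, which is what makes remembering thousands of daybook entries tractable.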

What did you learn?
First, that this seems to work. I’ve built up a mental map of my experiences unlike anything I’ve ever experienced. I also learned that I hardly ever remember the actual date for a card. Instead, it’s a logic puzzle, where I can recall certain details such as, “It was on a Saturday, and it was in October, the week before Halloween. And Halloween was on a Thursday that year.” From there, I can deduce the most likely day that it occurred. I’m also learning which details are most helpful for placing a memory. Experiences involving other people and different places are very memorable. Noting that I started doing something, like “I started tracking my weight”, is not memorable.


QSEU14 Breakout: Emotive Wearables

Today’s post comes to us from Rain Ashford. Rain is a PhD student, researcher, and hardware tinkerer who is interested in how personal data can be conveyed in new and meaningful ways. She’s been exploring ideas around wearable data and the hardware that can support it. At the 2014 Quantified Self Europe Conference, Rain led a breakout session on Emotive Wearables during which she introduced her EEG Visualizing Pendant and engaged attendees in a discussion around wearing data and devices. 

Emotive Wearables
By Rain Ashford

It was great to visit Amsterdam again and see friends at the 3rd Quantified Self Europe Conference. I have spoken at the conference before, on Sensing Wearables in 2011 and Visualising Physiological Data in 2013.

There were two very prominent topics being discussed at Quantified Self Europe 2014: the quantifying of grief, and privacy and surveillance. These are two very contrasting and provocative areas for attendees to contemplate, but also very important to all, for they’re very personal areas we can’t avoid having a viewpoint on. My contribution to the conference was to lead a Breakout Session on Emotive Wearables and to demonstrate my EEG Visualising Pendant. Breakout Sessions are intended for audience participation, and I wanted to use this one-hour session to get feedback on my pendant for its next iteration and also find out what people’s opinions were on emotive wearables generally.

I’ve been making wearable technology for six years and have been a PhD student investigating wearables for three years. During this time I’ve found wearable technology is such a massive field that I have needed to find my own terms to describe the areas I work in and focus on in my research. Two subsets that I have defined terms for are responsive wearables (garments, jewellery and accessories that respond to the wearer’s environment, interactivity with technology, or physiological signals taken from sensor data worn on or around the body) and emotive wearables (garments, jewellery and accessories that amplify, broadcast and visualise physiological data associated with non-verbal communication, for example the emotions and moods of the wearer). In my PhD research I am looking at whether such wearable devices can be used to express non-verbal communication, and I wanted to find out what opinions and attitudes Quantified Self Europe attendees would have about such technology, as many attendees are super-users of personal tracking technology and are also developing it.

Demo-ing EEG Visualising Pendant

My EEG Visualising Pendant is an example of my practice that I would describe as an emotive wearable, because it amplifies and broadcasts physiological data of the wearer and may provoke a response from those around the wearer. The pendant visualises the brainwave attention and meditation data of the wearer simultaneously (using data from a Bluetooth NeuroSky MindWave headset), via an LED (Light Emitting Diode) matrix, allowing others to make assumptions and interpretations from the visualizations. For example, whether the person wearing the pendant is paying attention or concentrating on what is going on around them, or is relaxed and not concentrating.
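For those curious about the data side: NeuroSky headsets report their “eSense” attention and meditation readings on a 0–100 scale. The pendant’s actual firmware and display layout aren’t described here, so the two-column bar mapping below is only an assumed illustration of how a pair of such values could drive an LED matrix:

```python
# Illustrative sketch: maps two 0-100 eSense readings onto two bar-graph
# columns of an LED matrix. The 8-row layout is an assumption, not the
# pendant's actual design.
def esense_to_columns(attention, meditation, rows=8):
    """Map two 0-100 values to the number of lit LEDs in two columns."""
    def lit(value):
        value = max(0, min(100, value))     # clamp out-of-range readings
        return round(value * rows / 100)
    return lit(attention), lit(meditation)

def render(attention, meditation, rows=8):
    """Draw the two bar columns as text, top row first."""
    a, m = esense_to_columns(attention, meditation, rows)
    lines = []
    for row in range(rows, 0, -1):
        lines.append(("A" if a >= row else ".") + " " + ("M" if m >= row else "."))
    return "\n".join(lines)

print(render(75, 30))
```

A bystander reading such a display sees only two bar heights, which is exactly the kind of coarse, interpretable signal the pendant broadcasts: high attention with low meditation reads as “concentrating”, the reverse as “relaxed”.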

After I demonstrated the EEG Visualising Pendant, I invited attendees of my breakout session to participate in a discussion and paper survey about attitudes to emotive wearables, and in particular feedback on the pendant. We had a mixed-gender session with attendees of various ages, and we had a great discussion covering areas such as who would wear this device, or other devices that amplify one’s physiological data. We discussed the appropriateness of such personal technology and also thought in depth about privacy and the ramifications of devices that upload such data to cloud services for processing, plus the positive and possible negative aspects of data collection. Other issues we discussed included the design and aesthetics of prominent devices on the body, and where we would be comfortable wearing them.

I am still transcribing the audio from the session and analysing the paper surveys that were completed; overall, the feedback was very positive. The data I have gathered will feed into the next iteration of the EEG Visualising Pendant prototype and future devices. It will also feed into my PhD research. Since the Quantified Self Europe Conference, I have run the same focus group three more times with women interested in wearable technology, in London. I will update my blog with my findings from the focus groups and surveys in due course, plus of course information on the EEG Visualising Pendant’s next iteration as it progresses.

A version of this post first appeared on Rain’s personal blog. If you’re interested in discussing emotive wearables we invite you to follow up there, with Rain on Twitter, or here in the comments.


Laurie Frick: Experiments in Self-tracking

As much as we talk about self-tracking being about health or fitness… I think it’s about identity. I think it’s about us. It’s about seeing something meaningful in who we are.

Laurie Frick is a self-tracker and visual artist. It is this unique combination that has led her down a path of learning about herself while using the data she collects to inform her artistic work. What started with time and sleep tracking rapidly expanded to include other types of data. In this short talk, presented at the 2014 Quantified Self Europe Conference, Laurie explains how her past experiences have informed her new way of thinking about data: “Don’t hide. Get more.”

If you’re interested in Laurie’s artistic work I highly recommend spending some time browsing the gallery on her website.


QSEU14 Breakout: Families and Self-tracking

Today’s post comes to us from our friend and co-organizer of the Bay Area QS meetup group, Rajiv Mehta. Rajiv and Dawn Nafus worked together to lead a breakout session at the 2014 Quantified Self Europe Conference focused on self-tracking in the family setting. They focused on the role families have in the caregiving process and how self-tracking can be used in caregiving situations. This breakout was especially interesting to us because of the recent research that has shed light on caregivers and caregiving in the United States. According to research by the Pew Internet & American Life Project, “39% of U.S. adults are caregivers and many navigate health care with the help of technology.” Furthermore, caregivers are more likely to track their own health indicators, such as weight, diet and exercise. We invite you to read the description of the breakout session below and then join the conversation on the forum.

Families & Self-Tracking
by Rajiv Mehta

In this breakout session at the Amsterdam conference, we explored self-tracking in the context of family caregiving. In the spirit of QS, we decided to “flip the conversation” — instead of talking about “them”, about how to get elderly family members to use self-tracking technologies and to allow us to see their data, we talked about “us”, about our own self-tracking and the benefits and challenges we have experienced in sharing our data with family and friends. These are the key themes that emerged.

Share But Not Be Judged
Feeling like you’re being judged, and especially misjudged, by someone else seeing your data is a very negative experience. People want to feel supported, not criticized, when they open up. Ironically, people felt that reminders and “encouragement” from an app, knowing they are based on some impersonal algorithm, were sometimes easier to accept than similar statements from family. The interactions we have with family members aren’t neutral “reminders” to do this or that; they’re loaded with years of history and subtext. One participant commented, “What I really want is an app that trains a spouse how not to judge.”

Earn The Right
So much is about learning how to earn the right to say something—that’s an ongoing negotiation, and both people and machines have to earn this. Apps screw it up when they try to be overfamiliar, your “friend.” I recalled a talk from the 2013 QS Amsterdam conference by a person who publicly shared his continuous heart rate monitoring; his boss had noticed that his heart rate had not gone up and demanded to know why he was not taking a deadline seriously! Such misjudgments can kill one’s enthusiasm for sharing.

Myth Of Self-Empowerment
Just because you’re tracking something, and plan to stick to some regimen or make some behavioral change, doesn’t mean you’re actually empowered to make it so. Family members need to be sensitive to the fact that bad data (undesirable results, lack of entries, etc.) may be a “cry for help” rather than an occasion for nagging.

Facilitating Dialog and Understanding
On the positive side, sharing data can lead to more understanding and richer conversations amongst family members. One participant described his occasional dieting efforts, which he records using MyFitnessPal and shares the information with his mother. This allows her to see how he is able to construct meals that fit the diet parameters (and so learn from his efforts), and also to just know that he is eating okay. I described the situation of a friend with a serious chronic disease who was tracking her energy levels throughout the day. In considering whether or not to share this tracking with her family she realized that they had very little appreciation of how up-and-down each day is for her. So, before she’s going to get benefits from sharing continuous energy data, she’s going to have to help her family understand the realities of her condition.

Sense of Control
Everyone felt that one key issue was that the self-tracker feel that s/he is the one making the decision to share the data, and has control over what to share, when to share, and who to share with.

We hope that before people design and deploy “remote monitoring” or “home tele-health” systems to track “others”, they first take the time to share their own data and see what it feels like.

If you’re interested in reading further about technology and caregiving we suggest the recently published report from the National Alliance for Caregiving, “Catalyzing Technology to Support Family Caregiving” by Richard Adler and Rajiv Mehta.
