Topic Archives: Personal Informatics
We recently released our QS Access app, which allows you to see HealthKit data in tabular format. Not very many tools feed data into HealthKit yet, but Apple’s platform does pick up step data gathered by the iPhone itself. I have step data on HealthKit going back about two weeks. When Ernesto Ramirez and I were playing around with QS Access, loading the data into Excel and looking at some simple charts, I learned something: Even when I’m active, I’m sedentary.
My daily step totals ranged from a depressing 3334 steps on Thursday, September 18 to an inspiring 21,634 steps on Friday, September 25, but – as these charts clearly show – even on the extreme days my activity was concentrated into relatively short periods when I got up from my desk and went out to do something. Most hours, every day, were spent with hardly any movement at all. I’m sitting at my desk, and sitting at my desk some more, and sitting at my desk still more. That’s probably not good. No, not good at all.
Pulling my data out of HealthKit and seeing a few simple charts gave me a bit of insight that I hope will lead to a change in how much I sit. It was great to be able to do some simple analysis of my data so easily. I hope you’ll find QS Access useful also (you can learn more about it here). Please share what you learn in the QS Access thread in the QS Forum or by emailing us about your projects: email@example.com.
Sami Inkinen, triathlete, self-quantifier, and founder of Trulia, measures his mood on a five-point scale every morning, within five minutes of waking up. This method fascinates me. I do something similar (though I use only a three-point scale). Sami has found that this quick and easy measurement reliably correlates with his athletic performance, suggesting that it indeed measures something significant about his overall well-being in the day ahead.
Read Sami’s full post here: What the first 2 minutes after waking up can tell you about the day ahead?
We’ve already published this QS Show&Tell talk by Mark Drangsholt about using self-tracking to identify the triggers of his heart problems, lessen their frequency, and make good decisions about treatment. I’m re-posting it here to focus attention on the interesting and powerful method Mark used, the case-crossover design, and to invite you to think about whether it has promise for your own self-tracking projects.
Mark is a professor and chair of oral medicine at the University of Washington School of Dentistry. He’s a triathlete and long-time self-tracker. He is in good physical condition, but suffers from heart ailments that are frightening and dangerous. For instance, he has tachycardia (sudden acceleration of heart rate). At times his heart goes from 60 to 220 beats per minute. It feels like his heart is going to jump out of his chest. He also has atrial fibrillation, with palpitations, a feeling of imminent doom, and a sense that he is choking.
“The first time it happened in 2003 I really thought I was dying,” Mark says in his talk. He had always assumed that if he ever had a heart attack he, of all people, would know to pick up the phone and call 911, but the opposite happened. He just thought to himself “this is it,” and slumped down in his chair. Fortunately, he survived, and when he recovered he asked himself whether he could identify the triggers of these unpleasant events and avoid them. He created a simple Excel table of all episodes for one year, on which he recorded information about his attacks.
Mark is an expert on evidence based medicine, so he was naturally curious about what kind of evidence his self-tracking data contained. In standard reference material on medical evidence, students learn about a hierarchy that goes something like this:
- 1 or more randomized controlled trials
- 1 or more cohort studies
- 1 or more case-control studies
- 1 or more case-series
- expert opinion without above evidence
Mark’s self-tracking data didn’t naturally fit with any of these approaches. To understand whether these triggers actually had an effect on his arrhythmias, he used a special technique originally proposed by the epidemiologists Murray Mittleman and K. Malcolm Maclure. A case-crossover design is a scientific way to answer the question: “Was the patient doing anything unusual just before the onset of the disease?” It is a design that compares the exposure to a certain agent during the interval when the event does not occur to the exposure during the interval when the event occurs.
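As a concrete illustration of the comparison this design makes, here is a minimal sketch of a 1:1 matched case-crossover analysis. The data are invented, and treating "afternoon caffeine" as the exposure is just an example; the matched-pair odds ratio is estimated from the discordant pairs, as in Maclure's approach.

```python
# Minimal sketch of a 1:1 matched case-crossover analysis on invented data.
# For each event (e.g. an arrhythmia episode), exposure to a suspected
# trigger is recorded for the "case" window just before the event and for
# a matched "control" window (e.g. the same hours one week earlier).
# The matched-pair odds ratio comes from the discordant pairs.

pairs = [  # (exposed in case window, exposed in control window)
    (True, False), (True, False), (True, True),
    (False, False), (True, False), (False, True),
]

b = sum(1 for case, ctrl in pairs if case and not ctrl)  # exposed only before the event
c = sum(1 for case, ctrl in pairs if ctrl and not case)  # exposed only in the control window

odds_ratio = b / c if c else float("inf")
print(f"discordant pairs {b}:{c}, matched odds ratio = {odds_ratio:.1f}")
```

An odds ratio well above 1 suggests the exposure occurs unusually often just before events; with only a handful of episodes, a confidence interval (e.g. an exact McNemar test) would be needed before drawing conclusions.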
Using this method, Mark discovered that events linked to his attacks included high intensity exercise, afternoon caffeine, public speaking to large groups, and inadequate sleep on the previous night. While these were not surprising discoveries, it was interesting to him to be able to rigorously analyze them, and see his intuition supported by evidence.
“A citizen scientist isn’t even on the conventional evidence pyramid,” Mark notes. “But you can structure a single subject design to raise the level of evidence and it will be more convincing.”
Please let us know if you use this method in your own projects. We’ll post more reports when we have them.
REFERENCES AND GUIDES
There are some tricks to doing a good case-crossover study on yourself. Mark’s video provides a basic introduction. For technical details, I found this detailed introduction to case-crossover design by Yue-Fang Chang especially useful.
The seminal paper on case-crossover design is “The Case-Crossover Design: A Method for Studying Transient Effects on the Risk of Acute Events” by Malcolm Maclure. (1991) [PDF] A search on Google Scholar for case-crossover design will get you deep into this literature. Unfortunately very little of it involves the kind of n-of-1 studies we’re usually interested in, but there are many technical details that may contain clues for dedicated experimenters.
One paper that will be of special interest is this one: “Should We Use a Case-Crossover Design?” by K. Malcolm Maclure and his collaborator Murray Mittleman. (2000) [PDF] In the midst of discussing technical details important for scientists proposing to use this method in studies funded by research grants whose reviewers may not be familiar with it, Maclure and Mittleman describe using case-crossover analysis to retrospectively understand more about the death of Maclure’s father. I quote the relevant section below:
We did an n-of-1 case-crossover study of hypothesized triggers of repeated syncope experienced by Kenneth Maclure (MM’s father), who was diagnosed with sick sinus syndrome and died of fatal MI at age 73 during a morning swim, after several other potential triggers. The target person times were Kenneth’s 62nd–74th years (and subsequent years if he had lived longer). The study base comprised the years 1980–1981 and 1986, during which there were 33 instances of syncope. We restricted the study base to those years because his wife, Margaret, was willing to review only 3 years of her diaries because the memories rekindled her grief. We had no intention to generalize the findings to other individuals, only to other years. Our goal was to identify triggers to which Kenneth may have been susceptible and to test Margaret’s general hypothesis, “Perhaps I should have done more to help him avoid stress.” Hypothesized triggers included visitors to the home, trips out of town, eating out, unusual exertion, and so on. The 24-h period before an episode of syncope was classified as a case day. Each case day was matched with a control day, the same 24-h period 2 weeks before. Margaret was surprised by our null findings and relieved some lingering feelings of guilt.
Personal Informatics in Practice: Enabling People to Capture, Manage and Control Information for Lifelong Goals
Bob Kummerfeld is an Associate Professor of Computer Science in the School of Information Technologies at the University of Sydney. Bob carries out research into system support for pervasive user models.
People’s long term, important goals are drivers for using personal informatics tools. For example, if a person’s goal is to achieve and maintain good health, this is a driver to capture data such as blood pressure, exercise, activity, sleep and food eaten. Personal informatics tools aim to make it easy for people to capture such information so that it is available for self-monitoring, letting people see how they are progressing towards their goals. This can also help people decide how to alter their behaviour and then see whether this helps them achieve their goals.
Our research aims to create a personal informatics framework for lifelong goals, by enabling people to have a new form of flexibility and control to:
- set relevant and realistic personal goals;
- link these flexibly to tools that capture relevant personal data;
- monitor their progress towards goals;
- and manage the data over the long term (update, share, delete, archive).
As one might expect, given the importance of goal setting and tracking, there are many goal setting systems, such as HealthMonth, GoalsOnTrack, and stickK. While these provide a variety of valuable support for goal setting, they lack support for points 2 and 4 above. We aim to address the broad challenges of enabling people to flexibly manage and control the data associated with their long term, important goals.
User control over personal data during goal setting:
To help people think about the personal data that will be useful for achieving their goals, we are exploring a rich representation of goals. This should enable people to think more effectively about their goals and the kinds of personal data that could be useful. We draw on theories such as Goal-setting Theory and Social Cognitive Theory, which point to the importance of aspects such as specificity, importance and difficulty of the goal, deadlines and feedback about the goal, and commitment and self-efficacy about being able to complete the goal. So we aim to help people think about these aspects. We explain each of these at the goal setting interface. We suggest personalised default values, explain the reasons for those recommendations, and allow users to set their own values if they wish.
User control over personal data while linking devices to goals:
Social cognitive theory also indicates that if a person is aware of their potential resources (e.g. monitoring tools, social support) towards achieving goals, they gain insight about their own capabilities. In our system, for example, if a person acquires a step counter, they are advised to set an initial goal of using it to get a baseline, by tracking daily steps walked each day over a week. Suppose this indicates they walk an average 5,000 steps a day. Our system recommends an initial goal of 6,000 steps a day for the next week, explaining that while it is well below the recommended 10,000, it is more likely to be attainable from this person’s baseline. Thus our framework both recommends goals that are likely to be achievable and explains the reasons for the recommendation.
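The baseline-then-recommend logic just described can be sketched as follows. This is my own illustration of the idea, not the authors' code; the 20% increment and the 10,000-step cap are assumptions chosen to reproduce the 5,000 → 6,000 example above.

```python
# Sketch of baseline-based goal recommendation (illustrative only).
# A week of daily step counts gives a personal baseline; the recommended
# next goal nudges upward from that baseline rather than jumping straight
# to the standard 10,000-step target.

def recommend_step_goal(daily_steps, target=10_000, increase=0.20):
    baseline = sum(daily_steps) / len(daily_steps)
    # Raise the baseline by a modest fraction, capped at the standard target.
    return min(round(baseline * (1 + increase)), target)

week = [4800, 5200, 5100, 4900, 5300, 4700, 5000]  # averages 5,000 steps/day
print(recommend_step_goal(week))  # → 6000, as in the example above
```

A system like this can then explain the recommendation in terms the user can check: the observed baseline and the modest increment applied to it.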
Personal informatics now has many different tools for monitoring health and activity. Users can choose different tools for monitoring different goals. This can create a problem which we call ‘scattered subgoals’. For example, maintaining wellbeing includes several subgoals such as “Walk 10,000 steps a day”, “Do at least 30 minutes of moderate activity per day”, or “Avoid more than 30 minutes of sitting in front of the computer”. Users might use step counters such as Fitbit for monitoring a step goal, mobile applications for logging minutes of activity, or notifiers to remind them if they have been in a static posture for more than 30 minutes. In most cases, they have to visit different web sites to monitor different goals, which makes monitoring hard. Available goal setting systems have not addressed this issue so far.
Our vision is to make it much easier for people to monitor their diverse goals because our system enables them to aggregate their personal data for all their goals, extracting it from different systems and keeping it in a single store that the individual controls. Since more and more APIs are becoming available for developing mashups for personal health informatics, we can readily extract such information. The challenge still remains to ensure the person can control this aggregation and then manage the information effectively so that it serves their goals.
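A minimal sketch of this aggregation idea follows. The fetcher functions below are hypothetical stand-ins for real service APIs, not actual endpoints; the point is simply that data for all goals lands in one store the individual controls.

```python
# Illustrative only: each fetcher stands in for a different service's API.
# Data for every goal is pulled into a single user-controlled store.

def fetch_steps():           # hypothetical stand-in for a pedometer service
    return {"2013-04-01": 7200, "2013-04-02": 9400}

def fetch_active_minutes():  # hypothetical stand-in for an activity-logging app
    return {"2013-04-01": 25, "2013-04-02": 40}

personal_store = {}  # single user-controlled store, keyed by date
for metric, fetch in [("steps", fetch_steps),
                      ("active_minutes", fetch_active_minutes)]:
    for day, value in fetch().items():
        personal_store.setdefault(day, {})[metric] = value

print(personal_store["2013-04-02"])  # → {'steps': 9400, 'active_minutes': 40}
```

With everything in one store, a single dashboard can monitor all subgoals, and the user can later share, compact, or delete the data on their own terms.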
User access to aggregated information for goal monitoring:
An important part of our work is to enable people to see several goals together and to log salient notes about them. The example in Figure 1 shows a hypothetical user monitoring three goals:
- a 10k steps/day walking goal (green graph);
- 5 periods of intense activity per week (red dots);
- at least 60 minutes of moderate activity daily (blue graph).
The figure illustrates the user noting a quiz that interfered with achieving the goals (just as they noted that they were sick in the previous week). Theories of metacognition indicate the importance of enabling people to log such salient life events to explain the progress achieved and make sense of long term information and trends.
User control over managing personal data:
Finally, existing systems lack support for people to manage their lifelong personal information. We have identified several important levels of control:
- determining which information can be shared with others;
- easy ways to remove information, for example when sensor data is wrong (such as when they allowed someone else to use their step counter);
- transforming the information into compacted forms, for example reducing fine-grained sensor data into higher-level information about goals, thus reducing the amount of information kept and the privacy risk it creates.
To achieve user control over goal-related data, we will design and evaluate interfaces for managing goals and reflecting over the long term by defining goals; monitoring the social and cognitive information associated with each goal; and reviewing goals. These will enable users to connect sensors and choose the type and frequency of feedback, including e-mail, tweets, desktop notifications and ambient displays. The driving design goal of our framework is to ensure user control of personal data.
Bon Adriel Aseniero is currently a computer science undergraduate researcher at the University of Calgary under the supervision of Dr. Sheelagh Carpendale and Dr. Anthony Tang. He has an interest in Art and Aesthetic Design, while his research is mainly in Personal Informatics and Visual Analytics.
I have used some applications on my phone that keep track of my activities. Most of them do a good job in their own right; however, they always seem to come up short – no single application tracks my activities the way I really want them tracked, and the feedback is almost always some graphs that are either unappealing or leave no room for self-discovery. I can’t play with my data.
From the above anecdote, we can agree that users of personal informatics tools are not just members of a generalized population but also individuals. As such, they have their own goals and reasons for using the tools, and use a variety of reflection methods, some of which may be unique to the individual. While these goals and reflection methods may be similar enough to be addressed by a generalized, one-size-fits-all personal informatics tool, I just can’t let go of the fact that some needs may not be fully met. Moreover, the feedback mechanism lacks participation from the individual – what you see is what you get (WYSIWYG); there is little room for an individual to experiment on his or her data to answer questions beginning with “why” or “what if”.
So if Personal Informatics is all about Personal Data, why not make the tools for reflection personalized as well?
As a possible way of supporting the above question, I propose Deep Personalization: the process of allowing individuals to create, or to some extent customize, visualizations that represent and/or integrate their data. In addition to the more meaningful visualizations that result, I argue that the process of tailoring and customizing different visualizations is an activity that in and of itself provides considerable insight to individuals.
This idea stems from the time when I created three different visualizations of different aspects of my life which I found interesting, and their integration. The first visualization is Activity River, which shows a stream representing my activities throughout a day. The second visualization is D’Ripples, or Directional Ripples, which shows ripples representing the directions I’ve looked in through the day and the things I see in those directions. Lastly, Place Well is a visualization of the places I went to in a day. Integrating all of these is Hours, in which I took the visual aspects I deemed important in the previous three visualizations and combined them into a new interactive visualization. The design process for each visualization required several sketches, which provided me with a wealth of insight that is generally not accounted for by pre-created visualizations. Not only did this ensure that the resulting visualization represents my data correctly, it also allowed me to find personally meaningful representations of my data. Furthermore, being able to participate in the feedback mechanism allowed me to uncover correlations that I may not have seen with current WYSIWYG feedback tools. It is almost like learning something new, e.g. cooking: it is better to actually participate in the act of cooking than to just watch someone else do it.
However, even though the rewards of Deep Personalization may prove really beneficial to the individual, it faces a big challenge. Much like cooking, not everyone who tries it on their own ends up making something great: some fail while others excel. Creating visualizations is not a trivial task. Some questions we as a community should try to address are: “To what extent should the individual be able to customize the visualizations or any other tools for reflection?” and “What type of tool should we provide for Deep Personalization – a tool as extensively freehand as Photoshop, or a more restrictive tool that gives the individual a set of building blocks to play with?” Nevertheless, there is a philosophical benefit that can arise from Deep Personalization, and it all lies in finding an effective method for supporting it in our current Personal Informatics tools.
Adrienne Andrew is a PhD candidate in Computer Science at the University of Washington. She is interested in studying how people use food diaries on mobile phones: what challenges “typically motivated” users face, how to balance capturing less detailed yet still valuable food information, and identifying new ways to organize food databases to support a wider range of dietary analysis.
One primary concern for the field of personal informatics (PI) is supporting people in making changes in their life. A driving theory for PI designers and researchers is Social Cognitive Theory [Bandura, 1977], which posits that a person’s behavior, environment and inner qualities all contribute to how a person functions. This theory has been applied to understanding how people learn, how social environments impact what people do, and how people regulate their own behavior. A key component in this theory is self-efficacy (SE), which is summarized as a belief in one’s abilities.
The question I pose to both PI researchers and self-quantifiers is whether your experiences support the idea that self-efficacy truly reflects intent and ability to engage in key behavior change strategies.
SE is traditionally measured by self-report. To develop SE measurements for a particular domain, researchers use open-ended approaches to identify common challenges and barriers to the problem. They then develop a series of statements of the form “How confident are you that you can [achieve goal] even though [challenge]?” with a 4-unit response scale ranging from “Cannot do it” to “Highly certain can do”. An example of a statement is “How confident are you that you can stick to a healthy eating plan after a long, tiring day at work?” SE measures provide valuable feedback about whether an intervention is supporting adherence to behavior change strategies, and indicate whether participants complete the study with an intention to continue.
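For concreteness, scoring such a scale might look like this. The items and the mean-scoring convention are illustrative assumptions, not a validated instrument.

```python
# Hypothetical self-efficacy (SE) scale scoring. Each item is answered on
# a 4-point scale from "Cannot do it" (1) to "Highly certain can do" (4);
# here the scale score is simply the mean of the item responses.

items = [
    "stick to a healthy eating plan after a long, tiring day at work",
    "stick to a healthy eating plan when eating out with friends",
    "stick to a healthy eating plan when feeling stressed",
]
responses = [3, 2, 4]  # one answer per item, each in 1..4

assert len(responses) == len(items)
se_score = sum(responses) / len(responses)
print(f"self-efficacy score: {se_score:.2f} (range 1-4)")
```

Administering the same items before and after an intervention then gives a simple signal of whether confidence moved in the intended direction.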
This is an important feature for PI researchers: we are familiar with a domain and common challenges, so can build the scales easily; we usually use short-term studies to indicate long-term impact; and properly designed scales can help us to discover where a PI tool breaks down.
Now that we have described how SE can be measured and its relevance to PI researchers, it is important to acknowledge factors that may impact the measurements as applicable to PI tools. In addition to basic usability (which I would also argue is more important to the “common consumer” as opposed to highly-motivated quantified-selfers), a user’s goals (internal motivation) and trust in the technology are key.
How well the tool matches the user’s goals.
This is a point that is likely more important to researchers than to quantified-selfers. It refers to both a goal the user has and that the user has a belief in what they need to do in order to attain that goal. A user who is trying to lose weight may choose to focus on restricting caloric intake as well as increasing caloric expenditure, or choose to focus on only one of those areas. Social cognitive theory says these beliefs are based on what the user has observed amongst their peers, and how similar or different the user is from their peers.
We observed this in the BALANCE studies. BALANCE consisted of a food diary to capture caloric intake, an automatic physical activity detection platform to measure caloric expenditure, and a visualization that provided real-time feedback of the person’s caloric intake/expenditure balance throughout the day, all on a mobile phone. Overall, about 40 people participated in the evaluation by carrying the phone and tracking their food intake for 3 days.
One recurring theme in the feedback was that tracking food intake with such detail was too much work, and would only be worth it if they had a medical condition that made it very important to keep detailed records. However, some participants wanted to reflect on a coarser grained summary of their dietary intake for general health and disease prevention. These participants had a different wellness goal, and therefore didn’t have the internal motivation to make this tool useful to them.
Understanding the underlying technology.
Another factor is how well the user understands the technology, or more specifically, how the technology may fail. Part of the BALANCE project was using sensors to identify and calculate calories expended via activity throughout an entire day. Other related tools are GPS-based run trackers that record the location, duration and other metrics of a run. Technologies that use sensors to identify bouts of physical activity have some level of uncertainty associated with the recognition. This uncertainty comes from a variety of sources, such as parameters that reflect a tradeoff between power consumption and accuracy. GPS trace quality depends on terrain and location of satellites in the sky.
A recent New York Times article reflects the concern of GPS run tracker users. Runners sometimes measure certified race courses, and report discrepancies to the organizers. These runners appear to trust the technology more than the organization. In the case of BALANCE (which exposes less detailed data about the calorie calculation and depends on more parameters), some users reported a feeling that the calculation “didn’t feel right”, but were unable to express how they thought it might be wrong. With both of these examples, the uncertainty with the technology could impact measures of SE. This raises the question of what other factors influence a person’s trust in the technology, as well as how SE may be impacted, and how it may vary from person to person.
So, I pose this question to quantified selfers: What do you track, and what aspects of the tools you use impact whether or not you can or will keep using them?
Chloe Fan has been self-tracking since she was 14 years old and saw the first Harry Potter movie in theaters. She is currently a Ph.D. student at Carnegie Mellon’s Human Computer Interaction Institute. After finding her passion for data visualization and information design for self-tracking tools, she has decided to take a year off grad school to pursue her dreams at full speed in the Bay Area. She is available for consulting or full time positions!
There has been a huge increase in the number of personal informatics tools over the last few years that help us track various aspects of our daily lives. The majority of consumer tools use visualizations, often in the form of charts (i.e., column graphs, line graphs, scatterplots), to help users understand the numerical data that they are collecting, and find meaning in their behavioral patterns. Research tools have also used nature metaphors, like a garden or a fish tank, that thrive when users are physically active. While promising in motivating behavior change, they can also be punishing when users are inactive (sad fish or wilting flowers).
I am specifically interested in exploring abstract art as visualization for physical activity. It’s able to present information in an aesthetic and neutral way that is non-judgmental. Reflecting on abstract artwork by Wassily Kandinsky, Piet Mondrian, and Jackson Pollock, I created Spark, a system that uses abstract art in a dynamic ambient display for physical activity.
Each visualization is an animation that unfolds as the day progresses. Circles are created based on step counts. The size of the circle represents number of steps, and the color of the circle represents intensity (casual walking, brisk walking, or running).
Every five minutes, a circle appears in the middle of Spiral that represents the steps taken during that five-minute period. It pushes previous circles outward in a spiral, so steps taken earlier that day appear at the edge. If no steps are taken in that five-minute period, no circles appear.
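The mapping just described might be sketched like this. The step thresholds and radius scaling below are my assumptions; the post does not give Spark's actual cutoffs.

```python
import math

# Sketch of Spiral's per-interval mapping: every five-minute step count
# becomes a circle whose radius grows with steps and whose color class
# encodes intensity. All thresholds here are illustrative guesses.

def circle_for(steps, max_steps=600, max_radius=40):
    if steps == 0:
        return None  # inactive period: no circle appears
    # Square-root scaling so circle *area* grows roughly linearly with steps.
    radius = max_radius * math.sqrt(min(steps, max_steps) / max_steps)
    if steps < 200:
        intensity = "casual walking"
    elif steps < 450:
        intensity = "brisk walking"
    else:
        intensity = "running"
    return round(radius, 1), intensity

print(circle_for(0))    # → None
print(circle_for(500))  # a large "running" circle
```

Rendering then only has to place each returned circle at the center and push earlier circles outward along the spiral.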
In Flora, rings of color are added around a circle for every five-minute period with step counts. The result is a series of concentric circles showing periods of activity throughout the day, with the final size indicating the total step count for the day.
In Bucket, colorful circles fall from the top and fill up the screen to represent steps taken every five minutes. We found that the use of concentric circles made the visualization more aesthetically pleasing; however, the concentric circles do not yet represent anything meaningful.
Inspired by Jackson Pollock, Pollock is the most random of these abstract visualizations. A white line draws randomly across the screen, and when there is activity, the canvas is splattered with color.
I conducted a study with Spark deployed on tablets in 5 homes over 3 weeks, just to see how people interacted with this kind of display. Everyone reacted positively to it, but the most interesting finding was that the 3 younger adults (20+ females) preferred Spiral and Rings because they were looking for specific time and intensity data regarding their gym sessions. The 3 older adults (58-71 years old) preferred Bucket and Pollock because they were interested in daily cumulative totals from walking a lot.
Some of the things people said motivated them to do more activity short-term were the colors, variety of visualizations, and the visual challenge of filling up the screen with colors. I also got lots of good feedback on displaying the data differently, like as a screensaver or hung on a wall like a piece of artwork.
There are many planned features for Spark, including:
- Weekly and monthly view showing the final visualization for each day
- Aggregate daily/weekly/monthly statistics
- Adding more charts to compare with the abstract visualizations
- Inclusion of other physical activity properties, such as speed, location, indoor/outdoor activity, and type of activity (i.e., biking vs. swimming). Currently, the Fitbit tracker does not distinguish between activities, so this feature will need data streams from other sensors.
Spark is still in early stages, but if you’d like to check it out with your own Fitbit data, you can sign up at www.sparkvis.com/fitbit/auth. It will connect to your Google account (username data only) to identify you on the Spark site, and you will also need to log in to your Fitbit account to get your Fitbit data (it’s a hassle, but the easiest way I can get it to run right now). Would love to hear your thoughts on improving Spark!
Victoria Schwanda Sosik is a PhD student in Information Science at Cornell University. She designs and evaluates technologies that support people towards goals of mental and physical wellbeing. She works with Dan Cosley in the Reimagination Lab.
Personal Informatics systems often deal in domains and utilize data that are just that: personal. These systems use data that we create through our daily activities (such as going for a run with Nike+) and help us review it in a way that encourages reflection and self-knowledge. While systems often have unintended uses and consequences, it is especially important that designers of Personal Informatics systems think about how their systems may be used and impact users, because they are dealing in domains that are closely tied to individuals’ ideas of self (such as weight and body image). We’ve encountered examples of these side effects in our studies of people’s experiences using Personal Informatics systems in health and fitness, interpersonal relationships, and reminiscing.
Overly Negative Feedback Can Discourage Use (and Users)
Tools designed to encourage weight loss and physical activity like Nike+, FitBit, SparkPeople and Wii Fit strive to help users reach their goals by tracking data such as calories consumed, amount of exercise and/or current weight. One way these tools motivate users is by having them set goals in the system and then displaying the collected data back to the user as positive or negative progress towards their goal. This strategy follows from theory that shows motivation is sustained by people setting small, achievable goals, identifying the difference between their current state and their goal state, and then exerting effort to achieve the goal.
Presenting these data without considering users’ mental states and potential reactions to the data can be harmful, however. One example can be seen in Wii Fit’s Body Test. As part of creating their Wii Fit profile and their system avatar—or Mii—users must complete a Body Test that weighs them, tests their balance and asks them to set a goal. If the user is overweight, they see an animation where their Mii’s girth increases and looks down at its midsection with disbelief, accompanied by an ominous sound effect. The system then displays how far away from a normal BMI the user is. My own first experience with Wii Fit was right after I received it as a Christmas gift. I was at my future in-laws’ house and in front of the whole family when I stepped on the Wii Fit balance board. This was right after my freshman year in college (freshman 15 anyone?) and I wasn’t quite prepared to have my BMI prominently displayed on the 52” screen—needless to say, it wasn’t the most positive first experience with a tool that was supposed to encourage me to be more active.
In principle, according to theories of motivation this should be valuable and useful feedback that helps people know what they need to do. In practice, however, I was not the only one with such a reaction to the Body Test. Participants in two studies on experiences with Wii Fit rarely returned to track their progress using the Body Test because they often found this display “harsh” and thought “it’s one thing to see your [weight], it’s another thing to see yourself–your [avatar]–as a Stay-Puft Marshmallow man.” If Wii Fit used a more constructive and less degrading visualization, perhaps users would have found the feature motivating and would not have abandoned it after a few weeks as most of our users did.
Displaying Certain Types of Data Can Create Tunnel Vision
Another potential use of Personal Informatics tools is to help people gain broader self-knowledge about areas of their lives such as their interpersonal relationships. Communication tools like text messaging, email, and Facebook capture interactions that are important to the expression and development of relationships and can be used after the fact to help people make sense of these relationships through visualizations such as Themail shown below (right).
Perhaps the most commonly used tool that aggregates and displays communication data from a relationship is Facebook’s See Friendship page (shown above, left). See Friendship gathers wall posts and comments, photos, mutual events, liked topics, and friends in common between two Facebook users. While this visualization includes several types of data about a friendship, when we asked people to spend some time reflecting on a friendship using the See Friendship page, we found that the data limited what participants reflected on. Participants often started with the most recent content, since See Friendship displays data in reverse chronological order, and didn’t always go far enough back to view content from early in their friendship. Pictures also tended to receive more attention than text. These pictures reminded people of shared events and activities but rarely encouraged reflection on deeper, more personal, and longer-term aspects of a friendship, such as its evolution. The overall positive tone of communication on Facebook, and the fact that mundane, daily communication is rarely captured there, further biased reflection towards positive and novel events in a friendship.
Designing Personal Informatics Systems With an Eye Towards Side Effects
Our work suggests that health interventions, and other kinds of Personal Informatics systems, are likely to lead to unintended side effects, some of which may harm either system use or the users themselves. We suggest that designers think much more carefully about the potential impacts these systems might have on people’s lives and about the practical and ethical responsibilities that accompany the design of systems that help people know and change themselves.
Health Mashups: Helping People Find Long-Term Trends Between Wellbeing and Activities in Their Lives
Frank Bentley is a Principal Staff Research Scientist at the Motorola Mobility Applied Research Center outside of Chicago, IL. He creates new mobile applications and services that help people connect with each other and with data about their lives. He then studies how these systems are integrated into daily life over weeks and months.
Do you sleep better on days when it’s warmer? Walk less on days packed with meetings? Gain weight on the weekends? A growing number of consumers are turning to specialized devices that track particular aspects of their lives and wellbeing. Whether it’s the Zeo to track sleep, the FitBit to track daily step counts, the MOTOACTV to track workouts, or the WiThings scale to track weight, a wealth of personal data is now being stored about daily activities. However, most of these services remain silos. Even where the ability to import data from one device into another’s service exists, the data is only combined superficially, providing at most a graph of steps and weight over time and obscuring long-term and periodic interactions. The questions presented above cannot be answered without great effort – effort that many in the Quantified Self community devote to understanding themselves. But can it be easier?
We see the key value of tracking multiple aspects of one’s life to be understanding the interaction of data from wellbeing sensors with other sensors as well as with contextual data about a person’s life (where they spent time, how busy their day was, the weather, etc.). We want to enable people to discover these hidden trends in their lives without resorting to complex Excel files and a PhD in statistics.
The Health Mashups system
The Health Mashups system was built through a collaboration between KTH University and the Motorola Mobility Applied Research Center. It consists of a server that aggregates data from a variety of sensors and a mobile application to automatically capture a user’s context and display the resulting correlations calculated by the server. Users can connect their FitBit accounts for step count and sleep data as well as their WiThings account for weight data. An Android application uploads contextual information automatically each day, including the number of hours busy on the user’s calendar as well as the current location at a city level and the weather for that location. After the initial setup, no further actions are required from the user to keep this data flowing to our server (although we also support manual food and exercise logging through the mobile phone application). Each night, our server computes correlations between sensors and deviations in the data from a given sensor, and generates a feed of items that are statistically significant. This feed is then accessible on the phone or web for users to view and reflect upon. Users can see feed items such as: “You lose weight on weeks when it is warmer” or “Yesterday you walked much less than you normally do on Saturdays.” This eliminates the need for manual log books and messy Excel files, and opens Quantified Self-style investigations to those with no technical background.
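To make the nightly pass concrete, here is a minimal sketch of how a system like this might correlate two daily sensor streams and turn a significant result into a plain-language feed item. This is an illustration only, not the actual Health Mashups implementation: the data, function names, and significance threshold are all invented for the example.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length daily series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def is_significant(r, n, t_crit=2.0):
    # t-statistic for a correlation coefficient, compared against a
    # rough two-tailed critical value instead of an exact p-value.
    t = abs(r) * math.sqrt((n - 2) / (1 - r * r))
    return t > t_crit

def feed_item(context_name, activity_verb, r):
    # Render a significant correlation as a human-readable sentence.
    direction = "more" if r > 0 else "less"
    return f"You tend to {activity_verb} {direction} on days when {context_name} is higher."

# Hypothetical week of daily data: mean temperature (F) and step counts.
temps = [55, 60, 48, 70, 65, 52, 68]
steps = [6200, 7400, 4900, 9100, 8300, 5600, 8800]

r = pearson_r(temps, steps)
if is_significant(r, len(temps)):
    print(feed_item("the temperature", "walk", r))
```

In a real deployment the server would run this pairwise across all connected sensors and context streams, and at several time scales (days, weekdays, weeks), keeping only the pairs that pass the significance test for the user’s feed.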
We wanted to understand how a broad range of users would integrate this system into their lives. We conducted a two-month field trial and recruited ten diverse participants in Chicago and Stockholm to take part. They came from a wide range of ages and educational backgrounds and had a variety of reasons for participating: from particular issues with sleep or excessive weight that they wanted to address to a general curiosity to understand themselves better. Participants were given a FitBit and a WiThings scale and asked to use these in their lives for the first month. Whenever they had an insight about their wellbeing, they were asked to call us and leave a voicemail describing their insight. For the second month of the trial, they were given the Health Mashups interface on their phone and again were asked to call us with new insights.
For the first month of the trial, none of our participants called with insights across sensors or time scales. While many reported general trends (e.g. “I’ve been losing weight this week” or “Yesterday I didn’t walk as many steps as I thought I did”), their insights did not connect their sleep, weight loss, or step counts to each other in any way. Nor did they include insights about patterns on specific days of the week or comparisons/deviations from week to week.
In the second month, participants were able to understand their wellbeing in much deeper and more complex ways. The system showed them insights across sensors and varying timescales. Our participants reported understanding and relating to these feed elements. The mashups data helped our participants to better understand how aspects of their lives were related and to make positive changes in their lives (e.g. eating a little less fried chicken on Sundays or walking more on specific days of the week).
The Future of Health Mashups
We see a promising future for personal data analytics related to one’s wellbeing. With massive amounts of wellbeing and contextual data now being collected, systems are needed that make sense of this data for people and allow them to focus on what is significant to their lives without a large amount of effort. With Health Mashups our participants could gain these insights, combining data that is automatically collected as they live their lives. We believe these types of insights have the power to raise awareness about situations that lead to poor life choices, resulting in positive changes in behavior and ultimately happier, healthier lives. This summer we will be conducting a larger quantitative study to investigate the impacts of this system across a wider group of participants. If you are interested in participating, you can register your interest here.