Topic Archives: Personal Informatics
Health Mashups: Helping People Find Long-Term Trends Between Wellbeing and Activities in Their Lives
Frank Bentley is a Principal Staff Research Scientist at the Motorola Mobility Applied Research Center outside of Chicago, IL. He creates new mobile applications and services that help people connect with each other and with data about their lives. He then studies how these systems are integrated into daily life over weeks and months.
Do you sleep better on days when it’s warmer? Walk less on days packed with meetings? Gain weight on the weekends? A growing number of consumers are turning to specialized devices that track particular aspects of their lives and wellbeing. Whether it’s the Zeo to track sleep, the FitBit to track daily step counts, the MOTOACTV to track workouts, or the WiThings scale to track weight, a wealth of personal data about daily activities is now being stored. However, most of these services remain silos. Even where data can be imported from one device into another’s service, it is only combined superficially, providing at most a graph of steps and weight over time and obscuring long-term and periodic interactions. The questions above cannot be answered without great effort, effort that many in the Quantified Self community devote to understanding themselves. But can it be easier?
We see the key value of tracking multiple aspects of one’s life to be understanding the interaction of data from wellbeing sensors with other sensors as well as with contextual data about a person’s life (where they spent time, how busy their day was, the weather, etc.). We want to enable people to discover these hidden trends in their lives without resorting to complex Excel files and a PhD in statistics.
The Health Mashups system
The Health Mashups system was built through a collaboration between KTH University and the Motorola Mobility Applied Research Center. It consists of a server that aggregates data from a variety of sensors and a mobile application that automatically captures a user’s context and displays the resulting correlations calculated by the server. Users can connect their FitBit accounts for step count and sleep data as well as their WiThings account for weight data. An Android application automatically uploads contextual information each day, including the number of hours busy on the user’s calendar as well as the current location at a city level and the weather for that location. After the initial setup, no further actions are required from the user to keep this data flowing to our server (although we also support manual food and exercise logging through the mobile phone application). Each night, our server computes correlations between sensors and deviations in data from a given sensor, and generates a feed of items that are statistically significant. This feed is then accessible on the phone or web for users to view and reflect upon. Users can see feed items such as “You lose weight on weeks when it is warmer” or “Yesterday you walked much less than you normally do on Saturdays.” This eliminates the need for manual log books and messy Excel files, and opens Quantified Self-style investigations to those with no technical background.
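We haven't published the server's exact statistics here, but the nightly "mashup" step can be sketched roughly as a pairwise correlation pass over each user's daily series, surfacing only strong relationships as feed items. Everything below (the function names, the example data, the cutoff) is illustrative, not the actual Health Mashups implementation:

```python
# Illustrative sketch of a nightly correlation pass over two daily sensor
# series. The cutoff stands in for a proper significance test.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length daily series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def feed_item(name_a, series_a, name_b, series_b, r_cutoff=0.6):
    """Return a plain-English feed item, or None if the link looks weak."""
    r = pearson_r(series_a, series_b)
    if abs(r) < r_cutoff:
        return None
    direction = "more" if r > 0 else "less"
    return f"You tend to get {direction} {name_b} on days with higher {name_a}."

# Example: one week of daily temperatures vs. hours slept
temps = [55, 60, 72, 48, 65, 70, 58]
sleep = [6.5, 7.0, 8.1, 6.0, 7.4, 7.9, 6.8]
print(feed_item("temperature", temps, "sleep", sleep))
```

A real deployment would also compute p-values and per-weekday baselines for the deviation-style items (“much less than you normally do on Saturdays”); the fixed cutoff here is just a placeholder for that significance testing.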
We wanted to understand how a broad range of users would integrate this system into their lives. We conducted a two-month field trial and recruited ten diverse participants in Chicago and Stockholm to take part. They came from a wide range of ages and educational backgrounds and had a variety of reasons for participating: from particular issues with sleep or excessive weight that they wanted to address to a general curiosity to understand themselves better. Participants were given a FitBit and a WiThings scale and asked to use these in their lives for the first month. Whenever they had an insight about their wellbeing, they were asked to call us and leave a voicemail describing their insight. For the second month of the trial, they were given the Health Mashups interface on their phone and again were asked to call us with new insights.
For the first month of the trial, none of our participants called with insights across sensors or time scales. While many reported general trends (e.g. “I’ve been losing weight this week” or “Yesterday I didn’t walk as many steps as I thought I did”), their insights did not connect their sleep, weight loss, or step counts to each other in any way. Nor did they include insights about patterns on specific days of the week or comparisons/deviations from week to week.
In the second month, participants were able to understand their wellbeing in much deeper and more complex ways. The system showed them insights across sensors and varying timescales, and our participants reported understanding and relating to these feed elements. The mashups data helped our participants to better understand how aspects of their lives were related and to make positive changes (e.g. eating a little less fried chicken on Sundays or walking more on specific days of the week).
The Future of Health Mashups
We see a promising future for personal data analytics related to one’s wellbeing. With massive amounts of wellbeing and contextual data now being collected, systems are needed that make sense of this data for people and allow them to focus on what is significant to their lives without a large amount of effort. With Health Mashups our participants could gain these insights, combining data that is automatically collected as they live their lives. We believe these types of insights have the power to raise awareness about situations that lead to poor life choices, resulting in positive changes in behavior and ultimately happier, healthier lives. This summer we will be conducting a larger quantitative study to investigate the impacts of this system across a wider group of participants. If you are interested in participating, you can register your interest here.
Personal Informatics in Practice: Ambivalence about (Inter)Personal Informatics for Smoking Cessation
Bernd Ploderer is a Research Fellow at the University of Melbourne, Australia. His research interest lies in the potential of social media to support engagement, learning and behavior change. He works with Wally Smith, Steve Howard, Jon Pearce, and Ron Borland to design an interpersonal informatics system to help people quit smoking.
In this research project my colleagues and I are trying to design an interpersonal informatics system to help people quit smoking. Previous studies show the various benefits of such systems for helping people to change their habits, like the ability to learn more about personal habits, to reflect on them and to develop strategies to change them. These systems can also enhance the awareness of how people around one influence one’s habits, and some people derive motivation from cooperating with others in the same situation. However, previous research also points out that many people are reluctant to share personal information via interpersonal informatics systems due to privacy concerns.
Hence, before we started our development we conducted a study to explore this ambivalence about interpersonal informatics systems. Rather than indifference, ambivalence connotes strong simultaneous conflicting states, in this case both a desire to gain the aforementioned benefits of interpersonal informatics systems as well as strong concerns about potential risks. In our study we used a prototype of a smoking cessation application to unpack these conflicting states about collecting, sharing and reflecting on personal information.
Prototype mock-ups and study approach
We invited twelve people (six smokers and six people who recently quit) for an interview. We used screen mock-ups of a fictitious social smoking cessation service called Consider Quitting (CQ). CQ is designed as a smartphone application that allows smokers to take photos of the things, people, places, and activities that trigger their smoking. These photo diaries can be shared with other users of the service. People can browse the photo diaries of other users to further explore their own triggers for smoking, to initiate connections and interactions with other users, or simply to get inspiration for their own quit attempt.
Different kinds of ambivalences
Our findings point to a number of different tensions that contribute to participants’ ambivalence about CQ.
- Ambivalence about behavior change: Smokers (and ex-smokers) are ambivalent about the behavior change itself. Every participant wanted to quit, but at the same time they also expressed a desire to continue smoking.
- Ambivalence about collecting and reflecting on personal information: The participants were interested in information from other smokers on how to quit. However, most of them were reluctant to collect information about their own smoking triggers. Some of the participants felt that they knew their triggers already, and others felt that recording them would increase their desire to smoke again. Participant 11 commented, “it’s like the elephant in the room, it’s best not to talk about it.”
- Ambivalence about sharing personal information: Several participants had seen friends post about their quit attempts on Facebook, yet they were reluctant to do the same about their own quit attempts (either on Facebook or on CQ). Some participants were generally concerned about sharing personal information online regardless of the content; others were specifically concerned about sharing information about their desired behavior change because of the added pressure of committing to other people and the risk of failure and losing face.
We are currently analyzing the data in further depth to develop our understanding of the different sources of ambivalence and their interrelationships. In future work we aim to deploy prototypes to develop a dynamic understanding of ambivalence, showing how different aspects of ambivalence ebb and flow in their influence over smoking cessation. In this workshop we present our current analysis and discuss ideas for prototypes that address this ambivalent state to help people quit smoking.
Ryan Muller is a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University. He researches principles for designing technology that stimulates our intrinsic drive for mastery-based learning.
Although the internet has fundamentally changed the speed and scale of accessing information, that change has not yet had the same impact on traditional forms of education. With popular new efforts like the video and exercise resource Khan Academy and online courses from Stanford (now spinning off into sites like Udemy and Coursera), people are talking about a revolution in personalized education: learners will be able to use computer-delivered content to learn at their own pace, whether supplementing schoolwork, developing job skills, or pursuing a hobby.
How personal informatics can help learning
There’s a problem here: learning on one’s own is not easy. Researchers have repeatedly found that people hold misconceptions about how to study well. For instance, rereading a passage gives the illusion of effective learning, but in reality quizzing oneself on the same material is far more effective for retention. Even then, people can misjudge which items they will or will not be able to remember later.
The process of self-regulated learning works best when people accurately self-assess their learning and use that information to determine learning strategies and choose among resources. This reflective process fits well into the framework of personal informatics used already for applications like keeping up with one’s finances or making personal healthcare decisions.
For most people, their only experience quantifying learning is through grades on assignments and tests. While these can allow some level of reflection, the feedback loop is usually not tight enough. We are unable to fix our mistakes, making grades feel less like an opportunity for improvement and more like a final judgement.
How personal learning data can be collected
With computer-based practice, there is a great opportunity for timely personalized feedback. Several decades of research in the learning sciences have developed learner models that estimate a person’s knowledge of a topic based on their actions in a computer-based practice environment, often called an intelligent tutoring system. For example, a learner model for a physics tutor may predict the error rate of responses on the skill of defining potential energy as a step in a physics problem; we see that the error rate decreases over the number of opportunities to use that skill, indicating learning (see below; from the PSLC DataShop). Such systems can not only track progress and give feedback but also make suggestions for effective learning strategies.
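One widely used learner model of this kind is Bayesian Knowledge Tracing (BKT), which updates an estimate of skill mastery after each practice opportunity; the predicted error rate falls as mastery grows, producing the downward learning curve described above. The parameter values below are illustrative, not fitted to any real tutor:

```python
# Minimal Bayesian Knowledge Tracing sketch. Parameters (slip, guess,
# learn rates) are illustrative defaults, not fitted values.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the probability that the learner knows a skill after one attempt."""
    if correct:
        evidence = p_know * (1 - p_slip)
        p_cond = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        p_cond = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for learning during the opportunity itself.
    return p_cond + (1 - p_cond) * p_learn

def predicted_error_rate(p_know, p_slip=0.1, p_guess=0.2):
    """Expected chance of an incorrect response at the next opportunity."""
    return p_know * p_slip + (1 - p_know) * (1 - p_guess)

# The predicted error rate falls across a run of correct practice attempts.
p = 0.3
for opportunity in range(1, 6):
    print(opportunity, round(predicted_error_rate(p), 3))
    p = bkt_update(p, correct=True)
```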
Our proposal envisions a web API that collects data from web-based learning resources into a personal central repository. Learner models analyze the data to provide quantified indicators of learning progress. The advantage of a central location is to compare and combine information across heterogeneous resources, as well as to enable self-experimentation with different types of learning interventions or strategies. Accumulation of enough data would allow findings to be shared among the community and give researchers access to data that could be used to improve learning. Finally, the API could also push back recommendations to the learning resources, taking advantage of the combined data and saving resource developers the difficulty of implementing learner model algorithms.
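As a rough sketch of the proposal, the repository could accept practice events from any web-based resource and expose per-skill progress indicators for learner models to consume. The class, event schema, and resource names below are all hypothetical:

```python
# Hypothetical sketch of the proposed central repository. Resources push
# practice events; a simple per-skill indicator is computed on demand.
from collections import defaultdict

class LearningRepository:
    def __init__(self):
        self.events = defaultdict(list)   # skill -> [(resource, correct)]

    def record(self, resource, skill, correct):
        """Called by any web-based resource reporting one practice attempt."""
        self.events[skill].append((resource, bool(correct)))

    def progress(self, skill):
        """Quantified indicator: fraction correct across all resources."""
        attempts = self.events[skill]
        if not attempts:
            return None
        return sum(ok for _, ok in attempts) / len(attempts)

repo = LearningRepository()
repo.record("khan-academy", "potential-energy", True)
repo.record("physics-tutor", "potential-energy", False)
repo.record("physics-tutor", "potential-energy", True)
print(repo.progress("potential-energy"))  # 2 of 3 attempts correct
```

In practice this summary would be replaced by a real learner model (such as knowledge tracing), but the central point stands: because events from heterogeneous resources land in one store, the indicator and any recommendations pushed back can draw on all of them.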
With personal informatics in learning, we see an opportunity not only for improving self-paced learning of more-or-less traditional content, but also for a grander vision of personalized learning: setting a vision of your future self, using the wealth of resources on the web to achieve your learning goals, and tracking your steps along the way.
Matthew Kay is a PhD student in Computer Science & Engineering at the University of Washington. He develops technology to help people understand and improve their sleep habits, motivated partially by his own poor sleep habits. He works with Julie Kientz and Shwetak Patel.
Clinical sleep centers can easily evaluate a person’s sleep quality, but because these tests do not occur in the home, they cannot help identify factors in the bedroom that might make sleep quality worse. A variety of personal informatics tools exist for sleep tracking, such as the Zeo and Fitbit. These tools automatically measure sleep quality, but generally leave other factors to be self-reported.
One of the goals of the Lullaby project is to add environmental data to the automated sleep-tracking process. Lullaby is a suite of environmental sensors—including sound, light, temperature, and motion—combined into a system about the size of a bedside lamp. Using an Android tablet, Lullaby presents the environmental data it collects together with data from an off-the-shelf sleep tracking device, like a Zeo or Fitbit, to help people determine what is disrupting their sleep and how they can improve their sleep environment.
Using Lullaby, we have begun to explore what can be done with multi-faceted data streams in personal informatics. Combining all of our data streams into the same user interface is the simplest first step; users can see the temperature, light level, sound, motion, and sleep quality data from each moment over the course of the night (below, left). They can also play back the audio and show images collected by an infrared camera alongside their data (below, right). By combining multiple data streams together, we can start to paint a more complete picture of a person’s sleep and sleep environment.
Capturing unconscious experience
There are some interesting challenges for lifelogging when the events recorded by a system occur while the user is unconscious. When capturing spontaneous events, such as with systems like SenseCam, arguably users are aware of the occurrence of the event as it happens or shortly thereafter. In contrast, the domain of sleep is one where events of interest are not known by users until well after their occurrence, when the user goes looking for them. Sleep, as far as conscious experience is concerned, is a “black box”.
On the one hand, this makes it a fascinating area to explore: as a Lullaby user myself, I find it very interesting to play back parts of my sleep and to see how much I move around. Indeed, many of our users are surprised by how much they move during sleep.
On the other hand, this presents a challenge: How do we help people find those events of interest within an 8-hour chunk of time they did not consciously experience? We try to provide guidance in this process by highlighting data that is out-of-range according to recommendations from sleep literature (and thus possibly of interest); for example, if the temperature goes over 75 degrees, the recommended max bedroom temperature according to the National Sleep Foundation, it will be shown in red on the graphs. In the future, we aim to further improve the process of finding moments of interest by adding intelligent video and audio summaries, for example, by employing computer vision techniques.
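The highlighting step can be sketched as a simple rule table of recommended ranges checked against each reading. The 75°F ceiling follows the National Sleep Foundation figure cited above; the other bounds and the data format are assumptions for illustration, not Lullaby's actual rules:

```python
# Sketch of rule-based highlighting: flag readings outside recommended
# sleep ranges. Only the 75°F temperature ceiling comes from the text;
# the sound and light bounds are assumed for illustration.
RANGES = {
    "temperature_f": (60, 75),   # NSF recommended max bedroom temp: 75°F
    "sound_db": (0, 40),         # assumed: sustained noise above ~40 dB
    "light_lux": (0, 5),         # assumed: near-dark bedroom
}

def flag_out_of_range(readings):
    """Return the (timestamp, sensor, value) readings worth highlighting."""
    flagged = []
    for ts, sensor, value in readings:
        lo, hi = RANGES[sensor]
        if not (lo <= value <= hi):
            flagged.append((ts, sensor, value))
    return flagged

night = [
    ("01:10", "temperature_f", 77),   # over the recommended max -> shown in red
    ("02:45", "temperature_f", 72),   # within range -> not highlighted
    ("03:30", "sound_db", 55),        # a noise spike worth reviewing
]
print(flag_out_of_range(night))
```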
We are also looking at more sophisticated ways of combining the data we collect and have been testing prototype interfaces of graphical summaries and visualizations. We have found interesting results from our current deployment, such as one user with a (weak, but existent) correlation between temperature and sleep quality: as the temperature in their room goes up by a few degrees, their sleep quality dips slightly.
Looking at a result like this, we can ask: what is the best way for a self-tracking system to organize and present its data so that the user can find and understand this correlation? For example, we could present this user with a scatter plot of sleep quality and temperature and a trendline (we can even do this for all of the factors recorded). However, we can boil this data down even further. What if your sleep-tracking device told you: “When your room temperature goes up by 5 degrees, your sleep efficiency decreases by 10%. You should consider setting your thermostat to 65 degrees to get the best sleep”?
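Feedback like that could be generated by fitting a least-squares trendline to nightly (temperature, sleep efficiency) pairs and templating the slope into a sentence. The data, thresholds, and wording below are invented for illustration, not our deployed system:

```python
# Sketch: turn a temperature/sleep-efficiency trendline into plain-English
# feedback. Data and the "meaningful slope" threshold are illustrative.
def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def advice(temps_f, efficiency_pct):
    m = slope(temps_f, efficiency_pct)    # % efficiency per degree F
    if abs(m) < 0.5:                      # assumed "too weak to mention" cutoff
        return "Room temperature doesn't seem to affect your sleep much."
    change = round(abs(m) * 5)
    direction = "decreases" if m < 0 else "increases"
    return (f"When your room temperature goes up by 5 degrees, your sleep "
            f"efficiency {direction} by about {change}%.")

temps = [64, 66, 68, 70, 72, 74]   # nightly average bedroom temp (°F)
eff   = [92, 90, 88, 85, 83, 80]   # sleep efficiency (%) from a sleep tracker
print(advice(temps, eff))
```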
We think there is a lot of potential for this kind of simple, plain-English feedback in the quantified self area. Graphs play an important role in providing a rich way to explore data, and many quantified-self types, like myself, just like digging through data on ourselves (is there any greater vanity?). But sometimes, those high-level inferences are what we want at the end of the day.
This is a guest post by Dominikus Baur, a postdoctoral fellow at the University of Calgary. Dominikus is interested in personal visualization – how to make the large amounts of personal data available online accessible to their creators through visualization. His previous projects focused on visualizing personal music listening histories.
Our busy daily lives often make it difficult to keep track of one’s impact on the environment. The personal carbon footprint, i.e., the amount of carbon dioxide emissions caused by one’s actions, directly hinges on a myriad of small decisions: take the train instead of the car? Walk the two miles to the supermarket instead of driving? Use the bike even though it looks like rain? Knowing the exact carbon impact of one option versus the other and estimating the sum of all of those decisions is hard, which leads most people simply to give up and accept a bad conscience as the price of comfort. And while they are aware that, yes, taking the car is bad for the environment, it can’t be that bad, right?
Personal informatics applications can help with keeping one’s carbon footprint low and with making decisions based on data instead of opinions. Most people by now carry versatile smartphones in their pockets that not only let them work with and analyze their data directly but also collect it through a multitude of sensors. But more critical than the actual data collection is the way this information is presented. For lowering your carbon footprint, the hard thing to do is usually also the right thing to do. Therefore, an application developer’s main goal has to be to motivate without scolding, frustrating or even angering. I think a good benchmark for a personal informatics interface is imagining how it would fare in the worst scenarios imaginable: for example, when trying to encourage its owner to wait half an hour for the train after a long and exhausting day at work. In such a context, annoying sound effects, patronizing on-screen text or childish graphical representations can easily lead to the opposite of the developers’ intentions: a frustrated owner deleting the app from their phone.
For our ECO|Balance project we set out to design several mobile visualizations with this worst-case scenario in mind. Instead of going for prescriptive, patronizing or dry designs, we wanted to create applications that would provide value in themselves. Our hypothesis was that if these apps were interesting and enticing enough, people would regularly launch them just to kill time or enjoy the visuals, and thus start to reflect on their behavior without paternalism. Having abstract representations instead of accusations of wrongdoing should also lead to a calmer interaction and have a soothing effect. We were partly inspired by ecological environments, such as underwater life, and tried to create serene and calm interfaces that include reminders of organic life without being too obnoxious about it.
We started the design process by equipping three of our lab members with pedometers and notepads and letting them keep track of their activities for ten days. Based on this realistic data set we created multiple interface sketches using coloured pencils, which kept the time required low. To better estimate the visual appeal of the designs, we took the most promising ones, redid them using water colours and sketched animated transitions within them in Microsoft PowerPoint. With this two-fold design process we quickly arrived at a large number of designs while still being able to gauge their visual impact. Using coloured pencils, water colours and PowerPoint animations proved to be a suitable alternative to higher-fidelity prototyping approaches such as Adobe’s Flash.
Our resulting sketches range from abstract and organic designs based on metaphors to more traditional chart-inspired visualizations.
Organic Flowers shows an abstract artwork of one’s behavior: each unit of time (months, days or single activities, depending on the zoom level) becomes a flower. The amount of carbon dioxide produced is reflected in the size of the bloom, while the length of the stem depicts the number of steps taken. Dragging and dropping one timeframe into another (middle) switches to a direct comparison (right).
Another design, Jelly Fish (upper left), shows days as jelly fish whose elevation and size encodes CO2 production. Activities on a day become the jelly fish’s tentacles with colour-coding for type and length for duration. To make comparisons easier, all tentacles of one jelly fish can also be laid out vertically (lower left), forming a more traditional bar chart.
Finally, Footprints (right) is the most verbatim of our charts. It shows all activities of one day in a grid with hours as rows and columns as 6 minute segments. Icons show the type of activity (feet = walking, rails = going by train, etc.) and their colour depicts the carbon dioxide impact.
You can find more details about these designs in our paper.
In designing personal informatics applications, coming up with as many ideas as possible usually leads to the best results. Creating the ECO|Balance designs taught us that two things are important. First, having an actual, real-world data set, so that the impact of the produced visualizations does not hinge on one’s own preconceptions of the data. And second, to lower the threshold for creating designs, using low-overhead analog tools such as pencils and creating predefined animations in PowerPoint is preferable to digital drawing tools and implemented visualization algorithms. Regarding motivation strategy, supporting instead of scolding, and providing value in itself instead of serving as a reminder of one’s bad conscience, should make for a kinder and more effective interaction. We plan to implement our most promising designs and make them available as tools that make reducing one’s carbon footprint easier and more enjoyable.
This is a guest post by Patrick Burns, who is a PhD candidate in the School of Computing and Information Systems at the University of Tasmania in Hobart, Australia. He is researching the use of technology to promote physical activity. His interests include ubiquitous and wearable computing and ambient displays.
According to the World Health Organization, more than one in ten adults worldwide was obese in 2008, a figure that has more than doubled since 1980. We know that obesity is a major risk factor for heart disease, diabetes, osteoarthritis and some forms of cancer. Unsurprisingly, a combination of inadequate exercise and increased consumption of fatty, sugary foods is to blame. When it comes to a lack of physical activity, some people point the finger at technology. Cars let us drive to places we used to walk. Machines do jobs we used to do by hand. Video games, DVDs, television and the Internet provide a wealth of sedentary entertainment options. But could technology actually help us do more physical activity, and ideally help prevent obesity?
One approach is to help people better track their activity, the hope being that if we make a person more aware of their physical activity (or lack of it), they will be motivated to do more. There are a number of existing devices designed to do just that, such as the FitBit, Jawbone UP and Nike FuelBand. Each integrates a motion sensor (accelerometer) into a wearable device to track how much the wearer moves around during the day. There are also stand-alone smartphone apps which use the phone’s built-in accelerometer to track physical activity. In the case of the UP and some smartphone apps, the user can supplement their activity data with information on the type of food they’re eating. The UP and FitBit also monitor users’ sleep habits.
The data collected are processed and delivered to the user in the form of numbers and graphs: a count of steps taken each day, time active vs. sedentary, steps climbed, calories burned. There is an assumption that, when it comes to activity tracking, more is better. That we should collect more data from more sensors. That we should perform more analyses on that data and present it to the user in multiple forms. That we should make our interfaces more engaging, to encourage users to continue to monitor their activity data. In the words of Jawbone’s VP of product development, “you have to create a Facebook-like engagement that keeps people coming back”.
The truth or otherwise of these assumptions is very much dependent on individual users, and the way in which they employ a particular technology in their lives. Users who are very motivated to do exercise, sometimes playfully called “fitness freaks” or “gym junkies”, employ activity monitoring technology in a supporting role. They already do a lot of physical activity and enjoy being able to record, quantify and analyse that activity. But what about less motivated users – people who don’t do enough physical activity and know that they should do more? For those users technology plays a motivating role, one in which the technology is (and should be) more peripheral to their day-to-day lives.
If we give these users an interface that is too complex, or that requires a continuing and significant time commitment, we run the risk that they will lose interest, “burn out” and return to old habits. Many of us have had the experience of starting a diet, joining a gym or buying exercise equipment only to give up soon after. These experiences underline the need to make small changes, slowly, that can be sustained in the long term. We need to design technology to support this type of change.
I argue that for these less motivated users, simpler interfaces could be just as effective as more complex, more engaging interfaces. Do we really need to know exactly how many steps we’ve taken or how many calories we’ve burned? Or is it good enough just to know that we’ve “done well” today or that we need to “do more”? Do we really need graphs and figures, or could we convey information in a simpler way, say, through coloured lights?
I’m currently researching the use of simple interfaces to deliver physical activity information to users, with a specific focus on wearable technology. I recently designed and evaluated such a device: ActivMON, a wristwatch-like device containing a motion sensor and a coloured light. The motion sensor detects the user’s physical activity, and the light changes colour (on a spectrum from red to orange to green) to show the user’s daily activity level compared to an activity goal. ActivMON then shares this data through the Internet with other devices. If you’re doing physical activity, the lights on your friends’ devices will pulse to let them know; if they’re doing physical activity, your device will pulse. Supporting social influence is important, and I wanted to see if this could be done using a wearable ambient display.
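The red-to-green mapping can be sketched as a piecewise interpolation over progress toward the daily goal. The exact scheme below is an assumption for illustration, not ActivMON's actual firmware:

```python
# Illustrative ambient-display mapping: daily activity as a fraction of the
# goal drives an LED colour from red through orange to green.
def activity_colour(steps_today, goal_steps):
    """Map progress toward the goal onto an (r, g, b) LED colour."""
    progress = min(steps_today / goal_steps, 1.0)
    if progress < 0.5:
        # First half: hold red at full and ramp green up (red -> orange).
        return (255, int(255 * 2 * progress), 0)
    # Second half: hold green at full and ramp red down (orange -> green).
    return (int(255 * 2 * (1 - progress)), 255, 0)

print(activity_colour(0, 10000))       # (255, 0, 0): no activity yet
print(activity_colour(2500, 10000))    # (255, 127, 0): orange, a quarter there
print(activity_colour(10000, 10000))   # (0, 255, 0): goal met
```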
This work is still in its early stages, but I feel it raises some interesting questions. Should we deliver information differently (and in greater or lesser quantities) depending on a user’s level of motivation to change? How engaging should interfaces be? How little information can we deliver and still realise a motivational effect? This is not to argue against the quantified self, but rather to ask how best to present data to users once we’ve collected it.
This is a guest post by Sean Munson, a PhD candidate at the University of Michigan’s School of Information. Sean studies individual preferences and nudges, particularly for encouraging people to read more diverse political news and helping them live happier and healthier lives.
Personal informatics is inherently tied to behavior: reported behavior, monitored behavior, and planned behavior. When people interact with systems that help them keep track of and reflect on this behavior, they are doing so using tools and contexts that exert a variety of behavioral nudges. In my work, I have been considering when and how different behavioral nudges should be applied, and to what extent they should be applied. I have encountered these questions in classrooms as well, sometimes to the visible discomfort of those less familiar with the persuasive technology field.
A spectrum — with systems that push people to do something without their knowledge, or in a way that overrides their own autonomy, at one end and systems that support people in gaining insight into their existing behavior and achieving a behavior change they desire at the other — may be a useful framework for how researchers and designers think about systems for personal informatics. The first category might be persuasive technology, and the second category, reflective or mindful technology.
One definition of persuasive technology might be systems that push the people who interact with them to behave in certain ways, with or without those people choosing behavior change as an explicit goal. Though this definition is narrow, the category actually encompasses most systems: their design and defaults will favor certain behaviors over others. Whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive; this is not novel to digital environments.
In a coercive environment, the influence is so great as to override individual autonomy.
Mindful (or reflective?) Technology
For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Last.fm and Dopplr in this category (though behaviors surfaced in these social applications are subject to normative influences, of course). I might also include applications that nudge users to meet a goal they have set, which might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology steer users toward a goal that the designers have in mind, or toward other goals unintentionally, the designers of mindful technology work to help users better know their own behavior, supporting reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.
I’ll use two examples to illustrate persuasion vs. reflection, one from Kickstarter and one from my own research.
Kickstarter lets people raise money for projects from visitors to the website. One of the bits of feedback it gives funders is a “pie” that fills in as they fund projects in different categories (right). I’d argue that this is light persuasion – like in Trivial Pursuit, you’re going to want to fill that pie. It’s not merely reflective of the categories of projects people have funded; it nudges them to fund in categories they have not. A more reflective design might merely show a bar graph of the number of projects (or total dollars) a funder has contributed to in the various categories.
BALANCE. One of my research areas has been encouraging people to read more diverse political news. In one of our studies, we tested a “BALANCE man” — a character who, if you read mostly liberal or conservative news, teeters on the brink of falling off a tightrope (right). If you read a balance of stories, the character appears quite happy. This is a persuasive design to nudge people toward reading a range of political opinion. A more coercive design might begin automatically changing the balance of stories available for a reader to select, while a more reflective design might simply allow the user to explore their own reading behavior.
I often hesitate to use persuasion because of concerns about using persuasion poorly, rather than concerns about using persuasion at all. Persuading people who have not opted into a particular application can be an important part of public awareness campaigns in a variety of domains, and unintentional persuasion is an inevitable consequence of other designs. Unsure of when or how to appropriately persuade, though, I often choose to surface as much data as possible, as neutrally as possible.
Addressing some questions — some research questions and some questions of our field’s ethics — might make me a more comfortable designer of persuasive systems. These include:
- Do we have standards for when it is “okay” to employ different persuasive techniques, or when it might be appropriate to use coercion? How transparent should designers and systems be about the persuasive techniques they are using?
- How can we improve on exception handling in persuasive systems?
- How do different personalities respond to different techniques for persuasion and promoting reflection? A stimulus that one finds challenging, another may find shaming.
- Are there design techniques that will help make for “better” persuasive systems? e.g., activities that encourage designers to more critically engage with what it is like to live with a system.
- Do reflective and persuasive systems have different effects on users’ development and different implications for long-term use? If so, what?
- Is persuasion vs. mindfulness or reflection even the right question or spectrum? Paul Resnick proposes that goals vs. no goals and, if there are goals, whether they are set by the system or the people using it, might be a more useful framing.
These are some admittedly rough thoughts on the relationship between persuasive and reflective systems, and some open questions. What do you think?
Better understanding of the intricate relations between our brains and behaviors is key to future improvements in well-being and productivity. Conventional tools for measuring these relations, such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET), typically rely on complex, heavy hardware that offers limited comfort and mobility for the user. This means that measuring brain activity has been confined to expensive laboratories, and it has been a challenge to perform longer-term continuous monitoring of brain signals in real-life conditions.
Electroencephalography (EEG) is a method for recording the electrical activity along the scalp. EEG measurements can determine different states of brain activity; for instance, the popular Zeo Sleep Manager uses this signal to determine when the user is in different sleep stages, using just a few electrodes. More electrodes enable a richer picture of the brain state, and laboratory settings typically use 64, 128, or 256 electrodes. However, these systems are time-consuming and cumbersome to install, and their wiring limits user mobility and behavior.
With our ‘Smartphone Brain Scanner’ system we aim to enable continuous monitoring and recording of brain signals (EEG) in everyday, natural settings. For that purpose we use an off-the-shelf, low-cost wireless Emotiv EPOC neuroheadset with 14 electrodes, connected wirelessly to a smartphone. The smartphone receives the EEG data at a sampling rate of 128 Hz, and our software on the phone then performs complex real-time analysis to decode the brain state: it estimates the sources from which the brain activations originated and shows the result in a 3D model of the brain on the smartphone display. This allows users to observe their own brain activations in 3D in real time. The video below provides a demonstration.
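To make the processing step concrete, here is a toy sketch of the kind of spectral analysis that sits at the front of an EEG pipeline. This is not the Smartphone Brain Scanner’s actual code (its real-time source estimation is far more involved); only the 14-channel, 128 Hz figures come from the text above, and the synthetic signal and function names are my own illustration.

```python
import math
import random

FS = 128         # Emotiv EPOC sampling rate in Hz, as stated above
N_CHANNELS = 14  # EPOC electrode count

def bin_power(signal, fs, freq):
    """Spectral power of one channel at a single frequency (one DFT bin)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

# Synthetic 2-second epoch: a 10 Hz "alpha" oscillation plus noise per channel.
random.seed(0)
epoch = [[math.sin(2 * math.pi * 10 * i / FS) + 0.1 * random.gauss(0, 1)
          for i in range(2 * FS)] for _ in range(N_CHANNELS)]

alpha = [bin_power(ch, FS, 10) for ch in epoch]  # power at the alpha peak
gamma = [bin_power(ch, FS, 40) for ch in epoch]  # power in the gamma range
print(sum(alpha) > sum(gamma))  # alpha dominates in this toy signal
```

A real system would compute this kind of band power continuously over sliding windows before feeding it into source localization, which is what makes the 3D visualization possible.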
The smartphone brain scanner enables complete user mobility and continuous logging of brain activity, either for real-time neurofeedback or for later analysis. The user can interact with the 3D brain model on the device using touch gestures, and the system allows up to 7.5 hours of continuous recording.
From a personal informatics perspective, the ability to obtain continuous biofeedback is interesting, as such biofeedback has been shown to lead to improvements in behavior, reaction times, emotional responses, and musical performance. Within the clinical domain it has been shown to have a positive effect on attention deficit hyperactivity disorder and epilepsy. For such applications, a low-cost, easy-to-use brain monitoring system enabling complete mobility could be beneficial. Furthermore, the ability to monitor and record brain signals over longer durations in natural settings might allow users to gain new insights, and the low-cost setup even allows studying EEG signals in group settings.
More information about the smartphone brain scanner is available here: http://milab.imm.dtu.dk/eeg
With a Nokia N900 and the Emotiv EPOC headset you can try the system for yourself by downloading the brain scanner software from the Maemo repository. We also have a version for Android-based smartphones and tablets, but that software has not yet been released.
Every day you interact with the web. You log on. You upload, you download. You tap and you click. You search, you “like”, you pin, and you retweet. These actions make the web work for you, but they also make you work for the web. It should come as no surprise to even the casual technology observer that we are now living in the age of data. Some call it “big data”, but instead of thinking about it as a thing, we can also think of it as an ecosystem that can be described by its fundamental structure – the database. Our lives and the actions we engage in on a daily basis are constantly being captured and stored in databases. Our actions may be passively collected (think about how Google’s Adsense operates) or actively collected (checking in on Foursquare or updating Twitter). While it may seem as if we are living in a dystopian ecosystem, we believe that there are possibilities for engaging and enhancing our current health experiences by taking advantage of our personal and social databases.
We don’t need to rehash the idea that we are also in the midst of an explosion of tools and services that support the gathering of health-related data. If you’re reading this, you know that the Quantified Self movement is gaining traction and new devices and applications are being introduced at a rapid rate. Naturally, these tools are geared towards helping an individual lead a healthier life. This inherently creates a future-focused environment in which the user is presented with data, analytics, and recommendations for positive health behavior change in the future. This is typically accomplished through two methods: information on current behavior and goal-progress information. We argue that many of these tools and services are not taking full advantage of the vast amount of information that is available to them.
The widespread proliferation of application programming interfaces (APIs) that allow developers and users to access large amounts of data opens up numerous possibilities for improving the health and behavior conversation between a user and his or her tools/system of choice. We foresee unique opportunities to use historical behavioral data, contextual information (e.g. location, social interactions), and health actions to highlight patterns and provide feedback through three mechanisms: 1) reminders of success, 2) behavioral prompting, and 3) contextual reminders.
The road to good health is not an easy one, and there are numerous examples of individuals who unfortunately lapse into negative or poor behavior patterns. We are proposing that when “failure” points are identified there is a fantastic opportunity to remind the user of previous success. Reminding a user that they have had success in the past may help to limit self-doubt and reductions in self-efficacy. The psychological burden associated with failing to meet goals could be quickly replaced with a positive reminder of the user’s mental and physical capability that is based on objective historical information. Instead of just having an empty “You can do it!” we envision future services that say, “We believe you can do it because, look, you’ve done it before!”
We also see the potential for building upon the concept of modeling illustrated in social learning theory and social cognitive theory. While modeling is typically thought of in the social sense, we propose that services can use historical data and contextual information to create powerful and meaningful representations of a user (maybe as a digital avatar). By presenting users with their past selves, services can offer a tool for comparison (“What am I typically like?”) or competition (“How can I be better than my previous self?”). Imagine, for example, waking up in the morning and seeing your past self and associated behavioral data in your bathroom mirror or on a display on your refrigerator. We believe that this past self could act as a positive guide to help you lead a healthier life.
Lastly, the large amount of information stored in your behavioral databases has an inherent ability to converge and provide information about contextual factors associated with behavior. For example, we can easily find out whether you take more or fewer steps on days it is raining, or whether you tend to eat worse when you check in to airports around dinner time. Using simple data mining and contextual linking it is possible to identify positive behavior patterns and bring them to light. By tapping into the rich digital histories being captured and stored across many services we may not only help a user remember, but also enhance their ability to celebrate and re-enjoy healthy behaviors.
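The steps-vs-rain question above really is simple to answer once the data streams are joined. A minimal sketch, assuming hypothetical daily logs already merged from a step tracker and a weather API (the field names and numbers are invented for illustration):

```python
from statistics import mean

# Hypothetical daily records joined from a step tracker and a weather service.
days = [
    {"steps": 9500, "rain": False}, {"steps": 4200, "rain": True},
    {"steps": 8800, "rain": False}, {"steps": 5100, "rain": True},
    {"steps": 10200, "rain": False}, {"steps": 4700, "rain": True},
]

# Split the history by context, then compare the group averages.
rainy = [d["steps"] for d in days if d["rain"]]
dry = [d["steps"] for d in days if not d["rain"]]

diff = mean(dry) - mean(rainy)
print(f"On average you take {diff:.0f} more steps on dry days.")
```

The same split-and-compare pattern works for any binary context (weekend vs. weekday, airport check-in vs. not), which is what makes these contextual insights cheap to surface once the databases are linked.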
Too often, we encounter warnings of services tracking our behavior and using it for their own gain. It is time that we ask the tools and applications we use to help us lead healthier lives by taking full advantage of the vast amount of historical information we are collecting. The Spanish philosopher George Santayana told us, “Those who cannot remember the past are condemned to repeat it.” Our increasingly digital lives allow us not only to remember the past, but to harness that powerful information to help us lead better, healthier lives.
This article is a summary of a position paper by Ernesto Ramirez and Eric Hekler that will be discussed at the Personal Informatics in Practice workshop at CHI 2012 in Austin, TX on May 6, 2012. The workshop will be a gathering of researchers, designers, and practitioners exploring how to better support personal informatics in people’s everyday lives.