Understanding Self-Efficacy and the Design of Personal Informatics Tools
May 8, 2012
Adrienne Andrew is a PhD candidate in Computer Science at the University of Washington. She studies how people use food diaries on mobile phones: what challenges “typically motivated” users face, how to balance capture effort against the value of less detailed food information, and how to organize food databases to support a wider range of dietary analysis.
One primary concern for the field of personal informatics (PI) is supporting people in making changes in their lives. A driving theory for PI designers and researchers is Social Cognitive Theory [Bandura, 1977], which posits that a person’s behavior, environment, and inner qualities all contribute to how that person functions. This theory has been applied to understanding how people learn, how social environments shape what people do, and how people regulate their own behavior. A key component of this theory is self-efficacy (SE), a belief in one’s own abilities.
The question I pose to both PI researchers and self-quantifiers is whether your experiences support the claim that self-efficacy truly reflects intent and ability to engage in key behavior-change strategies.
SE is traditionally measured by self-report. To develop SE measurements for a particular domain, researchers use open-ended approaches to identify common challenges and barriers in that domain. They then develop a series of statements of the form “How confident are you that you can [achieve goal] even though [challenge]?” with a 4-point response scale ranging from “Cannot do it” to “Highly certain can do”. An example statement is “How confident are you that you can stick to a healthy eating plan after a long, tiring day at work?” SE measures provide valuable feedback about whether an intervention is supporting adherence to behavior-change strategies, and can indicate whether participants complete a study with an intention to continue.
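To make the scoring concrete, here is a minimal sketch of how responses to such a scale might be aggregated into an SE score. The second item and the intermediate response labels are illustrative assumptions, not taken from a published instrument.

```python
# Minimal sketch of scoring a domain-specific self-efficacy (SE) scale.
# The two intermediate labels and the second item are hypothetical;
# the endpoints and the first item come from the description above.
SCALE = {
    "Cannot do it": 1,
    "Probably cannot do it": 2,   # assumed intermediate label
    "Probably can do": 3,         # assumed intermediate label
    "Highly certain can do": 4,
}

items = [
    "How confident are you that you can stick to a healthy eating plan "
    "after a long, tiring day at work?",
    "How confident are you that you can log every meal even when eating out?",
]

def se_score(responses):
    """Average the numeric values of a participant's responses (1-4)."""
    values = [SCALE[r] for r in responses]
    return sum(values) / len(values)

participant = ["Probably can do", "Highly certain can do"]
print(se_score(participant))  # 3.5
```

Administering the same scale before and after an intervention gives a simple per-participant change score, which is one way such measures feed back into study design.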
This is an important feature for PI researchers: we are familiar with a domain and its common challenges, so we can build the scales easily; we usually rely on short-term studies to indicate long-term impact; and properly designed scales can help us discover where a PI tool breaks down.
Now that we have described how SE can be measured and its relevance to PI researchers, it is important to acknowledge factors that may affect those measurements in the context of PI tools. In addition to basic usability (which, I would argue, matters more to the “common consumer” than to highly motivated quantified-selfers), a user’s goals (internal motivation) and trust in the technology are key.
How well the tool matches the user’s goals.
This is a point that is likely more important to researchers than to quantified-selfers. It refers to both a goal the user has and that the user has a belief in what they need to do in order to attain that goal. A user who is trying to lose weight may choose to focus on restricting caloric intake as well as increasing caloric expenditure, or choose to focus on only one of those areas. Social cognitive theory says these beliefs are based on what the user has observed amongst their peers, and how similar or different the user is from their peers.
We observed this in the BALANCE studies. BALANCE consisted of a food diary to capture caloric intake, an automatic physical activity detection platform to measure caloric expenditure, and a visualization that provided real-time feedback of the person’s caloric intake/expenditure balance throughout the day, all on a mobile phone. Overall, about 40 people participated in the evaluation, carrying the phone and tracking their food intake for 3 days.
One recurring theme in the feedback was that tracking food intake in such detail was too much work, and would only be worth it if the participant had a medical condition that made detailed records essential. However, some participants wanted to reflect on a coarser-grained summary of their dietary intake for general health and disease prevention. These participants had a different wellness goal, and therefore lacked the internal motivation to make this tool useful to them.
Understanding the underlying technology.
Another factor is how well the user understands the technology, or more specifically, how the technology may fail. Part of the BALANCE project used sensors to identify activity and estimate calories expended throughout an entire day. Related tools include GPS-based run trackers, which record the location, duration, and other metrics of a run. Technologies that use sensors to identify bouts of physical activity have some level of uncertainty associated with the recognition. This uncertainty comes from a variety of sources, such as parameters that reflect a tradeoff between power consumption and accuracy. GPS trace quality depends on terrain and the location of satellites in the sky.
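The power/accuracy tradeoff can be illustrated with a toy simulation (this is not the BALANCE implementation, and the activity trace is invented): duty-cycling a sensor to save battery can cause it to miss short bouts of activity entirely.

```python
# Toy illustration of the power/accuracy tradeoff in activity sensing.
# Hypothetical one-second ground-truth trace: 1 = active, 0 = idle.
trace = [0] * 60 + [1] * 20 + [0] * 120 + [1] * 5 + [0] * 95  # 300 s, 25 s active

def detected_active_seconds(trace, on_s, off_s):
    """Seconds of activity seen when the sensor samples for on_s
    seconds, then sleeps for off_s seconds, repeating."""
    period = on_s + off_s
    return sum(v for i, v in enumerate(trace) if i % period < on_s)

always_on = detected_active_seconds(trace, 1, 0)      # sees all 25 s
duty_cycled = detected_active_seconds(trace, 10, 50)  # sleeps 50 of every 60 s
print(always_on, duty_cycled)  # 25 10
```

Here the duty-cycled sensor misses the 5-second bout completely and catches only half of the 20-second one, so the calorie estimate built on top of it inherits that error.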
A recent New York Times article reflects the concerns of GPS run-tracker users. Runners sometimes measure certified race courses with their devices and report discrepancies to the organizers. These runners appear to trust the technology more than the organization. In the case of BALANCE (which exposes less detailed data about the calorie calculation and depends on more parameters), some users reported a feeling that the calculation “didn’t feel right”, but were unable to express how they thought it might be wrong. In both of these examples, uncertainty in the technology could affect measures of SE. This raises the question of what other factors influence a person’s trust in the technology, how SE may be affected, and how these vary from person to person.
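Why a GPS watch and a certified course can disagree is easy to demonstrate. The sketch below (a simplification with invented coordinates and a deterministic zig-zag standing in for real GPS noise) sums great-circle distances between fixes, as a run tracker does, and shows that positional jitter systematically inflates the measured length of a straight course.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def track_length_m(points):
    """Sum segment-by-segment distances, as a run tracker would."""
    return sum(haversine_m(*points[i], *points[i + 1]) for i in range(len(points) - 1))

# A straight ~1 km course sampled every ~100 m along a meridian (invented location)...
true_course = [(47.0 + i * 0.0009, -122.0) for i in range(11)]

# ...and the same course with an alternating ~5 m cross-track error
# (deterministic here for reproducibility; real GPS error is noisier).
jitter_deg = 5.0 / (111320.0 * math.cos(math.radians(47.0)))  # ~5 m of longitude
noisy_course = [(lat, lon + ((-1) ** i) * jitter_deg)
                for i, (lat, lon) in enumerate(true_course)]

print(track_length_m(true_course))   # ~1000 m
print(track_length_m(noisy_course))  # longer: zig-zag noise inflates the total
```

Since every jittered segment is the hypotenuse of the true segment plus a lateral offset, the noisy sum can only grow, which is consistent with runners' devices tending to read long on certified courses.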
So, I pose this question to quantified selfers: What do you track, and what aspects of the tools you use impact whether or not you can or will keep using them?