This is a guest post by Sean Munson, a PhD candidate at the University of Michigan’s School of Information. Sean studies individual preferences and nudges, particularly for encouraging people to read more diverse political news and helping them to live happier and healthier lives.
Personal informatics is inherently tied to behavior: reported behavior, monitored behavior, and planned behavior. When people interact with systems that help them keep track of and reflect on this behavior, they are doing so using tools and contexts that exert a variety of behavioral nudges. In my work, I have been considering when, how, and to what extent different behavioral nudges should be applied. I have encountered these questions in classrooms as well, sometimes to the visible discomfort of those less familiar with the persuasive technology field.
A spectrum — with systems that push people to do something without their knowledge, or in a way that overrides their own autonomy, at one end and systems that support people in gaining insight into their existing behavior and achieving a behavior change they desire at the other — may be a useful framework for how researchers and designers think about systems for personal informatics. The first category might be persuasive technology, and the second category, reflective or mindful technology.
One definition of persuasive technology might be systems that push the people who interact with them to behave in certain ways, with or without those people choosing behavior change as an explicit goal. Though this definition is narrow, the category actually encompasses most systems: their design and defaults will favor certain behaviors over others. Whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive; this is not novel to digital environments.
In a coercive environment, the influence is so great as to override individual autonomy.
Mindful (or reflective?) Technology
For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Last.fm and Dopplr in this category (though behaviors surfaced in these social applications are subject to normative influences, of course). I might also include applications that nudge users to meet a goal they have set, which might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology steer users toward a goal that the designers have in mind, or to other goals unintentionally, the designers of mindful technology work to give users a better view of their own behavior, to support reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.
I’ll use two examples to illustrate persuasion vs. reflection, one from Kickstarter and one from my own research.
Kickstarter lets people raise money for projects from visitors to the website. One of the bits of feedback it gives funders is a “pie” that fills in as they fund projects in different categories (right). I’d argue that this is light persuasion – like in Trivial Pursuit, you’re going to want to fill that pie. It’s not merely reflective of the categories of projects people have funded; it nudges them to fund in categories they have not. A more reflective design might merely show a bar graph of the number of projects (or total dollars) a funder has contributed in the various categories.
BALANCE. One of my research areas has been encouraging people to read more diverse political news. In one of our studies, we tested a “BALANCE man” — a character who, if you read mostly liberal or conservative news, teeters on the brink of falling off a tightrope (right). If you read a balance of stories, the character appears quite happy. This is a persuasive design to nudge people toward reading a range of political opinion. A more coercive design might begin automatically changing the balance of stories available for a reader to select, while a more reflective design might simply allow the user to explore their own reading behavior.
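The feedback logic behind a widget like the BALANCE man can be sketched as a simple threshold function. This is a hypothetical reconstruction in Python: the state names and thresholds are my illustrative assumptions, not the actual values from the study.

```python
# Hypothetical sketch of BALANCE-man-style feedback: map the mix of
# liberal vs. conservative stories a person has read to a character state.
# Thresholds and state names are illustrative assumptions only.

def balance_state(liberal_reads: int, conservative_reads: int) -> str:
    """Return a character state for a reader's liberal/conservative mix."""
    total = liberal_reads + conservative_reads
    if total == 0:
        return "neutral"  # nothing read yet, so no feedback to give
    # lean is 0.0 for perfectly balanced reading, 1.0 for entirely one-sided
    lean = abs(liberal_reads - conservative_reads) / total
    if lean < 0.2:
        return "happy"      # roughly balanced reading
    elif lean < 0.6:
        return "wobbling"   # leaning noticeably to one side
    else:
        return "teetering"  # mostly one side: about to fall off the rope

print(balance_state(9, 1))  # heavily one-sided reader
print(balance_state(5, 5))  # balanced reader
```

The design point is that the function only summarizes behavior; the persuasion comes from attaching an emotionally charged character to the summary rather than, say, showing the `lean` number directly.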
I often hesitate to use persuasion because of concerns about using persuasion poorly, rather than concerns about using persuasion at all. Persuading people who have not opted into a particular application can be an important part of public awareness campaigns in a variety of domains, and unintentional persuasion is an inevitable consequence of other designs. Unsure of when or how to appropriately persuade, though, I often choose to surface as much data as possible, as neutrally as possible.
Addressing some questions — some research questions and some questions of our field’s ethics — might make me a more comfortable designer of persuasive systems. These include:
- Do we have standards for when it is “okay” to employ different persuasive techniques, or when it might be appropriate to use coercion? How transparent should designers and systems be about the persuasive techniques they are using?
- How can we improve on exception handling in persuasive systems?
- How do different personalities respond to different techniques for persuasion and promoting reflection? A stimulus that one finds challenging, another may find shaming.
- Are there design techniques that will help make for “better” persuasive systems? e.g., activities that encourage designers to more critically engage with what it is like to live with a system.
- Do reflective and persuasive systems have different effects on users’ development and different implications for long-term use? If so, what?
- Is persuasion vs. mindfulness or reflection even the right question or spectrum? Paul Resnick proposes that goals vs. no goals and, if there are goals, whether they are set by the system or the people using it, might be a more useful framing.
These are some admittedly rough thoughts on the relationship between persuasive and reflective systems, and some open questions. What do you think?