Tag Archives: mindfulness
Charles Wang is one of the co-founders of Lumo BodyTech, the makers of the LUMOback posture sensor. When he’s not building new self-tracking tools, he’s taking some time to watch the world around him. Watch this great Ignite talk from Charles to hear about his observations and how they apply to his long-term posture tracking.
We’ll be posting videos from our 2013 Global Conference during the next few months. If you’d like to see talks like this in person, we invite you to join us in Amsterdam for our 2014 Quantified Self Europe Conference on May 10 and 11.
Can you use technology to be more mindful?
This question was at the core of a wonderful presentation by Nancy Dougherty at the second annual Quantified Self Conference:
Nancy acknowledges at the start of her talk that QS is often thought to be mainly about technology. But not everybody sees it this way. Alex Carmichael, for instance, has described QS as “a very mindful community.”
In her talk, Nancy explains how she stumbled upon the idea of integrating mindfulness into her QS practice: Continue reading
Catherine Kerr does brain science related to mindfulness at Brown University Medical School. She points out that mindfulness traditions ask practitioners to simply focus closely on body sensations in order to bring attention to the present moment. Why does this help with depression? In the video below, Catherine explains some of her magnetoencephalography (MEG) research to answer this question, and suggests that detecting cortical changes may be one of the earliest, instinctive QS-related forms of self-awareness. (Filmed by the Boston QS Show&Tell meetup group.)
This is a guest post by Sean Munson, a PhD candidate at the University of Michigan’s School of Information. Sean studies individual preferences and nudges, particularly for encouraging people to read more diverse political news and helping them to live happier and healthier lives.
Personal informatics is inherently tied to behavior: reported behavior, monitored behavior, and planned behavior. When people interact with systems that help them keep track of and reflect on this behavior, they are doing so using tools and contexts that exert a variety of behavioral nudges. In my work, I have been considering when and how different behavioral nudges should be applied, and to what extent they should be applied. I have encountered these questions in classrooms as well, sometimes to the visible discomfort of those less familiar with the persuasive technology field.
A spectrum may be a useful framework for how researchers and designers think about systems for personal informatics: at one end, systems that push people to do something without their knowledge, or in a way that overrides their autonomy; at the other, systems that support people in gaining insight into their existing behavior and achieving a behavior change they desire. The first category might be called persuasive technology, and the second reflective or mindful technology.
One definition of persuasive technology might be systems that push the people who interact with them to behave in certain ways, with or without those people choosing behavior change as an explicit goal. Though this definition is narrow, the category actually encompasses most systems: their design and defaults will favor certain behaviors over others. Whether or not it is the designer’s intent, any environment in which people make choices is inherently persuasive; this is not novel to digital environments.
In a coercive environment, the influence is so great as to override individual autonomy.
Mindful (or reflective?) Technology
For now, I’ll call technology that helps people reflect on their behavior, whether or not people have goals and whether or not the system is aware of those goals, mindful technology. I’d put apps like Last.fm and Dopplr in this category (though behaviors surfaced in these social applications are subject to normative influences, of course). I might also include applications that nudge users to meet a goal they have set, which might be more commonly classified as persuasive technology, such as UbiFit, LoseIt, and other trackers. While designers of persuasive technology steer users toward a goal that the designers have in mind, or toward other goals unintentionally, the designers of mindful technology work to give users a better understanding of their own behavior, to support reflection and/or self-regulation in pursuit of goals that the users have chosen for themselves.
I’ll use two examples to illustrate persuasion vs. reflection, one from Kickstarter and one from my own research.
Kickstarter lets people raise money for projects from visitors to the website. One of the bits of feedback it gives funders is a “pie” that fills in as they fund projects in different categories (right). I’d argue that this is light persuasion – like in Trivial Pursuit, you’re going to want to fill that pie. It’s not merely reflective of the categories of projects people have funded; it nudges them to fund in categories they have not. A more reflective design might merely show a bar graph of the number of projects (or total dollars) a funder has contributed in the various categories.
BALANCE. One of my research areas has been encouraging people to read more diverse political news. In one of our studies, we tested a “BALANCE man” — a character who, if you read mostly liberal or conservative news, teeters on the brink of falling off of a tight rope (right). If you read a balance of stories, the character appears quite happy. This is a persuasive design to nudge people toward reading a range of political opinion. A more coercive design might begin automatically changing the balance of stories available for a reader to select, while a more reflective design might simply allow the user to explore their own reading behavior.
I often hesitate to use persuasion because of concerns about using persuasion poorly, rather than concerns about using persuasion at all. Persuading people who have not opted into a particular application can be an important part of public awareness campaigns in a variety of domains, and unintentional persuasion is an inevitable consequence of other designs. Unsure of when or how to appropriately persuade, though, I often choose to surface as much data as possible, as neutrally as possible.
Addressing some open questions — some of them research questions, and some questions of our field’s ethics — might make me a more comfortable designer of persuasive systems. These include:
- Do we have standards for when it is “okay” to employ different persuasive techniques, or when it might be appropriate to use coercion? How transparent should designers and systems be about the persuasive techniques they are using?
- How can we improve on exception handling in persuasive systems?
- How do different personalities respond to different techniques for persuasion and promoting reflection? A stimulus that one finds challenging, another may find shaming.
- Are there design techniques that will help make for “better” persuasive systems? e.g., activities that encourage designers to more critically engage with what it is like to live with a system.
- Do reflective and persuasive systems have different effects on users’ development and different implications for long-term use? If so, what are they?
- Is persuasion vs. mindfulness or reflection even the right question or spectrum? Paul Resnick proposes that goals vs. no goals and, if there are goals, whether they are set by the system or the people using it, might be a more useful framing.
These are some admittedly rough thoughts on the relationship between persuasive and reflective systems, and some open questions. What do you think?
Nancy Dougherty made her own set of “mindfulness pills” – placebos labeled Focus, Willpower/Energy, Calm, and Happy. The pills were embedded with sensors that transmitted signals to her phone, recording each time she took the different pills, along with her heart rate, activity level, and sleep. Nancy works at Proteus Biomedical, in case you’re wondering how she made this self-experiment happen. She learned that taking an “Energy” pill actually made her bike harder on her way to work and raised her heart rate, and taking a “Focus” pill actually made her do more work. Watch her fun talk on managing mood and playing with the placebo effect below. (Filmed at the Bay Area QS Show&Tell Meetup #19, at Singularity University.)
I’ve been thinking for some time about the connection between self-tracking and mindfulness. At first glance they seem to be very different – picture the wired-up gadget wizard sitting next to the unadorned meditating guru. But step to the side and look from a different angle, and you may see meditation and self-tracking as two parallel tools that lead down the same path toward mindfulness.
While these thoughts were swirling through my mind, I got an email from Alex Pang. Alex is a futurist currently housed at Microsoft Research Cambridge, where he is studying the relationship between self-tracking/self-experimentation and mindfulness in a project he calls “contemplative computing”. Wow. Alex just finished writing an article on this topic, using his own experience with weight loss as an example, and delving both into the past and into the future to come to some interesting conclusions. His paper is available here, and I’d love to know if anyone else out there has been thinking about this connection as well.
Maybe the modern-day version of the gong and the meditation cushion are the self-tracking app and the device that runs it?