Tag Archives: psychology
Like many of us, Michael Cohn had a hard time “rationally regulating” his behavior. Even as a psychology researcher at UCSF, he was falling victim to procrastination and time wasting. He started exploring “irrationally regulating” his behavior by stating personal commitment contracts, then using self-tracking via spreadsheets to understand how he spends his time and his progress on different personal commitments. In this talk, presented at the Bay Area QS meetup group, he explains his history, his use of tracking, and what happens when he falters.
Ryan Hagen is a doctoral student in clinical psychology who’s terrified of people getting therapy through Siri. That said, for his PhD project he was inspired to extend the work of Sandy Pentland and ginger.io correlating people’s passive smartphone behavior data with anxiety and depression. In the video below, Ryan explains his current three-month study, which you can check out or join here. (Filmed by the Boston QS Show&Tell meetup group.)
Noticing that flaxseed oil improved my balance led me to measure its effects on other tests of brain function.
It also made me wonder what else in my life affected how well my brain
works. Eventually I measured the mental effects of flaxseed oil with
four tests, but each had problems:
- Balance. Time-consuming (15 minutes for one daily test), not portable.
- Memory search. Anticipation errors, speed-accuracy tradeoff.
- Arithmetic. Speed-accuracy tradeoff.
- Digit span. Insensitive.
“Speed-accuracy tradeoff” means it was easy to go faster and make
more errors. It wasn’t easy to keep the error rate constant. If I got
faster, there were two possible explanations: (a) brain working better
or (b) shift on the speed-accuracy tradeoff function. The balance and
digit span tests had other weaknesses. Only the balance test was
time-consuming and not portable.
I’m still doing the arithmetic test, which has been highly
informative. However, I want to regularly do at least two tests to
provide a check on each other and to allow test comparison (which is
more sensitive?). I tried a test that involved typing random strings of
letters several times, but as I got faster I started to make many errors.
I have recently started doing a test that consists of one-fingered
typing of a five-letter string. There are 30 possible five-letter
strings. Each trial I see one of them and type it as fast as possible.
15 trials = one test. Takes three minutes.
I am doing one-finger rather than regular typing because I hope
one-finger typing will be more accurate, very close to 100%. With the
error rate always near zero, I won’t have to worry about speed-accuracy
tradeoff. Another reason is enjoyment: one-finger typing (unlike regular typing) requires skilled movement and hand-eye coordination, and that sort of task can be fun.
I restricted the number of possible letter strings to 30 to make
learning easier. Yet 30 is too large to cause the anticipation errors I
might make if there were only a few strings.
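The test protocol described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the author's actual program: the seeded pool of 30 five-letter strings, the 15-trial structure, and the function names are all my assumptions based on the description.

```python
import random
import string
import time

def make_pool(n=30, length=5, seed=1):
    """Build a fixed pool of random letter strings. Seeding the
    generator keeps the same 30 strings across sessions, which
    supports the learning the author relies on."""
    rng = random.Random(seed)
    return ["".join(rng.choice(string.ascii_lowercase) for _ in range(length))
            for _ in range(n)]

def run_test(pool, trials=15):
    """One test: each trial shows a string and times how long
    it takes to type it, recording accuracy as well as speed."""
    results = []
    for _ in range(trials):
        target = random.choice(pool)
        start = time.monotonic()
        typed = input(f"Type: {target}\n> ").strip()
        elapsed = time.monotonic() - start
        results.append({"target": target,
                        "typed": typed,
                        "correct": typed == target,
                        "seconds": elapsed})
    return results
```

With accuracy logged on every trial, a rise in error rate would flag a speed-accuracy shift rather than a genuine change in brain function.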
Here are early results.
So far so good. Accuracy is high. On any trial, it isn’t easy to go faster, so speed-accuracy tradeoff is less of a problem. Even better, it’s vaguely enjoyable. Doing the task is a little like having a cup of tea. A pleasant break. There’s no need to do the test four times/day; I just want to.
Deep mysteries of human nature will be exposed by self-tracking, aspects of our behavior so disconcerting and bizarre that they will lead us to question whether we understand ourselves at all. I know this is true because such disconcerting results are already being produced at a rapid pace by experimental psychologists, and self-tracking brings the methods of experimental psychology into our daily lives; if, that is, we think we can stand to learn the lessons they teach.
Watch this video accompanying a story in New Scientist by Lars Hall and Petter Johansson.
[I]n an early study we showed our volunteers pairs of
pictures of faces and asked them to choose the most attractive. In some
trials, immediately after they made their choice, we asked people to
explain the reasons behind their choices.
Unknown to them, we sometimes used a double-card magic trick to covertly
exchange one face for the other so they ended up with the face they did
not choose. Common sense dictates that all of us would notice such a
big change in the outcome of a choice. But the result showed that in 75
per cent of the trials our participants were blind to the mismatch,
even offering “reasons” for their “choice”.
This is troubling enough, but there’s more. When people are fooled into thinking they made a different choice than the one they actually made, and then articulate “reasons” for this supposed choice, they may then change their future preferences to conform to the confabulated one.
Importantly, the effects of choice blindness go beyond snap judgments.
Depending on what our volunteers say in response to the mismatched
outcomes of choices (whether they give short or long explanations, give
numerical rating or labeling, and so on) we found this interaction
could change their future preferences to the extent that they come to
prefer the previously rejected alternative. This gives us a rare
glimpse into the complicated dynamics of self-feedback (“I chose this,
I publicly said so, therefore I must like it”), which we suspect lies
behind the formation of many everyday preferences.
Lars Hall and Petter Johansson lead the Choice Blindness Laboratory at Lund University, Sweden. At the end of their New Scientist piece, they suggest that learning about this experiment should make people better at understanding their own choices.
In everyday decision-making we do see ourselves as connoisseurs of our
selves, but like the wine buff or art critic, we often overstate what
we know. The good news is that this form of decision snobbery should
not be too difficult to treat. Indeed, after reading this article you
might already be cured.
Unfortunately, this is not convincing. It is common for biases to persist even when we are warned about them. I suspect we are in no position to stand guard over our judgments without the help of machines to keep us steady. Assuming, that is, that deliberative consistency is a value we care to protect.
“Are self-trackers narcissists? Results from NPI-16” at the QS Show&Tell; video by Paul Lundahl.
Are self-trackers narcissists? In the video above, from the recent QS Show&Tell, I report on trying to find an answer. Here I give a quick summary of that talk and a reference link. I decided to run this test because a few weeks ago Alexandra Carmichael made a detailed and helpful report on her self-tracking project, and Sandy Lane left a comment asking, in effect, whether self-trackers are narcissists.
It was a fair question, and in the comments thread I proposed answering it in our own way: with numbers. So in a survey of QS readers I included all the questions from the NPI-16, an instrument for measuring narcissism that has been used and tested in psychological assessment research for many years. I go through the details in the talk, but the short answer is no. In our small sample of 37 self-trackers, the mean narcissism score fell almost at the center of the range of mean scores from five large surveys used to validate the NPI-16 against a longer, well-validated measure of narcissism, the NPI-40.
There is a caveat, however. I took the question to mean: do self-trackers have the overweening sense of self typical of narcissists? There are other definitions of narcissism. Many people mean “narcissism” more loosely; more or less as a synonym for “annoying.” If narcissism means annoying, then this test doesn’t resolve the issue.
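For readers curious how a comparison like this works mechanically, here is a small sketch of NPI-16-style scoring. It assumes the standard forced-choice format (16 item pairs, each coded 1 if the narcissistic statement was chosen, 0 otherwise, with the score being the proportion of narcissistic choices); the function names are mine, and this is illustrative, not the survey's actual code.

```python
def npi16_score(responses):
    """responses: list of 16 binary choices (1 = narcissistic option chosen).
    Returns the proportion of narcissistic choices, between 0.0 and 1.0."""
    if len(responses) != 16:
        raise ValueError("NPI-16 has exactly 16 items")
    return sum(responses) / 16

def sample_mean(all_responses):
    """Mean NPI-16 score across a sample of respondents, which is
    the statistic compared against the validation surveys."""
    scores = [npi16_score(r) for r in all_responses]
    return sum(scores) / len(scores)
```

A sample whose mean sits near the middle of the validation surveys' means, as ours did, gives no evidence of unusual narcissism.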
Reference: Daniel R. Ames, Paul Rose, and Cameron P. Anderson, “The NPI-16 as a short measure of narcissism,” Journal of Research in Personality 40 (2006), 440–450 (PDF).