Tag Archives: Memory
Memory, cognition, and learning are of high interest here at QS Labs. Ever since Gary Wolf published his seminal 2008 piece on SuperMemo and its founder, Piotr Wozniak, we’ve been delighted to see how people are using spaced repetition software. Our friend and colleague Steven Jonas has been using SuperMemo since he read Gary’s article, slowly transitioning to daily use in 2010. Steven has been quite active in sharing how he’s used it to track his various memorization and learning projects with his local Portland QS meetup group. At the 2014 Quantified Self Europe Conference, Steven introduced a new project he’s working on: memorizing his daybook, a daily log he keeps of interesting things that happened during the day. Watch his fascinating talk below to hear him explain how he’s attempting to recall every day of his life. If you’re interested in learning more about spaced repetition, we suggest this excellent primer by Gary.
You can also download the slides here.
What did you do?
I used a spaced repetition system to help me remember when an entry in my daybook occurred.
How did you do it?
Using SuperMemo, I created a flashcard each morning. On the question side, I typed what I did the previous day; on the answer side, I typed the date. SuperMemo would then schedule the review of these cards. I also experimented with adding pictures and short videos from that day to the card.
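SuperMemo’s modern scheduler is considerably more elaborate, but the classic SM-2 algorithm that Wozniak published for early SuperMemo versions gives a minimal sketch of how review intervals stretch out after each successful recall:

```python
# A minimal sketch of the classic SM-2 spaced-repetition algorithm.
# Modern SuperMemo uses a more elaborate scheduler; this only
# illustrates how intervals grow after each successful recall.

def sm2_update(interval, repetition, easiness, quality):
    """Return (next_interval_days, repetition, easiness) after one review.

    quality is a 0-5 self-rating of recall; 3 or above counts as success.
    """
    if quality < 3:
        return 1, 0, easiness  # failed recall: restart with a short interval
    if repetition == 0:
        interval = 1
    elif repetition == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # Nudge the easiness factor based on how hard the recall felt.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetition + 1, easiness

# A daybook card recalled well (quality 4) at every review:
interval, repetition, easiness = 0, 0, 2.5
schedule = []
for _ in range(5):
    interval, repetition, easiness = sm2_update(interval, repetition, easiness, 4)
    schedule.append(interval)
print(schedule)  # → [1, 6, 15, 38, 95]
```

A card recalled well at every review gets pushed out roughly geometrically, which is what makes reviewing a card for every day of one’s life sustainable over years.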
What did you learn?
First, that this seems to work. I’ve built up a mental map of my experiences unlike anything I’ve ever experienced. I also learned that I hardly ever remember the actual date on a card. Instead, it’s a logic puzzle, where I recall certain details such as, “It was on a Saturday, and it was in October, the week before Halloween. And Halloween was on a Thursday that year.” From there, I can deduce the most likely day it occurred. I’m also learning which details are most helpful for placing a memory. Experiences involving other people and different places are very memorable. Notes that I started doing something, like “I started tracking my weight”, are not.
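The deduction Steven describes can even be done mechanically: find the date that satisfies all the recalled clues. The clues and the year below are illustrative (2013 is a year in which Halloween did fall on a Thursday):

```python
# Reconstructing a date from recalled clues, done mechanically.
# The clues and the year are illustrative.
from datetime import date, timedelta

def candidates(year):
    """Dates in the week before Halloween that were Saturdays in October."""
    halloween = date(year, 10, 31)
    week_before = [halloween - timedelta(days=d) for d in range(1, 8)]
    return [d for d in week_before if d.weekday() == 5 and d.month == 10]

assert date(2013, 10, 31).weekday() == 3  # Halloween 2013 was a Thursday
print(candidates(2013))  # → [datetime.date(2013, 10, 26)]
```

Three fuzzy clues (Saturday, October, the week before Halloween) pin the memory to a single day, which is why weekday-and-landmark details turn out to be so useful for placing entries.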
Today’s post comes to us from Steven Jonas who led the Spaced Repetition breakout session at the 2014 Quantified Self Europe Conference. Spaced repetition is a common topic in the Quantified Self community and we’ve seen great examples from Jeopardy champion Roger Craig and Steven. In this breakout session, conference attendees discussed reasons for using spaced repetition, past experiences, and potential pitfalls. You’re invited to read the description of the session and then join the discussion on the QS Forum.
By Steven Jonas
The Spaced Repetition breakout was a knowledge sharing session around the use of spaced repetition tools, such as SuperMemo, Anki, and Memrise. There were two major themes during the discussion: what can spaced repetition be used for, and what is the value of it?
Many people use spaced repetition to memorize vocabulary while learning a foreign language, but it has other uses too. Novel ones include remembering the faces of authors of books and articles, and memorizing entries from one’s own datebook to construct a mental timeline. We explored other possible uses of this powerful tool, such as remembering facts about people, or keeping in mind projects that one would like to do.
Why memorize information when most facts are just a web search away? We discussed a few reasons to commit facts to memory. One is that most breakthroughs come from connecting ideas, so retaining what one has already learned makes it easier to form connections with new ideas as they are encountered.
Also, spaced repetition can be used to change your overall relationship with a subject of knowledge. One person told of how he had tried multiple times to learn Spanish, with poor results. His conclusion was that he just wasn’t good at learning languages. After using spaced repetition to build his vocabulary, he changed his self-assessment: it wasn’t that he was bad at languages, he just needed a better process. Or consider the experience of memorizing poetry. Holding a poem in memory changes one’s relationship to it; adding a poem to one’s repertoire creates a sense of ownership over it.
We acknowledged in our discussion that spaced repetition practice is fragile, because for it to be most effective it must be done every day. A neglected spaced repetition system leads to an overwhelming number of cards to be reviewed, which can lead to abandoning the practice altogether. This is a problem that, so far, does not seem to have a good solution.
If you’re interested in keeping this conversation about spaced repetition going, you’re invited to join the discussion on the QS Forum.
Our QS Conferences are organized to maximize discovery and serendipity. The entire program results from us inviting attendees to present and participate. You’re never quite sure what you’ll get, but it’s hardly ever boring! I didn’t know what to expect when Caspar Addyman took the stage in Amsterdam to talk about “Tracking your brain on booze”, but he very quickly grabbed my attention. His talk reminded me that, as Malcolm Gladwell once reported, “How much people drink may matter less than how they drink it.”
Q: How do you describe Boozerlyzer? What is it?
Addyman: The Boozerlyzer is a drinks-tracking app for Android phones. It lets you count your drinks and their calories and tells you your current blood alcohol. Crucially, it also lets you record your mood and play a range of simple games that measure your coordination, reaction time, memory and judgment.
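As a point of reference, blood-alcohol estimates of this kind are conventionally based on the Widmark formula. This is a sketch of that textbook formula, not necessarily the Boozerlyzer’s exact method:

```python
# A sketch of the textbook Widmark formula for estimating blood
# alcohol concentration; not necessarily Boozerlyzer's exact method.

def widmark_bac(grams_alcohol, body_kg, hours_elapsed, female=False):
    """Estimated BAC as a percentage (g per 100 ml of blood)."""
    r = 0.55 if female else 0.68        # Widmark body-water distribution ratio
    peak = grams_alcohol / (body_kg * 1000 * r) * 100
    eliminated = 0.015 * hours_elapsed  # typical elimination rate, % per hour
    return max(0.0, peak - eliminated)

# Two UK pints of 4% beer (~36 g of alcohol), 80 kg drinker, one hour in:
print(round(widmark_bac(36, 80, 1), 3))  # → 0.051
```

Because the distribution ratio and elimination rate vary widely between individuals, any such figure is a rough estimate rather than a measurement.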
What Boozerlyzer explicitly does not do is tell people how much to drink. We think people would find it patronizing and off-putting. Rather we hope that it will help people get better insight into how drinking affects them.
In addition, if users agree, their data is sent to our servers to contribute to our research on how drink affects people. I’m a researcher with the Centre for Brain and Cognitive Development, Birkbeck College, University of London, and this project was started as a way to collect data beyond the artificial setting of a laboratory.
Q: What’s the back story? What led to it?
Addyman: I originally had the idea back in 2003, while doing my undergraduate psychology degree. I was interested in how to study the effects of recreational drugs, but the web technology of the time couldn’t be used when people were out at the pub or club, so I didn’t pursue it.
In the summer of 2010 I took part in a science & technology hack day in London and the idea occurred to me again, this time using smartphones. So I told a few friends about it. Mark Carrigan, a sociologist at Warwick University, opened my eyes to the more sociological types of data we could gather. This broadened the aims from my initial, very cognitive focus to include the emotional and social experiences involved with drugs and alcohol. That was at the end of 2010. All that remained then was to invent the app. I’m not really a developer and have been working on this in my spare time, so it has taken longer than I’d expected.
Q: What impact has it had? What have you heard from users?
Addyman: I have been using the app myself for 6 months now and the thing that has surprised me the most is how rapidly the drinks accumulate if I’m out with friends. A few drinks early in an evening, then a couple of glasses of wine with a meal and then more drinks all through the night. Over a particularly sociable weekend I find myself drinking a disturbing amount even though it doesn’t seem that way at the time.
We started our first public beta in December 2011 and have a hundred or so users. I still have to analyse the first batch of data and usage statistics. But a first look at the data from December and January showed something surprising: the Christmas season seems to ratchet up drinking levels, normalising heavy drinking on into January. Unfortunately, I don’t think I’ve got enough data to tell if this is a real trend.
In terms of direct feedback from users, we’ve generally had a positive reaction to the idea, but there are plenty of things we can improve. One of the biggest problems with the enterprise is that our users forget to actually use the app when in the bar, or when they’ve stopped drinking. Also, people are willing to track their drinks and their mood as they go along, as that takes very little time. But at the moment the games take a little too long to play, and the game feedback is a bit too abstract. We aren’t yet giving estimates of drunkenness based on game performance. Here we are in a bit of a Catch-22: more compelling feedback ought to be possible once we’ve got a reasonable base set of group data to run some regression analysis, but without interesting feedback we have trouble getting people to play the games in the first place.
Q: What makes it different, sets it apart?
Addyman: One big difference between our app and many tools in the personal health world is that our focus is not on behavior change, but instead on data for scientific research and self-learning.
Also, this is an academic, non-commercial project. Our app will always be free. We will never collect any data that could directly identify you, nor will we sell any of the data we collect. We believe in open systems, open data, and open minds. The code we write is open source. The data we collect will be available to anyone who wants to study it.
Q: What are you doing next? How do you see Boozerlyzer evolving?
Addyman: The Boozerlyzer is our first app and there are still plenty of improvements to make to it. But, in addition, we want to broaden our scope and apply the same principle to recreational drugs and the effects of various medications.
As an example, I met Sara Riggare from the Parkinson’s Movement at the Amsterdam QS conference. She pointed out that a version of Boozerlyzer could help Parkinson’s patients track their medication intake and quantify the effects of the medications on mood, coordination, memory, etc. We are starting a collaboration to redesign the app for this purpose.
Meanwhile, my own motivation for starting this project was always to be able to do better research into recreational drugs. This has never been a more pressing concern, and I am hoping that a drugs tracker app can help. Obviously, this is fraught with legal and ethical difficulties so we are having to tread carefully. See here and here for more background on this.
Q: Anything else you’d like to say?
Addyman: We have already benefited greatly from our contact with the QS community. The conference was a great inspiration, and I wish I could get to more of the lively London meetups. If anyone out there would like to get involved with our project, we’d love to hear from you. Any advice or experience you could lend us would be greatly appreciated. Our project is both open source and open science. We believe in the power of collaboration and so would love to hear from anyone with similar projects in mind.
This is the 12th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned? In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at firstname.lastname@example.org.
This is the eighth post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned? In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences?
For me, some of the most interesting QS talks have been by those creatively repurposing existing sensor technologies for novel self-tracking applications — such as Mikolaj Habryn’s Noisebridge, at an early QS meetup, and Hind Hobeika’s ButterflEye goggles, at the QS Amsterdam conference. It’s fascinating to hear what the inventors are thinking long before their product is in the market. Here, Eric Gradman, master hardware hacker, tells how he is applying his skills to a focused life-logging application.
Q: How do you describe Facelogger? What is it?
Gradman: The Facelogger is a passive lifelogger that helps me remember every person I meet by creating flashcards of their face, name, where we met, and our conversation. Facelogger consists of an always-on videocamera necklace, a software suite to process the video, and a smartphone interface for reviewing the flashcards.
The camera is a commercially available Looxcie camera, which was modified with a prism so it hangs around the neck. This camera continuously captures activity, and has a button that allows you to save the preceding 30 seconds of footage (footage that’s not saved is automatically discarded). When I meet someone for the first time and they introduce themselves, I press the button. The camera preserves the previous 30 seconds of footage which hopefully includes a good video frame of the person, their name, and what they said about themselves.
When I next plug the camera into the computer, all the captured video clips are automatically uploaded to a server, and sent to Amazon Mechanical Turk. There, human beings identify the most representative faces from the video, determine their names, and even transcribe the conversation.
Facelogger gathers all the information and creates a Facecard, which can be reviewed later on a smartphone. A Facecard is like a flashcard, but it shows someone’s smiling face, their name, a map of where you met, a link to the video of the conversation, and often even a transcript of the introduction. I can search the text of the Facecards, sort them chronologically, or by geographic proximity.
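A “sort by geographic proximity” feature like this can be built on the standard haversine great-circle distance; the Facecard fields below are hypothetical, guessed from the description above rather than taken from Facelogger’s real data model:

```python
# Sorting cards by distance from the current location using the
# standard haversine great-circle formula. The Facecard fields are
# hypothetical, guessed from the description, not the real data model.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

facecards = [  # (name, latitude, longitude of where we met)
    ("Alice", 37.77, -122.42),  # San Francisco
    ("Bob", 52.37, 4.90),       # Amsterdam
    ("Carol", 34.05, -118.24),  # Los Angeles
]
here = (37.79, -122.40)  # current location: San Francisco
nearest = sorted(facecards, key=lambda c: haversine_km(*here, c[1], c[2]))
print([name for name, _, _ in nearest])  # → ['Alice', 'Carol', 'Bob']
```

Chronological sorting is just an ordering on the capture timestamp; the geographic sort is the only part that needs any real computation.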
Q: What’s the back story? What led to it?
Gradman: Like any self-respecting geek, I’ve always tried to stay a technological step ahead of my peers and a technological leap ahead of my parents. But when I discovered that my parents use the same model smartphone I do, I realized I was beginning to lose my edge. To me, the next frontier for personal electronics is wearable technology, and the natural application is self quantification.
But what to quantify? As a hardware hacker and artist, my first foray into QS wearable technology was definitely more for entertainment purposes. Called the Narcisystem, it was a biosensor suit featuring sensors for pulse, heading, EEG, pedometry, and breath alcohol level. I used the output of these sensors to drive the lights, sounds, and ambiance at a party venue. Fun, but not really a form of human augmentation.
I have terrible trouble remembering the names and faces of people I meet. It’s hard to say which is worse: my face-blindness or my memory for names. I’ll meet someone, shake their hand, and we’ll introduce ourselves. Moments later I realize with panic that I’ve already forgotten their name! And hours later, if they’ve changed clothes, altered their hair, or removed their glasses, I’ll blithely reintroduce myself like we’ve never met. At least I’m not shy!
I’ve always wanted to offload the mental burden of remembering people. When I was in school and I needed to remember something I used flashcards. Why couldn’t that technique work for people too?
Q: What impact has it had? What reactions have you had?
Gradman: Because the Facelogger is a first-stage prototype I am its only user. Has it helped me remember people I meet? You bet it has. I’m amazed by the quality of the Facecards and by how effective they are at jogging my memory. I get the general sense that reviewing Facecards a day or two after meeting someone gives me an opportunity to properly commit someone’s name and face to memory at my own pace … something I simply cannot do “on the fly” as we meet.
There’s another purely psychological effect: because I’m confident that my technology is taking care of remembering for me, I can relax into the conversation. I was never shy about saying “hi” to people before, but I did experience stress over the fact that I immediately forgot their name and face. Now with that interaction captured and searchable, I’m not bothered at all.
I’m sensitive to the ethical concerns with capturing someone on video without their consent. When asked what I’m wearing around my neck—and as you might expect, that happens a lot—I never lie. I explain that I’m wearing a video camera to help me remember people I meet. Invariably, I’m asked “is it recording me now?” I’ve been asked to turn it off, and I always comply. But a surprising number of people tell me they want their own Facelogger. It turns out there’s demand for a device to help remember people’s faces and names!
Some have questioned the legality of wearing a video camera. But there are already cameras trained on us wherever we go. You can buy a video camera hidden in a pen, or a pair of sunglasses. Will our social mores (or our laws) surrounding cameras trail so far behind the technology?
Very few have actually questioned the morality of wearing a video camera. In the age of pervasive social networking we’re living highly examined lives. For anyone with a camera on their mobile phone, it’s not such a stretch to imagine wearing the camera around their neck.
Also, I’m careful to remove the Facelogger when I’m not likely to meet new people: at home, in a business meeting, etc. I do this because the purpose of this device is not to have a record of every conversation I have.
Q: What makes it different, sets it apart?
Gradman: Life-logging has always fascinated me, but I felt that an ever-growing cache of unsearchable video of my life would just be a huge burden. Facelogger is an experiment in constrained lifelogging. By capturing only moments that share a particular characteristic and have common features, Facelogger allows for a well-defined process of data extraction and collation that addresses a specific shortcoming.
Gordon Bell, the pioneer of life-logging, described his always-on MyLifeBits image recorder as “write-once, read-never.” For me, the decline in storage costs is not sufficient reason to record my entire life on video. Huge amounts of unprocessed video are just video I’ll have to review someday! That’s why I find it so easy to resist the temptation to press the “capture button” more often. Unless I have automatic tools to convert video into a compact, searchable representation—in this case, a Facecard representing a person I’ve met—the video just isn’t worth saving.
There are other tools out there designed to help remember names and faces. Evernote recently released Hello, a mobile app to record people. What distinguishes Facelogger is its passive form of information capture.
Q: What are you doing next? How do you see Facelogger evolving?
Gradman: Currently, a Facecard only expresses information captured in the 30-second clip. But APIs for face identification are getting really good. Soon the Facelogger will dig through my social network, and connect a Facecard to the social profile of the person it represents.
Next I will passively capture my meals, and use Mechanical Turk to help catalog them.
Face logging and food logging are only two well-defined applications of life-logging. I intend to identify others, and make them available as software for anyone wearing a compatible life-logging rig.
Q: Anything else you’d like to say?
Gradman: Face-blindness and poor memory for names are widespread problems! I designed the Facelogger with my own shortcomings in mind, but I’m now examining how I can make these tools more widely available, perhaps as a subscription service.
If you’re interested in updates on this project, have ideas to improve the system, or want to be contacted when the Facelogger service is available for beta-testing, please join the mailing list.
Platform: Currently, iOS. Coming soon to any HTML5 enabled smartphone.
Price: not yet for sale; to be contacted for beta-testing, please join the mailing list
(If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at email@example.com)
Ernesto Ramirez strikes again! His recent talk at QS San Diego called Quantified Self on a Budget is one of our most viewed videos. We were lucky enough to have Ernesto come up to San Francisco to share his thoughts on how self-tracking can enhance your memories and help you reflect on your life. He is currently using two tools: 4squareand7yearsago to tell him every morning where he checked in a year ago, and Lifelapse to take pictures every 30 seconds as he goes about his day. Ernesto has started a Visualized Self experiment, creating a History of Ernesto. It’s almost as though self-tracking is a sort of neural prosthesis enhancing our humanity. (Filmed at the Bay Area QS Show&Tell Meetup #19 at Singularity University.)
Calling all beta testers!
Kresten Bjerg, at the Institut for Psykologi at the University of Copenhagen, is working on an open source electronic diary app that uses pictograms or “syntactic glyphs” as part of the thought-recording process. He is looking for people to help test the diary and possibly build on it. Please write directly to Kresten if you are interested in checking it out.
Here are some of the pictograms he has built into the diary:
Mark Carranza has been keeping a list of his ideas since 1984. His list has more than a million entries, with more than 7 million connections between entries. Although the media window above appears to contain a video of Mark’s talk at QS #3, Mark preferred that MX not be shown yet, so what you see on screen during the sound recording of the talk are the faces of some of us in the group, listening with interest. But even without being able to see the tool in action, Mark’s description is wonderful. Some highlights: Mark describes his personal workshop as “The institute for the prevention of design.” His program is DOS software, last updated in 1992. MX stands for memory experiment, but if you say these letters again and again you will notice another allusion. Any element in a list of thoughts can be associated with any other element. “I use it as a thought tool,” Mark says.
At last Monday’s meeting, Mark gave us a quick update on the MX project. Since Esther Dyson was sitting nearby, he went back into his notes and in an instant could recount every time he had heard Esther talk, and what she had talked about. From the look on her face, it seemed that Esther’s memories were being activated by this exercise, too.
Below are Mark’s complete notes from the meeting. I am using them to construct my own account of what happened, and finding them marvelously helpful. The key, for me, in appreciating this experiment, is to understand that these notes are memory aids. They are not meant to stand alone as a story about what happened. Rather, their magic lies in stimulating a chain of associations that bring the older train of thought to mind, updating it in the process. This is working for me after a week has passed, and it seems to work for Mark after years have passed. If you were at the most recent Show&Tell, go ahead and review Mark’s notes and see if they work for you.
talk: Atilla Csordas: rookie coder gene transmission: QS #8: Monday, September 14, 2009
1) 5:52 pm
3) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
4) accelerating change
5 a pet project
6 open social network
8 23 and me
9) to access that data
10 to access that data from your phone
11 when you share your genome
12 when you share your genome with others
13 5 Mb text file
14 a 5 Mb text file
15 600,000 rows
16 a rookie coder
18 to build a twitterbot
19 building a twitterbot
20 twitter as a real-time search engine
21 searching sequences
22 searching nucleotide sequences
23 searching nucleotides
24) nucleotide sequences
26 the constraints can give us creativity
27 how many accounts did you create?
28 twitter spam filters
29 twitter whitelist
30 a low barrier of entry
31 SMS gateways
32 it’s a pet project
34) following me
35 following my chromosome
37 SMS as a low barrier
38 grassroots action
39 tweet what you eat
40 pet gene
41) cell phone
talk: Gopal: memory practice: QS #8: Monday, September 14, 2009
1) 7:42 pm
3 talk: Gopal: extreme cognitive enhancement: QS: March 31, 2009
4) QS talks
5) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
6 much intuition
7 the source of much intuition
8 the source of much intuition is introspection
9 the facts of memory
10 the internal sight
12) mnemonic techniques
13 your ability to visualize
14 the memory championship
15 what have you learned?
16 speed of recollection
17 accuracy of recollection
18 they’re tracking their eyes
19 your gaze doesn’t move at all
20 eye movements during memorization
21 eye movements during recall
22 eye movement during recall
23 the eye movement during recall video
24 the NLP ‘eye movement during prompted recall’ video at the CIIS class
25 to improve working memory
26) working memory
27 simple training tasks
28 dual and back
29 improving working memory
30 press the space bar
31 practicing the task
32 performance on IQ tests
33 memori loci
34 memory loci
35) extreme cognitive enhancement
36 cognitive enhancement
37 what if you simply studied IQ tests?
38) IQ tests
39) IQ test
40 IQ test performance
talk: Sri: facet of life: QS #8: Monday, September 14, 2009
1) 8:51 pm
3 meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
4 I quit my job
5 my pain is this much
6 I wasn’t disciplined enough
7) I was lazy
8 we stole it
9 the interesting part
10 we stole the name
11 the words you typed in the comments
12 forcing me to log it
13 it’s texting you
14 some kind of intelligent algorithm
15 a comment on the blog
16 to draw conclusions from self-tracking
17 when the self-tracking has a negative cast
18 it’s all self-generated
19 did you talk to your doctor?
21 Stanford Pain Clinic
22 facet of life
23 facet of life website
25) that kind of data
talk: Bill Jerrold: speech analysis: QS #8: Monday, September 14, 2009
1) 8:55 pm
3) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
4) Bill Jerrold
5) speech analysis
6) QS talks
7 someone who likes to do it
8 automated speech analysis
9 to harness your speech
10 untapped information
11 automated speech recognition
12 automatic speech recognition
13 your speech
14) your voice
15 your speech carries a lot of information
16 J. B. Pennebaker
17 poets that commit suicide
18 poets that don’t commit suicide
19 poets that didn’t commit suicide
20 poet-suicide data
21 poet-suicide correlation
22 poetry-suicide correlation
23 the frequency of first person words
24 the higher frequency
25 the higher the frequency
26 the higher the frequency of first person words
27 the sad word frequency
28 sad word frequency
29 sad words
30 clinical feature
31) social feature
32) a social feature
33 male speech
34 female speech
35 male speech/female speech
36 social words
37 social processing
38 correlation despite errors in data
39 correlation despite noisy data
40) UC Davis
42 happy word frequency
43) happy words
45) natural language processing
46 when your computer says, “hey, you’re depressed! ”
47 “You are cognitively impaired.” OK, Cancel
48 fewer social words
49 deceptive speech
50 characteristics of deceptive speech
51 words associated with the self
52 fewer words associated with the self
53 deceptive speech uses fewer words associated with the self
54 deceptive speech has more positive words and uses fewer words associated with the self
56 low idea density
57 idea density
58 a lower idea density
59 measuring idea density
60 essays written by nuns
62 an essay written by a nun
63 an educated reader
65 nuns with Alzheimer’s
66 the number of prepositions
67 the number of propositions
68 the number of prepositions/the number of propositions
69 many other findings
70 cognitively normal
71 referential failure
72 literary forensics
73) language features
74) a famous example
75 book: True Colors
77 literary forensicist
78) the Clinton Administration
79 more dominant
81 dominance measure
84 an approach which is completely automatic
86 laboriously go over it
87) tagged data
88 to laboriously go over it
89 laboriously, laboriously, laboriously
90 laborious, laborious, laborious
91 the scale of these kinds of studies
92 look for new patterns
93 machine learning
94) David Rumelhart
96 some disease
97 some neurological disease
98 some rare neurological disease
99 some rare aphasia
101 semantic dementia
104 semantic dementias
105 this speech recognition
106 8:01 pm
107 negative emotion words
110) emotion words
111 positive emotion words
112 negative emotion words/positive emotion words
113 positive emotion words/negative emotion words
114 its accuracy
115 guessing at random
116 a lot of the speech was slurred
117 average word error rate
118) word error rate
119 high word error rate
120 high word error rates
121 pretty high word error rates
122 the Western Aphasia Battery
123 to describe a picnic scene
124 to establish rapport
125 the answer seems to be yes
126 the answer seems to be: yes
127 an answer that seems to be yes
128 ambient speech
129 ambient speech samples
130 a set of recordings
131 what is my impression of this?
132 what is my impression of this talk?
133 the purpose of this study
134 relationships between personality and heart disease
135 a question that was self-focused
136 are you satisfied by all the events in your life?
137 the two distributions
138 some of these effects
139) to gain insight
140 to gain insight into ourselves
141 spontaneous speech
142 people who aren’t older adults
143 people who aren’t old
144 decent diagnostics
145 frequency profile
146 frequency profiles
147 if you’re experimenting with something
148 we need tagged data
149 every phone conversation
150 how good I felt
151 how happy I felt
152 people who are willing to do a lot of work yourself
153 people who are willing to do a lot of work themselves
154 people who are willing to do a lot of work
155 people who are willing to do a lot of work on their own
156 automated scanning
157 automated scanning of telephone conversations
158 the garage experimenter
159 taking a classifier
160 taking a classifier and repurposing it
161 diary speech
163) speech processing
164) speech production
165) speech perception
166 automated speech perception
167 another hope
168 patterns we can’t fake
170 speech biometrics
171 murdered poets
172 favorite murdered poets
talk: Esther Dyson: 23 and Me: QS #8: Monday, September 14, 2009
1) 8:38 am
3) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
4) many eyes
5) Esther Dyson
6 do a demo
7 to be brief
8 percent similarity
9 some kind of no-op
10 nigerian person
11 people who aren’t related
12) my brother
13) my sister
14 23 chromosomes
16 fully identical
17 some of the same sequences
18 the ancestry painting
19 a typical pattern
20 what do you have that is useful?
21 getting more involved with your-self understanding
22 different genomic regions
25) gene sequencing
26 semi-private gene sequencing
27 much lumpier similarities
28 owning their own sets
29 if you’ve done anything interesting
30 a social network platform
31 your genome is the slowest changing thing
32 your genome is the slowest changing thing about you
33 a bunch of surveys
34 people who are benefactors
36 connecting to medical records
37 disease groups
38 going after disease groups
39 in a way that was meaningful
40 to have that data manipulated
41 people need to trust us
42 you can’t de-identify people
43 privacy nuts
44 ethnic person
45 unidentified person
46) talk: Esther Dyson: social networking: Sunday, June 6, 2004
47 that kind of data
48 a threaded narrative
49 mystical scariness
50 by the health system
51 genetic non-discrimination
52 getting them to pay
53 getting them to pay for what happened
54 getting insurance companies to pay for what happened
55 it’s not disclosed
56 if your behavior can change by knowing
talk: Joe Belesqua: passive quantification: QS #8: Monday, September 14, 2009
1) 8:53 am
3 iphone app
4 other dreams I have
5 things that are kind of boring
6 when I measure certain things
7 when I measure certain things, it changes my relationship with them
8 if I had less relationship to this process
9 if I had less relationship to the process
10 if I had less relationship to the process of record-keeping
11) a step function
12 a bed scale
13 a different step function
14) it knows
15 the bed knows
16 it knows what your weight is
17 lots of automatic data collection
18 automatic data collection
19 baby weighing scale
20 baby weighing scales
21 what force needs to be applied to vibrate the body
22 determining mass in low gravity
23 determining mass in zero gravity
24 what force needs to be applied
25 force needs to be applied
26 personal instantaneous feedback
27 greed for tools
28 unpleasant correlations
29 a trailing record
30 don’t suspend your bed with bungee cords
31) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
talk: Bo Adler: walking data self-guinea pig: QS #8: Monday, September 14, 2009
1) 9:09 am
3) meetup: Quantified Self #8: IFTF: Monday, September 14, 2009
7 data-driven health care
8 cots sensors
9 it goes red
10 you have to get used to wearing it
11 I’m a walking experiment
12 I’m not sure what you’re asking me
13 to synchronize them
14 to synchronize that
15 how accurate the devices are
16 when I’m all wired up
17 sleep apnea
18 could I tell?
19 I would do my own experiments
20 the apnea events
21 the twitter stream
22 I use a regular language
23 what you do with it
25 it doesn’t feel like it helps
26 graphs from the data
27 big variability
28 the first thing I learned
29 it felt like it made a big difference
34 it would relax the muscles
35 it’s kind of the same
36 what is the data saying?
37 almost the same
38 a sleep apnea person
39 the sleep apnea guy
40 check your neighborhood
41 circadian rhythm
42) circadian rhythms
43 the heart rates
44 the sleep apnea club
45 data tracking
46 painfully evil
47 the csv files
48 to grab the csv files
49 to grab the realtime data
51 dashboard-style software
52 to last about a week
54 the number of batteries
55 a green sensor
56 rechargeable batteries
57 [private: email]
58 Bo Adler: [private: email]
59 everybody’s got their own little systems here
60 I’ll try to link to it
61 finding time for everything is hard
These are the notes I took at the 8th Quantified Self Meetup, Monday
night, September 14, 2009. I wrote them on paper at the meetup, then
entered them into my e-memory system MX a day or two later. I hope
sharing them helps people re-encounter, and re-think, at least some of
the great ideas shared there.
For those interested, Gordon Bell and Jim Gemmell will be presenting
their new book ‘Total Recall: How the E-Memory Revolution Will Change
Everything’ at PARC Forum in Palo Alto, next Thursday, September 24,
2009, from 4 to 5pm. Details at: http://www.parc.com/event/948/
The “)” after a number marks an entry that was already in the system when it was added to a list.
Within each individual list here, 75% to 87% of the entries are new,
thoughts never entered before. Over all the lists in this set, 85% of
the entries are new.
Each entry on a list is itself a list, a link to the list of which
it is the title. It is tremendously important that every connection is
a cross-link. For example, #31 on the first list is shown as a list,
on which the first list is #3. I’ve also included, from the first list,
the lists numbered 32, 33, 34, 37, 38, and 39. These are all the talks
except mine and, unfortunately, Steve Brown’s, which came after.
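The cross-linking described above can be sketched in a few lines of code. This is only a minimal illustration of the idea, not the MX software itself (which is not public); the `Entry` class and its methods are hypothetical names. The key point it shows is that every link is bidirectional: adding list B as an entry of list A also records A as an entry of B, so each can be found from the other.

```python
# Minimal sketch of a cross-linked list system, where every entry on a
# list is itself a list. All names here (Entry, add, position_of) are
# illustrative assumptions, not the real MX system.

class Entry:
    """A titled list whose items are themselves Entry objects."""

    def __init__(self, title):
        self.title = title
        self.items = []  # ordered child entries

    def add(self, other):
        """Append `other` to this list and cross-link back."""
        self.items.append(other)
        other.items.append(self)  # the cross-link: `other` now lists `self`

    def position_of(self, other):
        """1-based position of `other` on this list, as in '#31'."""
        return self.items.index(other) + 1


meetup = Entry("meetup: Quantified Self #8: IFTF: Monday, September 14, 2009")
talk = Entry("talk: Bo Adler: walking data self-guinea pig")
meetup.add(talk)

# The talk appears on the meetup's list, and the meetup appears on the
# talk's list; following either link leads back to the other.
print(meetup.position_of(talk))   # 1
print(talk.position_of(meetup))   # 1
```

With real data, the positions would differ on each side (as with #31 and #3 above), but the principle is the same: the link and its back-link are created together, never one without the other.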
Please forgive that the software in which these lists are active is not yet ready to release. Thanks!
Deep mysteries of human nature will be exposed by self-tracking, aspects of our behavior so disconcerting and bizarre that they will lead us to question whether we understand ourselves at all. I know this is true because such disconcerting results are already being produced at a rapid pace by experimental psychologists, and self-tracking brings the methods of experimental psychology into our daily lives; if, that is, we think we can stand to learn the lessons they teach.
Watch this video, published alongside a story in New Scientist by Lars Hall and Petter Johansson.
[I]n an early study we showed our volunteers pairs of
pictures of faces and asked them to choose the most attractive. In some
trials, immediately after they made their choice, we asked people to
explain the reasons behind their choices.
Unknown to them, we sometimes used a double-card magic trick to covertly
exchange one face for the other so they ended up with the face they did
not choose. Common sense dictates that all of us would notice such a
big change in the outcome of a choice. But the result showed that in 75
per cent of the trials our participants were blind to the mismatch,
even offering “reasons” for their “choice”.
This is troubling enough, but there’s more. When people are fooled into thinking they made a different choice than the one they actually made, and then articulate their “reasons” for this supposed choice, they may then actually change their future preferences to conform to their confabulated preference.
Importantly, the effects of choice blindness go beyond snap judgments.
Depending on what our volunteers say in response to the mismatched
outcomes of choices (whether they give short or long explanations, give
numerical rating or labeling, and so on) we found this interaction
could change their future preferences to the extent that they come to
prefer the previously rejected alternative. This gives us a rare
glimpse into the complicated dynamics of self-feedback (“I chose this,
I publicly said so, therefore I must like it”), which we suspect lies
behind the formation of many everyday preferences.
Lars Hall and Petter Johansson lead the Choice Blindness Laboratory at Lund University, Sweden. At the end of their New Scientist piece, they suggest that learning about this experiment should make people better at understanding their own choices.
In everyday decision-making we do see ourselves as connoisseurs of our
selves, but like the wine buff or art critic, we often overstate what
we know. The good news is that this form of decision snobbery should
not be too difficult to treat. Indeed, after reading this article you
might already be cured.
Unfortunately, this is not convincing. It is common for biases to persist even when we are warned about them. I suspect we are in no position to stand guard over our judgments without the help of machines to keep us steady. Assuming, that is, that deliberative consistency is a value we care to protect.
In a hospital or a doctor’s office, you are usually asked what medications you have taken recently. Under conditions of distraction, I’m sometimes uncertain if I’ve given a correct answer. I think to myself: “Okay, if I’m a professional question-asker, and I’m doubting my memory in this situation, how often are incorrect responses given by people who have more prescriptions and less practice?”
We now know the answer.
A study by Dr. Stephen Persell at Northwestern University looked at 119 low income patients, average age 55, under treatment for high blood pressure. Nearly fifty percent were unable to name a single medication listed on their chart. The paper will be published in the November issue of the Journal of General Internal Medicine. (Press announcement here.)
Obviously, there should be a machine to remember this for us. But this is more easily said than done. Even when drugs are administered in a hospital setting, errors are common.
The solution many hospitals are beginning to use is to track all drug dispensing with bar codes. It’s a complicated, expensive method, but a paper published in the Archives of Internal Medicine last April reported that the money saved through the reduction in “adverse drug events” from implementing bar codes would recoup the investment within one year.
The question now: how can quality-of-care innovations like pharmacy bar codes give us better ability to know and control our own medical information?
Why not give patients a small booklet, similar to the booklets that Savings & Loan banks gave to depositors? Though these booklets were cheap to produce, their design and the way they were used suggested they were valuable. Customers would hand their booklet to the teller, who would fill in the deposit or withdrawal amount and the new balance.
I’m not suggesting that prescription records be kept only in little paper booklets. No – the booklet would be a patient interface to the electronic prescription records that doctors, hospitals, and pharmacies are beginning to put into a standard electronic form. At the end of a visit to the doctor, we hand our booklet to the receptionist. The receptionist prints out sticky labels and presses them into the booklet – one label per prescription, one page per visit. The label would show the drug name, dosage, and bar code. We could take this booklet with us to the pharmacy. If a generic was substituted, the pharmacist could print out a label and add it to the page.
This is not as elaborate as it sounds. The booklet would sit on top of bar code technology that is already being installed. It is just a matter of giving the information to the patient in a handy form: portable, well-organized, and standard. It would work for people of all ages and temperaments, not just for those who like to track their medications online. Because the prescription book is just an interface, it doesn’t matter too much if it gets lost. A new one can be easily made. But it would also function in situations where the online information was not available – for instance, a patient could read it to a nurse over the phone.
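The booklet-as-interface idea could be modeled with a very simple data structure: one page per visit, one label per prescription, each label carrying the drug, dosage, and the same bar code the dispensing system scans. The sketch below is purely illustrative; the class names (`Label`, `VisitPage`, `Booklet`) and the sample drug entry are my own assumptions, not any real e-prescribing schema.

```python
# Hypothetical data model for the prescription booklet described above:
# a patient-facing mirror of electronic prescription records. Because
# the booklet is just an interface, a lost one could be reprinted from
# the same underlying records.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class Label:
    drug: str
    dosage: str
    barcode: str  # same code the dispensing system scans


@dataclass
class VisitPage:
    visit_date: date                                  # one page per visit
    labels: list = field(default_factory=list)        # one label per prescription


@dataclass
class Booklet:
    patient: str
    pages: list = field(default_factory=list)

    def current_medications(self):
        """Flatten every page into one list a nurse could read back over the phone."""
        return [(label.drug, label.dosage)
                for page in self.pages
                for label in page.labels]


book = Booklet("A. Patient")
book.pages.append(
    VisitPage(date(2009, 9, 14),
              [Label("lisinopril", "10 mg daily", "123456789012")])  # sample entry, invented
)
print(book.current_medications())  # [('lisinopril', '10 mg daily')]
```

The pharmacist substituting a generic would simply append another `Label` to the current page, just as the paper version adds another sticky label.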
This may be wrong and I’d be happy to hear about why.