Matthew Cornell

Matt is a terminally-curious ex-NASA engineer and avid self-experimenter. His projects include developing the Think, Try, Learn philosophy, creating the Edison experimenter's journal, and writing at his blog, The Experiment-Driven Life. Give him a holler at matt@matthewcornell.org
Posts

Keeping motivated in your tracking

I recently received an email from someone having trouble keeping up with her experiment. While there is lots of general advice about discipline and motivation, this got me thinking about how doing personal experiments might differ. Following are a few brief thoughts, but I’d love to hear ways that you keep motivated in your quantified self work.

The desire to get an answer. The main point of an experiment is to get an answer to the initial question. “Will a Paleo diet help me manage my weight?” “Does talking less bring me closer to my kids?” Maybe the principle at play is that experiments which motivate start with great questions.

Built-in progress indicators. If you’ve set up your experiment well, you should have measures that come in regularly enough to keep you interested. This is assuming, of course, that you care about the results, i.e., that you’ve linked data and personal meaning (see below). But unlike other types of projects, maybe we can use the periodic arrival of measurements to stimulate our motivation, such as celebrating when new results appear.

The joy of satisfying a mental itch. Curiosity is a deep human motivation, and experiments have the potential of giving your brain a tasty shift – such as when you are surprised by a result. I especially like when a mental model of mine is challenged by a result. Well, sometimes I like it.

Sharing with like-minded collaborators. At a higher level of motivation, experimenting on yourself is an ideal framework for collaboration with folks who are either 1) interested in your particular topic (e.g., sleeping better or improving your marriage), or 2) living an experiment-driven life. It is encouraging to get together with people to share your work, and to receive support, feedback, and ideas. Of course it feels good to do the same for them.

Desire to make a change. Finally, if we come back to why we experiment, there should be a strong self-improvement component to what we are tracking. My argument is that, ultimately, it’s not about the data, but about making improvements in ourselves for the purpose of being happier. If the change you are trying is not clearly leading that direction, then it might make sense to drop it and try something more direct. Fortunately, with self-experimentation there is usually something new you can try.

Underlying all of these, however, is the fact that the work of experimentation takes energy. Every step of an experiment’s life-cycle involves effort, from thinking up what you’ll do (creating a useful design), through running the experiment (capturing and tracking data), to making sense of the results (e.g., the “brain sweat” of analysis). Given our crazy-busy lives, there are times when we simply can’t take on another responsibility. So if you find yourself flagging and losing interest in one of your self-experiments, then maybe that is itself some data. Thoughts?

[Image from Steve Harris]

Posted in Discussions | 2 Comments

What’s the oddest thing you’ve tracked?

We see a lot of cool things here that people are experimenting with, such as health (sleep, water intake, mood) or productivity (interruptions, hours/day, attention), but we are also trying odder things. My interest is in widening the definition of what could be considered an experiment, so I thought I’d ask, what off-the-wall things have you tracked? I’m also curious to know what kind of support or push back you got from those around you, if they were social experiments. While maybe not terribly odd, here are some of the things I’ve tried:

  • Experimented with ways to keep my feet warm while mountain biking in winter (tracked left/right foot comfort).
  • Tried changing my thinking around positive events (tracked the event and whether it helped me feel happier to relive it later).
  • Played with different ways to prevent “wintry mix” ice buildup on sidewalks (tracked likelihood of falling – with careful testing). (Are you detecting a northern climate?)
  • Tested different kinds of one-day contact lenses (tracked ease of insertion, visibility, and comfort).
  • Dressed better in public (normally I’m very casual), including wearing a hat (tracked psychological and physical comfort, reactions of others, including – surprise! – special treatment at businesses).

 

[Image: Office Board by John F. Peto]

Posted in Discussions, Personal Projects | 8 Comments

What makes a successful personal experiment?

As I continue trying to stretch the concept of experiment so that a wide audience understands how to apply a scientific method to life, I struggle with defining success. While the trite “You can always learn something” is true, I think we need more detail. At heart is the tension between the nature of experimentation’s trial-and-error process (I prefer the term Edisonian approach) – which means outcomes are unpredictable – and our need to feel satisfaction with our work. Here are a few thoughts.

Skillful discovery. Rather than being attached to a particular outcome, which we have limited control over, I’ve found it’s better to focus on becoming an expert discoverer and mastering the process of experimentation. Because you have complete control over what you observe and what you make of it, you are guaranteed success. Fortunately, there’s always room to develop your investigatory skills.

Fixing the game. At first it might seem contrived, but carefully choosing what you measure can help implement a scientific perspective on success. For example, instead of framing a diet experiment as “Did I lose weight?,” it is more productive to ask “How did my weight change?” The former is a binary measure (losing weight = success, not losing = failure) and one that you don’t necessarily have control over. After all, you are trying an experiment for the very reason that you don’t know how it will work out. The latter phrasing is better because it activates your curiosity and gives you some objectivity, what I call a “healthy sense of detachment.”

Improving models. As essentially irrational creatures, we run the risk of not questioning what we know. Updating our mental models of people, situations, and the world helps us to be more open to improvements. And the leading edge of that is the conflict between expectation (predicted outcome) and reality (actual results, AKA data). The quantified way to do that is by explicitly capturing our assumptions, testing them, taking in the results, and adjusting our thinking as necessary. This also leads to better predictions; from The Differences Between Innovation and Cooking Chili:

Of course, all of the experimental rigor imaginable cannot guarantee success. But it does guarantee that innovators learn as quickly as possible. Here, “learn” means something specific. It means making better predictions. As predictions get better, decisions get better, and you either fail early and cheap (a good outcome!) or you zero in quickly on something that works.

Getting answers. Another way to guarantee success is by going into an experiment with clearly formulated questions that your results will answer. Structured correctly, you know you will get answers to them. I think of it as regardless of what happens, you have found something out. (Hmm – maybe thinking of the process as active discovery is a richer concept than the generic “you learned something.”)

Designing for surprise. If the product of your experiment was not very surprising, then maybe you should question your choice of what you tried. Exciting experiments probe the unknown, which ideally means novelty is in store. Fill in the blank: “If you’re not surprised at the end of your experiment, then __.”

Zeroing in. Because we usually dream up experiments with a goal in mind, chances are we come out the other end having moved some amount in the direction of attaining that goal. Progress is a success, so give yourself a pat on the back.

Taking action. Finally, each experiment is a manifestation of personal empowerment, which is a major success factor in life. While health comes to mind (do difficult patients have better results?), I think generally the more we take charge of our lives, the closer we get to happiness.

What do you think?

 

[Image from lincolnblues]


Posted in Discussions | 4 Comments

Personal Development, Self-Experiments, and the Future of Search

We experiment on ourselves and track the results to improve the way we work, our health, and our personal lives. This rational approach is essential because there are few guarantees that what works for others will work for us. Take the category of sleep, for example. Of the hundreds of tinctures and techniques available, clearly not all help everyone, or there would be exactly one title in the sleep section of your bookstore, called “Sleep,” and no one could argue about its effectiveness. Treating these improvements experimentally, however, requires a major shift in thinking.

But being human isn’t that simple. There are variables and confounding factors that mean you have to take matters actively into your hands if you want to really know what’s personally effectual. That’s why what we do here is so exciting. Instead of accepting common sense, we take a “prove it to me” approach and work to find out for ourselves. Operating from this basis, rather than faith, is more effective in the long run. (It’s why we use science to understand the world, rather than astrology or phrenology, for example. Just look at what we’ve accomplished.)

As I tried to say in Making citizen scientists, this is heralding a move from citizens-as-helpers to true citizen scientists – people who get genuinely curious about something and decide to test things out for themselves, rather than simply trusting what others say will work. If we expand that vision five or ten years in the future, I think there could be a major shift in how we search for ways to improve ourselves, and that’s what I want to share here.


Posted in Discussions | 10 Comments

Micro Experiments

What’s the smallest thing you’ve tracked that had a short turnaround time but generated useful results? I’ve noticed that the kinds of things we try here in the Quantified Self community are often longer-term experiments that seem to be a week or two long at a minimum. I think this is primarily because the effects of what we try need time to emerge. (This brings up the issue of how much value there is in investigating subtle results, which came up at our recent Boston QS Meetup – recap here.)

However, as I work to adopt an experimental mindset about life, I’ve noticed these efforts can vary in scope, duration, and complexity. Because interesting things happen at extremes, I’ve been exploring the very smallest class of activity, what I call micro experiments. I’ve found that trying little things like these is a great way to test-drive treating things as experiments, and maybe offer the chance for non-QS’ers to dip their toes in the idea of tracking on a tiny scale. (Of course you shouldn’t risk shortening your life over any of them.) Researching the idea didn’t turn up much, though Micro-Experiments and Evolution was stimulating.

Here are some examples I’ve tried and their results. Are they true experiments? Are they useful? I’m curious to know what you think.

Jing: I tried using Jing, a free tool for doing short screencasts, to explain a bug I found in my site. I usually write them up, but because it was complex, it would have taken a lot to explain it. Instead I created a four-minute screencast, emailed the link to my developer, and measured the results. Conclusion: Worked great! Time to record: 4 minutes. His understanding of the problem: High. Enjoyment level of trying a new tool: Fun.

Testing expectations: Left unchecked, I tend to be pessimistic and anxious, which I continue working to improve. Here’s a technique I stumbled on that works well in micro experiment form. The idea is to treat your expectations as a model, make your assumptions and predictions explicit, then put them to the test. I applied it to two difficult phone calls I had scheduled, and found that my expectations were way off. In one case I was asking a fellow writer for a favor (mentioning an ebook I created), and instead of turning me down (my working model), he was happy to help. The other was a sales call in my last career to a prospective client, which I expected to go swimmingly. Instead it was a disaster! After analyzing what happened and comparing it to my model, I formed a couple of new ideas on how to do future ones. Surprisingly, the minute I thought of these as experiments and wrote down my expectations, I felt immediate relief before the calls.
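For the quantitatively inclined, the expectation-testing idea can be sketched as a tiny prediction journal: attach a probability to each expectation, then score it against what actually happened. This is a minimal illustrative sketch, not part of the original technique; the journal entries and the choice of scoring rule (a simple Brier score) are my assumptions.

```python
# Hypothetical prediction journal: (claim, forecast probability, what happened).
# Entries are invented to mirror the two phone calls described above.
predictions = [
    ("writer will turn down my favor request", 0.8, False),
    ("sales call will go smoothly",            0.9, False),
]

def brier(prob: float, happened: bool) -> float:
    """Squared error between forecast and outcome: 0.0 is a perfect
    prediction, 1.0 is maximally wrong."""
    return (prob - (1.0 if happened else 0.0)) ** 2

for claim, prob, happened in predictions:
    # A high score flags an expectation (mental model) that needs updating.
    print(f"{claim}: Brier score {brier(prob, happened):.2f}")
```

Over time, a falling average score is itself a trackable measure of how well-calibrated your working models are.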

Pay for someone’s parking: As a touchy-feely micro experiment, I was standing in line to pay for parking at a garage, and on a lark I decided to pay the next person’s fee (it’s almost always $0.50). I didn’t know how they’d react (find it odd and refuse, for example), but the result: Evident happiness level of subject: High (I got a nice smile). My feeling: Walked away with a lighter step.

Disabling email: I continue to struggle keeping email from sucking my time and attention, so I tried disabling my email program for a day. This email vacation was helpful, but surprisingly uncomfortable. Not being able to monitor it clearly indicated a bit of an addiction. I didn’t end up adopting it.

Decisions and glue: I sometimes stress about getting something new perfect the first time. Yes it’s unrealistic, but that’s the brain I’m stuck with. Treating the decision as a micro experiment helps me enjoy things more. For example, I had to repair two broken lawn chairs at home, and couldn’t decide which of two glues to try. Then I realized this was a natural parallel type of experiment, and tried them both, one per chair. Result: Gorilla glue worked far better than the GOOP. Trivial? Maybe, but next time I don’t have to wonder.

Not eating before exercise: Eating breakfast is commonly considered important, so I wondered what would happen if I skipped eating all morning and then went mountain biking at 1pm for an hour. Result: My performance was just fine, but I was hungry afterwards! Now I don’t worry so much if I’m pressed for time.

Getting a bank fee waived: My wife needed a document notarized, so I brought her to the mega-bank where I was forced to do business for a time. The teller said she couldn’t notarize it because my wife wasn’t listed on my account. In a bold (for me) move I did a social experiment by asking for the manager, who ended up OK’ing it, no problem. I was a little embarrassed until I thought of it experimentally.

Chocolate skin, cranberry sauce: There are lots of ways to experiment in the kitchen; here are two micro experiments I tried. First, I drink hot chocolate every morning (melt the expensive dark stuff into milk) and it sometimes develops a skin on top. (Hey – I discovered pudding!) To avoid that, I tried putting the heat on high and stirring constantly, instead of my usual medium heat with less stirring. The question was whether heat/time would affect skin forming. Result: ~50% reduction. As a second example, we had some leftover cranberries (I live in New England) and I wanted to make a sauce, but I was too lazy to follow a time-consuming recipe. Instead I microwaved a handful of them in a bowl with a little orange juice and honey. Result: An explosion of flavor. (Literally – it blew up while cooking.) Edibility was marginal.

[Image from windsordi]


Posted in Discussions | 4 Comments

Quantified Self Boston Meetup #5, The Science of Sleep: Recap

QS Boston Meetup #5 was held on Wednesday on the topic “The Science of Sleep,” a subject that comes up here regularly. The event was a major success and, to my mind, powerfully demonstrated the potential of the self-experimentation movement and the exceptional people making it happen. Here is a brief recap of the evening, with my comments on what was discussed. A big thanks to Zeo for their generous support of the meeting, to QS Boston leader Michael Nagle, and to sprout for hosting the event.

Experiment-in-action: A participatory Zeo sleep trial

Michael put the theme into action uniquely by arranging for a free 30-day trial of Zeo sleep sensors to any members who were interested in experimenting with it and willing to give a short presentation about their results. Over a dozen people participated, and the talks were a treat that stimulated lots of discussion. I thought this was an excellent use of the impressive members of this community, as the talks demonstrated.

Steve Fabregas

Zeo research scientist Steve Fabregas kicked off the meetup by explaining the complex mechanisms of sleep, and the challenges of creating a consumer tool that balances invasiveness, fidelity, and ease of use. He talked about Zeo’s initial focus (managing sleep inertia by waking you up strategically), which – in prime startup fashion – developed into the final product. Steve also gave a rundown of the device’s performance, including the neural network-based algorithm that infers sleep states from the noisy raw data, something he said even humans have trouble with. There were lots of questions afterward, including about their API and variations in data based on age and gender. All in all, a great talk.

Sanjiv Shah

Sanjiv started out the sleep trial presentations with a lively talk about the many experiments he’s done to improve his sleep, including a pitch-black room, ear plugs, and no alcohol or caffeine. But the biggest surprise (to him and us) was his discovery of how a particular color of yellow glasses, worn three hours before bed, helped his sleep dramatically. This is apparently based on research into the sleep-disturbing frequencies of artificial light. He shared how wearing these also helped reduce jet lag. The talk was a hit, with folks clamoring to know where to get the glasses. I found this page helpful in understanding the science. (An aside: If you’re interested in trying these out in a group experiment, please let me know. I am definitely going to test them.)

Adriel Irons

Adriel studied the impact of weather on his sleep (via the Zeo’s calculated ZQ) by recording things like temperature, dew point, and air pressure. He concluded that there’s a possible connection between sleep and changes in those measures, but he said he needs more time and data. Audience questions were about measuring inside vs. outside conditions, sunrise and sunset times, and cloudiness.

Susan Putnins

Susan tested the effect of colored lights (green and purple) on sleep. Her conclusion was that there was no impact. As a surprise, though, she made a discovery about the side effects of a particular medication: none! This is a fine example of what I call the serendipity of experimentation.

Eric Smith

Eric tried a novel application of the Zeo: Testing it during the day. His surprise: The device mistakenly thought he was asleep a good portion of his day. He got chuckles reflecting on Matrix-like metaphysical implications, such as “Am I really awake?” and “Am I a bizarre case?” His results kicked off a useful discussion about the Zeo’s algorithms and the difficulty of inferring state. Essentially, the device’s programming is trained on a particular set of individuals’ data, and is designed to be used at night. Fortunately, the consensus was that Eric is not abnormal.

Jacqueline Thong

Jacqueline finished up the participatory talks with her experiment to test whether she can sleep anywhere. Her baseline was two weeks sleeping in her bed, followed by couch then floor sleep. Her conclusion was that her sleep venue didn’t seem to matter. One reason I liked Jacqueline’s experiment is that, as with many experiments, the surprises are so rich and satisfying. Think bread mould. She said more data was needed, along with more controls. Sadly, she wondered whether her expensive mattress was worth it. Look for it on eBay.

Matt Bianchi

Matt Bianchi, a sleep doctor at Mass General, finished out the meetup with a discussion of the science and practice of researching sleep. Pictures and a description of what a sleep lab is like brought home the point that what is measured there is not “normal” sleep: 40 minutes of setup and attaching electrodes, 200′ of wires, and constant video and audio monitoring make for a novel $2,000 night. He said these labs give valuable information about disorders like sleep apnea, and at the same time, what matters at the end of the day is finding something that works for individuals. Given the multitude of contributing factors (he listed over a dozen, like medications, health, stress, anxiety, caffeine, exercise, sex, and light), trying things out for yourself is crucial. He also talked about the difficulties of measuring sleep, for example the unreliability of self-reported information. This made me wonder about the limitations of what we can realistically monitor about ourselves. Clearly tools like Zeo can play an important role. Questions to him included how to be awake more (a member said “I’m willing to be tired, but not to die sooner”), to which he replied that the number of hours of sleep each of us needs varies widely. (The eight hour guideline is apparently “junk.”)

Matt’s talk brought up a discussion around the relative value of exploring small effects. The thought is that we should look for simple changes that have big results, i.e., the low hanging fruit. A heuristic suggested was if, after 5-10 days, you’re not seeing a result, then move on to something else. A related rule might be that the more subtle the data, the more data points you need. I’d love to have a discussion about that idea, because some things require more time to manifest. (I explored some of this in my post Designing good experiments: Some mistakes and lessons.)
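As a back-of-the-envelope illustration of why subtle effects demand more data points (my sketch, not something from the talk): day-to-day noise shrinks with the square root of the number of measurements, so halving the effect size you want to detect roughly quadruples the data you need. The function and numbers below are assumptions for illustration.

```python
import math

def days_needed(effect_size: float, daily_sd: float, z: float = 2.0) -> int:
    """Rough rule of thumb: how many daily measurements before an average
    change of `effect_size` stands out from noise with standard deviation
    `daily_sd` (z ~= 2 for roughly 95% confidence). Derived from requiring
    effect_size > z * daily_sd / sqrt(n)."""
    return math.ceil((z * daily_sd / effect_size) ** 2)

# A big effect relative to the noise shows up fast...
print(days_needed(effect_size=10, daily_sd=10))  # -> 4
# ...while a subtle one needs far more data points.
print(days_needed(effect_size=2, daily_sd=10))   # -> 100
```

Which fits the 5-10 day heuristic: anything detectable that quickly is, by definition, a big effect relative to the noise.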

Finally, Matt highlighted the importance of self-experimentation. The point was that large trials result in learning what works for groups of people, but the ultimate test is what works for us individually. (He called this “individualizing medicine.”) This struck a chord in me, because the enormous potential of personal experimenting is exactly what’s so exciting about the work we’re all doing here. All in all, a great meetup.

[Image courtesy of Keith Simmons]


Posted in Meeting Recaps | 18 Comments

How I wasted two years on Twitter, all because I wasn’t tracking

Between 2007 and 2009 I spent a ton of time on Twitter before it finally hit me that 1) the net improvement to my life was zilch, and 2) had I thought of it going in as an experiment, I would have quit a long time ago and freed up energy for more effective efforts. Of course social media tools can provide plenty of value, but, as Alex said, Social media is an addictive time suck.

How do we go about measuring the value of Twitter? Business calls it ROI, but I think of it as simply what you hope to get out of it. The key is deciding why you’re using it. In my case I was dabbling, which is a fine motivation, as long as it’s done experimentally. After all, how many discoveries came from just getting curious and trying out something new? But here I should have set a time limit, and I’d still want to have something quantified, even if it’s as soft as “perceived value.”

But for more specific uses, coming up with measures is important. Are you trying to get more customers? Do you want to hear from people who can give you ideas for your product or book? Or maybe it’s more of a social pulse use – keeping in touch. Some metrics are straightforward, such as # inquiries about your business, or number of tweets from others that made you smile. However, I think a major challenge is latency – the time delay between action on your part and resulting effects seen in your life. For example, it might be months before you hear from someone who’s been silently reading your tweets. Maybe in those cases we could make the measure more direct by asking them explicitly what the impact is. I’m not sure.

While I didn’t treat using Twitter as an experiment per se, I managed a few times to use Twitter itself as a platform for experimentation.

Posted in Personal Projects | 4 Comments

Just do it? But HOW? 24 productivity experiments I tried, plus a QS time management recap

Some time ago I was asked for the ultimate productivity tip, and instead of giving a straightforward take-away, I said that in the end the answer is “it depends.” That wasn’t a cheap shot because what works for you might not work for the next guy, and vice versa. Sound familiar? It’s the same case for medications, meditation, and most anything else we humans do. That’s why it’s best to experiment, examine your results, and decide based on the data. In other words, quantify!

But there’s a complication. Coming up with metrics that reflect the value of what we do, rather than the individual efforts, can be a challenge. While the latter are simpler to measure (there’s a reason that some jobs require you to clock in – “seat time” is an easy metric), the real test is how effective we are, not just how efficient. I may be cranking widgets at a fast pace, but what if I’m making the wrong ones?

Until we have a general-purpose, quantified framework for measuring value (“accomplishment units?”), we have to keep being creative. In this long post I want to seed some discussion by sharing two things: some specific productivity experiments I’ve tried, with their results, and a recap of the cool productivity experiments found here on Quantified Self. Please share techniques that you’ve found helpful.

Productivity experiments I’ve tried

Adopt a system. The single biggest productivity change I made was trying a system for organizing my work. In my case I got the GTD fever (Getting Things Done), and my results were clear, including getting far more done more efficiently, feeling more in control, and freeing up brainpower for the big picture. At the time (five years ago) I wasn’t thinking of it in terms of an experiment, but it certainly qualified. From a QS perspective it can function as a kind of tracking platform because it has you keep a comprehensive and current list of tasks (Allen calls them “actions”). I have used them for various tracking activities, mainly by characterizing or counting them.

Two-by-two charting. I’ve plotted 2D graphs of various task dimensions to analyze my state of affairs, such as importance vs. fun (a sample is here). These are a kind of concrete snapshot that I analyze over time. In the above example I decided that the upper right quadrant (vital + fun) was still a little sparse.
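For the programmatically inclined, the quadrant bucketing behind such a chart can be sketched in a few lines. The task names, scores, and 1-10 scale below are invented for illustration; the actual charts were hand-plotted graphs.

```python
# Hypothetical task list, each scored 1-10 on two dimensions.
tasks = {
    "file taxes":         (9, 2),  # (importance, fun)
    "write blog post":    (7, 8),
    "reorganize closet":  (3, 3),
    "try new bike trail": (2, 9),
}

def quadrant(importance: int, fun: int, midpoint: int = 5) -> str:
    """Bucket a task into one of the four quadrants of an
    importance-vs-fun chart, split at `midpoint`."""
    vital = "vital" if importance > midpoint else "optional"
    enjoy = "fun" if fun > midpoint else "dull"
    return f"{vital} + {enjoy}"

for name, (imp, fun) in tasks.items():
    print(f"{name:20s} -> {quadrant(imp, fun)}")
```

Counting how many tasks land in each bucket over successive snapshots gives a simple trackable number, e.g., whether the “vital + fun” quadrant is filling up or staying sparse.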


Posted in Discussions | 6 Comments

The Big Bucket Personal Informatics Data Model

Seth’s post on Personal Science (especially about “data exhaust” [1]) got me thinking about big data and the implications for the self-tracking work we do. What evidence is there that big data will infiltrate self-experimenting? Under what conditions will self-tracking move from “small data”, or “data poor” (a few hundred or a few thousand data points) to “big data” or “data rich” (terminology from The Coming Data Deluge)? Let me share some thoughts and get yours.

First, what does “big data” mean [2]? From Wikipedia:

Big data are datasets that grow so large that they become awkward to work with using on-hand database management tools. Difficulties include capture, storage, search, sharing, analytics, and visualizing.

This identifies an important problem. While it is natural to throw all our personal data into one big database, there are costs associated with doing so. I don’t mean those associated with capture (clearly we will solve the technical and cultural challenges), but the costs in sensemaking – turning data into actionable wisdom. Let’s put the problem into context and assume the future for personal science looks something like this (help me here):


Posted in Discussions | 7 Comments

12 Myths about Self-Tracking

(Let me get a little provocative this time around and share some myths of self-tracking I’ve been playing with. I’d love to hear your thoughts about these and any other myths you might know about.)

Myth: You have to use technology.
Fact: A good guideline is to use a tool that’s appropriate for the job. I know people who get good results using spreadsheets, and paper has some wonderful affordances. (Read Malcolm Gladwell’s The Social Life of Paper for a fascinating analysis of air-traffic controllers’ paper-based system.) Then again, with large sets of data, visualization tools are invaluable.

Myth: Not everything can be measured.
Fact: I suggest that, with a little (or maybe a lot) of creativity, you can come up with something you can measure for any experiment. Check out Alex’s post, How To Measure Anything, Even Intangibles. (Bonus: Do you have any that are giving you trouble? Let’s play “stump the blogger!”)

Myth: You have to be a scientist.
Fact: While it probably helps to have a background in science, and better yet one in statistics, we can still do valuable work with rudimentary skills, provided we design strong experiments that can teach us something.


Posted in Discussions | 9 Comments