Topic Archives: Toolmaker Talks

Toolmaker Talk: Sampo Karjalainen (Moves)

Today we are happy to bring you another interview in our Toolmaker Talk series. We had the great pleasure of speaking with Sampo Karjalainen, the designer and founder of Moves. Over fifty percent of U.S. adults have a smartphone. That’s a lot of people walking around with a multi-sensored computer in their pockets. Moves is another example of how developers and designers are focusing on the smartphone as a Quantified Self tracking and experience tool. This is an exciting space, and one we intend to keep a close eye on moving forward.

Watch our conversation or listen to the audio (iTunes podcast link coming soon!) then read below to learn more about Sampo and the Moves app.


How do you describe Moves? What is it?
Moves is an effortless activity tracker. It’s a bit like Fitbit or Jawbone UP, but in your smartphone. There’s no need to buy, charge and carry one more device. In addition to steps and active minutes, the app also automatically recognizes activity type: walking, running, cycling or driving. It also shows routes and places and builds ‘a storyline’ of your day. It helps you remember your days and see which parts of your day contribute to your physical activity. It’s a simple, beautiful app that hasn’t existed before.

What’s the backstory? What led to it?
We started Moves to motivate us to move more. Aapo Kyrola was doing his Ph.D. at Carnegie Mellon University, working hard, gaining weight and lacking the motivation to exercise. We began discussing how to motivate people like Aapo to move more. The first prototype used game motivations: we had badges, leaderboards and a virtual pet to motivate people. The problem was that people still had to remember to start and stop tracking. We quickly learnt that they didn’t remember to use it for everyday walks. That made us think that maybe we could make it work continuously in the background. It took plenty of R&D to find a way to minimize battery use while still collecting enough data to recognize activity types and places correctly.

What impact has it had? What have you heard from users?
We’re seeing that when you make activity visible, people start to think about it. And when they think about it, they start to make small changes in their lives. They may park their car a bit further away or consider biking instead of driving. They may choose to walk just to get some steps and take a break from everyday hurries. It also helps people see how long it takes to travel between places and how much time they actually spend in different places.

What makes it different, sets it apart?
Other phone-based trackers are good for tracking one run or one biking event. Moves is made to track all-day activity. Compared to activity gadgets, Moves recognizes activities by type, recognizes places and shows routes. It’s collecting a new type of dataset that hasn’t been available before. And best of all, we now have a public API, so you can use your data as you like!
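For readers curious what working with an all-day activity dataset might look like, here is a minimal sketch that tallies steps from a Moves-style daily summary. The field names and JSON structure below are illustrative assumptions for this example, not the documented Moves API schema:

```python
import json

# A Moves-style daily summary: one entry per day, each listing activity
# segments by type. These field names are our illustrative guesses, not
# a guaranteed match for the real Moves API response.
SAMPLE = json.loads("""
[
  {"date": "20130401",
   "summary": [
     {"activity": "wlk", "steps": 6500, "duration": 3900},
     {"activity": "cyc", "steps": 0,    "duration": 1200}
   ]},
  {"date": "20130402",
   "summary": [
     {"activity": "wlk", "steps": 9100, "duration": 5400}
   ]}
]
""")

def total_steps(days):
    """Sum step counts across every activity segment of every day."""
    return sum(seg.get("steps", 0)
               for day in days
               for seg in day.get("summary", []))

print(total_steps(SAMPLE))  # 15600
```

A real client would fetch this payload over the API with an OAuth token, but the aggregation step would look much the same.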

What are you doing next? How do you see Moves evolving?
Currently we’re busy with the Android version of Moves and adding some features to the iPhone version. Over time we see that Moves will become a tool to understand not only your physical activity, but also your use of time, travels – your life in general.

Anything else you’d like to say?
Moves is collecting your location in time and space. It’s a great ‘backbone’ for connecting all kinds of other data. We’re excited to see what type of visualizations and mashups people create!

Product: Moves
Website: moves-app.com
Price: Free

This is the 20th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a toolmaker and want to participate in this series please contact Ernesto Ramirez.

Posted in Toolmaker Talks, TTPodcast

Toolmaker Talk: Charles Wang (LUMOback)

Stand up. Sit down. Walk. Run. Sleep. We engage in these activities every day (well, maybe not running), but how much do we know about ourselves and our bodies while we’re in the midst of them? Are you standing up straight? Are you slouching at your desk while you read this sentence? In this Toolmaker Talk we’re going to hear from Charles Wang, one of the founders of LUMOback – a posture sensor and mobile application designed to support back health and improve body awareness.

Watch (or listen to) our conversation with Charles below then make sure to read our short interview to learn more about the story behind LUMOback.

Q: How do you describe Lumoback? What is it?

LUMOback is a posture and movement sensor that you wear around your waist.  It gives you real-time feedback in the form of a vibration when you are slouching, both when you are standing and sitting.  It also connects wirelessly to a mobile application, which tracks whether you have been straight or slouching, in addition to sitting, standing, walking, running, and sleep positions.

One key feature of our mobile application is LUMO, the real-time avatar.  LUMO mimics what you are doing in real time, giving you visual feedback so that you can understand and be aware of what position your body is in.

Q: What’s the backstory? What led to it?

Andrew Chang, Monisha Perkash, and myself were funded by Eric Schmidt’s Innovation Endeavors to find a big problem to solve, and build a growth business around it.  We didn’t have to look very far to find the right opportunity.

Andrew, one of the cofounders, has had chronic lower back pain for the past 11 years, and nothing really seemed to help him.  He went to physicians, physical therapists, chiropractors, and tried acupuncture and other minor procedures.  It wasn’t until he learned about postural correction by taking a set of posture classes that he started to understand how critical posture was in alleviating his back pain.  In fact, once he began paying attention to his posture, his back pain significantly improved.

We as humans were designed through evolution to move, but now we spend most of our time sitting, and in most cases, sitting poorly.  This means that improving posture and encouraging more activity can have a significant impact on people’s health and wellbeing.  Studies show that back health and posture are correlated, as are posture and confidence/attractiveness.  It’s no wonder that physical therapists, chiropractors, and spine physicians stress the importance of posture.

The challenge of improving posture is twofold:  1) Most people have very little body awareness, let alone understanding their sitting and standing postures, and 2) Most people don’t have the resources or the time to take posture classes.  This is where we realized that we could use technology to solve this problem, so we started prototyping and iterating, and this is what led us to create LUMOback.

Q: What impact has it had? What have you heard from users?

Users tell us that LUMOback has changed their lives, and that their back pain has either gone away or been significantly reduced through using the product.  People also frequently tell us that they are now very aware of their slouchy posture, which leads to posture correction, and again, awareness is the key element involved in making postural changes.

Q: What makes it different, sets it apart?

In addition to telling people whether their posture is straight or slouched, we can tell them whether they are sitting, standing, walking, running, and their sleep positions.  The ability to differentiate between sitting and other activities is a clear differentiator for what we do.

LUMOback iOS application

Q: What are you doing next? How do you see Lumoback evolving?

We are constantly making improvements to LUMOback, from the application experience to the accuracy of our ability to detect different biomechanical states.  We pride ourselves on being open to feedback and are constantly trying to improve and iterate on our product based on what our users tell us.  This is the most exciting part — truly solving problems and needs that people have.

Q: Anything else you’d like to say?

We really are at a point in time now where mobile technologies will help us to solve challenging health problems in ways we couldn’t have imagined even several years ago.  This is what gets the LUMO team super excited!

Product: LUMOback
Website: www.lumoback.com
Price: $149

This is the 19th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a toolmaker and want to participate in this series please contact Ernesto Ramirez.


Toolmaker Talks: Bastian Greshake (openSNP)

We talk frequently here on the QS website about tools, methods, and systems that help us understand ourselves. When it comes to the self, there may be nothing more fundamental to understanding our objective selves than our basic genetic makeup. Many of you have probably undergone, or have thought of using, Direct-To-Consumer genetic testing to better understand your phenotypes, disease risk, or even your ancestry. That’s all great, and I’ve spent a lot of valuable time combing through my own genetic data, but as with most data, the true power lies in large datasets that provide observations across many individuals. So how do you participate in that type of sharing and learning? Enter the team at openSNP.org. Today we talk with Bastian Greshake, one of the developers behind the openSNP project.

How do you describe openSNP? What is it?

The too long, didn’t read version: an open platform that allows people to share their genetic information along with traits suspected to be at least partially genetically predisposed, and that tries to annotate those genetic variants with primary scientific literature. The data can be exported from openSNP through the website or through APIs, making it easy to re-use.

A longer version: openSNP has basically two target groups, and users may well fit into both categories.
First there are customers of Direct-To-Consumer (DTC) genetic testing like 23andMe who want to share their genetic information with the public for various reasons. They can use openSNP to release their genetic data into the public domain under the Creative Commons license which is applied to the data uploaded and entered in openSNP.
As genetic information alone is interesting but not very useful for analyzing the effect of genetic variants on bodily traits, those users can also enter information about traits which might be genetically influenced, and create new categories which all other users can then fill in. Those traits range from the more obvious ones, like eye and hair color, to more exotic ones like political ideology. A few weeks ago we also created a way for users to connect their Fitbit accounts to openSNP to make the collection of data easier and more standardized. The genetic effects on activity, sleep habits and weight loss/gain can be more easily analyzed in this fashion.

We also mine the databases of Mendeley, the Public Library of Science and SNPedia to annotate the genetic variants users carry. This allows customers of DTC testing to find out what the recent scientific literature is able to tell them about their genetic variants. While SNPedia is a crowd-curated wiki, Mendeley and the Public Library of Science link back to primary literature; in the latter case even to Open Access literature, whose full text is available to everyone.

The second group of users who are interested in openSNP are scientists and citizen scientists who want to use the data for their own studies, be it to figure out what genetics can tell us about our ancestry or which effects single variants have on disease risks or other traits. The data can be downloaded from openSNP in bulk, or accessed more granularly through a JSON API and the Distributed Annotation System, a standard in bioinformatics which, for example, is used to visualize the data.
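As a concrete illustration of the kind of raw data people share, here is a short sketch that parses the tab-separated format used by 23andMe-style exports (rsid, chromosome, position, genotype, with `#` comment lines). The sample rows are arbitrary, and real files run to hundreds of thousands of lines:

```python
# A tiny excerpt in the 23andMe-style raw-data format commonly uploaded
# to openSNP: tab-separated columns, '#' lines are comments.
RAW = """\
# rsid\tchromosome\tposition\tgenotype
rs4477212\t1\t72017\tAA
rs3094315\t1\t742429\tAG
rs9939609\t16\t52378028\tAT
"""

def parse_raw(text):
    """Map each SNP id to its (chromosome, position, genotype) tuple."""
    snps = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        rsid, chrom, pos, genotype = line.split("\t")
        snps[rsid] = (chrom, int(pos), genotype)
    return snps

snps = parse_raw(RAW)
print(snps["rs9939609"])  # ('16', 52378028, 'AT')
```

A citizen-science analysis tool would start from exactly this kind of dictionary before comparing genotypes across the shared data sets.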

Both groups can profit from the commenting features, which allow users to communicate about traits and individual genetic variants. The internal message system of openSNP also facilitates further communication, for example to share details about common traits and diseases, or to let people who want to use the data get back in touch with the people who uploaded it. The latter enables direct, bidirectional exchange between those two user groups: researchers can ask questions about traits, and people who have shared their data have a back channel as well and can get notified about the results researchers have found.

What’s the backstory? What led to it?

It more or less began with me getting my genetic information analyzed by 23andMe myself. After I received the results I published the data in a git repository on GitHub to make it available for others who might benefit from having more data. As I started to dig deeper into my own results and the raw data, I wanted to have more data sets myself, to be able to compare the results. But unfortunately there wasn’t a single resource for such data. Some people had also published their data on GitHub, others on their own websites, collected publicly available data sets in a Google Spreadsheet, or participated in projects like the Personal Genome Project.

This was quite frustrating: finding the data was hard, and most often there was no additional data about traits attached. And more often than one would expect, there was also no way to contact the people who made the data public. So the idea to create a platform to solve this problem grew, and I contacted some friends to see if they were interested in building such a platform, just for fun. We started out with the basic idea of creating a platform where people could upload their genetic data along with some traits they have. A couple of weeks after we started to work on the project we stumbled upon the APIs of Mendeley & the Public Library of Science and thought it might be cool to include additional data about the genetic variants as well. During development we came up with more and more features, like the openSNP APIs. All in all the project is still growing and we’re working on adding and refining features.

What impact has it had? What have you heard from users?

We submitted the first release of openSNP to the 2011 PLOS/Mendeley Binary Battle, a competition interested in creative ways to use their APIs, and won first prize. We also secured a small grant from the German Wikimedia Foundation, which allowed us to genotype over 20 people, mainly from underrepresented groups, to diversify the available data. Those people have now released their genetic data on openSNP as well. Right now we have over 250 genetic data sets on openSNP and just short of 600 registered users. Those numbers don’t sound too impressive in the age of one billion people on Facebook. But to put it into perspective: genetic testing is still a niche thing, and before openSNP was released there were about 40-50 of those data sets publicly available.

The feedback from our users has been very positive. Many users come up with new ideas for features they’d like to see added, and we are really open to those suggestions and critiques. Many of the API methods which are now implemented (and the whole Distributed Annotation System) are only in place because users let us know they wanted them. I know of users who are actively using openSNP to learn more about their test results and are in an active exchange with other users with similar traits. And while the amount of data we have so far doesn’t really allow scientifically sound studies, there are already people using the data: for example, some users run their self-written analysis tools over the openSNP data sets and report the results back to the users, which is amazing.

What makes it different, sets it apart?

Of course we’re not really the first to think of such an idea, but are more or less a remix. For example, 23andMe themselves use the data of consenting customers for studies. They also provide questionnaires about traits which users can take. But this data isn’t available to the public, due to (perfectly reasonable) concerns in terms of privacy, bio-ethics and liability. On the other hand there are projects like the Personal Genome Project, which publishes traits and genetic data of participants into the public domain. But for reasons similar to 23andMe’s, participation in that project isn’t open to everyone.

We feel that informed individuals should be in a position to share their data with the world, as they are already doing on their own websites, in an easy fashion. And of course we’re targeting a slightly different group: probably over 150,000 people are customers of some DTC genetic testing service; this is a huge potential data source which could be used to help us understand new and exciting things.

What are you doing next? How do you see openSNP evolving?

We’re still developing and refining openSNP. One of the biggest problems right now is the quality of the data for the additional traits. We have kept the process of adding data really open on purpose, to make it easy for people to provide additional information about themselves. Unfortunately this has the side-effect that the quality of the descriptions varies wildly. Those problems start off with regional idiosyncrasies: is it “Eye Color” or “Eye Colour”, and are you using the metric or the imperial system of units? And is your eye color blue or “Indeterminate brown-green with a subtle grey caste”? This granular data can be very useful, but for many applications it can be too specific. With the implementation of the Fitbit API we’ve taken a first step toward keeping data entry simple but unified at the same time. And we’re currently looking into other ways to counter problems like this.
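One way to approach the spelling problem Bastian describes is a canonicalization pass over free-text trait labels. This is a toy sketch of the idea, not openSNP’s actual implementation; the variant table is illustrative and would need to grow with real data:

```python
# Fold regional spelling variants and casing into one canonical key so
# "Eye Colour" and "eye color" land in the same trait bucket.
# This mapping is a hypothetical example, not openSNP's real logic.
SPELLING = {"colour": "color", "grey": "gray"}

def canonical(label):
    """Lowercase, trim, and normalize known spelling variants."""
    words = label.strip().lower().split()
    return " ".join(SPELLING.get(w, w) for w in words)

assert canonical("Eye Colour") == canonical("eye color")
print(canonical("Grey  Eyes"))  # gray eyes
```

Free-text values like “Indeterminate brown-green with a subtle grey caste” are harder; a table like this only handles the easy, mechanical part of the problem.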

We’re also looking into more data sources to annotate the genetic variants listed in openSNP, to provide even more information for customers of DTC testing. And we’re working on making our APIs more powerful. With the rOpenSci package there is already a great library which makes use of the APIs in their current state, but of course we would like to see more such libraries.

And it’s hard to say in which direction openSNP will evolve, as we are a bit dependent on the DTC genetic testing industry. More and more data, like Whole-Genome or Exome Sequencing, is being generated, and we are working on reflecting those changes on openSNP as well. And we’re open to any suggestions. So if you find that a feature is missing you should let us know; we will try to work out a way it might usefully be implemented.

Anything else you’d like to say?

First of all: we know genetic information is sensitive, and depending on where you are living there might not even be laws to protect you from discrimination based on your genes. Other countries, like the US with the Genetic Information Nondiscrimination Act (GINA), have some mechanisms against this, but even those might not offer total protection in the end. And you should also keep in mind that your genetic information gives away details not only about yourself, but by design also about your next of kin. I think this is really important. If you are thinking about publishing your genetic data, please keep those issues in mind. And if you come to the conclusion that this isn’t for you, because you have to fear negative repercussions or just have a gut feeling of not really wanting to publish the data: please don’t do it.

And what I also can’t stress enough is that openSNP is developed and run by a team of about four people, and we are all doing this in our spare time as a fun project and as community service, without compensation. Some of us have day jobs, others are still studying, and some even do both. So while we are doing our best to keep everything running, it might sometimes take a while. But if you feel like contributing to the project, please get in touch with us. We’d love to have more people in on this.

Product: openSNP
Website: www.opensnp.org
Price: Free

Author’s note: Data sharing, especially genetic data sharing, is a very sensitive topic in our community. I want to fully disclose my bias towards openness and sharing. I believe that our kindergarten teachers had it right when they taught us that sharing is one of the fundamental human traits we should all cultivate. To this end, I have participated in openSNP and you can view my genetic data here and my Fitbit data here.

This is the 18th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta or Ernesto Ramirez.

 


Toolmaker Talk: Jonathan Cohen (Expereal)

“How do you feel right now?” Such a short question can lead us toward profound insights into our lives. But how do we ask ourselves that question? How do we keep track of our answers? There are many different ideas out there about how to tackle this seemingly simple question. Many of them focus on mood, which we’ve covered in previous Toolmaker Talks (see our posts about Happiness and Mood Panda). We’re going to explore another idea in this week’s Toolmaker Talk with Jonathan Cohen, the man behind the new (and soon to be released) app, Expereal.


Q: How do you describe Expereal? What is it?

Jonathan: Expereal is a simple iPhone app that allows people to rate, analyze, share and compare their lives.  It was created to help people better understand their lives holistically, answering a most human question that cognitive biases can distort: “How’s my life going now relative to other time periods, friends and other users around the world?”  In order to arrive at an answer, the app requires active participation, which it prompts via push messages (which can be turned off), requiring users to consistently rate their lives over time.  Though it is unclear whether millions of users care to actively measure their lives, I went this route as a minimum viable product, because I was unconvinced of passive measurement’s efficacy, which crashes on the rocks of language interpretation and context.

Q: What’s the backstory? What led to it?

Jonathan: I read Daniel Kahneman’s book “Thinking, Fast and Slow,” which outlined the duality and inconsistencies in the experiencing and remembering selves.  I wondered if there might be some value in capturing the subjective opinion of the experiencing self over time to counterbalance the remembering self’s so-called “peak-end bias.”  What I found quite interesting was that the peak-end bias doesn’t only affect our view of past events; it also influences how we think of our lives holistically in the present tense.  Imagine walking out of a terrible meeting in which your boss publicly reprimanded you for incompetence, and someone asks how your life is going.  How would your answer be influenced?  Would it accurately reflect your perceptions across a wider swath of time?  It is unlikely, as the preceding moment would act as an “anchor” in assessing your present-moment life.

From what I can discern, Kahneman doesn’t necessarily argue that the remembering self is “wrong” per se; he merely illuminates that it inaccurately captures the experiencing self.  In his book and TED talk, he slyly asks the rhetorical question: if we had to plan a vacation, would we plan it to satisfy our experiencing or remembering selves? In any case, I thought that it would be valuable to have a more holistic perspective on my life that offered an alternate, longitudinal vantage point than what the ever-present peak-end bias might offer.  Furthermore, I hoped that such information might help me “know myself better” and potentially make better decisions.  I then wondered if others had similar questions and desires.

Q: What impact has it had? What have you heard from users?

Jonathan: The app is in alpha testing.  I have received a range of feedback – some quite positive (about the design and the app’s social nature) and some quite negative (“It’s not very useful for me.  It takes a lot for me to really think about my mood, not just a 1-10 rating.” as well as “What exactly am I rating 1-10?”)  The strongest critique, which strikes at the app’s very viability as a product and business, is that most people are not really that interested in measuring themselves, particularly actively over time.  Consequently, Expereal needs to offer something immediate and compelling to encourage people to interact with the app.  What’s the immediate feedback that makes it both useful and “sticky”?

Expereal screen shot 1   Expereal screen shot 2   Expereal screen shot 3

Q: What makes it different, sets it apart?

Jonathan: Simplicity: I created the initial capture mechanism to be dead simple: “How’s your life going right now? 1-10.”  If the user has to think about it, he’s overthinking.  It wasn’t intended to measure “mood”, though it could be used as such.  The Capture Details screen is totally optional, allowing additional information to be ascribed to a rating.

Aesthetics: Expereal was designed to look different from other apps, which is not so simple given that there are several hundred thousand of them.  I was most inspired by the LACMA exhibition catalog “Living in a Modern Way: California Design 1930-1965” and, to a slightly lesser extent, Edward Tufte’s data visualization books.  I also respect the work of numerous “quantified selfers”, “data visualizationalists” and artists, including Jonathan Harris, Nicholas Felton, Jer Thorp, Jan Willem Tulp and countless others – many of whom consistently speak at the Eyeo Festival.

Expereal screen shot 4   Expereal screen shot 5   Expereal screen shot 6

Social: I initially wanted to make the app solitary, because I was concerned that sharing one’s Expereal Ratings with friends would skew results, with users rating their lives only when they were going well.  I ended up taking a middle course: one can optionally share an Expereal Rating to Facebook, and one’s ratings and descriptions are used in anonymous aggregates.  It could become more social depending on audience demand, but I want Expereal to remain true to its core of helping users better understand their lives.  It’s not meant to be another social network or to replicate Facebook, Path or Twitter, which could all be future partners.

Q: What are you doing next? How do you see Expereal evolving?

Jonathan: Expereal should be available in the app store in November.  I have numerous ideas and dreams, but it will ultimately depend on user interest.  Again, the core challenge is giving people who haven’t shown interest in active measurement inspiration to continually engage.  I suspect that for most potential users, the social component will be a greater driver of interest and usage than advanced personal analytics, but am happy to be proven wrong and will adapt accordingly.

Q: Anything else you’d like to say?

Jonathan: Going from an idea to an app is an incredible challenge, yet even after it “ships”, it feels like the beginning of infinity.  There are just so many possible permutations and extensions of what might happen.  In another chapter of “Thinking,” Kahneman wonders why so many people start businesses without considering the terrible odds against succeeding.  Right now, without question, I feel that it’s been a worthwhile endeavor.  I’d give my life right now a ‘9’, describing it as “rewarding”, “exciting” and “harrowing.”  I love a challenge.

Product: Expereal
Website: www.expereal.com
Price: Initial version – Free; Download for iOS

This is the 17th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta or Ernesto Ramirez.


Toolmaker Talk: Hind Hobeika (Butterfleye)

At a recent QS-themed event at Stanford, 3-time Tour de France winner Greg LeMond described the constant stream of new technologies that make bicycles lighter and more streamlined and that provide ever more detailed monitoring of the cyclists. In contrast, innovation in swimming seems limited to controversial bathing suits. Competitive swimmer Hind Hobeika aims to change that with Butterfleye, as she describes below and in her talk in Amsterdam last fall. She is also inspiring tech entrepreneurship in Lebanon, and is the organizer of the Beirut QS meetup group.

Q: How do you describe Butterfleye? What is it?

Hobeika: Butterfleye is a heart rate monitor for swimmers: a waterproof module that can be mounted on all types of swimming goggles and that visually displays the athlete’s heart rate in real time. Butterfleye has an integrated light sensor that measures the heart rate by reflection from the temporal artery (a branch of the carotid artery, which runs through the neck), and a 3-color LED that reflects indirectly into the goggle lens, indicating the swimmer’s status relative to the target: green if the swimmer is on target, red if above target and yellow if below target.
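The three-color feedback described above amounts to a simple threshold comparison against the target zone. A minimal sketch of that logic, with a function name and units of our own choosing rather than anything from the Butterfleye firmware:

```python
def led_color(hr, zone_low, zone_high):
    """Map a heart rate (bpm) to Butterfleye-style LED feedback:
    green inside the target zone, yellow below it, red above it."""
    if hr < zone_low:
        return "yellow"  # below target: swimmer can push harder
    if hr > zone_high:
        return "red"     # above target: swimmer should ease off
    return "green"       # on target

# Example: a 140-160 bpm target zone.
print(led_color(150, 140, 160))  # green
print(led_color(172, 140, 160))  # red
```

The hard engineering problem is not this comparison but getting a clean optical heart-rate signal in the water; the LED logic itself is deliberately simple so it can be read at a glance mid-stroke.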

Butterfleye is still in the prototyping stage; I am currently working on iterating the design to get to a market-ready product.

Q: What’s the back story? What led to it?

Hobeika: I used to be a professional swimmer during my school and university years, and all of our training was based on heart rate measurement. As a matter of fact, in all professional training there are 3 main target zones, each defined as a percentage of the maximum heart rate and each leading to different results from the workout: swimmers try to stay between 50-70% of their maximum heart rate for fat burning, 70-85% for fitness improvement, and 85-95% for maximum performance. In every single workout, the coach would combine different sets in each of the zones to make sure the swimmer gets a complete workout and works on different aspects of the body. The problem was that there was no effective way of actually measuring heart rate during practice! What we did was count the pulse manually after each race. Other options would have been to wear a watch plus chest belt or use a finger oximeter, but both of these were very impractical for a swimmer.
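The zone arithmetic above is easy to make concrete. The sketch below uses the three percentage bands from the interview; the 220-minus-age formula in the usage line is a common rough estimate of maximum heart rate, not something Hobeika prescribes:

```python
# Training zones as fractions of maximum heart rate, per the interview:
# 50-70% fat burning, 70-85% fitness, 85-95% maximum performance.
ZONES = {
    "fat burning": (0.50, 0.70),
    "fitness": (0.70, 0.85),
    "max performance": (0.85, 0.95),
}

def zone_bounds(max_hr):
    """Return each zone's (low, high) bounds in beats per minute."""
    return {name: (round(max_hr * lo), round(max_hr * hi))
            for name, (lo, hi) in ZONES.items()}

# e.g. for a 25-year-old, using the common 220-minus-age estimate:
print(zone_bounds(220 - 25))
```

So a coach prescribing a "fitness" set for that swimmer would target roughly 136-166 bpm, which is exactly the kind of band Butterfleye's green LED is meant to confirm mid-swim.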

I built the first prototype during the ‘Stars of Science’ competition, which is kind of like the Arab version of ‘American Inventor,’ initiated by the Qatar Foundation. Following a Pan-Arab recruitment campaign, I was one of the 16 candidates selected from among 7,000 initial applicants to go to Doha for the competition. Once I got to the Qatar Science and Technology Park, I was able to combine my passion for swimming and my background as a mechanical engineer, along with the experts and the resources available in Education City, to build the first concrete version of my idea. After four long months, I won the third prize, and used the cash award to file for a US patent, start a joint stock company in Lebanon, and hire an electronics engineer and an industrial designer to get started on the prototyping process.

Q: What impact has it had? What have you heard from users?

Hobeika: The product is not on the market yet, so the reactions I have been getting so far are from swimmers and athletes hearing about the idea or testing the first prototype.

Swimmers I have talked to commonly agree that there is a real lack of monitoring tools for practice in the water, and that Butterfleye would fill a very big gap. As for people who have tested it, they are surprised by how lightweight it is and how they don’t feel it when wearing it in the water.

Here is my assumption about the impact Butterfleye will have: swimming is a very solitary sport, and it is very difficult for athletes to get feedback on their performance when swimming without a coach or a team. It is the main reason why most people prefer practicing another activity. Having a practical monitor that can not only measure heart rate but give all kinds of information a swimmer would want to know (such as lap counting, stroke counting, speed, distance, etc.) will encourage more people to practice this complete sport and help it shed its ‘solitary’ status.

Q: What makes it different, sets it apart?

Hobeika: Butterfleye is innovative when it comes to its sensor design: it is the first heart monitoring tool that doesn’t require wearing a chest belt, a finger clip or an ear clip, elements that would add a lot of drag in the water and would be cumbersome for the swimmer. Butterfleye’s sensor is integrated in the module itself, and measures the heart rate from the temporal artery.

Butterfleye’s design is also one of its competitive advantages: it is specifically designed for swimmers. It is waterproof, modular (it can be mounted on any type of goggles), lightweight, and shaped like a waterdrop to minimize drag. It is also flat, so it doesn’t interfere with the swimming motion. It is designed to be perfectly compatible with the biomechanics and dynamics of swimming.

Butterfleye also stands apart with its waterproof heads-up display, which lets the swimmer see his target zone on the lens. This way, the swimmer doesn’t have to interrupt the motion of his arms (as he would if he were wearing a watch), and can see his heart rate in real time rather than using a pulse oximeter right after the race.

Unlike that of most other sports, swimming technology remains widely unexplored to date, especially when it comes to monitoring and self-tracking devices. Butterfleye is one of the first tools to tackle this market gap.

Q: What are you doing next? How do you see Butterfleye evolving?

Hobeika: My next target is to release a first version of the waterproof heart rate monitor to the market. After that comes a series of other monitoring products for swimmers, so they will be able to track calories, strokes, lap count, etc.

I am also planning on expanding this platform technology to models compatible with running, skiing, biking and diving.

Q: Anything else you’d like to say?

Hobeika: I participated in ‘Stars of Science’ when I was still a university student, and after winning the third prize I got a job at a renowned Lebanese engineering design firm. I was very scared of working full time on my project and giving up the sense of security I had, and was only able to do it a year down the line.

The entrepreneurship ecosystem is still very nascent in Lebanon and in the Middle East, and I am part of the first generation that is working on a hardware startup in the region. It is very challenging, simply because there aren’t many (or any) resources available. I have to ship and prototype everything abroad, which makes the entire process more lengthy and expensive.

However, I am also part of the generation that will, through our projects, develop and nurture the right resources to make it easier for the next crazy change makers! I am already working on a website, An Entrepreneur in Beirut, which is a platform for all the resources needed for hardware development in Lebanon.

Product: Butterfleye
Website: www.butterfleyeproject.com
Price: TBD

This is the 16th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at rajivzume@gmail.com.

Posted in Toolmaker Talks | 7 Comments

Toolmaker Talk: Michael Forrest (Happiness)

In talking with many toolmakers, I find myself constantly surprised by how people approach the same, seemingly simple, issue from very different perspectives. A few months ago I wrote about Mood Panda, which went from private to community-oriented. In contrast, Michael Forrest’s Happiness has evolved from shared to private. I also find Michael’s experimentation with the look of his app both beautiful and fascinating.

Q: How do you describe Happiness? What is it?

Forrest: Happiness is an iOS mood tracking app. You get randomized reminders to record your mood, and then can view this data graphically and as a journal. The idea is that by using this app, you’ll be able to make better decisions in your life.

Q: What’s the back story? What led to it?

Forrest: I’ve always been inspired by technology’s potential to solve old problems in new ways. I was looking for novel ways to solve mental health problems without resorting to pharmaceutical hacks like antidepressants. I came across Daniel Gilbert’s TED talk “Why Are We Happy?” and read his book, in which he talks about the marked differences between what we think will make us happy and what actually makes us happy. My idea was that even if we can’t make good predictions about how we’ll feel in the future, we can at least start gathering accurate data about our past and use that to reflect on the present moment. I first built a Facebook app, and then moved to the iPhone.

Q: What impact has it had? What have you heard from users?

Forrest: I’ve sold a few copies without doing a great deal of marketing – people seem to discover it on their own. The feedback I have had has been amazing: when the app helps people, it is helping them with a fundamental aspect of their lives, so it didn’t seem beyond the bounds of reason when one user told me it was the ‘single best reason for owning an iPhone’. I have seen an increase in uptake since I put together http://goodtohear.co.uk/happiness – people are finally starting to see the point of it, and I’ve been getting useful feedback about details of the UI and so on. I’m still really only starting out, though.

Q: What makes it different, sets it apart?

Forrest: I know my app isn’t the only way to track your mood, but I want it to be the best way to do so. A lot of decisions have gone into this seemingly simple app.

Single focus: I have deliberately avoided trying to track any other information because happiness has an infinite variety of possible influences that I would never presume to be able to predict for any particular user.

Design: It was important to me to give the app a personality of its own. Finding a look that wouldn’t interfere with the user’s mood (or annoy them) but still had some personality was not trivial. Initially I drew from artists like Kandinsky and Miró (see here) for the style, but over time realised that a journal was a more appropriate look. In the latest version I have avoided smiley faces and come up with a very tactile way to report mood from a blank canvas – I don’t want the app to influence the user’s mood in any way at the reporting stage by suggesting anything (but it should still look good!).

Exploration: The charts in Happiness have evolved a lot over time. My original designs were largely tag cloud based. As I personally accumulated entries (I have over 700 reports in my database!) I realised that time-based reporting would become increasingly important. After a lot of trial and error I settled on a monthly reporting cycle. I also made the graphs simple by moving away from multicoloured heatmaps to simple areas filled with red or green. The algorithms used to calculate these areas need to be complex enough to find patterns but self-evident enough that when users look at the reports these seem to match their input. Details of the reports give the tool different usage styles. Simply by numbering my ranked taggings I’ve now started setting myself challenges (e.g. move “Music” from #2 in my life to #1!). There’s also something interesting about getting a blank slate each month to see if you can do better than last month.

Price: Happiness isn’t a free app, and this is a conscious decision. I want users to feel invested immediately since you don’t get instant gratification. The price will always stay around this level while I continue to add value to the app in a multitude of ways.

Privacy: A big benefit of making Happiness a native iPhone app is that the data can be stored locally. I want users to feel they can be 100% honest when writing in their diary. There’s even a passcode lock feature to make sure others can’t get in, even if the phone is unlocked.

Q: What are you doing next? How do you see Happiness evolving?

Forrest: Soon I’ll be releasing an iPad version of the app that will sync data via iCloud, and enable larger, more in-depth views of the data. I’ve done some fun experiments around bringing in information and media from users’ social networks which really helps contextualise the more private comments. I like the idea of people being able to share their mood maps as artworks so I have some ideas around this – making this possible without necessarily revealing details to the world.

Q: Anything else you’d like to say?

Forrest: I’m working as a one-man-team on this project. I love that it’s possible to achieve so much on my own but I’d also prefer to be working more collaboratively. I’m looking into clinical trials, and enabling others to build their own visualizations. Happiness is such a fertile subject that I’ve barely scratched the surface of what is possible with this tool. So if anybody feels inspired by what I’ve done so far and can see opportunities to work together, get in touch.

Product: Happiness
Website: http://goodtohear.co.uk/happiness
Platform: iOS
Price: $1.99 / £1.49

This is the 15th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at rajivzume@gmail.com.

Posted in Toolmaker Talks | 2 Comments

Toolmaker Talk: Vaibhav Bhandari (Enabling Programmable Self with HealthVault)

A few years ago, there was a lot of hoopla about PHRs (Personal Health Records), and the idea that all of one’s health records would be easily accessible in one place. Things haven’t turned out to be so rosy, and one major player, Google Health, shut down. However, Microsoft continues to persevere with its version, HealthVault, and Vaibhav Bhandari has written a book explaining how self-trackers can take advantage. Is a book a “tool”?! Surely a book that helps you use a tool qualifies for this series.

Q: How do you summarize Enabling Programmable Self with HealthVault? What is it about?

Bhandari: Enabling Programmable Self with HealthVault is a concise book explaining how Microsoft HealthVault can be used for self-tracking and behavior change. It shows how users can enable automatic updates from well-known fitness devices like Fitbit; how they can collect and analyze their health data; and how application developers can help them with mobile or web-based applications.

The book appeals to a broad set of readers, from novice health hackers to professional programmers. It walks the reader through downloading information from HealthVault into spreadsheets, and through tracking and visualizing disparate health data to reveal interesting trends about themselves. It outlines the details of HealthVault’s powerful data ecosystem and then shows how to write mobile and web applications using the HealthVault APIs.

Microsoft HealthVault is the most prominent example of a personally controlled health record. With its open API, flexibility and connections with multiple health care providers and health & fitness devices, it gives people interested in monitoring their own health an unprecedented opportunity to do their own research on their own data. The other part of the title, “Programmable Self” is a term coined by Fred Trotter, and refers to a combination of Quantified Self and Motivational Hacks.

Q: What’s the back story? What led to it?

Bhandari: For the past three and a half years, I had been part of the HealthVault engineering team. I guided partners and developers building HealthVault applications, and curated an open source community around HealthVault and its client libraries. For this I created a lot of content and code examples, and it became clear that a book explaining HealthVault and its client libraries would be helpful to many.

Over the same time period, Quantified Self, Personal Informatics and Motivational Hacks have seen an uptrend. During high school and college I tracked many factors, like time, workouts, and expenses, on a daily basis. Through collaborators and colleagues like Fred Trotter I was recently reintroduced to self-tracking. I learned to appreciate the value of tracking and to make it more meaningful by associating goals and self-experiments and evaluating it in a qualitative context.

I realized these trends very squarely represent the usage scenarios for HealthVault. HealthVault is a great open health platform to aggregate self-quantification data from health & fitness devices and from connected medical institutions via standards like CCD & Blue Button. It does have limitations. There is minimal graphing and statistical capability; however one can export data and use a spreadsheet. And while it has a good input editor for standard data formats, for anything else you must use the programming interface or a spreadsheet.

Q: What impact has your book and HealthVault had for self-trackers? What have you heard from readers and users?

Bhandari: The book was released about a month ago. The feedback I have received in that short time has been quite varied.

One reader noticed a strange correlation between dental visits (data entered automatically through his healthcare provider) and sleep cycle disruption (data entered automatically through Fitbit). Understanding that sleeplessness was caused by anxiety about his frequent dental visits allowed him to curtail the anxiety. Another reader tracking weight, using the Withings scale, and carbohydrate intake and alcohol consumption spotted correlations that has helped him manage his diet to be competitive in national and international triathlons.

In the last few weeks I have also received emails from readers who found the book a great aid in designing clinical trial experiments for graduate research.

Q: What makes the book different, sets it apart?

Bhandari: Currently, Enabling Programmable Self with HealthVault is the only technical book covering Microsoft HealthVault.

Q: What are you doing next? How are you advancing these ideas?

Bhandari: I’m encouraging readers to contribute sharable spreadsheets on the book’s companion website, http://www.enablingprogrammableself.com. One common denominator among health hackers is the use of spreadsheets, be it Google Spreadsheets or Microsoft Excel. The kind of data being tracked is long-tail in nature, and no software does a really good job of presenting an interface that can handle and visualize it. Spreadsheets are a useful tool to extend and visualize the varied data involved. Through www.enablingprogrammableself.com, I want readers to be able to share their health-tracking experiences and perhaps create an open-source ecosystem of spreadsheets, where members of the community can easily start with a new tracking methodology and see sample data and visualizations of what has and hasn’t worked for other community members.

Q: Anything else you’d like to say?

Bhandari: Self-quantifiers are mavens of personal informatics, justifying and promoting citizen empowerment with their healthcare data. We need to promote communities and tools that put patients in control of their healthcare. Hopefully, Enabling Programmable Self with HealthVault will add a drop to the ocean by spreading ideas and tools for toolmakers to empower and motivate citizens to be more involved in their day-to-day health.

Product: Enabling Programmable Self with HealthVault
Website: http://www.enablingprogrammableself.com
Price: $14.99

This is the 14th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at rajivzume@gmail.com.

Posted in Toolmaker Talks | 1 Comment

Toolmaker Talk: Yoni Donner (Quantified Mind)

There are ever more widgets to measure our physical selves, but how can we measure how well we’re thinking? Yoni Donner is trying to address this need with Quantified Mind. At a recent Bay Area QS meetup he told us how he used his tool to discover that fasting reduced his mental acuity, which was the opposite of what he had expected. Here he tells us what led to his developing Quantified Mind, and the difficulties of creating such a tool.

Q: How do you describe Quantified Mind? What is it?

Donner: Quantified Mind is a web application that allows users to track the variation in their cognitive functions under different conditions, using cognitive tests that are based on long-standing principles from psychology, but adapted to be repeatable, short, engaging, automatic and adaptive.

The goal is to make cognitive optimization an exact science instead of relying on subjective feelings, which can be deceiving or so subtle that they are hard to interpret. Quantified Mind allows fun and easy self-experimentation and data analysis that can lead to actionable conclusions.

Q: What’s the back story? What led to it?

Donner: Two or three years ago I started a discussion group dedicated to meta-optimization. Many suggestions for cognitive improvement quickly came up, and it became clear that we needed to test the hypotheses scientifically to make sense of this huge domain. I then spent over a year purely studying the previous work on measuring cognitive abilities.

I realized that while existing tests are useful for identifying interindividual differences and detecting pathologies, no solution existed for repeatedly testing the same individual under different conditions, and that I needed to collect the psychometric principles that were already established and adapt the tests to the requirements of the new goal: tracking within-person variation in multiple cognitive abilities.

Then came a long design and planning stage, which eventually led me to write a prototype in Python that ran locally. After meeting Nick Winter, the real work on the web application started.

There were many challenges in designing the tests to be repeatable and efficient while minimizing practice effects. Much of the early stage of the project was spent reading papers and books to identify where I could adapt established tests to my different goals. There was no single formula, but one principle that comes up a lot is to change the difficulty of the test dynamically based on the user’s accuracy, to reach a steady state at some fixed accuracy, and to apply Bayesian estimation to the parameters of interest. For example, in Digit Span we estimate the level at which the user would get exactly 50% of the trials correct. The reason our verbal learning test doesn’t use a fixed number of items is that some people would find 10 items too hard and others would find 30 too easy, so any fixed number would waste a lot of time testing them at an inappropriate level.
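The adaptive principle Donner describes – nudge the difficulty up after a correct answer and down after an error, so the test hovers around the user’s 50%-correct level – can be sketched as a simple one-up/one-down staircase. This is a toy illustration of the general technique, not Quantified Mind’s actual implementation, which adds Bayesian estimation:

```python
import math
import random

def staircase(respond, start_level=4, trials=200):
    """respond(level) -> True if a trial at this difficulty was answered correctly.
    Returns a crude estimate of the 50%-correct difficulty level."""
    level = start_level
    visited = []
    for _ in range(trials):
        visited.append(level)
        if respond(level):
            level += 1                 # correct -> make it harder
        else:
            level = max(1, level - 1)  # wrong -> make it easier
    # average of visited levels as a rough point estimate of the threshold
    return sum(visited) / len(visited)

# Simulated user whose true 50% threshold is a digit span of 7:
# probability of success falls off logistically with difficulty.
def simulated_user(level, threshold=7.0, slope=1.5):
    p_correct = 1.0 / (1.0 + math.exp((level - threshold) / slope))
    return random.random() < p_correct

random.seed(0)
estimate = staircase(simulated_user)  # hovers near the true threshold of 7
```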

We haven’t yet established validity independently of the tests ours are based on. This is something I would very much like to do, but it requires many test subjects. In fact, not much is known about the extent to which the intra-individual variance structure resembles the inter-individual structure that has been studied so much. With enough data, we can learn so much!

Now we are at the point where everything is functional, though the UI clearly still needs work. We’ve been live and collecting data for about two months now.

Q: What impact has it had? What have you heard from users?

Donner: People have had far more positive reactions than I dared hope for. I was afraid people would say it’s too much work, because it’s a kind of tracking where you actually need to spend some time on the tracking itself.

We have over 200 users now and almost 100 hours of testing time, though only a small fraction (about 10) are consistently using the site for self-tracking. Feedback was very constructive and I love it when people just share with me interesting things they learned about themselves.

For example, some things people have shared with me: the effect of butter seems to be individual, since one user had a very significant negative effect from butter alone, while another had a pretty big positive effect from butter plus coffee; piracetam had a small positive effect; 50 g of 85% dark chocolate increased the number of errors; lactose and gluten had small negative effects. I love these individual stories, but I think organizing controlled trials will tell us much more. In any case this is just the beginning – we launched very recently and don’t have much data yet.

Q: What makes it different, sets it apart?

Donner: It is the only cognitive measurement tool that is designed completely for repeated testing and tracking variation over time. It has more tests (over 25 now) than other cognitive testing sites and covers many cognitive domains (processing speed, motor function, inhibition, context switching, attention, verbal and visuospatial learning and working memory, visual and auditory perception and more coming). The data is collected such that everything is stored, not just aggregate statistics, so we can analyze new questions using existing data. We allow queries and statistical analysis of your results through the site itself, and plan to improve these features even more.

I think this combination makes Quantified Mind unique: (1) careful adaptations of many well-known tests and principles from psychological research; (2) multiple domains covered by tests designed to be repeatable, short, adaptive, efficient and reasonably fun; (3) emphasis placed on data collection and analysis.

Q: What are you doing next? How do you see Quantified Mind evolving?

Donner: I think most people find it cool, but the barrier to starting your own experiments is high. The main insight from users is that I should make it even easier to figure out how to use Quantified Mind to quickly get benefits. I want to add more content – suggested experiments, documentation of what other people did and what they learned, and the science behind all of it – and most of these ideas came from users. Aside from that, there are many features to add, such as a better UI, more tests (I am working on mood detection now), and better tools to access and analyze data.

At a higher level, I want to go forward and develop a science of cognitive optimization. There are many interventions to test and I want to study as many of them as possible using rigorous controlled studies and publish the results. It’s time for cognitive improvement to take a step forward from being astrology-like to being a proper science.

Q: Anything else you’d like to say?

Donner: Thanks for doing this! The QS community is wonderful and I think the future for taking care of our own health, brains and general well-being looks bright – but of course we should measure that, too.

I am always looking for people who share the vision. If you are interested in helping develop Quantified Mind further or helping run experiments, contact me (yonidonner@gmail.com).

Product: Quantified Mind
Website: www.quantified-mind.com
Platform: web
Price: free

This is the 13th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at rajivzume@gmail.com.

Posted in Toolmaker Talks | 1 Comment

Toolmaker Talk: Caspar Addyman (Boozerlyzer)

Our QS Conferences are organized to maximize discovery and serendipity. The entire program results from us inviting attendees to present and participate. You’re never quite sure what you’ll get, but it’s hardly ever boring! I didn’t know what to expect when Caspar Addyman took the stage in Amsterdam to talk about “Tracking your brain on booze”, but he very quickly grabbed my attention. His talk reminded me that, as Malcolm Gladwell once reported, “How much people drink may matter less than how they drink it.”

Q: How do you describe Boozerlyzer? What is it?

Addyman: The Boozerlyzer is a drinks-tracking app for Android phones. It lets you count your drinks and their calories, and tells you your current blood alcohol level. Crucially, it also lets you record your mood and play a range of simple games that measure your coordination, reaction time, memory and judgment.
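For reference, a standard way a drinks tracker can estimate current blood alcohol is the Widmark formula from the forensic literature. The sketch below is my own illustration under that assumption, not necessarily the Boozerlyzer’s exact model:

```python
def estimate_bac(grams_alcohol: float, weight_kg: float, hours: float,
                 sex: str = "m") -> float:
    """Estimate blood alcohol concentration (% BAC, i.e. grams per 100 mL)
    using the classic Widmark formula, minus time-based elimination."""
    r = 0.68 if sex == "m" else 0.55  # Widmark body-water distribution ratio
    beta = 0.015                      # typical elimination rate, % BAC per hour
    peak = grams_alcohol / (weight_kg * 1000 * r) * 100
    return max(0.0, peak - beta * hours)

# e.g. two pints of 5% beer (~45 g of alcohol) for an 80 kg man, one hour on
bac = estimate_bac(45.0, 80.0, 1.0)
```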

What Boozerlyzer explicitly does not do is tell people how much to drink. We think people would find it patronizing and off-putting. Rather we hope that it will help people get better insight into how drinking affects them.

In addition, if users agree, their data is sent to our servers to contribute to our research on how drink affects people. I’m a researcher with the Centre for Brain and Cognitive Development, Birkbeck College, University of London, and this project was started as a way to collect data beyond the artificial setting of a laboratory.

Q: What’s the back story? What led to it?

Addyman: I originally had the idea back in 2003 while doing my undergraduate psychology degree. I was interested in how to study the effects of recreational drugs. The web technology of the time couldn’t be used when people were out at the pub or club, so I didn’t pursue it.

In the summer of 2010 I took part in a science & technology hack day in London, and the idea occurred to me again, this time using smartphones. So I told a few friends about it. Mark Carrigan, a sociologist at Warwick University, opened my eyes to the more sociological types of data we could gather. This broadened the aims from my initial, very cognitive focus to the emotional and social experiences involved with drugs and alcohol. That was at the end of 2010. All that remained was to build the app. I’m not really a developer and have been working on this in my spare time, so it has taken longer than I’d expected.

Q: What impact has it had? What have you heard from users?

Addyman: I have been using the app myself for 6 months now and the thing that has surprised me the most is how rapidly the drinks accumulate if I’m out with friends. A few drinks early in an evening, then a couple of glasses of wine with a meal and then more drinks all through the night. Over a particularly sociable weekend I find myself drinking a disturbing amount even though it doesn’t seem that way at the time.

We started our first public beta in December 2011 and have a hundred or so users. I still have to analyse the first batch of data and usage statistics. But a first look at the data from December and January showed something surprising: the Christmas season seems to ratchet up drinking levels, normalising heavy drinking on into January. Unfortunately, I don’t think I have enough data yet to tell whether this is a real trend.

In terms of direct feedback from users, we’ve generally had a positive reaction to the idea, but there are plenty of things we can improve. One of the biggest problems with the enterprise is that our users forget to actually use the app when in the bar, or once they’ve stopped drinking. Also, people are willing to track their drinks and their mood as they go along, as that takes very little time, but at the moment the games take a little too long to play, and the game feedback is a bit too abstract. We aren’t yet giving estimates of drunkenness based on game performance. Here we are in a bit of a Catch-22: more compelling feedback ought to be possible once we’ve got a reasonable base of group data to run some regression analysis, but without interesting feedback we have trouble getting people to play the games in the first place.

Q: What makes it different, sets it apart?

Addyman: One big difference between our app and many tools in the personal health world is that our focus is not on behavior change, but instead on data for scientific research and self-learning.

Also, this is an academic, non-commercial project. Our app will always be free. We will never collect any data that could directly identify you, nor will we sell any of the data we collect. We believe in open systems, open data and open minds. The code we write is open source. The data we collect will be available to anyone who wants to study it.

Q: What are you doing next? How do you see Boozerlyzer evolving?

Addyman: The Boozerlyzer is our first app and there are still plenty of improvements to make to it. But, in addition, we want to broaden our scope and apply the same principle to recreational drugs and the effects of various medications.

As an example, I met Sara Riggare from the Parkinson’s Movement at the Amsterdam QS conference. She pointed out that a version of Boozerlyzer could help Parkinson’s patients track their medication intake and quantify the effects of the medications on mood, coordination, memory, etc. We are starting a collaboration to redesign the app for this purpose.

Meanwhile, my own motivation for starting this project was always to be able to do better research into recreational drugs. This has never been a more pressing concern, and I am hoping that a drugs tracker app can help. Obviously, this is fraught with legal and ethical difficulties so we are having to tread carefully. See here and here for more background on this.

Q: Anything else you’d like to say?

Addyman: We have already benefited greatly from our contact with the QS community. The conference was a great inspiration, and I wish I could get to more of the lively London meetups. If anyone out there would like to get involved with our project, we’d love to hear from you. Any advice or experience you could lend us would be greatly appreciated. Our project is both open source and open science. We believe in the power of collaboration, and so would love to hear from anyone with similar projects in mind.

Product: Boozerlyzer
Website: http://boozerlyzer.net and http://yourbrainondrugs.net
Platform: Android
Price: Free

This is the 12th post in the “Toolmaker Talks” series. The QS blog features intrepid self-quantifiers and their stories: what did they do? how did they do it? and what have they learned?  In Toolmaker Talks we hear from QS enablers, those observing this QS activity and developing self-quantifying tools: what needs have they observed? what tools have they developed in response? and what have they learned from users’ experiences? If you are a “toolmaker” and want to participate in this series, contact Rajiv Mehta at rajivzume@gmail.com.


Toolmaker Talk: Alexander Grey (Somaxis)

The first speaker at last week’s QS meetup in San Francisco was Alexander Grey. He told us about the muscle-activity sensor he had developed and the fascinating things he had learned about himself from using it. The sensor is the result of many years of thinking and work, and he’s now eager to find collaborators, so he jumped at my suggestion to participate in this series.

Q: How do you describe Somaxis? What is it?

Grey: We have developed a small, wireless sensor for measuring muscle electrical output. The sensors stick onto the body adhesively (like Band-Aids) and transmit data to our smartphone app. One version, “MyoBeat,” uses a well-established heart metric to provide continuous heart rate measurement (like a “chest strap” style sensor). A second version, “MyoFit,” uses proprietary algorithms to measure the energy output of other muscles. For instance, one on your quads while running can give you insight into how warmed up you are, how much work you are doing, fatigue, endurance, and recovery level. If you use two at the same time, they can show you your muscle symmetry (when asymmetry develops during exercise like running or bicycling, it can indicate the onset of an injury). Our goal is to get people excited about understanding how their bodies work.

Q: What’s the back story? What led to it?

Grey: My parents used to run a clinic that used muscle energy technology (sEMG) along with a special training method called Muscle Learning Therapy to cure people with RSI (Repetitive Strain Injury) and other work-related upper extremity disorders involving chronic pain. Each sEMG device they bought cost them $10K. I started to develop early symptoms of TMD (Temporomandibular Joint Disorder) when I was only 10, and my father used sEMG to teach me how to control and reduce my muscles’ overuse. The training worked, and I still have it under control today.

Years later, I decided to start a company to develop and commercialize more accessible, less expensive sEMG technology, with my mom as my investor. (My father has passed away, but I think he would have supported the idea.) At first we were going after a workplace safety service: I developed an algorithm that quantified people’s likelihood of developing an RSI injury in the future, and envisioned a prevention-based screening/monitoring service to offer to progressive companies. The feedback I got from VCs was that we needed to start with a bigger market. So we redesigned the product to make it small, cheap, and completely wireless. I also started working on a new set of sports-related algorithms to interpret muscle use into useful metrics.

Q: What impact has it had? What have you heard from users?

Grey: Having this new kind of tool at my disposal has really been a lot of fun, and has allowed me to run some new kinds of experiments that haven’t really been practical before.

For example, I wondered: for a given running speed, what cadence or stride rate would use the least energy, and so delay the onset of fatigue? I put sensors on both my quads, hamstrings, and calves. I created an audio track that increased from 120 to 170 bpm in increments of 5 bpm, with 15 seconds at each step. I kept my treadmill locked at 6.5 mph (my “comfortable pace”). By adding up the work done by all 6 muscles in the legs, I got a snapshot of the energy expenditure at each stride rate / cadence. The resulting curve [see graph above] answered my question: for me, at 6.5 mph, 130 bpm is my “sweet spot” that minimizes energy expenditure. It also showed a second trough in the graph, not as low as 130, but still pretty low, at 155 bpm. So if I need to run uphill or downhill, and want to keep the same speed but take shorter steps and still try to minimize energy burn as much as possible, I should shoot for 155 bpm.
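The analysis above boils down to summing the six muscle channels at each cadence and looking for troughs in the resulting curve. A minimal sketch, using made-up energy values (the real numbers come from the sensors, not from this example):

```python
# Hypothetical cadence "sweet spot" analysis. Cadences tested (bpm) and
# invented total EMG energy (sum of all 6 leg-muscle channels, arbitrary
# units) standing in for real sensor data.
cadences = [120, 125, 130, 135, 140, 145, 150, 155, 160, 165, 170]
energy   = [9.1, 8.4, 7.6, 8.2, 8.8, 9.0, 8.6, 7.9, 8.5, 9.3, 9.7]

# Global minimum = most economical cadence at this fixed speed.
_, sweet_spot = min(zip(energy, cadences))
print("sweet spot:", sweet_spot, "bpm")  # → 130 bpm

# Local minima = secondary troughs (e.g. a backup cadence for hills).
troughs = [cadences[i] for i in range(1, len(cadences) - 1)
           if energy[i - 1] > energy[i] < energy[i + 1]]
print("local troughs:", troughs)  # → [130, 155]
```

With these sample values the script reproduces the shape described in the interview: a primary trough at 130 bpm and a secondary one at 155 bpm.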

Another test that these tools allow us to do is to figure out how recovered someone is from exercise. I did a test where I ran at a fixed speed every 24 hours (that’s not enough recovery time for me — I’m not in good shape). The first day, the muscle amplitude was about 1000 µV RMS (root-mean-square amplitude, in microvolts). The second day, the amplitude started out at 500 µV and decreased from there. So the lack of sufficient recovery showed up in the data, which was quite interesting to see.
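The recovery check above amounts to comparing baseline RMS amplitude between identical runs on consecutive days. A minimal sketch, using the rounded figures from the anecdote:

```python
# Day-over-day recovery comparison: same speed, 24 h apart.
day1_uv = 1000.0  # first run, rested (µV RMS)
day2_uv = 500.0   # next day, under-recovered (µV RMS)

# Percentage drop in baseline amplitude; a large drop suggests
# incomplete recovery from the previous session.
drop_pct = 100.0 * (day1_uv - day2_uv) / day1_uv
print(f"amplitude drop: {drop_pct:.0f}%")  # → 50%
```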

Whenever we have volunteers in the lab offering to help out (runners, usually) they geek out over these devices and the insight that they can get into the muscles of their bodies for the first time. We’ve had about 40 volunteers help out with muscle data gathering, and about 60 with heart rate testing.

Q: What makes it different, sets it apart?

Grey: Our design goals for our sensors are “good enough” data, wireless operation, long battery life, and comfort (wearability). Key to this is using a low-power, low-bandwidth radio. The trade-off is a much lower sample rate and A/D resolution than medical-grade sensors. Our sensor transmits processed data, not the raw data. However, our data is good enough for sports and fitness, where you want to see predigested metrics rather than raw graphs or frequency analysis. The benefit is that our battery life is 100 hours, and our sensor is small and light enough to attach using an adhesive patch. The upside of an adhesive-based solution is that one size fits all, it’s very comfortable, and there is no tight and annoying strap around your chest.

Q: What are you doing next? How do you see Somaxis evolving?

Grey: We are mainly focusing on improving the physical sensor itself: a rechargeable battery, a completely waterproof housing (the current version is water resistant), and a smaller size. And maybe a medical-grade version with much higher sample rate and A/D resolution.

We also want to open up the hardware platform so that others can develop applications for it. For example, maybe someone wants to develop software for Yoga that uses muscle isolation to help do poses correctly. Or perhaps someone wants to focus on a weight-lifting application that assesses power and work done during lifting. We can envision many possibilities for sports, gaming, physical therapy, and health.

Q: Anything else you’d like to say?

Grey: I would love to hear from anybody who has ideas about potential uses of our technology! Also, we are fairly early-stage, so if anyone wants to work with us (individuals) or partner with us (companies) we definitely want to hear from you. You can reach me at agrey@somaxis.com

Product: MyoLink platform: MyoBeat (heart) and MyoFit (muscle)
Website: www.somaxis.com (coming soon — nothing there yet, but check back)
Platform: Sensors stream data to an iPhone app (Android under development) and certain sports watches (Garmin, etc.)
Price: $25 for a starter set of 1 Module (MyoBeat or MyoFit) and 4 adhesive patches. Or you can buy 1, 2 or 3 Modules, with a one-year supply of patches, for $75, $125, or $170, respectively.

This is the 11th post in the “Toolmaker Talks” series.
