Tag Archives: Wearables
Today’s post comes to us from Rajiv Mehta, our longtime friend and co-organizer of the Bay Area Quantified Self Meetup group. Rajiv is also leading the team behind UnfrazzledCare, a media and application development company focused on the caregiving community.
“What lessons have we learned through Quantified Self meetings and conferences that would benefit entrepreneurs looking to enter this space?” That’s what I was asked to comment on at a recent event, “Quantified Self: The Next Frontier in Mobile Healthcare,” organized by IEEE and TiE. The workshop took place on September 19, 2013, almost exactly five years after the first QS meetup, which naturally led to a theme of 5 years and 5 lessons.
The 5 themes I discussed were:
- How difficult it is to get an accurate measure of the “market size” for self-tracking, even though by some measures it is a very common activity.
- The importance of and excitement surrounding new sensor technologies, but also what we have learned about our built-in human sensors and the challenges of making sense of the data.
- The need to treat feedback loops with caution; thoughtful reflection is sometimes better than quick reaction.
- The role of engagement and motivation: how many people are drawn to QS through a desire to change their own behaviors, and how QS experiences align with behavior-science research.
- The value of self-engagement, and how self-trackers often learn something even when their experiments aren’t successful.
My slides include my talking points, in small text below the slides. If you view this full-screen, you should be able to read the small text.
Several other QS regulars participated in this workshop. Rachel Kalmar, who runs the Sensored meetup group and is a data scientist with Misfit Wearables, gave a keynote on some of the technology challenges facing those working on sensing devices. These ranged from the fundamental (“What exactly is a step?”) to the prosaic (batteries!), and from business issues (data openness vs. competitive advantage) to human issues (accuracy vs. wearability). Dave Marvit, of Fujitsu Labs, shared some of their work on real-time stress tracking and his thoughts on the issue of “quantifying subjectivity”. Sky Christopherson, of Optimized Athlete, told the audience of his own health recovery through self-tracking and how he helped the US women’s track cycling team to a dramatic, silver-medal performance at the London Olympics. QS supports his passion for “data not doping” as a better route to athletic excellence. And Monisha Perkash showed off Lumoback.
The second QS European 2013 Conference is coming up. We run our QS global meetings as “carefully curated unconferences,” meaning that we make the program out of ideas and suggestions from the registrants, with a lot of thoughtful back-and-forth in advance. Today we’re highlighting Rain Ashford.
Rain is currently a researcher in the Art and Computational Technology Program at Goldsmiths, University of London. She has been experimenting with wearable electronics since 2008. At first her work centered on interactive wearables for music and gaming, but she soon became interested in mood and social behavior. Her curiosity led her to what she calls “physiological responsive wearables.”
Simon Frid moved to California last year because his data told him he was smarter here than in New York. Well, not really. But this funny story begins his journey of figuring out how to track one of the simplest things that we don’t generally know about ourselves: our own posture. Simon designed a wearable sensor shirt with ten built-in accelerometers, and was able to improve his posture significantly from December to January. In the video below, he shares how he trained the shirt to recognize good posture, why he didn’t want immediate feedback, and what question he most wants to ask people. (Filmed by the Bay Area QS Show&Tell meetup group.)
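The talk doesn’t spell out how the shirt was trained, so here is a minimal, purely illustrative sketch of one way a wearable could learn “good posture” from labeled accelerometer snapshots: a nearest-centroid classifier. The sensor layout, calibration values, and class names below are all invented for the example, not Simon’s actual method.

```python
import math

def centroid(samples):
    """Element-wise mean of a list of equal-length reading vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def distance(a, b):
    """Euclidean distance between two reading vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labeled_samples):
    """labeled_samples: {label: [reading, ...]} -> {label: centroid}."""
    return {label: centroid(s) for label, s in labeled_samples.items()}

def classify(model, reading):
    """Return the label whose centroid is closest to this reading."""
    return min(model, key=lambda label: distance(model[label], reading))

# Toy calibration data: each "reading" is a flat vector of accelerometer values.
model = train({
    "good": [[0.0, 1.0, 0.1], [0.1, 0.9, 0.0]],
    "slouched": [[0.5, 0.6, 0.4], [0.6, 0.5, 0.5]],
})
print(classify(model, [0.05, 0.95, 0.05]))  # nearest the "good" centroid
```

With ten accelerometers the reading vector would simply be longer; the appeal of a centroid rule for a personal device is that a single short calibration session (sit well, then slouch) is enough to train it.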
Eric Boyd, a long-time QS member and now part of the Toronto QS Show&Tell meetup group, has a new project. It’s called HeartSpark, and it’s a heart-shaped pendant which flashes little LED lights in time with your heartbeat. HeartSpark and Eric (video below) were featured on Engadget today – congrats!
Thanks to @faisal_q for posting the link.
Researchers at Concordia University and the University of London have created ‘smart’ clothing, with embedded wireless biosensors that detect your mood and play voices and videos of people you want to hear when you’re feeling sad, upset, excited, or lonely.
From the review in TechNewsDaily:
The new “smart” clothing contains wireless biosensors that measure heart rate and temperature (among other physiological indicators), small speakers, and other electronics that wirelessly connect to a handheld smartphone or PDA. Data from the sensors is sent to the handheld, where it is converted into one of 16 emotional states, which cues a previously set-up database to send the wearer some inspirational message.

These “mood memos” could be a text message, which scrolls on a display on the garment’s sleeve, a video or photograph displayed on the handheld device, or a sound that comes through the embedded speakers. The researchers have made two prototype garments so far, a male and a female version, and plan to display them at museums over the next two years. They are also looking at medical and fashion applications.

The sounds, photos and videos sent to the wearer aren’t arbitrary. Instead, the messages are spoken by a friend or loved one.

“When you first wear the garment, you turn on the device and you tell it what person you want to channel that day,” said Barbara Layne, professor at Concordia University and co-developer of the garments. “That could be your lover who’s away, it could be your deceased parent, your best friend, whoever you want to be with that day.”

The multimedia is pre-loaded into a database for each person the wearer wants to virtually hang out with.

“[At] multiple times during the day, you can set it for as many times as you want, [the garment] will take your biometric readings, your bio-sensing data, analyze it on that emotional map and then go up to the Internet, to the database that relates that emotional state, and bring you back something that you need,” Layne said.
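The article says readings are mapped onto “one of 16 emotional states” but not how. One plausible scheme, sketched below purely as an illustration, is a 4×4 grid over normalized heart rate and skin temperature; every range, name, and threshold here is an assumption, not the researchers’ actual model.

```python
def bucket(value, low, high, bins=4):
    """Map a value in [low, high] to a bin index 0..bins-1 (clamped)."""
    frac = (value - low) / (high - low)
    return min(bins - 1, max(0, int(frac * bins)))

def emotional_state(heart_rate, temperature):
    """Combine two 4-way buckets into one of 16 states (0..15)."""
    hr = bucket(heart_rate, 50, 130)        # bpm range, assumed
    temp = bucket(temperature, 35.0, 38.0)  # deg C range, assumed
    return hr * 4 + temp

# Stand-in for the pre-loaded database of "mood memos" per state.
memos = {state: f"memo for state {state}" for state in range(16)}

def mood_memo(heart_rate, temperature):
    """Look up the memo cued by the current emotional state."""
    return memos[emotional_state(heart_rate, temperature)]

print(mood_memo(110, 37.2))  # elevated heart rate and temperature
```

The real system presumably uses a richer emotional map and per-person media libraries, but the pipeline shape — sensors → state index → database lookup → media — is what the quote describes.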
Thanks to Lyn Jeffery at IFTF for the pointer.
Here is a great talk by Ryan Grant from the last QS Show&Tell. Things got especially interesting when Ryan started talking about how the device he is making would allow you to capture, in sound, stills, and video, moments of your life that had already passed.
Below are a few excerpts from the audio transcript to whet your interest.
KK: “Quiet please!”
RG: “Hi, my name is Ryan Grant, I’m the founder of Metascopic, Incorporated. We make a memory assistant in the form of a camera you can wear. It takes tens of thousands of pictures during the day and records audio as well. I’m really excited we can get it down to a product about this size. It’s very wearable. This has not been done before, and I don’t know why….”
QUESTION FROM AUDIENCE: “Tell us a little more about the specs. What is this? Is this a still camera, is it video? I’m not really sure what it is yet, can you describe what it hopes to be?”
RG: “The first thing I can tell you is that you are going to get tens of thousands of pictures a day.”
QUESTION FROM AUDIENCE: “Still pictures?”
RG: “Still pictures.”
QUESTION FROM AUDIENCE: “What resolution?”
RG: “XGA resolution. 1024 by 768.”
QUESTION FROM AUDIENCE: “What about the viewpoint?”
RG: “The viewpoint is going to be very wide angle. For some reason nobody is doing this in existing semi-wearables.”
QUESTION FROM AUDIENCE: “How often per minute does that translate to?”
RG: “That translates to a picture every 2 to 5 seconds.”
Now, using a buffer, you could tap the device and capture the moment that had just passed – a kind of TiVo for life.
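That buffer idea can be sketched with a fixed-size ring buffer: keep only the last N frames, and on a tap, copy out the recent past before it scrolls away. The class name, frame objects, and the capacity below are placeholders for illustration, not specs of Ryan’s device.

```python
from collections import deque

class MomentBuffer:
    """Ring buffer of recent frames; a tap snapshots the moment just passed."""

    def __init__(self, capacity=30):
        # deque with maxlen silently drops the oldest frame when full
        self._frames = deque(maxlen=capacity)

    def record(self, frame):
        """Continuously push incoming frames."""
        self._frames.append(frame)

    def tap(self):
        """Return a copy of the buffered frames, oldest first."""
        return list(self._frames)

buf = MomentBuffer(capacity=3)
for frame in ["frame1", "frame2", "frame3", "frame4"]:
    buf.record(frame)
print(buf.tap())  # only the 3 most recent frames survive
```

At one picture every 2 to 5 seconds, a buffer of 30 frames would hold roughly the last one to two and a half minutes – always recording, but only persisting what you tap for.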