Tag Archives: Wearables
The second QS European 2013 Conference is coming up. We run our QS global meetings as “carefully curated unconferences,” meaning that we make the program out of ideas and suggestions from the registrants, with a lot of thoughtful back-and-forth in advance. Today we’re highlighting Rain Ashford.
Rain is currently a researcher in the Art and Computational Technology Program at Goldsmiths, University of London. She has been experimenting with wearable electronics since 2008. At first her work centered on interactive wearables for music and gaming, but she soon became interested in mood and social behavior. Her curiosity led her to what she calls “physiological responsive wearables.” Continue reading
Simon Frid moved to California last year because his data told him he was smarter here than in New York. Well, not really. But this funny story begins his journey of figuring out how to track one of the simplest things that we don’t generally know about ourselves: our own posture. Simon designed a wearable sensor shirt with ten built-in accelerometers, and was able to improve his posture significantly from December to January. In the video below, he shares how he trained the shirt to recognize good posture, why he didn’t want immediate feedback, and what question he most wants to ask people. (Filmed by the Bay Area QS Show&Tell meetup group.)
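A setup like Simon’s can be sketched in a few lines: collect labeled readings from the shirt’s accelerometers while sitting well and while slouching, then classify new readings by which cluster they fall closest to. Everything below (the feature vectors, the nearest-centroid rule, the sample values) is a hypothetical illustration, not Simon’s actual implementation.

```python
import math

def centroid(samples):
    """Average a list of equal-length accelerometer feature vectors."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def classify(reading, good_centroid, bad_centroid):
    """Label a 10-sensor reading by its nearest training centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return "good" if dist(reading, good_centroid) <= dist(reading, bad_centroid) else "bad"

# Hypothetical training data: one value per accelerometer (e.g. a tilt reading).
good = [[0.1] * 10, [0.2] * 10]   # recorded while sitting upright
bad = [[0.9] * 10, [1.0] * 10]    # recorded while slouching
gc, bc = centroid(good), centroid(bad)
print(classify([0.15] * 10, gc, bc))  # a reading close to the "good" cluster
```

A design like this also explains why feedback need not be immediate: readings can simply be logged and scored in batches later, which matches Simon’s preference for reviewing his posture after the fact rather than being nagged in real time.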
Eric Boyd, a long-time QS member and now part of the Toronto QS Show&Tell meetup group, has a new project. It’s called HeartSpark, a heart-shaped pendant that flashes little LED lights in time with your heartbeat. HeartSpark and Eric (video below) were featured on Engadget today – congrats!
Thanks to @faisal_q for posting the link.
Researchers at Concordia University and the University of London have created ‘smart’ clothing, with embedded wireless biosensors that detect your mood and play voices and videos of people you want to hear when you’re feeling sad, upset, excited, or lonely.
From the review in TechNewsDaily:
The new “smart” clothing contains wireless biosensors that measure heart rate and temperature (among other physiological indicators), small speakers, and other electronics that wirelessly connect to a handheld smartphone or PDA. Data from the sensors is sent to the handheld, where it is converted into one of 16 emotional states, which cues a previously set up database to send the wearer an inspirational message.

These “mood memos” could be a text message, which scrolls on a display on the garment’s sleeve, a video or photograph displayed on the handheld device, or a sound that comes through the embedded speakers. The researchers have made two prototype garments so far, a male and a female version, and plan to display them at museums over the next two years. They are also looking at medical and fashion applications.

The sounds, photos and videos sent to the wearer aren’t arbitrary. Instead, the messages are spoken by a friend or loved one.

“When you first wear the garment, you turn on the device and you tell it what person you want to channel that day,” said Barbara Layne, professor at Concordia University and co-developer of the garments. “That could be your lover who’s away, it could be your deceased parent, your best friend, whoever you want to be with that day.”

The multimedia is pre-loaded into a database for each person the wearer wants to virtually hang out with.

“[At] multiple times during the day, you can set it for as many times as you want, [the garment] will take your biometric readings, your bio-sensing data, analyze it on that emotional map and then go up to the Internet, to the database that relates that emotional state, and bring you back something that you need,” Layne said.
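The pipeline Layne describes – sensor readings, mapped onto an emotional map of 16 states, which keys into a pre-loaded database – could be sketched like this. The article doesn’t say how the 16 states are derived, so the 4×4 grid over quantized heart rate and skin temperature below is purely an assumed stand-in, as are the value ranges and the memo database.

```python
def quantize(value, lo, hi, bins=4):
    """Map a sensor value into one of `bins` equal-width levels (clamped)."""
    step = (hi - lo) / bins
    level = int((value - lo) // step)
    return max(0, min(bins - 1, level))

def emotional_state(heart_rate, skin_temp):
    """Combine two quantized readings into one of 16 states (a 4x4 grid).

    The grid layout and the ranges (50-130 bpm, 30-38 C) are assumptions,
    not the researchers' actual emotional map.
    """
    return quantize(heart_rate, 50, 130) * 4 + quantize(skin_temp, 30.0, 38.0)

# Hypothetical database of pre-loaded "mood memos" keyed by state.
memos = {i: f"memo #{i}" for i in range(16)}
state = emotional_state(heart_rate=72, skin_temp=33.5)
print(memos[state])
```

Whatever the real mapping is, the structure is the same: a many-dimensional sensor reading is collapsed into a small discrete state, and that state indexes into media recorded by the person the wearer chose to “channel” that day.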
Thanks to Lyn Jeffery at IFTF for the pointer.
Here is a great talk by Ryan Grant from the last QS Show&Tell. Things got especially interesting when Ryan started talking about how the device he is making would allow you to capture, in sound, stills, and video, moments of your life that had already passed.
Below are a few excerpts from the audio transcript to whet your interest.
KK: “Quiet please!”
RG: “Hi, my name is Ryan Grant. I’m the founder of Metascopic, Incorporated. We make a memory assistant in the form of a camera you can wear. It takes tens of thousands of pictures during the day and records audio as well. I’m really excited we can get it down to a product about this size. It’s very wearable. This has not been done before, and I don’t know why….”
QUESTION FROM AUDIENCE: “Tell us a little more about the specs. What is this? Is this a still camera, is it video? I’m not really sure what it is yet, can you describe what it hopes to be?”
RG: “The first thing I can tell you is that you are going to get tens of thousands of pictures a day.”
QUESTION FROM AUDIENCE: “Still pictures?”
RG: “Still pictures.”
QUESTION FROM AUDIENCE: “What resolution?”
RG: “XGA resolution. 1024 by 768.”
QUESTION FROM AUDIENCE: “What about the viewpoint?”
RG: “The viewpoint is going to be very wide angle. For some reason nobody is doing this in existing semi-wearables.”
QUESTION FROM AUDIENCE: “How often per minute does that translate to?”
RG: “That translates to a picture every 2 to 5 seconds.”
Now, using a buffer, you could tap the device and capture the moment that had just passed – a kind of TiVo for life.
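The numbers check out: at one still every 2 to 5 seconds, a 16-hour waking day yields roughly 11,000 to 29,000 pictures, matching Ryan’s “tens of thousands.” And the TiVo-for-life idea is essentially a ring buffer: keep only the most recent frames, and a tap snapshots what just happened. The class below is a hypothetical sketch of that mechanism, not Metascopic’s implementation; the buffer depth and frame interval are made-up parameters.

```python
from collections import deque

class MomentBuffer:
    """Keep only the most recent frames; a 'tap' snapshots what just happened."""

    def __init__(self, seconds_back=30, frame_interval=3):
        # At one frame every ~3 s, reaching 30 s back means holding ~10 frames.
        # deque with maxlen discards the oldest entry automatically.
        self.frames = deque(maxlen=seconds_back // frame_interval)

    def record(self, frame):
        self.frames.append(frame)  # oldest frame falls off once full

    def tap(self):
        return list(self.frames)  # the moment that just passed

buf = MomentBuffer()
for t in range(20):       # simulate 20 frames arriving over ~a minute
    buf.record(f"frame-{t}")
print(buf.tap())          # only the last 10 frames survive: frame-10 … frame-19
```

The appeal of this design is that nothing needs to be saved permanently unless the wearer asks for it, which keeps storage modest even at tens of thousands of captures a day.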