Tag Archives: Wearables
In response to the much-anticipated reveal of the Apple Watch, I did a bit of digging around to find out where we stand with wrist-worn wearable devices. I found over 60 different devices. The following list focuses on self-tracking tools; I intentionally left out those that work only as notification centers or secondary displays for your phone. I'm sure this isn't all of them, but it's as good a place to start as any. If you're using one of these devices to learn something about yourself, or you're just interested in these types of wearable tools, we invite you to join us in San Francisco on March 13-15, 2015, for our QS15 Conference & Exposition.
(Thank you to all those who commented here, on Twitter, and on our Facebook group pointing us to additional devices to add!)
Sensors: Accelerometer, Heart Rate (optical), Blood Oxygen, Temperature
Sensors: Accelerometer, Pulse Oximeter, Temperature
Sensors: Accelerometer, Gyroscope, Heart Rate (optical)
Sensors: Materials state the ZenWatch houses a “bio sensors and 9-axis sensor.” I assume an optical heart rate sensor, an accelerometer, and a gyroscope.
Sensors: Accelerometer, Gyroscope, Heart Rate (optical)
Sensors: Accelerometer, Temperature, Pressure
Epson Pulsense Band/Watch
Sensors: Accelerometer, Heart Rate (optical)
Fatigue Science Readiband
Today’s post comes to us from Rain Ashford. Rain is a PhD student, researcher, and hardware tinkerer who is interested in how personal data can be conveyed in new and meaningful ways. She’s been exploring ideas around wearable data and the hardware that can support it. At the 2014 Quantified Self Europe Conference, Rain led a breakout session on Emotive Wearables during which she introduced her EEG Visualizing Pendant and engaged attendees in a discussion around wearing data and devices.
By Rain Ashford
It was great to visit Amsterdam again and see friends at the 3rd Quantified Self Europe Conference. I have previously spoken at the conference on Sensing Wearables in 2011 and Visualising Physiological Data in 2013.
There were two very prominent topics being discussed at Quantified Self Europe 2014: first, the quantifying of grief, and second, privacy and surveillance. These are two very contrasting and provocative areas for attendees to contemplate, but also very important to all of us, for they're deeply personal areas we can't avoid having a viewpoint on. My contribution to the conference was to lead a Breakout Session on Emotive Wearables and to demonstrate my EEG Visualising Pendant. Breakout Sessions are intended for audience participation, and I wanted to use this one-hour session to get feedback on my pendant for its next iteration and also to find out what people's opinions were on emotive wearables generally.
I've been making wearable technology for six years and have been a PhD student investigating wearables for three. During this time I've found wearable technology to be such a massive field that I have needed to define my own terms for the areas I work in and focus on in my research. Two subsets I have defined terms for are responsive wearables, meaning garments, jewellery and accessories that respond to the wearer's environment, interactivity with technology, or physiological signals taken from sensor data worn on or around the body; and emotive wearables, meaning garments, jewellery and accessories that amplify, broadcast and visualise physiological data associated with non-verbal communication, for example the emotions and moods of the wearer. In my PhD research I am looking at whether such wearable devices can be used to express non-verbal communication, and I wanted to find out what Quantified Self Europe attendees' opinions and attitudes would be about such technology, as many attendees are super-users of personal tracking technology and are also developing it.
My EEG Visualising Pendant is an example of my practice that I would describe as an emotive wearable, because it amplifies and broadcasts physiological data of the wearer and may provoke a response from those around the wearer. The pendant visualises the wearer's brainwave attention and meditation data simultaneously (using data from a Bluetooth NeuroSky MindWave headset) via an LED (Light Emitting Diode) matrix, allowing others to make assumptions and interpretations from the visualisations: for example, whether the person wearing the pendant is paying attention or concentrating on what is going on around them, or is relaxed and not concentrating.
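The NeuroSky MindWave reports its eSense attention and meditation readings as values from 0 to 100, so a display loop like the pendant's can be imagined as scaling those two values onto the two halves of a small LED matrix. The sketch below is purely illustrative: the function names, the 8x8 matrix size, and the two-bar layout are my assumptions, not details of the actual pendant firmware.

```python
# Hypothetical sketch of an emotive-wearable display loop: map NeuroSky
# eSense attention/meditation values (each 0-100) onto a small LED matrix.
# The 8x8 size and split-bar layout are assumptions for illustration.

def esense_to_rows(attention: int, meditation: int, height: int = 8) -> tuple[int, int]:
    """Scale two 0-100 eSense readings to lit-row counts (0..height)."""
    clamp = lambda v: max(0, min(100, v))
    to_rows = lambda v: round(clamp(v) * height / 100)
    return to_rows(attention), to_rows(meditation)

def render_frame(attention: int, meditation: int, width: int = 8) -> list[str]:
    """Render a text mock-up of the matrix: left half is an attention bar,
    right half a meditation bar, both drawn bottom-up."""
    att_rows, med_rows = esense_to_rows(attention, meditation)
    height = 8
    frame = []
    for row in range(height - 1, -1, -1):  # emit top row first
        left = "#" if row < att_rows else "."
        right = "#" if row < med_rows else "."
        frame.append(left * (width // 2) + right * (width // 2))
    return frame

# Example: a wearer who is focused but not relaxed.
for line in render_frame(attention=90, meditation=25):
    print(line)
```

In a real pendant this render step would run on the microcontroller driving the matrix, refreshed each time the headset sends a new reading.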
After I demonstrated the EEG Visualising Pendant, I invited attendees of my breakout session to participate in a discussion and paper survey about attitudes to emotive wearables, and in particular to give feedback on the pendant. The session drew a mixed-gender group of various ages, and we had a great discussion covering questions such as who would wear this device, or other devices that amplify one's physiological data. We discussed the appropriateness of such personal technology and also thought in depth about privacy and the ramifications of devices that upload such data to cloud services for processing, plus the positive and possible negative aspects of data collection. Other issues we discussed included the design and aesthetics of prominent devices on the body, and where we would be comfortable wearing them.
I am still transcribing the audio from the session and analysing the paper surveys that were completed; overall, the feedback was very positive. The data I have gathered will feed into the next iteration of the EEG Visualising Pendant prototype and future devices, as well as into my PhD research. Since the Quantified Self Europe Conference, I have run the same focus group three more times in London with women interested in wearable technology. I will update my blog with my findings from the focus groups and surveys in due course, plus of course information on the EEG Visualising Pendant's next iteration as it progresses.
Rain Ashford is a PhD student in the Art and Computational Technology Program at Goldsmiths, University of London. Her work is based on the concept of “Emotive Wearables” that help communicate data about ourselves in social settings. This research and design exploration has led her to create unique pieces of wearable technology that both measure and reflect physiological signals. In this Show&Tell talk, filmed at the 2013 Quantified Self Europe Conference, Rain discusses what got her interested in this area and one of her current projects – the Baroesque Barometric Skirt.
Today’s post comes to us from Rajiv Mehta, our longtime friend and co-organizer of the Bay Area Quantified Self Meetup group. Rajiv is also leading the team behind UnfrazzledCare, a media and application development company focused on the caregiving community.
“What lessons have we learned through Quantified Self meetings and conferences that would benefit entrepreneurs looking to enter this space?” That’s what I was asked to comment on at a recent event on Quantified Self: The Next Frontier in Mobile Healthcare organized by IEEE and TiE. The workshop took place on September 19, 2013, almost exactly five years after the first QS meetup, naturally leading to a theme of 5 years and 5 lessons.
The 5 themes I discussed were:
- How difficult it is to get an accurate measure on the “market size” for self-tracking, though according to some measures it is a very common activity.
- The importance of and excitement surrounding new sensor technologies, but also what we have learned about our in-built human sensors and the challenges of making sense of the data.
- The need to treat feedback loops with caution; that thoughtful reflection is sometimes better than quick reaction.
- About engagement and motivation, about how so many are drawn to QS through a desire to change their own behaviors, and how QS experiences match behavior science research.
- The value of self engagement, and how self-trackers often learn something even when their experiments aren’t successful.
My slides include my talking points, in small text below the slides. If you view this full-screen, you should be able to read the small text.
Several other QS regulars participated in this workshop. Rachel Kalmar, who runs the Sensored meetup group and is a data scientist with Misfit Wearables, gave a keynote on some of the technology challenges facing those working on the sensing devices. These ranged from the fundamental (“What exactly is a step?”) to prosaic (batteries!), and from business issues (data openness vs competitive advantage) to human issues (accuracy vs wearability). Dave Marvit, of Fujitsu Labs, shared some of their work on real-time stress tracking and his thoughts on the issue of “quantifying subjectivity”. Sky Christopherson, of Optimized Athlete, told the audience of his own health-recovery through self-tracking and how he helped the US women’s track cycling team to a dramatic, silver-medal performance at the London Olympics. QS supports his passion for “data not doping” as a better route to athletic excellence. And Monisha Perkash showed off Lumoback.
The second QS European 2013 Conference is coming up. We run our QS global meetings as “carefully curated unconferences,” meaning that we make the program out of ideas and suggestions from the registrants, with a lot of thoughtful back-and-forth in advance. Today we’re highlighting Rain Ashford.
Rain is currently a researcher in the Art and Computational Technology Program at Goldsmiths, University of London. She has been experimenting with wearable electronics since 2008. At first her work centered on interactive wearables for music and gaming, but she soon became interested in mood and social behavior. Her curiosity led her to what she calls “physiological responsive wearables.”
Simon Frid moved to California last year because his data told him he was smarter here than in New York. Well, not really. But this funny story begins his journey of figuring out how to track one of the simplest things that we don’t generally know about ourselves: our own posture. Simon designed a wearable sensor shirt with ten built-in accelerometers, and was able to improve his posture significantly from December to January. In the video below, he shares how he trained the shirt to recognize good posture, why he didn’t want immediate feedback, and what question he most wants to ask people. (Filmed by the Bay Area QS Show&Tell meetup group.)
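The talk summary doesn't spell out how Simon trained the shirt, but the core idea of recognizing "good posture" from a set of accelerometers can be sketched minimally: calibrate a reference from readings taken while sitting well, then flag later readings by their distance from that reference. Everything below, including the function names and the threshold, is an illustrative assumption rather than his actual method.

```python
# Hedged sketch of the posture-shirt idea: average a few 10-accelerometer
# readings taken during good posture to get a reference, then classify new
# readings by Euclidean distance from it. Threshold is invented.
import math

def calibrate(samples: list[list[float]]) -> list[float]:
    """Average several readings (one value per accelerometer) taken
    while deliberately sitting with good posture."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def is_good_posture(reading: list[float], reference: list[float],
                    threshold: float = 0.5) -> bool:
    """True if the reading is within `threshold` of the calibrated reference."""
    dist = math.sqrt(sum((r - ref) ** 2 for r, ref in zip(reading, reference)))
    return dist <= threshold

reference = calibrate([[0.0] * 10, [0.1] * 10])  # toy calibration samples
print(is_good_posture([0.05] * 10, reference))   # close to reference
print(is_good_posture([0.9] * 10, reference))    # far from reference: slouched
```

Notably, Simon chose delayed rather than immediate feedback, so a classifier like this would log its verdicts for later review instead of buzzing the wearer in the moment.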
Eric Boyd, a long-time QS member and now part of the Toronto QS Show&Tell meetup group, has a new project. It’s called HeartSpark, and it’s a heart-shaped pendant which flashes little LED lights in time with your heart beat. HeartSpark and Eric (video below) were featured on Engadget today – congrats!
Thanks to @faisal_q for posting the link.
Researchers at Concordia University and the University of London have created ‘smart’ clothing, with embedded wireless biosensors that detect your mood and play voices and videos of people you want to hear when you’re feeling sad, upset, excited, or lonely.
From the review in TechNewsDaily:
The new “smart” clothing contains wireless biosensors that measure heart rate and temperature (among other physiological indicators), small speakers, and other electronics that wirelessly connect to a handheld smartphone or PDA. Data from the sensors is sent to the handheld, where it is converted into one of 16 emotional states, which cues a previously set-up database to send the wearer some inspirational message.

These “mood memos” could be a text message, which scrolls on a display on the garment’s sleeve, a video or photograph displayed on the handheld device, or a sound that comes through the embedded speakers. The researchers have made two prototype garments so far, a male and a female version, and plan to display them at museums over the next two years. They are also looking at medical and fashion applications.

The sounds, photos and videos sent to the wearer aren’t arbitrary. Instead, the messages are spoken by a friend or loved one.

“When you first wear the garment, you turn on the device and you tell it what person you want to channel that day,” said Barbara Layne, professor at Concordia University and co-developer of the garments. “That could be your lover who’s away, it could be your deceased parent, your best friend, whoever you want to be with that day.”

The multimedia is pre-loaded into a database for each person the wearer wants to virtually hang out with.

“[At] multiple times during the day, you can set it for as many times as you want, [the garment] will take your biometric readings, your bio-sensing data, analyze it on that emotional map and then go up to the Internet, to the database that relates that emotional state, and bring you back something that you need,” Layne said.
Thanks to Lyn Jeffery at IFTF for the pointer.
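The review doesn't say how the garment's "16 emotional states" are actually computed, but one plausible reading of an "emotional map" built from two signals is a 4x4 grid: bin heart rate and skin temperature into four levels each, and use the resulting cell as the state index that keys the mood-memo database. The sketch below is that guess made concrete; all thresholds and names are invented for illustration.

```python
# Hypothetical sketch of a 16-state "emotional map": bin two physiological
# signals into four levels each, giving a 4x4 grid of state indices 0..15.
# The lo/hi thresholds here are invented, not the researchers' values.

def bin_level(value: float, lo: float, hi: float, levels: int = 4) -> int:
    """Bin a reading into 0..levels-1, clamping outside [lo, hi]."""
    if value <= lo:
        return 0
    if value >= hi:
        return levels - 1
    return int((value - lo) / (hi - lo) * levels)

def emotional_state(heart_rate_bpm: float, skin_temp_c: float) -> int:
    """Return a state index 0..15 from a (heart rate x temperature) grid."""
    hr_level = bin_level(heart_rate_bpm, lo=50, hi=120)
    temp_level = bin_level(skin_temp_c, lo=30, hi=37)
    return hr_level * 4 + temp_level

# The state index would then select which preloaded "mood memo" to play.
print(emotional_state(95, 33.5))
```

In the garment described above, this lookup would run on the handheld, which then fetches the matching message from the wearer's preloaded database.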
Here is a great talk by Ryan Grant from the last QS Show&Tell. Things got especially interesting when Ryan started talking about how the device he is making would allow you to capture, in sound, stills, and video, moments of your life that had already passed.
Below are a few excerpts from the audio transcript to whet your interest.
KK: “Quiet please!”
RG: “Hi, my name is Ryan Grant. I’m the founder of Metascopic, Incorporated. We make a memory assistant in the form of a camera you can wear. It takes tens of thousands of pictures during the day and records audio as well. I’m really excited we can get it down to a product about this size. It’s very wearable. This has not been done before, and I don’t know why….”
QUESTION FROM AUDIENCE: “Tell us a little more about the specs. What is this? Is this a still camera, is it video? I’m not really sure what it is yet, can you describe what it hopes to be?”
RG: “The first thing I can tell you is that you are going to get tens of thousands of pictures a day.”
QUESTION FROM AUDIENCE: “Still pictures?”
RG: “Still pictures.”
QUESTION FROM AUDIENCE: “What resolution?”
RG: “XGA resolution. 1024 by 768.”
QUESTION FROM AUDIENCE: “What about the viewpoint?”
RG: “The viewpoint is going to be very wide angle. For some reason nobody is doing this in existing semi-wearables.”
QUESTION FROM AUDIENCE: “How often per minute does that translate to?”
RG: “That translates to a picture every 2 to 5 seconds.”
Now, using a buffer, you could tap the device and capture the moment that had just passed – a kind of TiVo for life.
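That retroactive-capture idea is essentially a ring buffer: keep the most recent frames in a fixed-size queue, and on a tap copy the queue out as the saved moment. At one picture every 2 to 5 seconds, even a small buffer covers several minutes. The class and parameter names below are my own illustration, not Metascopic's design.

```python
# Sketch of "TiVo for life": a ring buffer of recent frames, dumped on tap.
from collections import deque

class MomentBuffer:
    def __init__(self, seconds_back: int = 300, frame_interval: int = 3):
        # Enough slots to cover the trailing window; deque's maxlen makes
        # the oldest frame fall off automatically as new ones arrive.
        self.frames = deque(maxlen=seconds_back // frame_interval)

    def record(self, frame) -> None:
        """Called once per capture interval by the camera loop."""
        self.frames.append(frame)

    def tap(self) -> list:
        """Wearer taps the device: return the moment that just passed."""
        return list(self.frames)

buf = MomentBuffer(seconds_back=30, frame_interval=3)  # keeps last 10 frames
for i in range(25):
    buf.record(f"frame-{i}")
print(buf.tap())  # only the 10 most recent frames survive
```

The same structure works for the audio stream: record continuously into the buffer, and only a tap promotes the trailing window to permanent storage.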