Today’s post comes to us from Floris van Eck. At the 2014 Quantified Self Europe Conference Floris led a breakout session on a project he’s been working on, The Imaging Mind. As imaging data become more prevalent, it becomes increasingly important to discuss the social and ethical considerations that arise when your image is stored and used, sometimes without your permission. As Floris described the session,
The amount of data is growing and with it we’re trying to find context. Every attempt to gain more context seems to generate even more imagery and thus data. How can we combine surveillance and
sousveillance to improve our personal and collective well-being and safety?
We invite you to read Floris’ great description of the session and the conversation that occurred around this topic, then join the discussion on our forum.
Imaging Mind QSEU Breakout Session
by Floris Van Eck
Imaging Mind Introduction
Imaging is becoming ubiquitous and pervasive, as well as augmented. This artificial way of seeing is quickly becoming our ‘third eye’. Just as our own eyes view and build an image and its context through our minds, so too does this ‘third eye’ create additional context while building an augmented view through an external mind powered by an intelligent grid of sensors and data. This forms an imaging mind, and it is also what we are chasing at Imaging Mind: all the roads, all the routes, all the shortcuts (and the marshes, bogs and sandpits) that lead to finding this imaging mind. To understand the imaging mind is to understand the future. And to get there we need to do a lot of exploring.
The amount of available imagery is growing and alongside that growth we try to find context. Every attempt to gain more context seems to generate even more imagery and thus data. We are watching each other while being watched. How can we combine surveillance and sousveillance to improve our personal and collective wellbeing and safety? And what consequences will this have for privacy?
Our break-out session, with about 15 people, started with a brief presentation about the first findings of the Imaging Mind project (see slides below). As an introduction, everyone in the group was then asked to take a selfie and use it to quickly introduce themselves. One person didn’t take a selfie as he absolutely loathed them. Funnily enough, the person next to him included him in his selfie anyway. It neatly illustrated the challenge for anyone who wants to control which pictures of them are shared online: it will become increasingly difficult to keep yourself offline. This leads us to the first question: What information can be derived from your pictures now (i.e. from the selfies we started with)? If combined and analyzed, what knowledge could be discovered about our group? This was the starting point for our group discussion.
Who owns the data?
Images carry a lot of metadata, and additional metadata can be derived by intelligent imaging algorithms. As those algorithms improve, new context can be derived from old pictures. Will we be haunted by our pictures as they document more than intended? This led to the question “who uses this data?” People in the group were most afraid of abuse by governments and less so by corporations, although the latter was still a concern for many.
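As a toy illustration of what such derived context can look like (all coordinates and timestamps below are invented, not data from the session), a few lines of Python show how the location and time fields routinely embedded in a photo’s EXIF metadata could, in aggregate, suggest where someone lives:

```python
from collections import Counter
from datetime import datetime

# Invented photo metadata: (ISO timestamp, latitude, longitude) — the kind
# of fields a camera or phone embeds in each picture's EXIF data.
photos = [
    ("2014-05-10T23:15:00", 52.3702, 4.8952),
    ("2014-05-11T07:40:00", 52.3703, 4.8951),
    ("2014-05-11T13:05:00", 52.0907, 5.1214),  # daytime, somewhere else
    ("2014-05-12T22:50:00", 52.3701, 4.8950),
]

def infer_home(records, grid=0.01):
    """Guess a 'home' cell: the roughly 1 km location grid cell that
    appears most often in photos taken at night (before 8:00 or after
    20:00), when most people are at home."""
    cells = Counter()
    for ts, lat, lon in records:
        hour = datetime.fromisoformat(ts).hour
        if hour < 8 or hour >= 20:
            cell = (round(lat / grid) * grid, round(lon / grid) * grid)
            cells[cell] += 1
    return cells.most_common(1)[0][0] if cells else None

print(infer_home(photos))  # the cell around the three night-time photos
```

This is deliberately crude, but it makes the point of the discussion concrete: none of the individual pictures says “this is my home,” yet combining a handful of them already does.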
People carrying a wearable camera gather data about other people without their consent. Someone remarked that this is the first time that the outside world is affected. Wearable cameras used in public are not about the Quantified Self, but about the ‘Quantified Us’. They are therefore not only about self-surveillance; they can also be part of a larger surveillance system. The PRISM revelations by Edward Snowden are an example of how this data can be mined by governments and corporations.
How are wearable cameras different from omnipresent surveillance cameras? The general consensus here was that security cameras are mostly sandboxed and controlled by one organisation, so the chance that their imagery ends up on Facebook is very small. With wearable devices, people are more afraid that others will publish pictures in which they appear without their consent. This can be very confronting when combined with face recognition and tagging.
One thing everyone agreed on is that pictures often give a limited or skewed context. Say you point at something and that moment is captured by a wearable device. Depending on the angle and perspective, it could look like you were physically touching someone, which could seem very compromising when not placed in the right context. Devices that take 2,000 pictures a day greatly increase the odds of this happening.
New social norms
One of the participants asked me about my Narrative camera. I wasn’t the only one wearing it, as the Narrative team was also in the break-out session. Did we ask the group for permission to take pictures of them? In public spaces this wouldn’t be an issue, but we were in a private conference setting, and some people were bothered by it. I mentioned that I could take it off if people asked me, as stated by Gary in the opening of the Quantified Self Conference. This led to a discussion of social norms. Everyone agreed that the advent of wearable cameras calls for new social norms. But which social norms do we need? This is a topic we would like to discuss further with the Quantified Self community in the online forum and at meetups.
Capturing vs. Experiencing
We briefly talked about events like music concerts. Many people in the group said they were personally annoyed that so many concertgoers are occupied with ‘capturing the moment’ on low-quality imaging devices like smartphones and pocket cameras instead of dancing and ‘experiencing the moment’. Could wearable imaging devices be the perfect solution to this problem? The group thought some people enjoy taking pictures as an activity in itself, so for them nothing will change.
Wearable cameras create a sort of ‘visual memory’ that can be very helpful for people with memory problems such as Alzheimer’s disease or dementia. An image or piece of music often triggers a memory that could otherwise not be retrieved. This is one of the positive applications of wearable imaging technology, and the Narrative team has received some customer feedback that seems to confirm it.
Combining Imaging Data Sets
How can multiple imaging data sets be combined without hurting the privacy of the people in them? We talked about this question for a long time. Most people object strongly to mass surveillance and agree that permanently combining imaging data sets is not desirable. But what about temporarily? Someone in the group mentioned that the Boston Marathon bombers were identified using footage submitted by people on the street. Are we willing to sacrifice some privacy for the greater good? More debate is needed here, and I hope the Quantified Self community can tune in and share their vision.
One interesting project I mentioned at the end of the session is called “Gorillas in the Cloud”, by Dutch research institute TNO. The goal of “Gorillas in the Cloud” is to take a first step toward bringing people into richer and closer contact with the astonishing world of wildlife. The Apenheul Zoo wants to create a richer visitors’ experience, but the project also offers unprecedented possibilities for international behavioural ecology research by providing online and non-intrusive monitoring of the Apenheul gorilla community in a contemporary, innovative way. “Gorillas in the Cloud” provides an exciting environment to innovate with sensor network technology (electronic eyes, ears and nose) in a practical way. Are these gorillas the first primates to experience the internet of things, surveillance and the quantified self in full force?
We invite you to continue the discussion on our forum.