Me and My Log
Topics
social life & social media
Cathal Gurrin
Cathal Gurrin is a researcher at Dublin City University and the University of Tsukuba, and an expert in the field of visual and data-driven lifelogging. Since 2006 he has passively captured over 14 million images from different wearable cameras, and together with his other sensors he is nearing 1TB of self-tracking data per year. In this talk, Cathal describes what he’s learned over the last eight years and what he’s working on in his research group, including search engines for lifelogging as well as privacy and storage issues.
Tools
Google Glass | phone
Links
Slides
Transcript
So good morning. My name is Cathal Gurrin, from Dublin City University in Ireland and the University of Tsukuba in Japan. My contact details: cathal@gmail.com, and my Twitter handle is @cathal.
So I’m here to talk about lifelogging and capturing large archives of lifelog data. I’m particularly interested in lifelogging as the automatic and continuous sensing of life experience into a secure database which can support organization, personal access, retrieval, and intervention, in effect creating a digital memory. We’re interested in synergy, not substitution: the digital memory should augment human memory, not replace it.
So this is kind of like the Quantified Self practice we’re all used to here, in terms of using things like the Basis watch, but instead we’re using different kinds of sensors that capture visually what a person is doing, and building up large archives of life data from a visual perspective. Generating, as I said, massive archives; 14 million images I’ve gathered since 2006. They’re not really photographs, because we don’t particularly look at them; they are images for processing software to extract meaning from, and that’s our key challenge.
So we’re trying to externalize life experiences, and all of these are taken by Google Glass, using sensors and software to understand that I’m looking at a book here, drinking coffee, or giving a lecture. That kind of analysis happens in real time over visual streams from Google Glass and various devices.
That allows us to make new user experiences and data access mechanisms, like the colour of life: understanding colour is understanding your days in terms of visual diaries. Or understanding your activities inside your home city: where you went and what you were doing at all those different places, over long periods of time, automatically.
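As a toy sketch of how such a colour-of-life view could be computed (an illustration, not the actual system from the talk, and assuming each day’s images are available as RGB arrays), one could reduce every day to a single representative colour, so a year becomes a strip of daily colour swatches:

```python
# Toy "colour of life": one average colour per day of lifelog images.
# Assumes each image is an HxWx3 RGB numpy array; purely illustrative.
import numpy as np

def colour_of_day(images: list[np.ndarray]) -> tuple[int, int, int]:
    """Mean RGB colour across all of one day's images."""
    per_image = np.array([img.reshape(-1, 3).mean(axis=0) for img in images])
    r, g, b = per_image.mean(axis=0).round().astype(int)
    return int(r), int(g), int(b)

def colour_of_life(days: dict[str, list[np.ndarray]]) -> dict[str, tuple[int, int, int]]:
    """Map each date string (e.g. '2014-05-10') to its representative colour."""
    return {date: colour_of_day(imgs) for date, imgs in days.items()}
```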
That gives us quite a lot of data: about 2 million passively captured images every year from something like the Narrative Clip, and in my case, now using Google Glass, about 500 thousand actively chosen pictures.
It’s quite a lot of data, as you can imagine, and if you move towards video it would be even more. In 2006 I started with a small camera called the Microsoft SenseCam, and since then I’ve grown into many, many different devices being used all the time to gather data: mobile phones, Google Glass, and now devices like the Narrative Clip or the Autographer.
So the data is getting quite big; it looks a bit exponential, starting off with small quantities, and then as devices get better, quantities grow and sizes grow, and much more data gets gathered. You would think that would be a problem, and I would have assumed it was a problem, but if you think about it, the iPhone 5S is 16,000 times more powerful than the Apollo 11 guidance computer.
So what you are looking at is exponential growth in computing which actually isn’t stopping. What isn’t possible now may be possible in two or three years, and I’m not worried about the data requirements.
Okay, so why?
Why would anybody do this? That’s a good question. So I’m a researcher and I develop search engines; my Ph.D. was in search engine design. And starting off with this data, we found that 75% of the time you can’t find what you’re looking for just by browsing through lifelog data; we have to do search.
Another reason why is that I want to extend my memory as much as I can. Based on 23andMe, I have twice the probability of Alzheimer’s of probably anybody else in this room, and I don’t trust medical discoveries; I’d rather do it myself and gather as much of my own data as I can.
A third why is being more productive: self-improvement through knowledge and using memories. Better health, productivity, learning, sharing; many benefits are described in the book Your Life, Uploaded by Gordon Bell. A wonderful book if you get a chance to read it.
In terms of learning, what have I learned, because I’ve done this for almost eight years now? A few things. To access the data we need abstraction, because the data’s coming in in massive quantities and we have to summarize and organize it automatically. Software has to do this on our behalf; we can’t do it ourselves. If we don’t do that, we get something like this:
Constant streams of thousands and thousands of images a day. Isn’t it so much better to take this data and make this? And this is the simplest thing you can do: event segmentation models over streams of lifelog data.
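As a rough sketch of what event segmentation over an image stream can look like (the histogram features and threshold here are illustrative assumptions, not the models from the talk), you can compare simple colour histograms of consecutive images and start a new event whenever the scene changes sharply:

```python
# Sketch of event segmentation: split a chronological image stream into
# "events" wherever consecutive colour histograms differ sharply.
# Features and threshold are illustrative, not the talk's actual models.
import numpy as np

def colour_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Normalized per-channel colour histogram of an HxWx3 image."""
    counts = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
              for c in range(3)]
    h = np.concatenate(counts).astype(float)
    return h / h.sum()

def segment_events(images: list[np.ndarray], threshold: float = 0.3) -> list[list[int]]:
    """Return events as lists of image indices, in chronological order."""
    events, current = [], [0]
    prev = colour_histogram(images[0])
    for i in range(1, len(images)):
        h = colour_histogram(images[i])
        # A large L1 distance between histograms suggests a scene change
        if np.abs(h - prev).sum() > threshold:
            events.append(current)
            current = []
        current.append(i)
        prev = h
    events.append(current)
    return events
```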
Other learnings: lifelogging doesn’t sense enough yet. Using cameras I can sense when I’m meeting someone, but I can’t understand whether I like it or not. I can’t sense my emotions or my moods, and hopefully we will get to the point of having better sensors in the future to do this.
Other learnings: the data contains the truth about what I did. Not the version I put on Facebook, but the truth about what I did. So you have to protect that data, because that data is really valuable; valuable to people who don’t like you. And that’s a problem, and I’ll come back to it again later on.
In terms of learning, the lifelogger should be separated from their data, because the data can be problematic. I can capture a colleague writing exam questions on a board, and that goes into my lifelog, and that can be before the exam takes place; so we need secure hosting that separates me from my data.
Another learning: your privacy and mine. If I wear a camera right now, whose privacy is being invaded? Yours, but it’s also the case that my private data gets gathered and millions of images are put into the archive, so who has the most to lose?
Another point I’d like to make, with just two or three points left: I’m too lazy to curate, and I don’t want to comment or add annotations; it has to happen automatically at capture time. And people in the real world don’t mind wearable cameras as much as you’d think they would when you interact with them outside.
Now a final learning point: four reasons we think people access lifelogs. To reminisce with your friends. To reflect on yourself and understand yourself better. To recall and retrieve, like where you parked your bicycle. And to remember intentions, to remember what you wanted to do. And the benefit of having all this is that it outweighs the cost.
Issues: privacy. We should have access policies that allow us to access only what we need to access at any point in time. So this is my archive, and parts of it can be blacked out if needed. You’re missing the pictures behind me because the lighting’s a bit weird.
Issues and concerns: security.
It’s a very tempting target for someone who wants access, because your lifelog has everything you’ve ever done; your credit card number, everything goes in there when you’re capturing data.
Issues and concerns: sharing and trust. If you are a lifelogger and you give your partner access to your lifelog, the times when it’s not on pose some questions: why was it off, what were you doing, and so on. It’s very interesting; I didn’t expect it to happen, but it happens.
So, future plans in our research and what else we want to do, because I’m from a research group in Dublin: real-time lifelogging using Google Glass and equivalent kinds of devices, where you can do real-time analytics of what you’re doing and then use that for building better-quality search engines.
Privacy by design. That’s where we want to build software that takes the privacy aspects into account; the best questions I get asked are about privacy. So we’re trying to build software that will do that by having negative face detection: taking out the faces you don’t know, and having policies based around that.
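A minimal sketch of what negative face detection could look like, using OpenCV’s stock face detector; the whitelist matcher is_known is a hypothetical stub standing in for a real face-recognition model, not part of the system described in the talk:

```python
# Sketch of "negative face detection": blur every face NOT on a
# whitelist of known people. is_known() is a hypothetical stub; a real
# system would back it with a face-recognition model and a policy layer.
import cv2
import numpy as np

# Stock frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_known(face_crop: np.ndarray) -> bool:
    """Stub: return True if this face matches someone on the whitelist."""
    return False  # default policy: treat every face as unknown

def redact_unknown_faces(image: np.ndarray) -> np.ndarray:
    """Blur all detected faces that are not recognized as known people."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = image[y:y + h, x:x + w]
        if not is_known(face):
            image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return image
```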
Artificial intelligence-based analytics. We can’t write enough rule bases to do this; we need software based on AI to understand the data. There’s a lot of computer science Ph.D. work built into the backend, and it’s really not straightforward to do. And new technologies have always raised concerns. People get freaked out by Google Glass, but all of these earlier technologies raised concerns too, and all of them are now in your pocket, in your smartphone.
We accepted those technologies as we progressed through them, and I think it will be the same here.
So we are looking for people in Dublin to prototype some of the software we’re working on: real-time analytics if you’ve got Google Glass, or even if you just have a smartphone we can still do this kind of thing with new technologies. So if you want to talk to us, that would be great.
Thank you.