Thanks to Nathan Yau of FlowingData for the heads-up on this. Nathan writes:
The wheel stores energy when you pedal and brake, and turns on auto pilot through your iPhone when you’re feeling lazy. Your iPhone is also used to switch gears and lock and unlock your bike.
On top of that, or rather, inside the wheel, there are sensors for torque, noise, carbon monoxide, nitrogen oxide, and location. Look back on the environment around you, from your data’s point of view, and optionally, share your data with the community to contribute to a closer view of your town.
I love this idea of passively capturing data while you cycle. There is so much environmental data available to us all the time – temperature, ambient noise, light levels, pollutants – why do we not have devices to easily capture all this information?
Earlier this week I had a chance to drop in on Nathan Eagle’s presentation at ETech about using the Bluetooth feature on mobile phones to keep track, not only of where people are, but who happens to be nearby. This research is part of the larger Human Dynamics Group at MIT run by Sandy Pentland.
Eagle gave a great talk, which led me to read the description of his research at the Reality Mining site. Here is one statement that jumped out:
[O]ur ultimate goal is to create a predictive classifier that can learn aspects of a user’s life better than a human observer (including the actual user)…
Can our devices know us better than we know ourselves? It seems obvious that this must be true. Human self-knowledge is plagued by all kinds of limits: bias, sampling error, memory failure, and lack of sufficient processing power to recognize complex patterns. Machines do not suffer from the first three of these limits, and the last is under steady assault from Moore’s law. But for computers to help us know ourselves better, they need two things: better data, and new analytical tools for transforming this data into predictions. These are problems that the Reality Mining researchers (among others) are trying to tackle.
In the experiment he described at ETech, Eagle’s group gave 100 MIT students free use of a Nokia smart phone in exchange for being tracked whenever the phone was turned on. Some filled out questionnaires, others kept diaries.
In return for the use of the Nokia 6600 phones, students have been asked to fill out web-based surveys regarding their social activities and the people they interact with throughout the day. Comparison of the logs with survey data has given us insight into our dataset’s ability to accurately map social network dynamics…. Additionally, a subset of subjects kept detailed activity diaries over several months. Comparisons revealed no systematic errors with respect to proximity and location, except for omissions due to the phone being turned off.
Proving that people can be effectively tracked using low-power Bluetooth transmissions has a certain technical interest, but of course the true power of this work lies in beginning to understand what kinds of things can be learned from such tracking. Eagle and his colleagues, for instance, found it easy to predict when two people were likely to encounter each other, as long as the users had fairly regular habits:
In contrast to previous work that requires access to calendar applications for automatic scheduling [Roth and Unger (2000)], we can generate inferences about whether a person will be seen within the hour, given the user’s current context, with accuracies of up to 90% for ‘low entropy’ subjects.
By ‘low entropy,’ the researchers mean ‘easily predictable.’ Their claim, then, is that their system can predict social behavior among people who are easily predictable. That might seem the very definition of trivial, but it’s not as pointless as it sounds: such a result functions as a kind of system tuning, a check that the basic parameters of Bluetooth tracking and social prediction are plausible. Once you know the approach works on the easy cases, you can start building the more interesting analytical tools needed to get surprising results.
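To make the ‘low entropy’ idea concrete, here is a minimal sketch (not the Reality Mining group’s actual method) of Shannon entropy computed over a sequence of discrete location labels. The location names and sample sequences are made up for illustration; the point is only that a routine-bound subject produces a low-entropy sequence, while an erratic one produces a high-entropy sequence.

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of a sequence of discrete observations."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical day of hourly location labels for two subjects.
regular = ["home"] * 8 + ["office"] * 9 + ["gym"] * 1 + ["home"] * 6
erratic = ["home", "cafe", "office", "gym", "bar", "park", "office",
           "home", "cafe", "lab", "gym", "home", "office", "bar",
           "park", "cafe", "home", "lab", "office", "gym", "bar",
           "home", "cafe", "park"]

print(shannon_entropy(regular))  # low: a routine is easy to predict
print(shannon_entropy(erratic))  # high: behavior is hard to predict
```

A subject whose day cycles through the same two or three places scores close to one bit per observation; someone bouncing among many places scores near the maximum, and any predictor will do correspondingly worse.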
Research is being pursued to develop a new infrastructure of devices that are not only aware of each other but are also infused with a sense of social curiosity. Work is ongoing to create devices that attempt to figure out what is being said, infer the type of relationship between two people, and even suggest additional subjects to discuss. These devices see what the user sees, hear what the user hears, and are beginning to learn patterns in people’s behavior. This enables them to make inferences regarding whom the user knows, whom the user likes, and even what the user may do next. Although a significant amount of sensing and machine perception is required, it will be only a matter of a few years before this functionality is realized on standard mobile phones.
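What might an inference about “whom the user likes” look like in practice? Here is a deliberately crude sketch, with invented data and an invented heuristic (neither comes from the Reality Mining project): rank people by how often they appear in Bluetooth scans outside work hours, on the assumption that off-hours co-presence suggests friendship rather than mere coworker proximity.

```python
from collections import Counter

# Hypothetical proximity log: (hour_of_day, person_nearby) pairs drawn
# from periodic Bluetooth scans. Names and the off-hours cutoff are
# illustrative assumptions, not the researchers' actual model.
scans = [
    (10, "Ana"), (11, "Ana"), (14, "Ana"), (15, "Ana"),      # workday only
    (12, "Mike"), (19, "Mike"), (20, "Mike"), (21, "Mike"),  # evenings too
]

def rank_contacts(scans, work_hours=range(9, 18)):
    """Rank contacts by co-presence outside work hours."""
    off_hours = Counter(person for hour, person in scans
                        if hour not in work_hours)
    return off_hours.most_common()

print(rank_contacts(scans))  # [('Mike', 3)] — only Mike shows up off-hours
```

A real system would need far richer features (location type, conversation, duration), but even this toy heuristic separates a colleague from a friend in the sample data.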
To perform these experiments, more than 100 subjects on the MIT campus will be needed. That’s where you come in:
While Symbian Series 60 phones have become a standard for Nokia’s high-end handsets, they represent a small fraction of today’s Bluetooth devices. We are in the final stages of developing a MIDP (Java) version of the BlueAware application that will run on a wider range of mobile phones. The final test of Serendipity will be its public launch on www.mobule.net. We hope that not only will the application prove to be robust, but also quite popular within the realms described above, as well as those unanticipated.
The Mobule site does not seem to be functional yet, though there is a light description of the next phase of the project here, where it is described as a social introduction service. If your phone knows who is in your proximity, it can match profiles and make introductions. To me, this application seems boring and redundant. The world has gone crazy for social networking, but I don’t want new ways to make social and business contacts. There is a lot of fear that social tracking will simply be a new channel for exploitative marketing, oppressive government tracking, and annoying, spam-like requests for “friendship.” In some ways, the Reality Mining group is underselling their own interesting discoveries, because the promise of a new understanding of our social behavior goes beyond this impoverished definition of “networking.”
Another section of the site offers a clue as to the more interesting applications:
In collaboration with Push Singh and Bo Morgan, we have created an interactive, automatically generated diary application which will allow users not only to query their own life (i.e., “When was the last time I had lunch with Mike? Where were we? Who else was there? What did I do next?”) but also (after a few months of training data) visualize the model’s predictions about upcoming behavior in the immediate future.
The reference to Push Singh and Bo Morgan offers a clue that this work goes deeper than finding friends or hustling sales. The question “what did I do next?” is easily transformed into a prediction about “what will I do next?” Or how about “what should I do next?” The day when we consult devices for advice is closer than we think. It already works in the stock market, and in many expert systems. Many of our decisions are less complex; but until now, both the data and the models have been missing. Eagle’s work is part of a growing set of efforts that will help fill that gap.
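The queryable-diary idea is easy to sketch. The record layout and names below are my own invention, not the actual Reality Mining schema: each Bluetooth scan becomes a timestamped sighting of a place plus the people nearby, and “When was the last time I was with Mike?” becomes a simple query over that log.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record: one proximity observation from a phone's periodic
# Bluetooth scan (fields are illustrative).
@dataclass
class Sighting:
    when: datetime
    place: str
    nearby: frozenset  # Bluetooth IDs resolved to names

log = [
    Sighting(datetime(2005, 3, 14, 12, 30), "cafeteria", frozenset({"Mike", "Ana"})),
    Sighting(datetime(2005, 3, 14, 14, 0), "lab", frozenset({"Ana"})),
    Sighting(datetime(2005, 3, 21, 12, 15), "cafeteria", frozenset({"Mike"})),
]

def last_seen_with(log, person):
    """'When was the last time I was with <person>? Where? Who else?'"""
    hits = [s for s in log if person in s.nearby]
    if not hits:
        return None
    latest = max(hits, key=lambda s: s.when)
    return latest.when, latest.place, latest.nearby - {person}

print(last_seen_with(log, "Mike"))
```

The prediction side (“what will I do next?”) would train a model over the same log; the diary query is just the retrieval half of that system.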
In his talk, he spoke of getting the next phase of his experiment going with 100,000 users.