QSEU14 Breakout: Data Aggregators
Ernesto Ramirez
June 10, 2014
Today’s post comes to us from Kouris Kalligas and Erik Haukebø, who led the Data Aggregators breakout session at the 2014 Quantified Self Europe Conference. Data aggregation methods and platforms are becoming commonplace as the range of data collection tools continues to grow, and individuals are continually looking for ways to combine and analyze disparate data sets. In this breakout session, conference attendees discussed some of the concerns and benefits associated with data aggregation platforms. You’re invited to read the description of the session and then join the discussion on the QS Forum.
QS Data Aggregators
by Kouris Kalligas & Erik Haukebø
The main topics raised by participants in the breakout session were the use cases of aggregator platforms, privacy concerns, interoperability of interfaces, the ontology of the data they import, automation of the added value they provide, and the machine learning that comes with it.
Use of aggregator platforms: A common theme in the discussion was that the added value of aggregator platforms lies in putting the data being integrated and aggregated into context. For example, steps, calories consumed, and deep sleep have to be contextualized by an aggregator platform to allow for better interpretation, better use, and, eventually, some predictive capacity. Everybody agreed that predictions might be a long shot.
Privacy concerns: There was general consensus around privacy: how data are being used, who should use them, for what purposes, and how safe they are in the context of an aggregator platform. There was no common ground on a solution, but these concerns were echoed by all attendees.
Interoperability: There was a discussion on how a datapoint reaches an aggregator platform. That is, where does the data come from? Which platform(s) can it travel to? And where can it move from there? Exploring the “journey” of a datapoint is an interesting thought experiment, as it has not really been analyzed. As we move towards a world integrated with the Internet of Things, aggregator platforms have to think about this journey and where they fit in.
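As a purely illustrative sketch (not something proposed at the session), one way an aggregator could make a datapoint’s journey explicit is to attach a provenance trail to each record. The class and field names below are hypothetical, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Hop:
    """One stop on a datapoint's journey, e.g. device -> vendor cloud -> aggregator."""
    platform: str          # hypothetical name of the platform holding the data
    received_at: datetime  # when that platform received the datapoint

@dataclass
class DataPoint:
    """A single tracked value plus the trail of platforms it has passed through."""
    metric: str
    value: float
    recorded_at: datetime
    journey: List[Hop] = field(default_factory=list)

    def forward_to(self, platform: str) -> None:
        # Record each transfer so the full journey stays inspectable by the user.
        self.journey.append(Hop(platform=platform, received_at=datetime.utcnow()))

# Example: a step count travels from a wrist device to a vendor cloud to an aggregator.
point = DataPoint(metric="steps", value=412, recorded_at=datetime.utcnow())
point.forward_to("vendor-cloud")
point.forward_to("aggregator-platform")
```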
Ontology of data: By ontology, we mean that the data or metrics being integrated from different tracking sources should have the same terminology and expressions. A step from device X is not the same as a step from device Y. Looking to the future, there should be common ground on which data and metrics we use and what we mean by them. Standardization of personal data metrics may help platforms that are handling many different types of data from different devices and services.
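To make the standardization idea concrete, here is a minimal, hypothetical sketch of how an aggregator might map vendor-specific metric names and units onto one shared vocabulary. The mapping table, device names, and conversion factors are assumptions for illustration, not an existing standard.

```python
# Hypothetical mapping: (source, vendor metric name) -> (canonical name, conversion factor).
CANONICAL_METRICS = {
    ("device_x", "step_count"): ("steps", 1.0),              # device X reports whole steps
    ("device_y", "stride_events"): ("steps", 0.5),            # device Y counts each leg separately (illustrative)
    ("device_x", "sleep_deep_min"): ("deep_sleep_minutes", 1.0),
    ("device_y", "deep_sleep_sec"): ("deep_sleep_minutes", 1 / 60),
}

def normalize(source: str, metric: str, value: float) -> tuple[str, float]:
    """Translate a vendor-specific reading into the aggregator's canonical metric and unit."""
    canonical_name, factor = CANONICAL_METRICS[(source, metric)]
    return canonical_name, value * factor

# Example: the same night of deep sleep reported by two devices ends up in one unit.
print(normalize("device_x", "sleep_deep_min", 95))    # ('deep_sleep_minutes', 95)
print(normalize("device_y", "deep_sleep_sec", 5700))  # ('deep_sleep_minutes', 95.0)
```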
Automation: It was a clear, if rather obvious, remark that for aggregator platforms to provide the added value they can, they should begin to use machine learning to help users learn from data that is automatically collected by devices and sensors. If the quality of the data depends on manual entry, then applying machine learning may not provide insights that are meaningful or representative.
If you’re interested in keeping this conversation about data aggregation going, you’re invited to join the discussion on the QS Forum.