What did you do? How did you do it? What did you learn?
These are the questions that inspire every Quantified Self conference. We’ve been working for many months to organize the 2018 meeting, and now we’re ready to open QS18 for public registration. The conference this year is in Portland, Oregon – we hope you’ll join us!
QS18 is what we call a “Carefully Curated Unconference.” We’ll have over 100 individual sessions, all of which are proposed and led by conference attendees. We work closely with all the participants in advance, based on what we know of your projects, work, and interests. The final program lineup is released a few days before the event. So please let us know what you’re working on when you register.
A warm thank you to Ziba Design in Portland, OR, whose beautiful building will be the setting of this year’s meeting. Due to the size of the venue, attendance is strictly limited to 300 people. We have a limited number of early bird tickets available for a reduced price. So please don’t delay.
We are happy to have a guest post from Bastian Greshake Tzovaras, the director of research at the Open Humans project, on a new way to share personal data analysis methods. Read to the end to learn about a data analysis contest happening this month. Bastian can be found online at @gedankenstuecke. -Steven
The Quantified Self community builds its collective knowledge from individuals sharing insights gleaned from their own n-of-1 data. Not only do we learn from these projects, we also get inspired to do the same or similar projects of our own. But it’s easy to get tripped up when trying to do the same analysis on your own data. Is your input data in the same format? Are you running the code on the same operating system? Can you get all the dependencies installed? What if you have never really written code before or executed analysis scripts?
In the realm of academic science these issues are grouped under the label “reproducibility”. One solution to many of them is Jupyter Notebooks, which can be used to share code for analyzing data. JupyterHubs make it easy to host these notebooks online and overcome the difficulties that come with different operating systems, software packages, etc. Open Humans, a non-profit foundation that helps people donate their data to research, is using this technology to make the analysis of self-collected data reproducible for other members of the Quantified Self community.
We just released Open Humans’ Personal Data Notebooks. These are run in the browser and give people access to the data that they have stored in Open Humans. Data from Fitbit, Apple Health, Moves, Twitter, and a selection of genetic data providers is currently supported. People can write their personal data analysis in Python, R or Julia right in their web browser and see the results there – without having to worry about installing any local packages on their own computer. If you are proficient in any of these programming languages, it is easy to write your data analysis from scratch. If you are unfamiliar with coding in general – or with Python, R or Julia in particular – the Personal Data Notebooks offer well-documented example notebooks that require no modifications, can be run without any prior knowledge, and can serve as a great way to start coding.
For all notebooks the resulting analysis and visualizations can be shared easily with other users who then plug in their own data. We have made it easy to decouple the data analysis from the underlying data. You can share your data analysis code without having to share your personal data itself. Since data sources inside Open Humans are standardized, someone else’s Fitbit data will work just as well as your own.
There are step-by-step guides to get started with Personal Data Notebooks and example notebooks which can analyze your activity data from Fitbit and Apple Health or perform a sentiment analysis of your Twitter data.
To celebrate the launch of the Personal Data Notebooks, Open Humans and Quantified Self are running a notebook competition.
To take part, all you have to do is:
create a data analysis of a data source of your choice with the Personal Data Notebooks
Gary Wolf, Steven Jonas, and Azure Grant of Quantified Self will judge and rank the submitted notebooks. The most interesting notebooks will be highlighted and added to the set of existing samples that are preinstalled for each user. The winning notebooks will be featured here, on the Quantified Self blog. If you want to share and discuss your notebook ideas, the Open Humans community on Slack is eager to have you.
We recently held a symposium where we invited self-trackers, toolmakers, activists, clinicians, scholars, and scientists to explore the impact of everyday science on cardiovascular health.
The video of those talks can be found on our Medium page:
Join us today (April 19th), starting at 9am for a special all-day event about the intersection between Quantified Self and public health. The sessions will look specifically at cardiovascular health and participant-led research. You can view the entire program here.
QS CVD Symposium Live Feed
The Quantified Self Public Health Symposium addresses the role of self-collected data in advancing health. This year’s meeting at the University of California, San Diego brings together invited researchers and advocates from diverse fields – including clinicians, policymakers, technologists, scholars, and community members – to share progress reports and initiate new collaborations. This year’s focus is on self-collected data and cardiovascular health. To request an invitation, please review the QSCVD Program Outline and send a short email to firstname.lastname@example.org explaining your interest.
Here’s an interesting call for papers for citizen scientists by the journal Narrative Inquiry in Bioethics published by Johns Hopkins University Press.
The editors want first-person accounts of ethical issues in citizen science. I’ve been part of many discussions of whether QS is part of citizen science. There are some key differences. The most important reason not to think of QS as citizen science is that most QS projects are not designed to contribute to research problems in a scientific discipline. Instead, they are meant to answer one person’s question. The answer may be interesting to science, it may even make a novel contribution, but the distinction between the disciplinary nature of science and the non-disciplinary nature of QS is too important to ignore. And yet, with all that said, I still think this call for papers is worth disseminating.
First: I know that many people who do QS projects face interesting ethical questions, and some of the thinking associated with this work might be interesting in the more institutional context of citizen science. And second: there are an increasing number of QS projects that take place among small groups; while each person has their own reason to participate, the social nature of the projects brings them closer to the kind of group research typically done by citizen scientists. I’m curious about the ethical issues of doing group projects, and I’d like to know how others are handling them. For the Bloodtesters group that I helped organize, we ended up using a process of ethical reflection we called – only somewhat tongue-in-cheek – “self-consent.” What have you done?
The full call for papers is here: Narrative Inquiry in Bioethics
Narrative Inquiry in Bioethics will publish a collection of personal stories from individuals involved in citizen science research. Citizen science is a growing area in which the lay public is involved in research in dynamic and important new ways. This enables new questions to be asked, new methods to be pursued, and new people to contribute, often without the usual oversight provided by institutions and funding agencies. Citizen scientists do environmental research, animal research, and human research including clinical trials; they also identify photographs and collect other kinds of data.
This movement has implications for traditional science and for human participants in trials run by citizen scientists. Among some of the most challenging and interesting are the ethical implications of this new scientific research.
We want to collect true, personal stories from citizen scientists and those who contribute to citizen science. Please share this invitation and guide sheet with appropriate individuals. In writing your story, please consider one or more of these questions:
- What does citizen science enable that conventional research approaches do not?
- What unique challenges have you faced doing citizen science?
- What ethical issues have you confronted in the conduct of the research?
- Were you able to use existing frameworks (such as Institutional Review Boards) to resolve them, or did you approach resolving the ethical issues in a new way?
- What advice would you have for individuals who are considering conducting their first citizen science project?
- What advice would you have for those who seek to regulate citizen science?
We are happy to welcome this guest post on a community tool by Bastian Greshake Tzovaras. Bastian is the director of research at the Open Humans project. He can be found online at @gedankenstuecke. -Steven
I’ve built a Twitter analysis web application that’s open to everyone to use and learn from. Often the best data for learning something about yourself are data you’ve already collected – sometimes without even being aware of collecting them. Social media activity is a good example. We often send off Facebook posts or tweets with very little thought about the metadata that we generate in doing so. Where was I when I made that post? What time was it? What type of content did it contain? Did I retweet or reply to another person’s post? And, of course, what did my post contain?
This data can be extremely powerful – for others. The language you use in your Tweets can be used to predict your age as well as your income. Twitter uses the data to gather information about your likes, dislikes, and possessions – among other topics. But what if you want to learn about yourself with your own Twitter data?
The tool I created allows anybody to explore their own Twitter archive in detail. First, you’ll want to request your archive from Twitter. It will contain all the tweets you have ever sent, with not only the text but all the metadata as well. To look at these metadata, go to my small web application called TwArχiv (pronounced tw-archive), which allows you to upload your data and explore it using interactive graphs.
For instance, you can see how the nature of the tweets you send change over time. Are you replying more to people than you used to or is it all just retweets by now? For my own data it seems that finishing up my PhD work had quite an impact, starting in late 2016. With less procrastination I wrote fewer unprompted tweets. Instead, replying to people became more central to my Twitter experience.
There is also plenty of research on gender bias in social media usage and whose voices are being amplified, with men being overwhelmingly favored. TwArχiv allows one to do some soul searching on this. It tries to predict the gender of the people you interact with based on their first names and shows you whether your reply and retweet behaviour is gender-balanced.
My own graphs show that I had (and have) a good way to go here. Especially 2010 is wildly off when it comes to the gender representation in my Twitter interactions. What happened during that time? I was politically active in the German Pirate Party, which was infamous for being a “boys club”.
If you have geolocation enabled on your tweets, you can get an idea of where you tweet. With a fully zoomable map, TwArχiv allows you to explore the globe on all scales to see the broader picture as well as street-level tweet distributions. As a first attempt at visualizing movement patterns, you can also get a time-stamped version of the map that highlights locations one tweet at a time.
If you want to give it a try with your own archive, you can head to TwArχiv.org. The data storage is handled by Open Humans, and by default your archive and the resulting visualizations will be private. (You can choose to make them public, though, to share them with your friends and followers – mine are here!)
A note: The Twitter archive does not contain any direct messages but only your tweets, so if you have a public Twitter account the archive is basically all your “public Twitter interactions”.
If you have ideas on how to extend the functionality of TwArχiv or you want to code your own Twitter archive analysis, you could even get funding to do so: The Open Humans’ mini-grants of USD 5,000 for projects that will enrich the Open Humans ecosystem are a perfect fit for this kind of data visualization and analysis.
We are organizing a QS symposium on cardiovascular health for scholars and researchers and participants in the QS Community. The goal of our meeting is to support new discoveries about cardiovascular health grounded in accurate self-observation and community collaboration. This one-day symposium will be held on Thursday, April 19, 2018 at the University of California, San Diego.
Our “QS-CVD symposium” is free to attend, but space is limited, so if you’d like to be there we ask you to get in touch with us and tell us something about your research, tool development, and/or the personal self-tracking projects you’re doing that are relevant to the symposium.
Learn more about the meeting here: QS-CVD Symposium.
Read about the community driven research that has influenced our planning for the symposium here: QS Bloodtesters.
From the Symposium program statement:
We know that data collected in the ordinary course of life holds clues about some of our most pressing questions related to human health and well being. Cardiovascular disease is the number one cause of death globally. CVD risk is strongly influenced by many of the factors commonly tracked in the QS community, including fitness, diet, stress, and sleep. But significant barriers stand in the way of using personal and public data for understanding and improving individual cardiovascular health. Perhaps the most important of these barriers is a lack of consensus about the legitimacy of self-initiated research and self-collected data. Our symposium is designed to advance progress in this field through exposing practical and innovative projects that would otherwise remain invisible, inviting critical comment, and documenting the state of the art for a wider public.
This week there are two QS meetups happening. On Thursday, Gary Wolf will be hosting a somewhat unusual meetup at the National Archive in The Hague. Self trackers and archivists will get together to discuss current trends and issues around personal data and quantified self, including archiving methods and data privacy. The Hong Kong meetup will also get together on Thursday to share what they’ve learned from their own genomic data.
To see when the next meetup in your area is, check the full list of the over 100 QS meetup groups in the right sidebar. Don’t see one near you? Why not start your own! If you are a QS Organizer and want some ideas for your next meetup, check out the myriad of meetup formats that other QS organizers are using here.
Thursday, November 16th
What is a QS guide?
The purpose of a QS guide is to make it easy for you to start tracking a new metric. Searching for the right device, head-scratching over how to use the thing, and figuring out what experiment to try first can be a huge time sink. Our goal is to offer a worked example of all of these steps with the device(s) we found to be the best on the consumer market. While the most sophisticated tools for physiological measurements are offered through professional laboratories, our guide is – of course – meant to help you with your own, DIY self-tracking projects. It’s not an extensive review of every option, but it will lead you from purchasing, to validating, to syncing the data, to doing a first experiment. As you go through the process yourself, much of your learning will come from building a mental model of how your own physiology works through additional reading and experimentation. Don’t shy away from that work – a QS project may not answer your question expediently, but it has the potential to teach you a lot.
The Guide to Tracking Cholesterol and Triglycerides will discuss two home lipid trackers: the CardioChek Plus and the Cholestech LDX. I will give an in-depth review of my experience with the former. The guide will touch on the science of the tests and devices, their accuracy and precision, and suggest a first experiment to try.
A little about blood lipids
While we normally think of cholesterol and triglycerides as risk factors for heart disease, there’s actually much more to them. In fact, it turns out that the basic functions of lipid components — including total cholesterol, triglycerides, HDL-c and LDL-c, not to mention their roles in heart health — are an active area of research and the center of an ongoing controversy. What lipid measurements can certainly do is reflect how your body is handling ingested animal products and fats. If you’d like to learn more, we put together an animation that goes a little deeper into the physiology.
Option 1: CardioChek Plus
What It Does: The CardioChek Plus measures blood lipids including total cholesterol, HDL-c and triglycerides using test strips. The device itself is battery powered and about the size of a Game Boy Color (and it makes similar sounds!). Each sample requires 40 µl of blood and takes a few minutes. The major limitation of the device is its range of operation: it won’t report results if your lipids are very high or very low, and different lots of test strips have different operating ranges, so be sure to read the documentation before you purchase. It is an FDA-approved, CLIA-waived testing system for clinical and paraclinical use.
Cost: New units retail for ~$800-$1000, but units appear on eBay for around $400. The tests cost ~$15 each and come in bundles of 15. Additionally, you will need rubbing alcohol, 2.8 mm lancets, 40 µl capillary tubes for blood collection, and gauze wipes. The cost of extra supplies comes to about $200 for 15 tests.
Getting Your Data: The CardioChek stores data locally and has a limited memory. We recommend transferring the raw data by hand (3 numbers per test) to a personal spreadsheet.
Accuracy, Precision and Supporting Research: Finding information about the accuracy and precision of a new device can be non-trivial. Confirming what you learn can be even harder. We’ve had several months to figure out measurement validity for home lipid testing, and it’s a little complicated. At present, there is measurable variability (up to ~13% is considered acceptable) in results obtained from clinical laboratory tests (Quest, LabCorp) as well as those from para-clinical tests like the CardioChek Plus. Chris Hanneman has written a great report that comments on the not-very-useful way validations are reported by glucose meter companies – and we acknowledge that the same is true here. The company that produces the device, PTS Diagnostics, reports numerous validations at the bottom of this page under resources, but we’ve averaged the basics across these many reports to produce a summary table.
Accuracy is a measure of how close a measured value is to the true value of the measurement (obtained via some gold-standard device). For accuracy, PTS Diagnostics reports 18% error for total cholesterol (averaged across reported tests on the website in this document), 8% for HDL-c, and 13% for triglycerides.
At least one academic group has published a validation of this device: Gao et al., 2016 . They reported 3% error for total cholesterol, 7.1% error for HDL-c, and 7.6% error for triglycerides.
Precision is a measure of agreement between repeated measurements that should be identical. In other words, it’s a measure of how much noise the device adds to the signal. During our own testing we measured the precision of the CardioChek Plus; you can view our results here. We actually found the CardioChek to be more precise than the company reports (so far).
Option 2: Cholestech
What it Does: The Cholestech LDX also measures total cholesterol, HDL-c and triglycerides. However, the device is larger (shoebox sized) and less mobile than the CardioChek Plus — it must be plugged into an outlet and calibrated in each new location.
Cost: New units retail for ~$2000, but used units can be easily found on eBay for around $50-$100 each. Note: make sure units have ROM pack version 3.40 or higher, and calibrate the used device.
Getting Your Data: Similarly, we recommend transferring the raw data by hand (3 numbers per test) to a personal spreadsheet.
Supporting Research: Whitehead, 2014 offers accuracy and precision measurements. Bias was 11.6% for total cholesterol, and 12.9% for HDL-c. The authors reported %CV of 2-3.5% for HDL-c and total cholesterol (pretty good!) – with the caveat that the venous blood samples they compared are less likely to introduce measurement error in comparison to the finger prick samples used at home.
My Experience, and What I Tried First
I only had the opportunity to use the CardioChek Plus, but my comments should apply to both devices. Setting up the device is trivial, but testing requires a few practice trials. The main challenge is the amount of blood required: forty microliters (µl), which is equivalent to 2 large drops of blood. For some people I worked with, this was easy. But for others like myself, running the sampling hand under a hot tap is necessary to get the blood flowing. On top of this, the blood needs to be collected and deposited on the test strip within a couple of minutes to get an accurate reading. If this sounds a little off-putting, don’t worry too much – one becomes a blood-collecting ninja fairly quickly. The payoff is the ability to learn what your lipids are doing in near-real-time.
A First Experiment
While preparing for the project I wondered how fast my lipids really changed. I knew that seasonal, ovulatory cycle, and daily changes in lipids had been reported in the literature, but I wasn’t able to find any examples of how individual ambulatory humans varied hour by hour. The dynamic actions of these compounds on short timescales are less well characterized than changes on the timescale of years, but they are likely to contain useful health information. Because of all of this, I decided to measure my lipids every hour from the time I woke up, to the time I went to sleep. I won’t go into everything I saw here, but I will share one picture.
I’m 22 and in good health, yet across a single day I saw my total cholesterol swing by nearly 50 mg/dL (almost plunking me into the at-risk-for-CVD category). Even more interesting, these changes seemed to occur at regular 3-hour intervals, gradually climbing higher until they peaked around 8 pm. I learned that these changes might actually tell me more about my health than any one of those measurements alone could have. If you’re interested in getting a general sense of what your lipids are doing before you dive into more complex tests, I highly recommend setting a date with your CardioChek Plus or Cholestech LDX for some hourly measurements. Want a more in-depth argument for why you should try this first? Check out this animation.
This guide may have revealed that blood lipids are more complicated than you thought. But there’s no need to be overwhelmed — explore the metric, and you’ll build a deep understanding of your lipids in the process.