This is a light and speculative entry where I brainstorm personal ideas about the future of lifelogging (imminent or sci-fi future, depending on your optimism level). I will focus mainly on the practical aspects of such activity, while largely ignoring ethical, moral and psychological issues. Even though it is speculative, I have tried to include as many “good” references as possible, and I hope that the ideas expressed here can inspire you or simply give you food for thought.
I will start with a questionable distinction between two otherwise intertwined, blurrily-defined movements: quantified self and life-logging. Both revolve around pretty much the same main point: tracking data related to and generated by an individual in order to improve that individual's life. The goal of our idealized individual is to collect (or better, have collected) as much data as possible about their life, while making the best use of it, with the final and indisputable goal of… being better! Better health, better performance, better person, better mood, better potential for the future — everything one cares about, one wants it better! Makes perfect sense to me.
“Quantified-Self is just Data Science… for yourself”
On one side, with quantified self, we have the more common, “traditional” variety of data (heart rate, sleep, calories, mood); on the other, with life-logging, we have the more controversial entries related to our perception and senses: media like text, audio, video — maybe smell too? — together with other, even more peculiar forms of data.
Someone might say that life-logging is really just about collecting data, or that quantified self is more about numbers, but everything is numbers, and if you collect data it is because you want to do something with it (at least that's the initial good resolution). If you don't, then the fault lies with humanity for not giving you good enough tools or motivation, but the initial goal was there: you wanted/hoped to improve your life with data.
Quantified Self (just a brief intro)
I have a Fitbit, and that is already considered by some a pretty hardcore device: automatic sleep and heartbeat monitoring using a flashing green light? And it shows the time — kind of — when it recognizes that you turned your wrist to check it?! Astounding!
But then you start reading and chatting about this movement, the quantified self, you get interested and fascinated by it, and more and more acquainted with concepts like wearables, continuous monitoring, self-tracking and optimization, cyborgs and transhumanism. You recognize that fancy Fitbit to be just the commercial surface of a deep and variegated ecosystem of choices and products. What else is out there?
Start with basic activity monitoring, of which Fitbit is an example: heartbeat (or the even more informative heart-rate variability), steps, distance covered, activity levels, burned calories, sleep tracking.
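As a toy example of what can be computed from such raw streams, here is a minimal sketch of RMSSD, one of the standard time-domain heart-rate-variability metrics. The beat-to-beat intervals are invented sample values, not real device output:

```python
# Sketch: heart-rate variability (HRV) from a list of RR intervals,
# i.e. the milliseconds between successive heartbeats.
# RMSSD is a common time-domain HRV metric.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 822, 798, 810]  # hypothetical beat-to-beat intervals
print(f"RMSSD: {rmssd(rr):.1f} ms")
```

A wearable exposes (at best) averaged heart rate; metrics like this are why access to the raw interval data matters.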
Then we can move to more intimate body analysis: genome testing and sequencing, analysis of blood and guts (microbiome), saliva and stool testing. Here we already approach scenarios in which a “wearable” is not enough and tests have to be conducted in a dedicated laboratory, yet in a seamless, easy process that is luckily far away from the tediousness of old-fashioned hospital routines.
Then there is the brain, still part of our body, but sort of our favorite one. Here the potential is enormous, with a lot of studies and money put into brain-computer interfaces (BCI). And even if at the consumer level one might see just basic gaming and meditation helpers, several companies already provide affordable devices.
And while talking about helpers, we can conclude our list of the personal optimization ecosystem, which fades into apps upon apps: time trackers, task takers, habit builders, bad-habit wreckers(?), action loggers and life gamifiers, with the omnipresent and always judging Pomodoro technique!
That's a lot of tracking, and members of one group can easily see those of another as crazy or weird just for being into such things. One might ask “is it worth it?”; one should answer “it depends”.
At the same time, many (if not all) of us are tracked by companies or similar entities in all kinds of possible ways, during all our interactions with practically anything digital. Seeing the success such companies have had by using our data intelligently, I expect by analogy that a similar fate should fall upon us once we can do the same with regard to our own goals.
Diversions aside, all this was indeed about learning to get and feel better, but still about what I called the “traditional” type of data. My post, however, is about the future of life-logging, so let's move on to some quirkier stuff.
Gordon Bell seems to have been one of the first to “play” around this concept, with a two-sided project of both collection and analysis of life's data. But then you discover that after all that work he quit, together with another big name, Chris Anderson, and both with practically the same takeaway message: not worth it. Many others point out how dull most of the recorded material might end up being, but that is a misunderstanding of the logging concept. Software logs everything, not just errors or exceptions; it goes as low as possible, down to the finest level, where yes, life is pretty boring, but one can always filter, and who knows how much important insight could be derived and generated from those lower levels.
Better to bring more than needed than to find yourself in need of something you previously decided was superfluous.
As humans, we keep striving for more (for better or worse), and once we achieve an optimal level of analysis and use of quantified-self data, do you really want to tell me that richer media such as audio, text, and video would not be the next obvious step? We don't even need more practical incentives, just more transparent and effortless automation, and probably a bit more apathy toward privacy matters.
Capturing what we see is something most of us do, and something humans have been doing for a while now. Eyebrows rise, though, when you mention continuous logging: a camera strapped to your chest, recording glasses or even smart contact lenses. But again, if you want to log, you had better log properly, especially if it is feasible and effortless with today's technologies. I know Google Glass — for example — was not really successful as a commercial project, but since this is speculation I can dream of a reality in which products are good simply because they suit you and your peculiar needs, and not because enough people are interested in buying them.
Smart contact lenses could probably deliver unbeatable POV recording fidelity, but it is also worth pointing out that we might not need external devices at all to record what we see/perceive; either way, my point would stay much the same.
With huge databases of images from video life-logs, it would be relatively easy to implement a search through time, a search through your entire recorded life. It is a matter of platform and algorithm scalability, and a lot of research has already gone into the subject (sometimes nicely addressed as egocentric vision). As the machine learning side improves, many kinds of natural language queries could be transparently run on one's life database: “when did I see person Y?”, “where was I at datetime X?”, “what did I do on day Z?”, “what did I eat in year K?”, “how many yellow cars have I seen in my life?”. Why? Because we can.
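A minimal sketch of how such queries could bottom out, assuming each captured frame has already been annotated by some recognition pipeline with a timestamp, a location and a set of detected labels. The entries, labels and helper names are all invented for illustration:

```python
# Sketch: querying a hypothetical annotated life-log index.
from datetime import datetime

life_log = [  # one entry per annotated frame (toy data)
    {"time": datetime(2024, 3, 1, 9, 30), "place": "office", "labels": {"laptop", "coffee"}},
    {"time": datetime(2024, 3, 1, 13, 0), "place": "cafe",   "labels": {"sandwich", "yellow car"}},
    {"time": datetime(2024, 3, 2, 18, 45), "place": "park",  "labels": {"dog", "yellow car"}},
]

def when_did_i_see(label):
    """Return every timestamp at which a given label was detected."""
    return [e["time"] for e in life_log if label in e["labels"]]

def where_was_i(moment):
    """Return the place of the most recent entry at or before `moment`."""
    past = [e for e in life_log if e["time"] <= moment]
    return max(past, key=lambda e: e["time"])["place"] if past else None

print(when_did_i_see("yellow car"))              # two sightings
print(where_was_i(datetime(2024, 3, 1, 14, 0)))  # "cafe"
```

The hard part, of course, is the recognition pipeline and the scale, not the query layer; a real system would sit on an indexed store rather than a Python list.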
There are actually many practical health-related use cases for such a technology, covering the entire spectrum of prevention, healing, and enhancement. Food recognition is probably one of the most marketable: automatic calculation of caloric and nutritional values (approximate, but more precise the better the technology gets). First world problem, but if there is something stopping me from keeping track of food, it is definitely how bothersome, impractical and time-consuming the recording process is. One could use all this data to infer the influence that a specific food has on them, and discover possible allergies or more subtle effects not diagnosed until then.
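To make the idea concrete: once a (hypothetical) vision model has recognized foods and estimated portion sizes, aggregating the nutritional log is just table lookup. All foods, portions and per-100 g values below are invented:

```python
# Sketch: turning imagined food-recognition output into a nutrition summary.
FOOD_DB = {  # kcal and protein (g) per 100 g (illustrative values)
    "pasta":  (157, 5.8),
    "apple":  (52, 0.3),
    "salmon": (208, 20.4),
}

def nutrition(recognized):
    """recognized: list of (food, grams) pairs from an imagined vision model."""
    kcal = sum(FOOD_DB[food][0] * grams / 100 for food, grams in recognized)
    protein = sum(FOOD_DB[food][1] * grams / 100 for food, grams in recognized)
    return round(kcal), round(protein, 1)

print(nutrition([("pasta", 180), ("salmon", 120), ("apple", 150)]))
```

The tedious part today is exactly the `recognized` list: having to type it in by hand is what makes food tracking so easy to abandon.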
This was just about food, but the combination and aggregation of different sources of data is the key value of all these “I want to improve” movements: discover how something, indeed how everything, affects you. Track luminosity and colors, environments and locations, and see how habits, mood and performance change. Maybe someone needs technology to tell them whom they like or dislike, or whatever other emotions people or situations can trigger. GPS tracking can already say a lot about you, but this would be GPS on steroids, with full context awareness.
All this was just about filming what surrounds you, but I realize how much more you could gain if such recording also included your own person and actions: more complex analysis and reporting of hand gestures, body language, posture, tics or routines you might not be aware of, with possible reinforcement loops via automated feedback.
Plus additional data for all our mood recognition and tracking.
And I feel stupid saying this while staring at the piece of paper taped over my laptop camera, but I suppose that is the nice thing about speculating: a suspension of disbelief towards some of our credos.
Audio and Text
Many of the points about video virtually apply to this section too, but I want to reinterpret the audio part and focus here just on what one says.
Textual messages are already easily available for analysis (Facebook Conversation Analyzer), or via OCR if you still happen to scribble around. What we say with our voice, instead, is for the most part not recorded. Someone else might be recording it, but they are unlikely to share the data with us: I doubt someone like the NSA would share its data about me with me, no matter how enjoyable a collaboration that might end up being.
Recording what we say is the first step towards a sea of possibilities in terms of personal discovery and improvement. Once we bring in speech-to-text solutions, we reduce the analysis task to the resource-rich realm of NLP, while keeping the raw audio for more specific metrics that would otherwise be lost in a purely textual translation.
After my simple analysis of a personal public talk, I thought a lot about this scenario, its data and its potential. It is something simple, but still, until it reaches your mind it is not there, no matter how simple. What if I recorded everything I say, processed it and analyzed it? Bear with me a moment and see how much you could get out of it; this is just what I was able to come up with:
- vocabulary: how rich your vocabulary is (lexical richness), common repetitions of words or idioms, grammatical and syntactical errors
- improvements over time: changes in your “language model” (multiple if you also speak more languages, with analysis of reciprocal interaction and influence)
- pronunciation: accents, quality, weaknesses
- register, timbre, prosody, pace, pitch, volume, mood…
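A couple of the vocabulary metrics above can be sketched in a few lines; the transcript here is invented, and in practice it would come from a speech-to-text system. Type-token ratio is a crude but standard proxy for lexical richness:

```python
# Sketch: basic lexical metrics over a (made-up) speech transcript.
from collections import Counter
import re

transcript = ("so basically I think that the idea is basically simple "
              "and I think we can basically try it")

words = re.findall(r"[a-z']+", transcript.lower())
types = set(words)

ttr = len(types) / len(words)  # type-token ratio: lexical richness proxy
repeats = [w for w, c in Counter(words).most_common(3) if c > 1]

print(f"tokens={len(words)} types={len(types)} TTR={ttr:.2f}")
print("most repeated:", repeats)
```

Run over years of transcripts instead of one sentence, the same counters would start to show the “improvements over time” point: vocabulary growth, fading tics, shifting filler words.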
Body language we covered in the previous section, so once you bring good content, we have here the potential to make you the perfect orator.
And I was again restricting myself to what is most familiar to us, that is, the recording of public speaking. Instead — in this speculation of ours — everything is public speaking, and so your voice can be analyzed in relation to all sorts of situations or people you are interested in, be it a blind date, a weekend with the family, a mortgage negotiation or a drunk night out.
Even more speculations, and conclusions
Sci-fi movies and series have already said all this, and what I enjoy about them is tinkering for months afterwards about ways to achieve it, what more there could be, and how the technologies I know would behave in such scenarios.
Think how many services you could build around this: more startups for everyone! Given the amount of data, AI becomes the necessary intermediary, and while someone might call it our new — external — brain layer (the successor of the neomammalian one), let's address it for now in a friendlier way: the personal AI coach.
So there you have it: a personal AI coach that observes and listens to you 24/7, has pretty unmatchable universal knowledge, can give you advice on almost everything, knows what is good and bad, and can subtly make you better, more like you would like to be… doesn't this sound magical to you?
Humans don't seem to excel at rational decision making, especially in daily life. Continuous observation of our “doing” has great potential for improving how we deal with things like cognitive biases, heuristics and logical fallacies; great potential for better choices overall.
“Better or fewer choices?”, you might ask. As per terms and conditions, our AI coach doesn't order you to do anything, and is neither responsible nor punishable for what you do based on what it says, because it is just an advisor. It gives advice, distilled from a huge amount of knowledge and computational power. I am not saying it will always be right, but it will be most of the time, definitely more often than you would be, so you would seem rather stupid/stubborn not to listen to it, let alone do the opposite.
We can even accelerate our speculation, discarding series and movies and taking inspiration instead from books, which aim way higher: hypothesize a virtual-reality future instead of a physical one, uploaded brains instead of transhumanism, citizens instead of fleshers. At that point maybe nothing I have said would make sense anymore, who knows.
I started this entry by measuring my heartbeat with a Fitbit and ended up with realities from hard science fiction novels, passing through a lovable/inevitable AI coach and a lifetime of one's experiences stored somewhere in the cloud, all for a possibly better life.
I have used the word speculation a lot, but to many, everything proposed here might not even sound so implausible or so far away. We live in interesting times, on a steepness of the curve of technological advancement that none of our ancestors had the honor to experience; something to be content about, if you are a curious mind, no matter what the future holds.