What: in this entry I’m going to present notes, thoughts and experimental results I collected while training multiple StyleGAN models and exploring the learned latent space.
Why: this is a dump of ideas and considerations, ranging from the obvious to the “holy moly”, meant to provide insight, or a starting point for discussion, for other people out there with similar interests and intents. As such, it can be skimmed to see if anything is of interest.
I’m also here heavily leveraging Cunningham’s Law.
Who: many considerations apply to both StyleGAN v1 and v2, but all generated…
What: this is a collection of tutorials, examples and resources for generating fractal visuals using Blender. I’ll cover topics such as recursion, n-flakes, L-systems, Mandelbrot/Julia sets and their derivations.
Why: for anyone fascinated by fractal patterns, interested in procedural generation in Blender, and keen to get a better understanding and hands-on experience of animation-nodes and shader nodes.
Who: all content here is based on Blender 2.82 and animation-nodes v2.1. I also relied on a few short code snippets (Python ≥ 3.6).
“beautifully intricate, yet so simple”
Fractals are shapes with a fractal (non-integer) dimension. This derives from the way we measure them…
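As a quick taste of what “fractal dimension” means for strictly self-similar shapes, here is a minimal Python sketch (names are my own, illustrative) computing the similarity dimension D from n_copies = (1/scale)**D:

```python
from math import log

def similarity_dimension(n_copies: int, scale: float) -> float:
    """Dimension D such that n_copies = (1 / scale) ** D."""
    return log(n_copies) / log(1 / scale)

# Koch curve: each segment is replaced by 4 copies at 1/3 scale.
koch = similarity_dimension(4, 1 / 3)
print(f"Koch curve dimension: {koch:.4f}")  # ≈ 1.2619, between a line and a plane

# Sanity check, a plain line: 3 copies at 1/3 scale gives dimension 1.
print(similarity_dimension(3, 1 / 3))
```

For shapes that are not exactly self-similar (coastlines, Mandelbrot boundaries), the same idea generalizes to box-counting, measuring how detail grows as the measuring scale shrinks.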
What: learning the basics of scripting for Blender Grease-Pencil tool, with focus on generative art as a concrete playground. Less talking, more code (commented) and many examples.
Why: mostly because we can. Also because Blender is a very rich ecosystem, and Grease-Pencil in version 2.8 is a powerful and versatile tool. Generative art is a captivating way to showcase the tool’s potential: if you love Python and don’t feel like learning Processing, or are still unsure about venturing into p5.js or Three.js, here you will find the perfect playground.
Who: Python ≥ 3.6 and Blender 2.8. I took inspiration from…
In this entry, I am going to talk about exposing and serving deep learning models via TensorFlow, while showcasing my setup for a flexible and practical text-generation solution.
By text generation I mean the automated task of generating new, semantically valid pieces of text of variable length, given an optional seed string. The idea is to be able to draw on different models for different use cases (Q&A, chatbot utilities, simplification, next-word(s) suggestion), based on different types of content (e.g. narrative, scientific, code), sources or authors.
Here is a first preview of the setup in action for sentence suggestion.
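To make “generation from an optional seed” concrete without the TensorFlow machinery, here is a toy sketch: a bigram table standing in for a trained model (the corpus, function names and fallback behavior are illustrative, not the article’s actual setup):

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real setup would use a trained RNN instead.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram "model": for each word, the words observed to follow it.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(seed: str, length: int, rng: random.Random) -> str:
    """Generate `length` words, conditioned on a seed word when possible."""
    word = seed if seed in bigrams else rng.choice(corpus)
    out = [word]
    for _ in range(length - 1):
        candidates = bigrams.get(word) or corpus  # fall back on a dead end
        word = rng.choice(candidates)
        out.append(word)
    return " ".join(out)

print(generate("the", 6, random.Random(0)))
```

The RNN models discussed in the article play the role of the `bigrams` table here: they map a context to a distribution over next tokens, from which the serving layer samples.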
This entry is a non-exhaustive introduction on how to create interactive content directly from your Jupyter notebook. Content mostly refers to data visualization artifacts, but we’ll see that we can easily expand beyond the usual plots and graphs, providing worthy interactive bits for all kinds of scenarios, from data exploration to animations.
I am going to start with a brief introduction to Data Visualization and better define the scope and meaning of interactivity as intended in this article.
I will then provide a quick overview of the tools involved (Plotly and ipywidgets) plus some generic suggestions around the Jupyter ecosystem.
Finally, I will…
Game of Life (GOL) is possibly one of the best-known examples of a cellular automaton.
Defined by mathematician John Horton Conway, it plays out on a two-dimensional grid in which each cell can be in one of two possible states. Starting from an initial grid configuration, the system evolves at each unit step, taking into account only the immediately preceding configuration. If for each cell we consider the eight surrounding cells as neighbors, the system transition is defined by four simple rules.
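The whole transition can be written in a few lines of NumPy; this is a minimal sketch (function and variable names are my own) using a wrap-around grid, where the four rules collapse into one boolean expression:

```python
import numpy as np

def gol_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life step on a toroidal (wrap-around) grid of 0/1 cells."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # The four rules reduce to: a cell is alive next step if it has exactly
    # 3 neighbors (birth), or it is alive now and has exactly 2 (survival).
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A "blinker" oscillates between a horizontal and a vertical bar of 3 cells.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1
print(gol_step(blinker))
```

Each generated grid can then be handed to Blender (e.g. via animation-nodes or a Python script) to drive the visualization.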
I was interested in exploring the visualization of such phenomenon with Blender. …
This is a light and speculative entry where I brainstorm personal ideas about the future of lifelogging (imminent or sci-fi future, depending on your optimism level). I will focus mainly on the practical aspects of such activity, while largely ignoring ethical, moral and psychological issues. Even if speculative, I tried to include as many “good” references as possible, and I also hope that the ideas expressed here can inspire you or simply give you food for thought.
I will start with a questionable distinction between two otherwise intertwined, blurry-defined movements: quantified-self and life-logging. Both are about pretty much the…
In a previous short entry, I gave an introduction to chatbots: their current high popularity, some platform options and basic design suggestions.
In this post, I am instead going to illustrate what I believe is a more intriguing scenario: a deep-learning-based solution for constructing a chatbot’s off-topic behavior and “personality”. In other words, when confronted with off-topic questions, the bot will try to automatically generate a possibly relevant answer from scratch, based only on a pre-trained RNN model.
What follows are four self-contained sections, so you should be able to jump around and focus on just the one(s)…
This article is really just a Jupyter notebook, with embedded explanations, comments and working examples. I was hoping for a better embedding system or rendering from the Medium guys, but hey, that’s life. Everything is there, though, so feel free to play with it and give me your feedback. Happy Data Sciencing(?)!
I recently gave a talk about my analysis of Fitbit sleep data at the Dublin Quantified Self meetup. Being a Quantified Self meetup, it seemed more than appropriate (if not obligatory) for me to “quantify” and analyze all the data I gathered and generated during the talk.
I will here explore two kinds of data: heart-rate measurements from my Fitbit and a transcript of my speech.
This article is supplemented with a Jupyter notebook, which explores the code and methods used for obtaining the results I illustrate here. …