2D Pixel Arrays
Processing Sketches, Custom Linux, PyQt, Teensy, wood, glass.
Displaying algorithmic art
The design for this custom display device started around 2014. At the time, I was developing algorithmic art and growing dissatisfied with the lack of means to show it outside the usual computing context. Opening this kind of imagery with a mouse and keyboard, in a window over a familiar desktop OS on standard computer hardware, dilutes the nature of the experience.
These pieces typically involve coming up with a set of rules that describe how the image is drawn and how it animates. Random variations are introduced in the input parameters and end up generating different versions, each unique but all clearly belonging to the same family. After studying them for a while, you get an intuitive feel for the underlying process shaping them and you automatically start to anticipate how they will continue to develop. A tension develops back and forth between the expectation of the viewer and the validation (or not) of those expectations in the visuals, like watching crashing waves on the shore.
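The rule-plus-random-parameters pattern described above can be sketched in a few lines of Python. This is a hypothetical toy, not the author's actual Processing code: one fixed drawing rule (a recursive branching structure, here merely counted rather than rendered), with each random seed deriving a different parameter set and so a different member of the same visual family.

```python
import random

def make_variation(seed):
    """Derive one parameter set for an iteration of the 'recipe' (names illustrative)."""
    rng = random.Random(seed)
    return {
        "branches": rng.randint(2, 6),     # limbs grown at each step
        "decay": rng.uniform(0.5, 0.9),    # how quickly each branch shrinks
        "angle": rng.uniform(10.0, 45.0),  # spread between sibling branches (unused here)
    }

def grow(params, depth=0, length=100.0):
    """Recursively 'draw' a branching structure; here we just count segments."""
    if length < 2.0 or depth >= 6:
        return 1  # a leaf: the 'end' of this mini story
    return 1 + sum(
        grow(params, depth + 1, length * params["decay"])
        for _ in range(params["branches"])
    )

# Same rule, different seeds: each result is unique but clearly related.
family = [grow(make_variation(seed)) for seed in range(3)]
```

Because each variation is fully determined by its seed, any member of the family can be replayed exactly, which is what lets a viewer build expectations about how the underlying process will develop.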
Each iteration of these recipes becomes a mini story all its own, with a beginning, a middle and an end. It’s not a coincidence that coders usually use variable names reflecting events in the natural world: birth, death, child, branch, root, etc… The random occurrence of these events gives each of these visual stories its unique character.
In the best of cases, the growing intuition that underlying laws have the potential for infinite manifestations invites contemplation in ways found in classical Islamic art.
The intent of this display is to create a space where, even if you don’t see God, at least you won’t have to run into Clippy.
Scanned Negatives, Amiga Monitor Photos, Digi-Paint.
The word “slitscan” originally named a photographic technique that uses a thin, tall rectangular aperture moved horizontally across the film to build up an exposure. Instead of exposing the whole film surface at once through an iris, these cameras capture light over time and across the length of the negative, like a scanner or a rotary printing press. They have typically been used to capture very wide horizontal perspectives in landscape photography and group photos, as well as to create optical visual effects. With the advent of digital video, this process can be expanded on quite a bit to generate surprising visuals that end up somewhere between abstraction and representation, where they can feel both familiar and strange at the same time.
Digital video can be thought of as a cube of data. Each image is a two-dimensional plane of pixels with X and Y coordinates, and these image planes are stacked on top of each other like the floors of a skyscraper. In this cube, a frame of the original video is the ‘XY’ plane at height ‘t’; in our building analogy, this would be the floor plan at a specific floor. What we usually think of as a slitscan is the ‘Yt’ plane at a fixed ‘X’, or, to continue the skyscraper analogy, a cross-sectional slice through the whole height of the building. Ultimately, this cube of data can be processed, dissected, or remixed in arbitrary ways, to the point where the name “slitscan” no longer even makes sense. Video datagraphy is a more appropriate description of the process: making images from video data.
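The cube-slicing described above can be sketched with NumPy. This is a minimal illustration using a synthetic grayscale "video" (the axis order and variable names are assumptions, not the author's pipeline): a frame is one slice along the time axis, while a slitscan is one slice along the X axis.

```python
import numpy as np

# Synthetic 'video': t frames of h x w grayscale pixels, axes ordered (t, y, x).
t, h, w = 30, 48, 64
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(t, h, w), dtype=np.uint8)

# An ordinary frame: the XY plane at a fixed time t0 -- one 'floor plan'.
frame = video[10]                  # shape (h, w)

# The classic slitscan: the Yt plane at a fixed column x0 -- a
# cross-sectional slice through the whole height of the 'building'.
x0 = w // 2
slitscan = video[:, :, x0]         # shape (t, h)

# The cube slices just as easily along any other axis, e.g. a fixed row y0,
# which is one of the arbitrary remixes the text alludes to.
y0 = h // 2
rowscan = video[:, y0, :]          # shape (t, w)
```

Each slice is a plain 2D array, so any image library can write it out directly; the "datagraph" is simply which plane (or stranger resampling) you choose to pull from the cube.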
When we navigate the physical world, we use a constantly changing and always singular perspective to build a mental structure representing what is around us beyond what we can directly experience at that moment. Though these models persist through time in our consciousness, we can never experience them holistically. We are bound by the laws of physics and cannot ACTUALLY wrap ourselves around them. As a substitute, we look for patterns that give us some reference as to where we are on that continuum: a heartbeat, light patterns, seasons, music, speech… Tracking these linear signals informs our conception of space beyond our current perspective and anchors our experience on the mysterious expanse of time.
Storytelling similarly spans time and space. It paints narrative arcs that connect specific events lost inside an infinity of places and moments, and provides a scaffolding on which our understanding of the world is conveyed. Even though stories conjure up a god-like perspective above the physical constraints of our human experience, they are typically linear in nature. Like the three-dimensional shadows of a four-dimensional hypercube we can never experience directly, or like the chained humanity in Plato’s cave watching shadows on the wall, they only hint at the existence of a greater context. There is an unfolding that happens and reveals a new dimension. The Bayeux Tapestry is a wonderful example of this linear visual rhythm unfolding on a timeline, and its horizontal shape is unsurprisingly similar to video datagraphs, which represent samples of time and place on a two-dimensional surface.
Traditional photography works much like vision: each exposure is limited to a single moment in space and time, and film simply repeats that capture 24 times a second. These space-time video samples do not share those limitations. They feel alien because the process that reveals their hidden structures is not physically achievable for humans, but they feel familiar because we have intuited those structures from how we have experienced what they represent.