This is an algorithmic piece that uses a clip of digital video as its source and interprets the information in ways it was never intended for. The movie file shown here is a screen capture sampling the process, which continuously churns the input data and produces a stream of endlessly unique content.
“A terrible idea that should never have been tried!” – Ansel Adams, April 2021.
This image was captured by shooting the film back of a Brownie 620 camera with a Raspberry Pi camera mounted inside the black chamber above the lens, just outside the optical path. High-gain screen material is used on the image plane to maximize the reflected light. It works, sort of… The main limitation is light: even in full sun, I have to use the highest ISO available and about a 0.25 second exposure. Also, since the sensor is off-axis, the captured image is skewed, and you end up with soft focus at the top and bottom.
First image, un-warped and brightened.
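The brightening step can be sketched in a few lines. This is a minimal illustration, assuming the capture has been loaded as an 8-bit grayscale numpy array; the un-warp itself would need a full perspective transform, which I leave out here.

```python
import numpy as np

def brighten(img, gamma=0.5):
    """Gamma-correct an 8-bit image: a gamma below 1.0 lifts the shadows,
    which helps with the very dark captures off the screen material."""
    norm = img.astype(np.float64) / 255.0   # normalize to 0..1
    return (255.0 * norm ** gamma).round().astype(np.uint8)

# A uniformly dark patch (value 64) comes out around mid-gray (128) at gamma 0.5.
dark = np.full((2, 2), 64, dtype=np.uint8)
lifted = brighten(dark)
```

Gamma correction preserves pure black and pure white while stretching the dark midtones, which is usually what a low-light capture needs most.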
It’s a fun project, and I’m sure I’ll complete it someday. I am waiting to see if a higher-sensitivity chip will become available for the Raspberry Pi. Ultimately, the best solution would be to manufacture a lens element that would refocus the 120-format image onto the small sensor mounted in the back, or a concave mirror on the back that would reflect the image onto the sensor where it is currently mounted (both of these solutions are beyond my pay grade).
This uses a Raspberry Pi Zero W and a small LCD panel, both powered by a PiSugar battery. The operating system is DietPi. The camera is operated with the buttons on the LCD panel board, and the resulting image is displayed on the screen. Using VNC and a virtual desktop, I also get the live feed directly from the camera to a laptop. The pictures are saved with the raw information and converted to DNG. I designed a 3D-printed adapter to mount the electronic components on the body of the camera, but I am waiting on more light-sensitive solutions to finish the piece.
One is the ghost in our minds of a past sensory experience and the other is a physical thing we hope can be a gateway to the first. Does the promise of preserving memories destroy them by creating an unmanageable clutter of infinite possibilities which merge into a cold immaterial surface offering no comfort and condemning us to anxiety?
We want to reclaim power over defining the topology of our interior landscapes. It involves art, hammers, digital storage, authentication, originals, blood, and an evening of fun with people.
The design for this custom display device started around 2014. At that time, I was working on developing algorithmic art and growing dissatisfied with the lack of means to show it outside the usual computing context. Seeing a mouse and keyboard open up this kind of imagery in a window over a familiar desktop OS on the usual computer hardware dilutes the nature of the experience.
These pieces typically involve coming up with a set of rules that describe how the image is drawn and how it animates. Random variations are introduced in the input parameters and end up generating different versions, each unique but all clearly belonging to the same family. After studying them for a while, you get an intuitive feel for the underlying process shaping them and you automatically start to anticipate how they will continue to develop. A tension develops back and forth between the expectation of the viewer and the validation (or not) of those expectations in the visuals, like watching crashing waves on the shore.
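The idea of one rule set spawning a whole family of unique variations can be sketched very simply. The rule and parameter names below are invented for illustration; the point is only that a seeded random draw over the input parameters yields pieces that are each unique yet clearly siblings.

```python
import random

def make_variation(seed):
    """Sample the input parameters for one member of the family.
    Same seed -> same piece; different seeds -> different pieces,
    all governed by the same underlying rules."""
    rng = random.Random(seed)
    return {
        "branch_angle": rng.uniform(15, 45),   # degrees between child branches
        "growth_rate": rng.uniform(0.8, 1.2),  # how fast each branch extends
        "death_age": rng.randint(50, 200),     # frames before a branch stops
    }

# Two seeds, two unique variations from the same recipe.
a, b = make_variation(1), make_variation(2)
```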
Each iteration of these recipes becomes a mini story all its own, with a beginning, a middle and an end. It’s not a coincidence that coders usually use variable names reflecting events in the natural world: birth, death, child, branch, root, etc… The random occurrence of these events gives each of these visual stories its unique character.
In the best of cases, the growing intuition that underlying laws have the potential for infinite manifestations invites contemplation in ways found in classical Islamic art.
The intent of this display is to create a space where, even if you don’t see God, at least you won’t have to run into Clippy.
The word “slitscan” originally named a type of photographic camera that exposes the film through a thin, tall rectangular aperture moved horizontally across the frame. Instead of exposing the whole film surface at once through an iris, these cameras capture the light over time and across the length of the negative, like a scanner or a rotary printing press. They have typically been used to capture very wide horizontal perspectives in landscape photography or group photos, as well as to create optical visual effects. With the advent of digital video, this process can be expanded quite a bit to generate surprising visuals which land somewhere between abstraction and representation, where they can feel both familiar and strange at the same time.
Digital video can be thought of as a cube of data. Each image is a two-dimensional plane of pixels with X and Y coordinates, and these image planes are stacked on top of each other like the floors of a skyscraper. In this cube, a frame in the original video is the ‘XY’ plane at height ‘t’. In our building analogy, this would be the floor plan at a specific floor. What we usually think of as a slitscan is the ‘Yt’ plane at coordinate ‘X’, or, to continue the skyscraper analogy, a cross-sectional slice of the whole height of the building. Ultimately, this cube of data can be processed, dissected, or remixed in arbitrary ways to the point where the name “slitscan” no longer even makes sense. Video datagraphy is a more appropriate description of the process: making images from video data.
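In numpy terms, with the cube indexed as (t, y, x), both views described above are plain array slices. The array names here are my own, and the cube is synthetic; in practice it would be filled with decoded video frames.

```python
import numpy as np

# A synthetic video cube: 120 frames of 90x160 grayscale pixels, indexed (t, y, x).
cube = np.zeros((120, 90, 160), dtype=np.uint8)

frame_at_t = cube[30]           # the XY plane at height t: one "floor plan"
slitscan_at_x = cube[:, :, 80]  # the Yt plane at column x: the cross-sectional slice

# The slitscan is a (t, y) image: time runs along one axis, height along the other.
```

Every other remix of the cube is just a different indexing scheme over the same three axes, which is why the slicing view makes the “video datagraphy” framing so natural.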
When we navigate the physical world, we use a constantly changing and always singular perspective to build a mental structure representing what is around us beyond what we can directly experience at that moment. Though these models persist through time in our consciousness, we can never experience them holistically. We are bound by the laws of physics and cannot ACTUALLY wrap ourselves around them. As a substitute, we look for patterns that can give us some reference as to where we are on that continuum: a heartbeat, light patterns, seasons, music, speech… Tracking these linear signals informs our conception of space beyond our current perspective and anchors our experience on the mysterious expanse of time.
Storytelling similarly spans time and space. It paints narrative arcs that connect specific events lost inside an infinity of places and moments, and provides a scaffolding on which our understanding of the world is conveyed. Even though stories conjure up a god-like perspective above the physical constraints of our human experience, they are typically linear in nature. Like the three-dimensional shadows of a four-dimensional hypercube we can never actually experience directly, or like the chained humanity in Plato’s cave watching shadows on the wall, they only hint at the existence of a greater context. There is an unfolding that happens and reveals a new dimension. The Bayeux Tapestry is a wonderful example of this linear visual rhythm that unfolds on a timeline, and its horizontal shape is unsurprisingly similar to video datagraphs, which represent samples of time and place on a two-dimensional surface.
Traditional photography mirrors the way vision works in that each exposure is limited to a moment in space and time; film simply repeats that moment 24 times a second. These space-time video samples do not share those limitations. They feel alien because the process that reveals their hidden structures is not physically achievable for humans, but they feel familiar because we have intuited them mentally from how we have experienced what they represent.
If you live in LA, you are probably familiar with huge freeway interchanges. An impressive network of ramps forms a big knot that somehow channels the various streams in every possible direction, with huge curves sweeping above and below the main wide arteries. Anyway, they are cool. My favorite is the 110/105 interchange, and I’ve been mulling over making a cement casting of it for a while now. It starts with grabbing 3D data from the interwebs and turning it into a functional model.
Once the data is cleaned up, I prepare it for 3D printing. I want the final piece to be 26 inches wide, which is wider than any printer I could get my hands on, so I break it up into tiles and run some tests.
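The tiling step comes down to simple arithmetic. This is a sketch under an assumed usable bed width of 180 mm (check your printer's actual build volume); with those numbers a 4×4 grid gives the 16 tiles mentioned below.

```python
import math

PIECE_WIDTH_MM = 26 * 25.4   # 26 inches in millimeters (660.4 mm)
BED_MM = 180                 # assumed usable print-bed width

# Smallest number of equal tiles per side that each fit on the bed.
tiles_across = math.ceil(PIECE_WIDTH_MM / BED_MM)
tile_width = PIECE_WIDTH_MM / tiles_across

print(tiles_across, round(tile_width, 1))  # → 4 165.1
```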
The next step is to figure out the process of making the mold. There are two options, but the first step for each is the same: glue the tiles back together (there will be 16 of them), use Bondo and sandpaper to smooth out all the irregularities, and spray on some urethane to smooth over the result. The question is: should I make the urethane mold directly from the positive 3D prints, or should I create a positive plaster intermediate I can tweak and further polish, and make a mold out of that? If I go with the plaster, I worry that the brittle nature of the plaster and the rigidity of PLA filament will cause the small details to break as I release the plaster image from the 3D print. If I pour the mold directly on the 3D print, I worry that the telltale 3D print lines will be captured by the mold, and that I will not have had the chance to polish it with the control a plaster intermediate would give.
So, next step is to test both approaches on my two test prints, and also test printing a tile with PETG filament which I hear is more flexible. If you have any recommendations, I’m all ears…