Author: Thomas Hollier
BrowniePi
Art, Electrons, Photons, Project

“A terrible idea that should never have been tried!” – Ansel Adams, April 2021.
This image is captured by shooting the film back of a Brownie 620 camera with a Raspberry Pi camera mounted inside the black chamber above the lens, just outside the optical path. High-gain screen material on the image plane maximizes the reflected light. It works, sort of… The main limitation is that very little light reaches the sensor, so even in full sun I have to use the highest ISO available and an exposure of about 0.25 seconds. Also, since the sensor is not in line with the image plane, the captured image is skewed, and you end up with soft focus at the top and bottom.

First image, un-warped and brightened.

It’s a fun project, and I’m sure I’ll complete it someday. I am waiting to see whether a higher-sensitivity chip becomes available for the RPi. Ultimately, the best solution would be to manufacture a lens element that refocuses the 120-film-size beam onto the small sensor mounted in the back, or a concave mirror on the back that reflects the image onto the sensor where it is currently mounted (both of these solutions are beyond my pay grade).

The build uses a Raspberry Pi Zero W and a small LCD panel, both powered by a PiSugar battery. The operating system is DietPi. The camera is operated with the buttons on the LCD panel board, and the resulting image is displayed on the screen. Using VNC and a virtual desktop, I also get the live feed from the camera directly on a laptop. The pictures are saved with the raw information and converted to DNG. I designed a 3D-printed adapter to mount the electronic components on the body of the camera, but I am waiting on more light-sensitive solutions to finish the piece.
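For flavor, here is a minimal sketch of the capture-and-un-warp steps described above, assuming the legacy picamera library on the Pi and OpenCV for the correction. The resolution and corner coordinates are placeholder assumptions, not measurements from the actual build.

```python
# Minimal sketch: long, high-gain exposure on the Pi camera, then a
# perspective un-warp of the off-axis screen image with OpenCV.
import time

import cv2
import numpy as np
import picamera

with picamera.PiCamera() as cam:
    cam.resolution = (2592, 1944)     # v1 camera module; adjust for your sensor
    cam.framerate = 2                 # a low framerate permits long shutter times
    cam.shutter_speed = 250000        # 0.25 s, in microseconds
    cam.iso = 800                     # highest gain picamera exposes
    time.sleep(2)                     # let the gains settle before locking them
    cam.exposure_mode = 'off'
    cam.capture('frame.jpg', bayer=True)  # keep the raw Bayer data too

# The sensor views the screen at an angle, so the image is a trapezoid.
# Map four measured corners (placeholders here) back to a rectangle.
img = cv2.imread('frame.jpg')
src = np.float32([[180, 120], [2400, 140], [2550, 1900], [60, 1880]])
dst = np.float32([[0, 0], [2048, 0], [2048, 1536], [0, 1536]])
M = cv2.getPerspectiveTransform(src, dst)
cv2.imwrite('frame_unwarped.png', cv2.warpPerspective(img, M, (2048, 1536)))
```

The bayer=True flag embeds the raw sensor data alongside the JPEG, which is presumably the starting point for the DNG conversion mentioned above.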
My completely unsorted code is here: https://github.com/thomashollier/browniePi
Memory or Memento?
Art, Atoms, Collection, Electrons, Thoughts

One is the ghost in our minds of a past sensory experience, and the other is a physical thing we hope can be a gateway to the first. Does the promise of preserving memories destroy them by creating an unmanageable clutter of infinite possibilities which merge into a cold, immaterial surface, offering no comfort and condemning us to anxiety?
We want to reclaim power over defining the topology of our interior landscapes. It involves art, hammers, digital storage, authentication, originals, blood, and an evening of fun with people.
Postcards From The Upside Down, 2020
2D Pixel Arrays
The Land Is Screaming, 2020
Video Datagraphs. Sony RX100 VI, opencv, ffmpeg.
Sketchz, 2014-2020
Processing Sketches, Custom Linux, PyQt, Teensy, wood, glass.
Video: https://player.vimeo.com/video/510531613
Displaying algorithmic art
The design work for this custom display device started around 2014. At the time, I was developing algorithmic art and growing dissatisfied with the lack of means to show it outside the usual computing context. Seeing a mouse and keyboard open up this kind of imagery in a window, over a familiar desktop OS, on the usual computer hardware dilutes the nature of the experience.

These pieces typically involve coming up with a set of rules that describe how the image is drawn and how it animates. Random variations introduced in the input parameters end up generating different versions, each unique but all clearly belonging to the same family. After studying them for a while, you get an intuitive feel for the underlying process shaping them, and you automatically start to anticipate how they will continue to develop. A back-and-forth tension develops between the viewer’s expectations and their validation (or not) in the visuals, like watching waves crash on the shore.
Each iteration of these recipes becomes a mini story all its own, with a beginning, a middle and an end. It’s not a coincidence that coders usually use variable names reflecting events in the natural world: birth, death, child, branch, root, etc… The random occurrence of these events gives each of these visual stories its unique character.
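To give a flavor of what such a recipe looks like in code, here is a hypothetical miniature in Python with Pillow (not one of the actual Processing sketches): a seeded branching system in which every branch is born, spawns children, and dies, and every seed yields a different member of the same family.

```python
# Hypothetical miniature of a generative recipe: a seeded branching system.
# Each seed produces a different but clearly related "individual".
import math
import random

from PIL import Image, ImageDraw

def grow(draw, x, y, angle, length, depth):
    """One branch: draw it, then maybe give birth to children."""
    if depth == 0 or length < 2:
        return  # death
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    draw.line([(x, y), (x2, y2)], fill=(20, 20, 20), width=max(1, depth // 2))
    for _ in range(random.randint(1, 3)):          # birth of children
        spread = random.uniform(-0.6, 0.6)         # random variation in the rules
        grow(draw, x2, y2, angle + spread,
             length * random.uniform(0.6, 0.8), depth - 1)

for seed in range(4):                              # four siblings from one family
    random.seed(seed)
    img = Image.new('RGB', (600, 600), (240, 236, 228))
    grow(ImageDraw.Draw(img), 300, 580, -math.pi / 2, 110, 9)
    img.save(f'sketch_{seed}.png')
```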
In the best of cases, the growing intuition that underlying laws have the potential for infinite manifestations invites contemplation in ways found in classical Islamic art.
The intent of this display is to create a space where, even if you don’t see God, at least you won’t have to run into Clippy.
Pixel Sculpted Masks, 1989
Scanned Negatives. Amiga Monitor Photos, Digi-Paint.
Video Datagraphy
Art, Collection, Photons, Thoughts

“Slitscan” is the name given to a photographic technique that uses a thin, tall rectangular aperture moved horizontally across the film to create an exposure. Instead of exposing the whole film surface at once through an iris, these lenses capture light over time and across the length of the negative, like a scanner or a rotary printing press. They have typically been used to capture very wide horizontal perspectives in landscape photography or group photos, as well as to create optical visual effects. With the advent of digital video, this process can be expanded on to generate surprising visuals which exist somewhere between abstraction and representation, in a place where they feel both familiar and strange at the same time.
Digital video can be thought of as a cube of data. Each image is a two-dimensional plane of pixels with X and Y coordinates, and these image planes are stacked on top of each other like the floors of a skyscraper. In this cube, a frame of the original video is the XY plane at height t; in our building analogy, this would be the floor plan at a specific floor. What we usually think of as a slitscan is the Yt plane at coordinate X, or, to continue the skyscraper analogy, a cross-sectional slice through the whole height of the building. Ultimately, this cube of data can be processed, dissected, or remixed in arbitrary ways, to the point where the name “slitscan” no longer makes sense. Video datagraphy is a more appropriate description of the process: making images from video data.
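As a concrete sketch of slicing the cube, here is a minimal example using OpenCV’s Python bindings (the file name and the column choice are placeholders): it extracts the Yt plane at a fixed X, so the horizontal axis of the output image is time.

```python
# Minimal sketch of a "Yt" slice through the video data cube:
# take the same pixel column from every frame and stack the columns
# left to right, so the horizontal axis of the output is time.
import cv2
import numpy as np

cap = cv2.VideoCapture('input.mp4')               # hypothetical input file
x = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) // 2   # fixed X: the middle column
columns = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    columns.append(frame[:, x])                   # one (height, 3) column per frame

cap.release()
datagraph = np.stack(columns, axis=1)             # shape: (height, n_frames, 3)
cv2.imwrite('yt_slice.png', datagraph)
```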
When we navigate the physical world, we use a constantly changing and always singular perspective to build a mental model representing what is around us, beyond what we can directly experience at that moment. Though these models persist through time in our consciousness, we can never experience them holistically. We are bound by the laws of physics and cannot ACTUALLY wrap ourselves around them. As a substitute, we look for patterns that can give us some reference as to where we are on that continuum: a heartbeat, light patterns, seasons, music, speech… Tracking these linear signals informs our conception of space beyond our current perspective and anchors us on the mysterious expanse of time.
Storytelling similarly spans time and space. It draws narrative arcs that connect specific events lost inside an infinity of places and moments, and provides a scaffolding on which our understanding of the world is conveyed. Even though stories conjure up a god-like perspective, above the physical constraints of our human existence, which we can never occupy directly, they unfurl in a linear manner that gives us glimpses of that higher dimension, like shadows of a four-dimensional hypercube projected onto the three dimensions we inhabit.
The physical principles of traditional photography are similar to the way vision works in that they are limited to a single momentary perspective in space and time. Video datagraphs look strange because the way they expose the structures of time is not physically possible for us, but they feel familiar because we have intuited, from our experience of time, what they end up revealing in a static, coherent image.
Video Datagraphs. iPhone 6, Nuke, Kronos, ffmpeg.
Casting a freeway interchange
Art, Atoms

If you live in LA, you are probably familiar with huge freeway interchanges. Each is an impressive network of ramps forming a big knot that somehow channels the various streams of traffic in all the possible ways, with huge curves sweeping above and below the main wide arteries. Anyway, they are cool. My favorite is where the 110 and the 105 interchange, and I’ve been mulling over making a cement casting of it for a while now. It starts with grabbing 3D data from the interwebs and turning it into a functional model.
Once the data is prepped for modeling, I get it ready for 3D printing. I want the final piece to be 26 inches wide, which is wider than any printer I could get my hands on, so I break it up into tiles and run some tests (see the sketch below).
Cut out a tile and make a negative shape. Positive and negative tests.
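As a rough sketch of that tiling step, here is one way it could be scripted, assuming the trimesh Python library; the file name and the 4×4 grid are my assumptions, based on the 16 tiles mentioned below. The idea is to scale the model to 26 inches wide and cut it apart with axis-aligned planes.

```python
# Rough sketch of the tiling step, assuming the trimesh library.
# The file name and the 4x4 grid are assumptions (16 tiles total).
import numpy as np
import trimesh

mesh = trimesh.load('interchange.stl')          # hypothetical prepped model

# Scale so the model is 26 inches (660.4 mm) wide along X.
width = mesh.bounds[1][0] - mesh.bounds[0][0]
mesh.apply_scale(660.4 / width)

lo, hi = mesh.bounds
xs = np.linspace(lo[0], hi[0], 5)               # 4 tiles across X
ys = np.linspace(lo[1], hi[1], 5)               # 4 tiles across Y

for i in range(4):
    for j in range(4):
        tile = mesh
        # Cut the tile out with four axis-aligned planes. cap=True closes
        # the cut faces so the tile is watertight (needs a recent trimesh).
        for origin, normal in [
            ([xs[i], 0, 0], [1, 0, 0]), ([xs[i + 1], 0, 0], [-1, 0, 0]),
            ([0, ys[j], 0], [0, 1, 0]), ([0, ys[j + 1], 0], [0, -1, 0]),
        ]:
            tile = tile.slice_plane(origin, normal, cap=True)
        tile.export(f'tile_{i}_{j}.stl')
```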
The next step is to figure out the process of making the mold. There are two options, but the first step for each is the same: glue the tiles back together (there will be 16 of them), use Bondo and sandpaper to smooth out all the irregularities, and spray on some urethane to smooth over the result. The question is: should I make the urethane mold directly from the positive 3D prints, or should I create a positive plaster intermediate I can tweak and further polish, and make the mold out of that? If I go with the plaster, I worry that the brittleness of the plaster and the rigidity of PLA filament will cause the small details to break as I release the plaster positive from the 3D print. If I pour the mold directly on the 3D print, I worry that the telltale 3D print lines will be captured by the mold, and that I will not have had the chance to polish the surface with the control a plaster intermediate would give.
So, the next step is to test both approaches on my two test prints, and also to test printing a tile with PETG filament, which I hear is more flexible. If you have any recommendations, I’m all ears…