^[strange]{7}\s[regx]{5}\s[obseion]{9}$

Electrons

I find regular expressions obtuse, maddening, and, at the same time, strangely compelling. For those who don’t know, “regular expressions” (usually referred to as “regex”) are a common way for computer programs to search text for particular patterns of characters, like a date or an address. Regex gives you access to atoms that represent individual characters (“a”, “6”, “#”, etc…) or general character types (digits, symbols, uppercase…), as well as a concise grammar defining how these characters repeat and combine. Regex is a set of small blocks with specific rules for putting all these blocks together to represent something bigger.

A quick example

How to match a date like 2014/12/24?
Digits are represented like this: \d, so the following regex will match any single character from 0 to 9:
\d
What if you are looking for a series of 4 digits? You could do this:
\d\d\d\d
But it can get a little heavy if you are looking for 200 digits, so you can use the following notation:
\d{4}
Okay, so now you have matched the first 4 digits, how about matching the “/” character? In this case, since it’s just a specific character, go ahead and type it in:
\d{4}/
Then match 2 digits, another “/” and two more digits:
\d{4}/\d{2}/\d{2}
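
To check the finished pattern, here is a minimal Python sketch using the standard re module (the sample strings are just made-up examples):

import re

# anchor the pattern with ^ and $ so it must match the whole string
datePattern = re.compile(r"^\d{4}/\d{2}/\d{2}$")

for text in ["2014/12/24", "not a date"]:
    if datePattern.match(text):
        print("%s looks like a date" % text)
    else:
        print("%s does not match" % text)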

It’s a mental puzzle that requires you to visualize how patterns can be broken down into concise nuggets of representative abstraction.

Regulex

The following web site creates a diagram out of a regex to give you a useful visual depiction of its logic.

Regexpress

This next one uses similar visuals but allows you to build the expression by stringing together icons that represent the patterns you are looking for.

Regviz

Here’s another one that doesn’t use fancy charts and graphics, but it does allow you to see the matches in real time against text you specify.

Dexter’s Head

Art, Electrons

Quick little test of photogrammetry software. Nothing revolutionary here… It’s simply something that I’ve been interested in for a long time and haven’t gotten around to working with. Until now, that is, since I have been capturing and modeling real environments for live projections.

From this:

19 still images.

to this:

Click and drag in the window to tumble the geometry.

Maya Frustum Visualizer

Electrons

Why Maya doesn’t offer this functionality out of the box is a mystery to me. I suspect there is actually a way to do it but it’s so buried I can’t find it. I needed it for a recent job so I’m putting it out there for public consumption. Anyway, it’s simple: just select a camera and run the script to visualize the frustum. It should update if you change the camera transforms, film back and FOV. Use it, change it, break it, fix it, improve it at your leisure…

 """A Maya Python script that builds frustum geometry based on the selected camera. Thomas Hollier 2015. hollierthomas@gmail.com""" 
import maya.cmds as cmds
import math, sys

#--------- Gather relevant camera attributes
import maya.cmds as cmds
import math, sys

#--------- Gather relevant camera attributes
camera = cmds.ls(selection=True)

if not cmds.objectType(camera, isType="camera"):
	print "ERROR: You need to select a camera."
	sys.exit(0)

focalLength = cmds.getAttr(camera[0]+".focalLength")
horizontalAperture = cmds.getAttr(camera[0]+".cameraAperture")[0][0]
verticalAperture = cmds.getAttr(camera[0]+".cameraAperture")[0][1]
nearClipping = cmds.getAttr(camera[0]+".nearClipPlane")
farClipping = cmds.getAttr(camera[0]+".farClipPlane")

print "---- Camera Attributes:\n\tfocal length: %s\n\thorizontal aperture: %s" % (focalLength, horizontalAperture)

#--------- compute FOV just for kicks, and to verify numbers match
adjacent = focalLength
opposite = horizontalAperture*.5*25.4

print "---- Right Triangle Values:\n\tadjacent: %s\n\topposite: %s" % (adjacent, opposite)

horizontalFOV = math.degrees(math.atan(opposite/adjacent))*2

print "\tcomputed horizontal FOV: %s" % (horizontalFOV)

#--------- calculate ratios
plane = horizontalAperture*25.4
nearScaleValue = nearClipping*plane/focalLength
farScaleValue = farClipping*plane/focalLength

print "---- Lens:\n\tprojection ratio: %s" % (plane/focalLength)

#--------- build geometry
myCube = cmds.polyCube(w=1, h=1, d=farClipping-nearClipping, sy=1, sx=1, sz=1, ax=[0,1,0], ch=1, name=camera[0].replace("Shape", "Frustum"))
 
cmds.setAttr(myCube[0]+".translateZ", nearClipping+(farClipping-nearClipping)*.5)
cmds.makeIdentity(apply=True, t=1, r=1, s=1, n=0, pn=1)
cmds.setAttr(myCube[0]+".rotatePivotZ", 0)
cmds.setAttr(myCube[0]+".scalePivotZ", 0)
cmds.setAttr(myCube[0]+".rotateY", 180)

#--------- use expressions to update frustum geo as FOV and apertures are changed 
scaleX = "%s.scaleZ*%s.farClipPlane*%s.horizontalFilmAperture*25.4/%s.focalLength" % (myCube[0],camera[0],camera[0],camera[0])
scaleY = "%s.scaleZ*%s.farClipPlane*%s.verticalFilmAperture*25.4/%s.focalLength" % (myCube[0],camera[0],camera[0],camera[0])

cmds.move(0,0,0, myCube[0]+".f[2]", absolute=True)
cmds.scale(nearScaleValue, 0, 1, myCube[0]+".f[2]", pivot=[0,0,0])
cmds.expression(s="%s.scaleX = %s;%s.scaleY = %s;" % (myCube[0],scaleX,myCube[0],scaleY), n="%s_Expr" % myCube[0])
cmds.parent(myCube, camera, relative=True)


Self Illuminating Responsive Tintype Frame

Art, Atoms, Electrons

The area of a tintype image that gets exposed to the most light essentially becomes a pure silver coating. While the reflective properties of silver make the final image a bit darker than traditional photographic prints, they also give it a really compelling silvery, metallic feel. As a way to highlight those unique qualities, I decided to experiment with hiding LEDs in a box frame to illuminate the image from inside. I also wanted to find a way to intensify the lighting and make the image come alive as viewers got close to it.

Framed!

I inherited some antique oak wood floor pieces and made quick work of them on the old chop saw.

I built some walls behind the front of the frame and attached LED string lights to them.

I mounted an 8×10 tintype on a black backing board cut to fit at the bottom of the frame’s walls.

Lights OFF / Lights ON

Controls

The next step was to figure out what kind of sensor was needed to make the lights come on as a viewer approached the frame. I figured the sensor would need a range of at least a few meters and the ability to return the specific distance to the closest obstructing object. I am not a distance sensor expert, so I used the internet for help. In the end, I settled on a Maxsonar EZ0 LV. It’s cheap, it’s got a range of a few meters, and it offers serial, analog, and PWM outputs. I hooked it up to a Teensy board and confirmed the sensor was returning appropriate distance values. On the lighting front, I was planning on controlling the brightness of the LED string with the Teensy, but since the LED string requires a 12V supply and the Teensy outputs only 5V, I used a MOSFET driven by the Teensy’s output to modulate the LED’s power supply.

The electronically savvy amongst you will notice that I’m currently using 2 power plugs: a 12V one for the LEDs and a 5V USB supply for the Teensy, which is stupid. I tried to use a voltage regulator to convert the 12V to a 5V supply, but somehow it created much noisier readings on the distance sensor… I must have missed a capacitor somewhere. I will try using an Arduino Nano, which can take a 12V supply directly.

Code

God, I love the internet!!! I read somewhere that the sensor’s PWM signal was much cleaner, so I started with that, but I eventually found out that it is also much, much slower and wasn’t able to keep up with the rate of change I was looking for. In the end, I used the analog signal and tried to filter it as best I could.

/*
Using the Analog signal from a Maxsonar LV EZ0 to drive a 
12V LED strip through a MOSFET. 
Take 20 samples, sort and select the median value.
*/

const int anPin = 9;         // analog pin connected to the sensor's AN output
int anVolt, cm;
const int samples = 20;
int ranges[samples];

unsigned long time;          // millis() returns an unsigned long
float previousIntensity;
float maxDist;
float minDist;
int minVal, maxVal;

const int mosfetPin = 9;     // the PWM pin that drives the MOSFET gate

void setup()  { 
  // set the MOSFET pin as an output:
  pinMode(mosfetPin, OUTPUT);
  Serial.begin(9600);
  time = 0;
  maxDist = 450;
  minDist = 10;
  minVal = 5;
  maxVal = 255;
  previousIntensity = 0;
} 

void loop(){
  // print time spent since last loop
  Serial.println(millis()-time);
  time = millis(); 
  
  // sample sensor value
  for(int i = 0; i < samples ; i++)
  {
    anVolt = analogRead(anPin)/2;
    ranges[i] = anVolt;
    delayMicroseconds(50);
  }
  isort(ranges,samples);
  
  // take the median sample and convert to centimeters
  cm = 2.54 * ranges[samples/2];
  Serial.println(cm);

  // shape the curve of the result
  float intensity = (cm - minDist)/(maxDist - minDist);
  intensity = 1-constrain(intensity,0,1);
  intensity = intensity*intensity;
  intensity = previousIntensity*.995 + intensity*.005;
  previousIntensity = intensity;

  // set the brightness of pin 9:
  float val = intensity*(maxVal-minVal)+minVal;
  analogWrite(mosfetPin, val);
}

// Insertion sort, used to pick the median sample
void isort(int *a, int n){
  // a points to the array to sort; n is the number of elements
  for (int i = 1; i < n; ++i)
  {
    int j = a[i];
    int k;
    for (k = i - 1; (k >= 0) && (j < a[k]); k--)
    {
      a[k + 1] = a[k];
    }
    a[k + 1] = j;
  }
}

Next Steps

Once I get my paws on that Arduino Nano board, I can rework my circuit and get the soldering iron out to make my electronics more permanent. I have also ordered a HC-SR04 Distance Sensor to see how it compares to the Maxsonar in terms of accuracy, speed and noise. Also, I need to make the frame a little bit deeper so the light can spread out a bit further across the image.

Which one is cooler?

The ominous cyclopean look or the cute and friendly Wall-E look?

The three main attributes of a camera lens

Art, Electrons

If you ever want to embark on the foolish pursuit of building a camera, you will need to understand how lenses work. The three attributes that control the behavior of a basic camera lens are focal length, format, and aperture.

Focal length

The focal length is the distance from the lens at which rays coming from infinity converge. For example, if you have a 50mm lens, infinity is in focus when the image plane is 50 millimeters away from the lens. A lens’s focal length is a function of how much it bends the light: the more it bends, the closer to the lens the rays converge and the smaller the projected image. This, in turn, means that the focal length can also be used as an indication of the magnification of the image.

Format

The format represents the intended size of the image plane that the lens is designed to project onto, such as 35mm or 4×5. The combination of the focal length and the format determines the lens’s field of view. This is why a lens with the same focal length gives you a different amount of magnification on different formats. (illustration)

Aperture

The aperture gives a measure, in f-stops, of how “fast” the lens is. It is often thought of as the size of the opening in the lens, but in photography it is actually the ratio of the focal length over the diameter of the lens opening; a 50mm lens at f/2, for example, has a 25mm opening. This ratio has the advantage of remaining proportionally equal across the different sizes of photographic systems: if you use the same f-stop on a tiny phone camera and a big SLR at the same ISO setting, the same shutter speed will expose both images similarly. As the size of the opening increases, more light gets in but the plane of focus gets thinner.

Finding the focal plane

The last relevant piece of information is the formula that determines the distance of the focal plane to the lens for objects that are closer to the camera than infinity. The relationship between the distance of an object to the lens (S1) and the distance from the lens to the focal plane of that object (S2) is defined by this formula:
1/S1 + 1/S2 = 1/focal_length
Solving for S2, this becomes:
S2 = (S1 * focal_length)/(S1 - focal_length)
If you plug in your own numbers, you will notice that the closer your object is to the lens, the farther away the focal plane will be from the lens.

Useful python functions


import math

# Given a specific focal length and distance to subject, how far from the lens does the image plane converge?
def distanceOfFocusPlaneToLens(distanceToSubject, focalLength):
    v = (focalLength*distanceToSubject)/(distanceToSubject-focalLength)
    print("when subject is at %s the focal plane is %s from the lens" % (distanceToSubject, v))
    return v

# FOV calculator: "aperture" here is the width of the film back or sensor, not the f-stop
def FOVFromFocalAndAperture(focal, aperture):
    return math.degrees(2*math.atan(aperture/(focal * 2.0)))
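
A quick sanity check with made-up numbers:

# a 50mm lens focused on a subject 1000mm away
distanceOfFocusPlaneToLens(1000, 50)      # focal plane lands about 52.6mm from the lens

# the same 50mm focal length on two different formats
print(FOVFromFocalAndAperture(50, 36))    # about 39.6 degrees across a 36mm-wide 35mm frame
print(FOVFromFocalAndAperture(50, 6.17))  # about 7.1 degrees on a 6.17mm-wide phone sensor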

Creating a Spotify appliance from an old smartphone

Atoms, Electrons

A problem:

If you have a teenager, you will no doubt be familiar with the dilemma of letting them listen to music at night while keeping them somewhat sheltered from the irresistible pull of the bright rectangle of light that seems to so efficiently hijack their attention. In my day, it was simple: the tape deck played music and that’s all it did. These days, with smartphones, it feels like it’s all or nothing. The tape deck comes with a movie theater, a video game arcade, and a place to hang out with friends, all open and available 24 hours a day. So, like any concerned parent, we take the phone away at night, but since nowadays the phone is where they get their music, that means they can’t listen to music anymore, which is kind of sad.

Something annoying:

You know what I hate? The fact that our consumer culture dictates that when something is broken, we throw it away and buy something new rather than fixing it. I had a smartphone, and after two years the internal phone speaker broke, which meant I had to have it on speakerphone or plugged into an earpiece to hear people talking to me. I actually attempted to fix the problem: I ordered the part on eBay and spent 30 minutes opening the phone, but I failed. Mind you, the smartphone was still a computer with over 1000 times the speed and memory of my first computer, a nice bright and sharp touch screen, wifi, etc… Other than the speaker problem, it worked perfectly. In the end, though, since I couldn’t fix the problem, I reluctantly gave in and bought a new phone. Grrrrr!

Ready for the trash? Not so fast!

So, I made this:

It’s actually really simple. I tried to restrict the functionality of the phone and create an object designed with the single purpose of enjoying Spotify without the potential distraction of the rest of the internet getting in the way. First off, with the SIM card out, your smartphone becomes a small wifi tablet.

I removed all the video games and Netflix and Youtubes and Hulus and Snapchats and Facebooks and Vines and Instagrams and circles and Pinterests and Ellos and MySpace. I kept only Spotify and installed a program called Autostart which automatically launches an app after the phone boots and prevents a user from quitting the app.

I then built a frame into which I could mount the phone and poured a trademark Hollier polished cement base. With Autostart launching Spotify whenever the phone turns on, a crappy old orphaned phone is transformed into a custom one-of-a-kind Spotify appliance. Being mounted in a frame and set into a solid base turns it into something you set and walk away from, like a radio, rather than something you interact and fiddle with, like a phone.

How to get a 9 year old interested in programming

Electrons, Giggles

OS X has a nifty command line speech synthesizer called “say”. It allows you to type some text and hear it “spoken” by the ubiquitous synthesized robotic voice. First, open the terminal and show him how it works by typing:
say "hello, my name is Robert"
Then show him that you can type different sentences and let him play with that until he gets the hang of it. Make sure to include enough potty humor to ensure sufficient hilarity:
say "Even though I am a computer, I sometimes talk about farts."
That should get his attention. You can also include question marks and nonsensical words for extra extra fun:
say "Are you some kind of flarpy nunckenbarf?"
The next step is to introduce the concept of variables:
friend="John"
say "$friend is a complete idiot"
friend="My hamburger"
say "$friend is a complete idiot"

By this time, tears of laughter should be streaming down his face, but don’t let it stop you. This is where the comedic potential really starts paying off with the introduction of the “for” loop.
for friend in "alfred" "max"
do
say "$friend is a fizzlebutt"
done

And then show him how to pause between the sentences
for friend in "alfred" "max"
do
say "$friend is a fizzlebutt"
sleep 1
done

Finally, once he peels himself off the floor and catches his breath, you apply the coup de grâce with the help of the “if” statement:
for friend in "alfred" "max" "frank" "dave"
do
if [ "$friend" != "max" ]
then
say "$friend is a total moron"
sleep 1
else
say "$friend has bad breath"
sleep 1
fi
done

After you share these intoxicatingly powerful instruments of distraction with your son, I suggest you steer clear of the technology teacher at the school.

Exposure/gain calculator for all your inverse squared falloff needs

Art, Electrons

I get asked this approximately once every 6 months, and I always forget, which means I have to spend a bunch of time looking it up again. This kind of conversion usually comes up going back and forth between tweaking lights in the render and color correcting the individual outputs in the comp. I’ll jot the math down here so I know where to look for it next time. Oh, and since this blog is about relentless play and hacking in general, maybe I’ll build some quick little javascript/HTML calculators, cuz I just like to party like that!

Converting from stops to a multiplier

This is useful when you want to use f-stops with a color correct node that multiplies your color values.
newGain = oldGain * pow(2,exposureChange)


Converting from a multiplier to stops

This is useful when you have color corrects in the comp and want to bake them back into your render’s lights.
newExposure = oldExposure + log(gainChange,2)


Adjusting for distance

While we’re at it, you often need to move a light backwards or forwards while keeping the same light intensity on a subject. Assuming that “oldDistance” is your light’s current distance to subject and “newDistance” is the new distance to subject, you can use the following formula to figure out the new exposure required for the light to have the same intensity on your subject from the new position.

newExposure = oldExposure + log(pow(newDistance,2)/pow(oldDistance,2),2)


Adjusting for light area

In some renderers the light intensity is not normalized to the area of the light itself, which means that your light becomes brighter as you scale it up. If you want the amount of light to remain similar as you scale it up or down, this is your formula:

areaRatio = (oldWidth * oldHeight)/(newWidth * newHeight)
newExposure = oldExposure + log(areaRatio, 2)

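Until those javascript calculators materialize, here is a minimal Python sketch of the same four conversions (the function names are mine, not from any particular package):

import math

# stops -> gain multiplier
def gainFromExposureChange(oldGain, exposureChange):
    return oldGain * math.pow(2, exposureChange)

# gain multiplier -> stops
def exposureFromGainChange(oldExposure, gainChange):
    return oldExposure + math.log(gainChange, 2)

# inverse squared falloff: compensate the exposure for moving the light
def exposureForNewDistance(oldExposure, oldDistance, newDistance):
    return oldExposure + math.log(math.pow(newDistance, 2)/math.pow(oldDistance, 2), 2)

# compensate the exposure for scaling a non-normalized area light
def exposureForNewArea(oldExposure, oldWidth, oldHeight, newWidth, newHeight):
    areaRatio = (oldWidth * oldHeight)/(newWidth * newHeight)
    return oldExposure + math.log(areaRatio, 2)

# for example, doubling a light's distance costs two stops:
print(exposureForNewDistance(0, 1.0, 2.0))   # 2.0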


Submerged, a Lightscape Installation

Art, Electrons

I have a real soft spot for projects that blend virtual environments with real architectural spaces. In fact, before I even found a place in the VFX and animation industry, I had already had the opportunity to get involved in various stage design and themed entertainment projects that required this kind of creative inquiry. I’ve also always been excited by the use of image re-projection techniques in my VFX and animation work because I find they can sometimes offer an elegant and effective solution to problems which would otherwise be difficult or expensive to tackle through more traditional approaches.

A few months ago, when a painter friend approached me for help designing a lightscape installation for the opening of her exhibit, I jumped at the opportunity. Corinne Chaix’s exhibit is called “Submerged” and is at the PYO gallery downtown. Her work features underwater scenes, and she wanted to explore the idea of complementing their mood and reinforcing their theme by turning the gallery space itself into an underwater environment.

We visited the gallery and brainstormed on the most aesthetically interesting ways to set up the projection. I also surveyed the space which later allowed me to build an accurate digital version of the environment.

The next step was to create a digital version of the projector in this virtual environment which allowed us to visualize the way imagery projected out into the space.

Meanwhile Corinne compiled a set of stock footage clips that resonated with her and passed them on to me. I created a composite movie that blended various elements from these clips and formatted the resulting image to fit the contours of the gallery space.
The first clip conveyed a dark foreboding underwater feeling while the second captured the delicate crystal beauty of the water’s surface.

I combined the two, moving the water surface to the top of the frame in an echo of many of her paintings.
The last step was to take this composite, project it onto the walls and remap it to line up with the contours and orientation of the walls.
I used the open source VisualSFM package to extract the geometry from the photos, Maya for the 3D and Nuke for compositing.

Hacking, Creativity, Process

Art, Electrons, Giggles, Thoughts


I had a great time today! I took apart my belt sander. It’s a basic Black and Decker model I got at Home Depot, and I have been abusing it for the past two years to sand and polish the cement pots I make. Considering the time it took me and the fact that a new one only costs $50, the sensible thing would probably have been to go out and buy a replacement. The thing is, I am curious; I wanted to see how the pieces necessary for the tool to function all fit together into one design, how big the motor is, what kind of gears and pulleys it uses, and whether there are any cool pieces I could use for something else or that are just cool to look at. In the end, I took all its pieces apart, cleaned out all the cement and wood dust that had clogged it up and, lo and behold, when I put it back together, it worked!

Hacking is a vital activity that subverts the opaque technological structures that exert increasing control over our lives. Extensive data collection empowers large corporate entities to profile how we fit into marketing models and allows them to decide what we should or shouldn’t have access to in order to maximize profits. Accompanying this are the prevailing consumerist attitudes which dictate that a broken item should be thrown away and a new one bought, a wasteful assumption enabled by the orgy of cheap goods globalization provides us with. It sucks!

Opening up that belt sander represents my refusal to accept this status quo. I don’t particularly like the word “hacker” because it conjures up the image of a social recluse with questionable personal hygiene, impressive technical abilities, and a broken moral compass. What I associate hacking with, though, are the creative endeavors born from the spirit of questioning, exploring, and rearranging the prevailing attitudes and objects of our world. Hacking tries to figure out how something works, asks whether the designers of that “thing” put it together in a way that attempts to control my behavior, and then seeks to put the thing back together differently. The reasons for doing so can be varied: artistic, political, utilitarian, but usually a bit of all of the above.

Ultimately, I think what really compels me to open up a broken belt sander just to see what’s inside is that it makes me happy. It allows me to experience a childlike sense of discovery and excitement at understanding how something works, and the happiness of integrating it into my own creative process. When I get lost in this process of disassembling and reassembling, breaking and building, cool things usually come out, and it puts me in harmony with the world. It restores my sense of my own humanity. That’s why the name of this blog is relentlessplay; it’s meant to convey the urgency of keeping the spirit of play alive and well.