Break week

I forced myself to have a break this week, so while I did some work on the Capitol Theatre project, I didn’t have a chance to keep up with research, beyond looking at some reference images for my browser-based works.

Happy accidents

I’m working in Emission Control 2 again for my Capitol Theatre piece. It’s a rather temperamental piece of software, especially on PC, where it glitches occasionally, possibly because its audio and file processing isn’t given enough priority, or because it writes 32-bit WAV files. The piece I’m developing involves a lot of rhythmic pulsing, so those glitches throw the rhythm off, and since I’m layering several recordings from the program in a DAW, this has some unintended Reich-y consequences: layers shift randomly, one by one.

For example, 0:07 in this excerpt:

The third note in the four-note pattern shifts out for a moment and then back again. I edited it slightly to remove the glitch sound and to align the rhythmic shift to a 32nd-note grid, but for now it remains an interesting unintentional feature. I’m not sure if I’ll keep it, as it throws the hypnotic feeling off somewhat, but it was a nice little happy accident that was only mildly frustrating after a whole day of capturing similarly glitchy takes from Emission Control. I’ve since switched back to my iMac, as the PC was frustrating me in other ways.

Pharos Designer

I’ve started digging into Pharos Designer using audio from the above experiment, mostly getting a feel for the timing and some of the available automation options. My conclusion is that I shouldn’t feel like using the presets (strobe, wave, etc.) is cheating: after spending quite some time trying to replicate what the presets do by addressing each light individually and cycling through them, I’ve found it isn’t quite worth the trouble. I’ve also realised it’s likely possible to manipulate the order in which the cycling presets address the lights by creating new groups with the lights moved into the required positions. I’ll study this over the coming week and see how I go.

Another thing I’m realising is that quick strobing isn’t represented very well in the simulation. I’m hoping this isn’t the case in the actual space, but I’m prepared to use ramped strobing instead, i.e. a quick cut at the beginning or end, either followed or preceded by a fade. The ramp modes work really nicely with the wave preset, resulting in a nice pulsing chase sequence that I can easily time to my music (which is set to 120bpm specifically to work well with Pharos).
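As a rough sketch of the timing maths, here’s how a ramped strobe at 120bpm might be modelled as a brightness envelope: an instant cut to full brightness on each beat, followed by a linear fade. The function and its parameters are my own illustration, not Pharos Designer’s actual interface.

```javascript
const BPM = 120;
const BEAT_MS = 60000 / BPM; // 500 ms per beat at 120 bpm

// Brightness in [0, 1] at time t (ms since the sequence started):
// a hard cut to full on each beat, then a linear fade over fadeMs.
function rampedStrobe(t, fadeMs = BEAT_MS) {
  const phase = t % BEAT_MS; // position within the current beat
  return Math.max(0, 1 - phase / Math.min(fadeMs, BEAT_MS));
}
```

The mirror-image ramp (a fade in, ended by a hard cut) would clamp `phase / fadeMs` to 1 instead.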

Browser-based works

As I’ve had a deliberately quiet week, I haven’t had a chance to get into the browser-based work, but I have been collecting more examples, mostly through various Reddit communities but also from unexpected sources such as my partner’s vision therapy exercises.

One exercise in particular uses 3D glasses to make the wearer’s eyes fuse a red circle and a blue circle into a single object while the two circles gradually move further apart onscreen. The viewer is unaware of this happening, as they are told to find a specific feature in the circle and press a key when they see it. Each keypress moves the images a little further apart, so the viewer momentarily loses focus and searches for the feature again; this distraction allows the circles to separate while the viewer’s eyes follow. It’s a very interesting effect and has made me wonder about similar eye-focus techniques, such as autostereograms. Much like some previous ideas, I think it may border on being annoying or headache-inducing, but creating a visual experience that is viewed as an autostereogram may be an interesting way to create a hypnotic experience.

I’m also thinking about kaleidoscopes and the possibility of connecting their mechanics to an audio experience. One option is to use a wave file of a chord progression, much like in my Emission Control experiments, with a small looping window that adjusts with the kaleidoscope controls: mirror position would move the loop points over the wave file, and changing the number of mirrors would expand and contract the size of the loop. It could even be done with pattern data, which would make it easier to generatively create the audio and allow it to be represented as visuals, or vice versa.
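To make the mapping concrete, here’s a minimal sketch of how the kaleidoscope controls could drive a loop window over a wave file. All the names and the scaling rule are my own assumptions: mirror rotation (normalised 0–1) sweeps the loop start across the file, and the mirror count scales the loop length.

```javascript
// Hypothetical mapping from kaleidoscope state to loop points (seconds).
// mirrorAngle: 0..1 rotation of the mirrors; mirrorCount: number of mirrors.
function loopWindow(mirrorAngle, mirrorCount, fileDurationSec, baseLoopSec = 0.25) {
  // More mirrors means a longer loop, capped at the file length.
  const loopLen = Math.min(baseLoopSec * mirrorCount, fileDurationSec);
  // Rotating the mirrors slides the window across the file.
  const loopStart = mirrorAngle * (fileDurationSec - loopLen);
  return { loopStart, loopEnd: loopStart + loopLen };
}
```

In a browser, these values could feed the `loopStart` and `loopEnd` properties of a Web Audio `AudioBufferSourceNode` directly.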
