week 11: distractions

It’s been a busy week. I haven’t had a great deal of time to dedicate to HME work, as I had to prepare for a job interview, and an assignment for another course also took up a lot of my time — but I did manage to make some progress, outlined below.

Capitol piece

After last week’s uncertainty, it was good to receive some feedback that I’m making something that feels effective. I’m particularly appreciative of the comment that leaving the middle section of the ceiling completely unlit for the first six minutes was a good idea. As the weeks go by, the iterations become smaller and smaller; this week I essentially only made the changes I wrote about last week, along with making the ending abrupt rather than a slow fadeout.

I think the decision to move from a wave on the ceiling at every bass-note change to a wave that lasts the entire duration of the non-tonic bass notes was a good one. Having it slowly fade into a strobe works well for me. However, the walls using a pulse pattern instead of a ramp down may be a bit too extreme; I think the ripple effect of the previous iterations is more subtle and works better as a hypnotic pattern.
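As a loose illustration of the wave-to-strobe idea (this isn’t the actual patch — just a generic Python sketch with made-up rates and durations), the ceiling brightness could be a slow wave that is gradually blended into a fast strobe over the length of each non-tonic bass note:

```python
import numpy as np

def ceiling_brightness(t, note_duration, wave_hz=0.25, strobe_hz=8.0):
    """Brightness in [0, 1]: a slow wave that fades into a strobe
    over the duration of a non-tonic bass note (illustrative values only)."""
    blend = np.clip(t / note_duration, 0.0, 1.0)             # 0 = pure wave, 1 = pure strobe
    wave = 0.5 + 0.5 * np.sin(2 * np.pi * wave_hz * t)        # slow sinusoidal wave
    strobe = (np.sin(2 * np.pi * strobe_hz * t) > 0) * 1.0    # hard on/off strobe
    return (1 - blend) * wave + blend * strobe

# Sample the envelope at 60 fps across a hypothetical 20-second bass note.
t = np.arange(0, 20, 1 / 60)
frames = ceiling_brightness(t, note_duration=20)
```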

The build to the abrupt end, as well as the new chord sweeps and arpeggios, make this go from hypnotic to somewhat exhilarating. I think I might aim for that as a secondary experience.

Phase

I haven’t done any more work on this in the past week; however, after feedback during the week, I think it can go back to how it was before: without the flashing, and with visible circles that drop from the middle on click. I’ll still include the parameters as “secret” features, or for a future version of the work, but it’s interesting to iterate on a project and then settle on a previous version. That’s a first for me.

Collaborations

Sculpture

The latest iteration of Pat’s sculpture is as follows:

Since this upload, I’ve received a near-final edit of the video, as well as some feedback: notably, to reduce the high-register sounds, to give the piece a kind of arc that progresses from mesmerising to exhilarating, and to include occasional moments of full-frequency sound. I’m on board with all of those ideas and think this will be a great project.

Capitol piece

I’ve been thinking about what I can do with the audio for Pat’s Capitol piece, and over the weekend, a friend sent me a link to Woulg’s latest album Bubblegum.

It’s pretty similar in sound palette to Iglooghost’s work, which I used as inspiration for my piece. It has me thinking about doing some post-processing on my already-sequenced audio, which could perhaps be the last piece of the puzzle in making the whole thing cohesive. My idea is to create several timestretched versions of the sections that include drums and crossfade/cut between them; then, in the ending section with the heavy beat, replace some drum hits with enveloped versions of the trance chords from the previous section. I’ll be experimenting with that later today.
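A rough sketch of how the first step (stretch, then crossfade) might look, assuming Python with librosa and soundfile and a hypothetical filename for the drum section — the envelope-replacement step would follow a similar pattern:

```python
import librosa
import numpy as np
import soundfile as sf

# Load the already-sequenced drum section (hypothetical filename).
audio, sr = librosa.load("capitol_drums_section.wav", sr=None, mono=True)

# Render a few timestretched versions of the same section.
rates = [1.0, 0.5, 1.5]
stretched = [librosa.effects.time_stretch(audio, rate=r) for r in rates]

def crossfade(a, b, sr, fade_seconds=2.0):
    """Overlap the tail of a with the head of b using an equal-power fade."""
    n = min(int(fade_seconds * sr), len(a), len(b))
    theta = np.linspace(0, np.pi / 2, n)
    tail = a[-n:] * np.cos(theta) + b[:n] * np.sin(theta)
    return np.concatenate([a[:-n], tail, b[n:]])

# Chain the stretched versions into one take; hard cuts could simply
# concatenate slices instead of calling crossfade().
result = stretched[0]
for seg in stretched[1:]:
    result = crossfade(result, seg, sr)

sf.write("capitol_drums_stretched.wav", result, sr)
```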

Research

My research in the past week has mostly been for my Emerging Digital Cultures assignment, where I’m researching glitch art and presets.

One work in particular that is standing out for me is Cory Arcangel’s Data Diaries, which the artist talks about at the 19-minute point in this lecture:

The work involves providing QuickTime with a file that contains only header information (resolution, duration and colour mode); an interesting bug/feature in QuickTime is that, when given such a file, it fills in the missing data from the computer’s RAM, creating glitched patterns and sounds. It relates quite heavily to the work I’m creating for the assignment: a collection of Nord Lead presets generated by inserting a new battery into a PCMCIA SRAM memory card, which produces randomised data that the synth tries to interpret as presets.

Researching presets for the assignment, I went down a rabbit hole of reading about algorithmic listening, a relatively new form that Kobel (2019) compares to Schaeffer’s (2017, p. 212) theory of reduced listening, the distinction being that the listening and categorisation are performed by a machine learning algorithm. I find this quite fascinating as someone who is going deeper into the realm of generative music and, in turn, getting closer to working with AI and machine learning in my own music. Relinquishing control to the computer for several tasks (most importantly the actual generation of the presets, but also the automated process of recording and separating the sounds) is a key component of my assignment’s concept.

So how do these relate to my HME works? One thing I’ve noticed with the EDC assignment is that the resulting work can be seen as a catalogue of sounds, a few of which I’ve already used as launching points for Pat’s works. It raises an interesting idea: my now monolithic “album” of Nord patches can be presented simultaneously as useless and functional art.

References

Woulg 2021, Submission, sound recording, Yuku Music, Prague.

Arcangel, C 2009, Digital Media Arts, YouTube, 25 March, Columbia University, viewed 5 October 2021, <https://www.youtube.com/watch?v=ZzHq7PzQWEE>.

Kobel, M 2019, ‘The drum machine’s ear: XLN Audio’s drum sequencer XO and algorithmic listening’, Sound Studies, vol. 5, no. 2, pp. 201–204.

Schaeffer, P 2017, Treatise on Musical Objects: An Essay Across Disciplines, translated by Christine North and John Dack, University of California Press, Oakland, California.
