week 5: diversions

Mark Fell and minimalism

Another week of less than ideal research, due to two things: my first dose of the AZ vaccine completely destroyed me for two days, and I’ve had to dedicate a significant amount of my time to another assignment—a homage to Mark Fell’s Multistability (2010) in the form of a rhythmic sequencer for Max/MSP. It’s looking good though:

It’s giving me a good project to use as motivation to dive into Max again. Additionally, it led me to dig deeper into Fell’s process, and in doing so I realised that he wrote a thesis on his works (Fell 2013).

I’ve only briefly skimmed through so far, but the reasoning behind his work Attack on Silence (2008) was something interesting that I could relate back to my HMsEx projects:

“My intention was to create a work that presented the audience with static audio-visual formations, lasting for several minutes without change. Here the format of the image presented a number of possible options that were deliberately not acted upon; presenting a work that established a self-consciously ‘dormant’ system … I considered this to be a critical response to the prevailing trend among my contemporaries.”

It was interesting to read this, especially since I hadn’t previously thought to analyse the why of Fell’s intensely minimalist work. It amused me to realise that, at least in the context of this work, the reasoning behind it was similar to the reason why I started making minimal music—because my peers were making music completely opposite to minimalism.

The work is satisfactorily minimal:

The combination of discovering this work (and Fell’s reasoning behind it) and Darrin’s comments in class about the possibility of the hypnotising experience being compromised by too drastic an ‘arc’ of valence has prompted me to rethink the approach for my Capitol Theatre piece. I may take a risk and make a piece that seems almost static, with very slow, gradual changes in rhythm (perhaps even going in and out of sync in homage to Steve Reich) and colour, rather than the distinct sections of my current idea. It’d certainly be an interesting experiment with the lighting system.

Visual research

As part of my research into more hypnotic imagery for my browser-based works, I’ve subscribed to the Perfect Loops community on Reddit. This has proven to be a goldmine of hypnotic moving imagery.


Source: Downward Spiral by Reddit user kinetic-graphics

Analysis of the above animation revealed that there is something very interesting going on that makes the loop particularly hypnotic: each ball actually travels on an elliptical path, rather than in a spiral downwards. This is due to the continual zooming, synced with the addition of track pieces to the spiral.

Source: Minty Sphinx Tiles by Reddit user jn3008 (best experienced by visiting link for looped version)

The perspective play in the above animation is so powerful, it almost makes me sick. What makes it so effective is the way the tiles collapse into the background and become larger, with implied upward movement in contrast to the downward movement of the foreground tiles. It reminds me of the incredibly odd spatial/perspective distortions I experience in fever dreams, where objects are simultaneously minuscule and enormous; or right in front of my face and kilometres away. I need to experiment with such juxtapositions of movement and scale in 2D.

References

Fell, M 2010, Multistability, sound recording, Raster-Noton, Chemnitz.

Fell, M 2013, ‘Works in sound and pattern synthesis ~ folio of works’, PhD thesis, University of Surrey.

Attack on Silence 2008, DVD, LINE Sound Art Editions, Los Angeles, California, created by Mark Fell.

week 4: ideas within ideas

Unfortunately, I didn’t have a great deal of time for research this week, which is somewhat surprising considering the continued lockdown preventing any extended periods of outside time. Anyway, here’s what I did manage.

Project progress

This week, I’ve been thinking more about my browser-based works, and how I can tie them into the hypnotising adjective. I think ultimately I may have to create some new works altogether, although I do have some ideas I haven’t started yet.

One idea in particular, inspired by Steve Reich’s phasing technique, could look like this:

Each square, when activated by pressing the “+”, contains a bouncing ball—likely continually bouncing, with no need for physics beyond a simple “arc” function (though, it would be interesting, yet somewhat less hypnotic, if each ball exhibited some degree of physics as in my Bounce project, with some external force agitating them). When each ball hits the bottom edge of the square, a tone plays. Controls pop up on mouseover, allowing the user to adjust the size and bounce speed, or remove the ball. Size of the ball relates to the played note. Horizontal positioning of each square dictates coarse phase, with possible additional popup controls to adjust “meta-phase” within each square. Other controls could be available, such as rotating each row left/right, overall meta-phase per row, duplicating rows etc.
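To get a feel for how the phasing would emerge, here’s a minimal sketch of just the bounce/phase timing logic, with no rendering or audio. All the names and numbers here are my own placeholders, not an existing API: each ball bounces on a simple |sin| arc, a hit fires whenever the arc touches the bottom edge, and the square’s horizontal position maps to a coarse phase offset as described above.

```javascript
// Ball height in [0, 1]; 0 = bottom edge of the square.
function ballHeight(t, speed, phase) {
  return Math.abs(Math.sin(Math.PI * (t * speed + phase)));
}

// The arc touches zero whenever (t * speed + phase) is an integer,
// so the hit times (note triggers) can be listed directly.
function hitTimes(speed, phase, duration) {
  const hits = [];
  for (let n = Math.ceil(phase); n <= duration * speed + phase; n++) {
    hits.push((n - phase) / speed);
  }
  return hits.filter((t) => t >= 0 && t <= duration);
}

// Two squares at different horizontal positions (phase offsets):
console.log(hitTimes(1, 0, 4));    // [0, 1, 2, 3, 4]
console.log(hitTimes(1, 0.25, 4)); // [0.75, 1.75, 2.75, 3.75]
```

Giving two squares slightly different speeds would make their hit patterns drift in and out of sync, Reich-style, which is the core of the idea.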

As with my other browser-based works, the sounds would be simple tones, perhaps synthesised marimbas or other melodic percussion instruments. Visual feedback via blinking squares could occur whenever notes play.

Another idea I’ve been thinking about recently is a “feedback” piece, which is entirely based on user input, and acts as a kind of live looper. A loop time is set (possibly adjustable), and any mouse movements, clicks, and key presses entered by the user are captured in the loop. This would include visual feedback as well as individual sounds for each action, possibly with a “decay” so the loop never gets too overloaded. It would be a little tricky to implement, possibly with the need to quantise actions (both temporally and visually), but it could be an effectively hypnotic/mesmerising work, especially if I include some form of additional pattern-based processing, mirroring, etc, such as this kaleidoscope (draw on the grey canvas), or this patterned spiral.
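The capture/quantise/decay mechanics of the looper could be sketched roughly like this. All names and constants are hypothetical placeholders: events are stamped with their position in the loop, snapped to a grid, and given a “life” that decays on each pass so the loop never overloads.

```javascript
const LOOP_MS = 4000; // loop length
const GRID_MS = 250;  // temporal quantisation step
const LIVES = 8;      // loop passes before an event fades out

const events = [];

// Record a user action, snapped to the nearest grid slot within the loop.
function capture(type, timeMs) {
  const slot = (Math.round((timeMs % LOOP_MS) / GRID_MS) * GRID_MS) % LOOP_MS;
  events.push({ type, slot, life: LIVES });
}

// Called once per loop pass: decay everything, then drop dead events.
function tickLoop() {
  for (const e of events) e.life -= 1;
  for (let i = events.length - 1; i >= 0; i--) {
    if (events[i].life <= 0) events.splice(i, 1);
  }
}

capture('click', 4130);
console.log(events[0].slot); // 250 (130 ms into the loop, snapped to the grid)
```

Playback would then just scan `events` each pass and fire the stored sound/visual for each slot, perhaps scaling volume or opacity by `life` so older actions audibly fade.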

Research

Vision

An interesting approach for a hypnotic browser-based work could be to exploit persistence of vision, or afterimages. I’d previously relied on instinct to create images designed to mess with viewer perception, for example in the following image, one of my album covers:

However, as I continue to research hypnotic experiences, particularly in the visual domain, it becomes more evident that I should look into the mechanics of why such images are so striking. Obviously, I don’t want to create images that will give people headaches, as the above example has the potential to; instead, I want to make subtle use of optical illusions, as shown in a few examples from my presentation in week 2.

Initial research has been interesting, especially in regards to colour perception, and the illusion of colours “spreading” from a presented image. In particular, the following image (Shimojo, Kamitani & Nishida 2001) produced such an effect, where Fig.A is the stimulus image, and Fig.B shows the resulting afterimage variants:

The illusion of a filled-in object (in this case, the central “square” created from the red wedges), present even subtly in the original stimulus image before the afterimage is experienced, is something that I will definitely explore. In the context of a browser-based work, it would be even more interesting to use animated images based on concepts such as those illustrated in the example above in order to produce optical illusions. Shimojo, Kamitani and Nishida’s article cited above continues with investigations into two-frame animations; however, those were not presented as actual animated images for viewing, so I’ll have to look elsewhere or conduct my own experiments to investigate the effects.

A potential dead-end is my attempted research into “actual” hypnosis; currently, the only writing I can find on it seems somewhat dated and close to pseudoscience / strange sexism, e.g. “The subjects were women of average intelligence and of medium education… The patients all suffered from neurotic disturbances of a hysterical nature.” (Horvai & Hoskovec 1967). I just couldn’t quite take it seriously after reading that passage. Regardless of the wording, actually hypnotising the viewer wouldn’t be my intention anyway; I’m aiming for a somewhat looser definition of hypnotic.

Sound

Sonically, I’m building a repertoire of musical and sound design devices I can employ in order to produce a hypnotic experience. One such device is achieving a kind of “barber pole” effect, but on a larger, temporal scale, rather than with frequency, as can be heard in the following Autechre track:

An analysis of this track, which I conducted myself many years ago but confirmed after reading a thesis written about Autechre’s work, reveals that the track’s tempo slows down linearly but doubles part-way through the measure, producing the illusion of constant slowdown (Mesker 2007, pp. 51–52). A similar technique could be employed in a segment of my Capitol Theatre work, perhaps in reverse, to give the illusion of constant acceleration. Coupled with corresponding lighting designs (perhaps slowing down in contrast to the audio), this could create a very interesting experience.
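The “endless slowdown” device can be sketched as a sawtooth in the log-tempo domain. This is my own approximation, not Autechre’s or Mesker’s exact method: the felt pulse glides from the starting tempo down to half of it, at which point the doubling of subdivisions snaps the perceived rate back up, and the cycle repeats indefinitely. Negating the phase would give the constant-acceleration version mentioned above.

```javascript
// Perceived tempo at time t: glides from startBpm down to startBpm/2 over
// one period, then the subdivision doubling snaps it back, endlessly.
function perceivedBpm(t, startBpm, period) {
  const phase = (((t / period) % 1) + 1) % 1; // wrap into [0, 1)
  return startBpm * Math.pow(2, -phase);
}

console.log(perceivedBpm(0, 120, 8)); // 120 (start of cycle)
console.log(perceivedBpm(8, 120, 8)); // 120 (one full cycle later)
```

Driving both the audio clock and a lighting cue rate from curves like this (one inverted, as floated above) would keep the two perceptually linked while moving in opposite directions.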

References

Shimojo, S, Kamitani, Y & Nishida, S 2001, ‘Afterimage of Perceptually Filled-in Surface’, Science, vol. 293, no. 5535, pp. 1677–1680.

Horvai, I & Hoskovec, J 1967, ‘Experimental Study of Hypnotic Visual Hallucinations’, in J Lassner (ed.), Hypnosis and Psychosomatic Medicine, Springer, Berlin & Heidelberg, pp. 151–156.

Mesker, A 2007, ‘Analysis and Recreation of Key Features in Selected Autechre Tracks from 1998–2005’, Masters thesis, Macquarie University, Sydney, <https://www.researchonline.mq.edu.au/vital/access/services/Download/mq:71099/SOURCE1>.

week 3: narrowing it down

The past two weeks have been good for focusing a bit, and thinking about which project ideas would be possible given my workload for the other courses. I’m still keeping track of all my ideas for the future though.

Researching hypnotic experiences for week 3’s assignment was incredibly useful for helping me to realise I’d like to explore that adjective, at least for one of my projects. I’d subconsciously made it part of my aesthetic for one of my musical projects, so the groundwork has been somewhat laid already. It’ll definitely benefit from research into the psychological effects of repetitive auditory and visual patterns.

At the end of week 2 I produced this from my experiments with Emission Control:

The downside of Emission Control is that the audio is mono only, so I had to do some processing in Reaper—shifting the right channel forward in the sequence by a few “beats”. I actually don’t mind the effect though, as the audio appears to sweep from left to right, which is a little hypnotic in itself. Perhaps I could have it alternate between which channel is behind/in front, or expand it to a 5.1 experience and have it sweep in circles, arcs, or other patterns—thus providing further opportunities for corresponding visual events. It’s certainly a worthy experiment that I’ll likely add to for the final project, whether it is for the Capitol Theatre lighting system, or the 9.1.36 lighting system.
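The channel-nudging trick is simple enough to sketch as plain array maths. This is an illustration of the idea rather than what Reaper does internally: the mono signal is duplicated into two channels and one is delayed by a fixed number of samples, so events appear to sweep from one side to the other.

```javascript
// Duplicate a mono signal into stereo, delaying the right channel.
function monoToShiftedStereo(mono, delaySamples) {
  const left = mono.slice();
  const right = new Array(mono.length).fill(0);
  for (let i = delaySamples; i < mono.length; i++) {
    right[i] = mono[i - delaySamples];
  }
  return { left, right };
}

const { left, right } = monoToShiftedStereo([1, 0, 0, 0], 2);
console.log(left);  // [1, 0, 0, 0]
console.log(right); // [0, 0, 1, 0]
```

Alternating which channel lags, as floated above, would just mean swapping the returned arrays every few loops; the multichannel version generalises this to rotating the delay around an array of speakers.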

As I was speaking to a classmate on Thursday about the potential of the metal genre to be hypnotic, I was reminded of a project I started but haven’t finished yet: an album of unrelenting, single-riff pieces that exploit the stamina of the performers. Here is one example from that project:

Obviously at this point the drums are programmed, so the stamina exploitation comes from my performance of the guitars. It was unbelievably difficult to keep the playing consistent for the seven minutes of the original piece. From that perspective the brutal adjective could apply, but I also think it’s quite hypnotic, especially when using a somewhat ambiguous rhythmic structure that could be interpreted as being in several different time signatures depending on the context (not so much in this piece). It’s sneaking its way into the arc of my project, and I’d love to play with the potential perspective and intensity shift that would result from a smooth blend between staccato electronic sounds and the more fluid, open riffs of the metal track.

My idea for this kind of repetitive, hypnotic metal came not only from the Liturgy piece I shared last week, but from the section from 6:47–7:07 in Meshuggah’s I:

Unevenly repeating rhythms across the guitar riffs, set against the higher-level time signature structure, are a key characteristic of Meshuggah’s sound, but the above passage is particularly entrancing for its use of much shorter phrases—the juxtaposition between the hihat and snare pulses provides an ambiguous rhythmic perspective and makes the repeating phrase quite hypnotic.

Further study into rhythmic ambiguity led me to some articles on polyrhythms, notably Martim Galvao’s thesis Metric Interplay: A Case Study In Polymeter, Polyrhythm, And Polytempo; and the section describing Steve Reich’s piece Drumming:

 

The documentation of the techniques Reich used in Drumming (from his own book Writings on Music) describes them as such:

  1. the process of gradually substituting beats for rests (or rests for beats);
  2. the gradual changing of timbre while rhythm and pitch remain constant;
  3. the simultaneous combination of instruments of different timbre; and
  4. the use of the human voice to become part of the music ensemble by imitating the exact sound of the instruments.
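The first technique in the list, gradually substituting beats for rests, is easy to express directly, and is a natural fit for a browser piece. This is a sketch under my own assumptions (the target pattern and step count are arbitrary): start from all rests and fill in one beat of the target rhythm per repetition.

```javascript
// Build up a target rhythm one beat at a time.
// target: array of 1s (beats) and 0s (rests).
function buildUp(target, steps) {
  const pattern = target.map(() => 0);
  const out = [pattern.slice()]; // start from all rests
  const beatIdxs = target.flatMap((v, i) => (v ? [i] : []));
  for (let s = 0; s < Math.min(steps, beatIdxs.length); s++) {
    pattern[beatIdxs[s]] = 1; // substitute one beat for a rest
    out.push(pattern.slice());
  }
  return out;
}

// Three repetitions, each adding one beat of the target rhythm:
console.log(buildUp([1, 0, 1, 1], 3));
// [[0,0,0,0], [1,0,0,0], [1,0,1,0], [1,0,1,1]]
```

Running the same process in reverse (rests for beats) gives the mirrored fade-out, and randomising the fill order each cycle would keep repeated listens from feeling identical.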

A lot of these can apply to my experiments so far, and planned work for my hypnotising project; even analogies such as using time stretching instead of phasing / gradually changing timbre, or imitating acoustic properties using synthesis.

~

The one day we were allowed on to campus last week was quite enjoyable and inspiring. The Capitol Theatre is incredibly majestic, and viewing some of the work that previous students had created has given me some ideas about how to use the lighting system to my advantage, beyond simple Ryoji Ikeda-style minimalist tones and flashing lights. Having said that though, I think I will stick to a strict colour palette, even using mostly white, perhaps “smeared” with my brand colours of red and teal to give the impression that the lights are blurring, or leaking. In addition, I am aware of the possibility that people will get bored watching the same thing over and over, even in the context of a hypnotic work, and as such it would make sense to develop an arc for my piece, perhaps even before commencing work on the finer details.

The demonstration in 9.1.36 was more impressive than I was expecting. I’m very much interested in the strobing and tilt properties of a few lights, as well as the ability to express movement through individually addressing the LEDs in the ceiling. If I had time, I’d love to build a simulated version that runs in a browser and accepts the kind of MIDI signals required to control the DMXIS software, in order to get a loose idea of how a sequence will look in the space. It does open some interesting possibilities for live, or generative control, as manual sequencing might be quite time-consuming unless controlling it on a higher, macro level.

Pierre Proske’s demonstration in the black box was possibly even more inspiring, and following on from my brief chat with him in person and via email over the past couple of days, I’m motivated to at least attempt some kind of external communication from my browser-based works, but more on that later. Proske seems to be doing all of the things I want to do, in that he’s shifting some of the creative coding paradigms into physical spaces, with a generative/algorithmic approach to sequencing light and sound. Even though I’m in my relative infancy when it comes to contextualising / developing my creative practice to academic levels, it’s interactions like this that make me feel like I’m among my people.

~

I’ve spoken to several people about fitting my browser-based works into a project for this semester. I’m still undecided about which adjective to aim for, but the discussions have so far been useful for determining what counts as a heightened experience—the possibility of engagement even if accessing a work through a small browser window is something I will be exploring, for sure, and have touched on with my existing works, such as Bounce.

As mentioned earlier, there is the potential to use WebMIDI to extend the browser-based projects into physical spaces, with the first extensions coming to mind being lighting (e.g. 9.1.36) or the electromechanical percussion setup I’ve been developing:

I imagine it wouldn’t be too difficult to modify pieces like Bounce or Dungeon to allow for connection to an external Teensy-based box with eight or so solenoid outputs, striking various improvised percussion objects. As demonstrated above, I already have part of the setup working, I just need to extend it somewhat (mostly, by developing my own MIDI trigger solution rather than using the drum machine in the above example).
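For the WebMIDI side, the message construction is the easy part. This sketch assumes a hypothetical note-per-solenoid mapping of my own choosing (the note numbers and the 20 ms release are placeholders); in a real browser the output `port` would come from `navigator.requestMIDIAccess()`, which is the actual Web MIDI entry point.

```javascript
const SOLENOID_NOTES = [36, 38, 40, 41, 43, 45, 47, 48]; // one note per output

// Raw MIDI channel-voice messages: 0x90 = note on, 0x80 = note off.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}
function noteOff(channel, note) {
  return [0x80 | (channel & 0x0f), note & 0x7f, 0];
}

// Fire one solenoid: short note-on/note-off pair on the given output port.
function strike(port, solenoid, velocity = 100) {
  const note = SOLENOID_NOTES[solenoid];
  port.send(noteOn(0, note, velocity));
  port.send(noteOff(0, note), performance.now() + 20); // release after ~20 ms
}

console.log(noteOn(0, 36, 100)); // [144, 36, 100]
```

Hooking this into Bounce would then just mean calling something like `strike(port, n)` wherever a collision currently triggers a sound.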

hypnotising

Introduction

I have taken the adjectives hypnotising and mesmerising to be very similar, and possibly even sharing the same meaning, and as such many of the examples in this entry could fit into either adjective. However, I have personally experienced some degree of altered states resulting from exposure to many of the examples provided, so perhaps they could all be considered hypnotic; I will continue to use this adjective to describe each example.

My interest in hypnotic experiences has origins in my musical taste; notably, my interest in minimalist and repetitive music. Creating music and audiovisual media with the intention of either inducing altered states, or illusions/hallucinations—without the use of psychedelic drugs—has been a key goal in my own work, which takes inspiration from many of the following examples. As such, my interpretation of hypnotic is anything capable of generating illusions or altered states.

 

Auditory examples

Lorenzo Senni – Superimpositions

Lorenzo Senni’s music is often referred to as “deconstructed trance”, and Superimpositions is a good example of this. Unrelenting in its repetition, a single chord progression pattern slowly iterates in timbre over nearly six minutes. The use of heavily gated, staccato chords creates “gaps” in the sound, which contributes greatly to the hypnotic experience.

 

Steve Reich – Music for 18 Musicians

Any of Steve Reich’s compositions could be considered hypnotic, but Music for 18 Musicians is a particularly good example. The many interleaving, phased patterns affect the listener’s perception of rhythm and time, creating a kind of auditory illusion, especially evident in the later movements, where the sound almost resembles digital timestretching or granular synthesis. Listening intently to the entire composition, the repeated pulse tends to fade into the background, so the ending creates a sense of emptiness.

 

Autechre – all end

Autechre’s NTS Sessions album series is a four-part, eight hour collection of extended electronic pieces. The fourth part’s closing track, all end, is an hour-long, time-stretched string sound, providing a hypnotic experience through a very slow moving soundscape. Much like Music for 18 Musicians, listening to the entire piece results in the ending feeling “empty”. For me, the feeling of something being subtracted from reality at the end of such long-form, slow moving pieces is an important part of the experience.

 

Pan Sonic – Muuntaja

The use of short, repeating patterns in minimal techno can be considered hypnotic. Pan Sonic’s Muuntaja is an intense example of such repetition, with very slow movement occurring throughout the piece, punctuated by an abrupt change in the rhythm (7:42 in the above video), which can lead to the listener being “snapped out” of any potential hypnotic states. Additionally, the high pitched tone introduced at 1:59 is another example of a sonic element “fading into the background” and creating a jarring transition when it finally stops at 9:47.

 

Liturgy – Generation

While many would not find the metal music genre hypnotic, Liturgy’s composition Generation fits the description through its use of unrelenting, repetitive patterns. The use of techniques such as rhythmic phasing and slow iteration show parallels to minimalist composition. Texturally, much like the other examples shown so far, the ending of the piece is a jarring transition into silence, due to the constant intensity suddenly coming to a stop. This is perhaps the most powerful use of such an auditory device (if it can even be called that) in all of the examples shown.

 

Audiovisual examples

Laurie Anderson – O Superman

Similar to the Steve Reich example, O Superman uses a repeating pulse throughout the piece, and on its own is quite hypnotic through the use of minimalist melodic content and repetition. The music video extends this, using sparse shots of Anderson often performing slow, repeating hand movements (and sign language at one point), and often showing a simple closeup of her face. My personal experience with the music video is through watching it late at night, which adds to the hypnotising experience.

 

The Chemical Brothers – Star Guitar

Michel Gondry’s music video for Star Guitar appears quite simple at first, but a more attentive viewing reveals that the visual elements are highly synchronised to the music. Using precise video editing and computer generated graphics, Gondry was able to essentially assign many of the sounds to their own structures and objects. This synchresis, alongside the viewpoint of being inside a train, and the repetitive musical piece, provides a very hypnotic experience.

 

Ryoji Ikeda – Point of No Return

I experienced this installation in person at Eye Film Museum in Amsterdam. It is a simple, yet captivating work, using a stark contrast between a black circle and various strobed patterns, alongside noise and sine tones. The strobing, volume, and scale of the installation are intensely hypnotic, causing illusions similar to hallucinations when staring at the same point for an extended period.

 

United Visual Artists – Our Time

This is another installation I experienced in person, in Hobart in 2016. An array of lights swings and pulses slowly in various patterns, accompanied by slow droning and distant metallic impact sounds. The often circular motion of the pendulum-like light fixtures, accompanied by the ominous, low activation sound design, evokes a hypnotic state. Interestingly, across my multiple viewings of this installation, the presence of children running around and yelling didn’t detract from the multisensory experience.

 

Still image examples

Kid606 – Sugarcoated album artwork

The use of an optical illusion on this album cover is something I would consider highly hypnotic, especially when combined with its accompanying audio content, as can be heard in the following excerpt from the track Forstevereichandjeffmills:

Simply scrolling up and down in the browser, or moving one’s eyes while looking at the cover is enough to give the illusion of movement. The almost sickening illusion is a key example of hypnotic visuals, in my experience. I do not necessarily want to make people sick, but it would be an interesting outcome.

 

Animal Collective – Merriweather Post Pavilion album artwork

Similar to the Kid606 example above, this album artwork uses another optical illusion to invoke hallucinations. It’s another example of the music matching the hypnotic cover artwork:

Something I noticed about both images is the use of alternating black and white strokes, which seems to contribute greatly to the illusion of movement. This is something I would like to experiment with in my own work, perhaps in a browser-based piece.

 

Other examples

The Catacombs of Solaris interactive artwork

The Catacombs of Solaris is a first-person “game” where the player moves through a multicoloured world. However, upon changing direction, the world appears to be re-calculated, with a screenshot of the previous perspective applied to the walls of the new world. It’s incredibly difficult to explain without directly experiencing the work; however, I believe it encompasses many of the other adjectives from this course, such as surreal, overwhelming, and even brutal at times. I personally find the ever-changing perspective to be a very hypnotic and captivating experience.

 

Audiosurf

Audiosurf is an interactive music visualiser / game that generates a course based on analysis of a provided audio file. It attempts to provide a similar experience to Guitar Hero, where the player obtains more points by collecting the coloured squares and avoiding obstacles. This is another example where the hypnotic experience is enhanced by the ending, where a phenomenon known colloquially as Guitar Hero tripping occurs, resulting in a short period where the player’s vision appears to swirl or ascend.

 

Moire Illusion sculpture

The intersecting shapes and dual movement of this 3D printed sculpture imply more complex movement than is actually occurring, which is a technique I am highly interested in exploring. The sculpture could be considered an extension of the “classic” spiral hypnosis image, and given this connection, as well as the implied (and actual) movement, it provides a hypnotic sculptural experience, especially if produced at a large scale.

 

Satisfying Hexagons sculpture

Another 3D printed item, Satisfying Hexagons is a smaller, handheld sculpture controlled by a magnetic object held against the underside of the structure. The collapse and expansion of each hexagon’s lines in relation to the others creates an illusion of 3D movement. Such perspective illusions and shifts can be considered hypnotic when repeated and controlled.

 

Further research

In curating the above examples, I’ve noticed a few interesting devices used which have parallels between audio and visual media. Notably, stark contrasts, quickly strobed, appear to have similar hypnotic effects in both audio and visual contexts. Additionally, spatial (audio) and perspective (visual) shifts could be employed to heighten the hypnotic effects.

A possible negative side-effect of exploring hypnotic experiences, particularly with regards to sound, is the potential for the work to be annoying or unfavourably repetitive. As such, I am also looking into studies covering misophonia.

I’ve bookmarked some articles and other media for further study:

Lotto, A & Holt, L 2011, ‘Psychology of auditory perception’, WIREs Cognitive Science, vol. 2, no. 5, pp. 479–489.

Watanabe, K & Shimojo, S 2001, ‘When Sound Affects Vision: Effects of Auditory Grouping on Visual Motion Perception’, Psychological Science, vol. 12, no. 2, pp. 109–116.

Edelstein, M, Brang, D, Rouw, R & Ramachandran, V 2013, ‘Misophonia: physiological investigations and case descriptions’, Frontiers in Human Neuroscience, vol. 7, p. 296.

Dennis, B 1974, ‘Repetitive and Systemic Music’, The Musical Times, vol. 115, no. 1582, pp. 1036–1038.

Samuel, A & Tangella, K 2018, ‘Sound changes that lead to seeing longer-lasting shapes’, Attention, Perception and Psychophysics, vol. 80, no. 4, pp. 986–998.

Nicolai, C 2010, Moire Index, Die Gestalten Verlag, Berlin, Germany.

week 1: ideas vomit

Hello again, it’s good to be back, rambling about my various things. I wonder if I’ll ever show anyone else these posts. They’re a good archive of my little bits of research.

The first week has been pretty great. A lot of very interesting projects to sink my teeth into, including some nice technical stuff for Visual Programming, and some opportunities to further contextualise my work in Emerging Digital Cultures. Plus of course, the freedom to get a bit wacky with one or more projects for Heightened Multisensory Experiences.

That freedom, though, means I’ll have to whittle down my ideas quite strategically. For this, I think I’ll just need to experiment, and try making a bunch of things to see what is worth pursuing.

Idea: OAE

Some initial experimentation today for one of my ideas brought about some interesting side effects. The original idea for the following audio was to alternate between two versions of a sample—one mono, and the other stereo, but with the left channel’s phase inverted. I was trying to achieve the illusion of movement between the speakers without actually using panning, and while I don’t think I really succeeded, the shrillness of the 808 rimshot and clave samples I used resulted in another interesting effect: otoacoustic emissions.
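The construction itself is simple to sketch. This is the signal-building half only, under my own assumptions about block size; the otoacoustic effect obviously depends on playback through real speakers and ears, not on the maths. Alternate fixed-length blocks of a mono sample between an in-phase stereo copy and a copy with the left channel inverted:

```javascript
// Alternate blocks between in-phase stereo and left-channel-inverted stereo.
function alternatePhase(mono, blockSize) {
  const left = [];
  const right = [];
  for (let i = 0; i < mono.length; i++) {
    const invertLeft = Math.floor(i / blockSize) % 2 === 1; // odd blocks invert
    left.push(invertLeft ? -mono[i] : mono[i]);
    right.push(mono[i]);
  }
  return { left, right };
}

const { left } = alternatePhase([1, 1, 1, 1], 2);
console.log(left); // [1, 1, -1, -1]
```

In a browser version the two arrays would be written into the channels of a Web Audio `AudioBuffer`; a crossfade at each block boundary would be needed to avoid clicks from the instantaneous polarity flip.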

At certain points in the audio, a kind of ‘thumping’ sound can be heard, which is entirely created within the listener’s ears. Adam Neely describes it in this somewhat overblown, yet informative video.

Plus, it uses the word “phenomenological” in the intro, which ties in nicely with Thursday’s class.

It’s an interesting concept to follow up on, though using such shrill high pitched tones definitely runs the risk of causing hearing damage, especially at volumes where the effect becomes noticeable. It has, of course, been done by others, as the above video shows, but it’d be interesting to see how far the technique could be pushed.

Idea: Dayton Audio transducers

Over the break, I ordered a couple of surface conducting transducers, just for experimentation. One in particular works very well for audible tones, and the other (a “bass shaker”) was possibly not powered enough by the amplifier I used, and as such wasn’t very effective.

I’m still in the phase of experimentation where I’m simply placing them on various surfaces to test how well they conduct vibrations, with objects like my acoustic guitars and wooden serving trays being the loudest. They could also prove useful for accessibility purposes, in that they could potentially be made to vibrate with low frequency tones that hearing impaired people can feel.

Idea: Electrical impulses

My father is an electronic engineer and somewhat eccentric inventor. I spoke to him today about the concept behind HMsEx and mentioned that a lot of the possibilities for projects are in line with his experience working with lighting systems, electromechanical devices, and of course sound. He presented an interesting and potentially risky idea, in that I could use electrodes to stimulate a participant’s muscles, therefore almost turning them into a controllable “puppet”. I’m not sure if I would take it that far (or if I could, legally), but the idea of using electrical impulses to provide a heightened multisensory experience is incredibly fascinating. I’m not sure how I could create such a device for an audience of more than one person, but it is absolutely something I will be researching over the coming weeks.

Idea: Forever Doom

Years ago, I had an idea to make something similar to Squarepusher’s Music For Robots:

Obviously, my version wouldn’t be quite as elaborate, nor would it need to be, because my idea was slightly different—two electric guitars, one of which is a bass, with a few frets controlled by solenoids or other similar actuators, playing endless generative doom metal riffs. Perhaps they’d slowly evolve from a beginning riff rather than being 100% new each time, but the idea is essentially an infinitely long doom metal piece.

It’d require some budget though, in order to realise it to its full potential (ie. with large amplifiers), unless I could rent the gear or borrow it from friends.

Idea: Relay mirror / electromechanical conductor

Basically, something like this:

…but with relays, or buzzers, or some other kind of electromagnetic / electromechanical components that would allow the viewer to “conduct” a realtime acoustic sound composition. Perhaps a board mounted on a wall with various bells and other acoustic noisemakers, which are struck when the viewer’s hand moves to a certain point in space.

mp3 compression as a desirable effect

The surface conduction transducer I ordered arrived the other day. Initial tests revealed it to be.. not that powerful. So, it’s not really going to work for the installation, if we’re even out of lockdown by the date of the show. However, I can definitely think of some uses for it, the first being using it with a sheet of metal and an electromagnetic pickup (or two) to make a nice clean plate reverb. Curtis also suggested using it alongside a contact mic to create weird impulse responses from various objects, which is a fantastic idea, and something I’ll explore during the mid-year break.

I’m also looking at prices for the more powerful Dayton Audio transducers, and it looks like I can get a pair for $50. Next time I make that much from sample pack sales, I’ll dive in.

~

My final project for Adam’s sound design specialty was a pretty intense process.

I think I was a little ambitious in synthesising 90% of the sounds (using Nord Lead and MS20 processed with DtBlkFx and Paulstretch), because it didn’t really leave me with much time to create backgrounds or music as detailed as I’d planned. However, I’m pretty happy with a few of the shots, in particular where the object rises from the ground, and the noisy shot that follows shortly after that. Being an animation meant that it was both to my advantage and disadvantage that the sounds didn’t have to be entirely realistic, and perhaps I should have chosen something based on live action footage instead. Either way, it was an interesting exercise to get even deeper into the world of synthesis. Plus, now I have a huge library of noises from my MS20 that were unused in the above design (over 700 sounds).

~

I’ve created a short set of music for Infinite Wurldwide, to be played on Wednesday night, and I’ve noticed that I’m starting to use some of the layering, editing and processing techniques from this semester in my solo work, particularly in this case where I’m able to create something in advance rather than playing live. One piece towards the end uses sidechained, Paulstretched synth strings underneath an excerpt from an improvisation I recorded years ago using my Monomachine to process an input from an electromagnetic pickup, placed on various interference-creating devices. I hadn’t done much layering like this before, and even though the two recordings have no connection to each other (apart from the sidechaining), it’s really interesting to hear them connect by coincidence in places.

~

Here’s a good video, where Junkie XL describes the various “levels” of music editors for film, and their relationship with composers. It’s interesting to see that sometimes music editors have a large amount of creativity in terms of arranging the composer’s music, almost becoming a second composer.

It’s also interesting that he mentions music editors may choose that career path to avoid the “burden” of having to constantly write new music, and are more interested in using their creativity to enhance/warp/remix what’s given to them. I’m thinking this may be some interesting work to pursue, as I do enjoy this style of creative editing and remixing. Not that I don’t have fun composing from scratch, but I feel like it might be a little less stressful than having to come up with new ideas all the time.

~

Pilgrim animation

Once again, I haven’t had a chance to work on this due to working on another sound design assignment, but I’ll be getting back to it in the next few days. Collaborator wasn’t happy with the new ending I created, as it was too “heavenly” and against the original idea (for more subtle music alongside intense visuals). She’s suggesting that I use an adaptation of the original demo score, but I think I’ll give something original another try before doing that.. perhaps presenting her with both options. Either way, I think I’m beginning to accept that just because I think something fits, doesn’t mean my collaborator will, so this has been a good learning experience overall.

Overwhelming installation

Along with the transducer being underpowered, I had another disappointment/failure regarding the installation. I went in to the space on Monday to test some things, and nothing I tried would get the 5.1 template file I’d created in Premiere working correctly on my collaborator’s MacBook, despite our setups being similar. As a result, I think I may have to abandon the idea of having a speaker beneath the installation, and instead just use the room audio for the whole thing. I’ll give it one more try before the showing (again, if we’re allowed back on campus), but I’m willing to let it go if it just doesn’t work.

four times, four channels

I’ve spent most of today working with stems imported into Premiere, trying to set up a multichannel project for Pat’s installation. I had to look up a guide, and was almost on the edge of giving up, when it suddenly worked and I was able to send certain tracks to the “rear” speaker (a Minirig, more on that later). I think the next step is to see if that actually exports properly, and then get it to play back using the weird speaker/audio configuration I have in mind.

I had to get out of the house at some point though, and while I was out, I put on the Why We Bleep podcast, episode 29 featuring Hainbach:

He’s got some great ideas about texture, destructive processing, and unsynchronised loops/rhythms that I can learn a lot from. I’m usually quite rigid in my arrangements and sequencing, having only really broken away from that this year with various assignments, so hearing someone talk about keeping things loose and focusing on texture first rather than melody is a very interesting perspective.

Here’s something by a rather annoying YouTuber (why do they have to be so loud?!), but it has sparked my interest in data manipulation processes once again:

The process in the above video isn’t great (especially since he didn’t figure out the correlation between pixels and wave data and as such couldn’t get it to sound the way he wanted), but it’s reminding me to use some image manipulation tricks in my sound designs (particularly the sound assignment for Adam’s class). I need to find whatever the OSX/Win10 equivalent of Coagula is and see if I can get some interesting results.
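While I look for that Coagula replacement, here’s a rough numpy sketch of the basic idea behind this kind of image-to-sound tool: pixel rows become sine oscillators, columns become time. The frequency range and duration are arbitrary choices, and real tools do this far more efficiently (and with phase tricks this ignores):

```python
import numpy as np

def image_to_audio(image, sr=44100, duration=4.0, fmin=100.0, fmax=8000.0):
    """Render a 2-D array of pixel brightness (rows = frequency, top = high;
    columns = time) as audio using a bank of sine oscillators."""
    n_bins, n_frames = image.shape
    n_samples = int(sr * duration)
    t = np.arange(n_samples) / sr
    # Log-spaced frequencies, flipped so row 0 (image top) is the highest pitch.
    freqs = np.geomspace(fmin, fmax, n_bins)[::-1]
    # Stretch each pixel row across time to get per-sample amplitude envelopes.
    frame_idx = np.minimum((t / duration * n_frames).astype(int), n_frames - 1)
    audio = np.zeros(n_samples)
    for row, f in enumerate(freqs):
        amp = image[row, frame_idx]
        if amp.max() > 0:
            audio += amp * np.sin(2 * np.pi * f * t)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

# e.g. a diagonal line drawn in the "image" becomes a descending sweep:
img = np.eye(32)
sweep = image_to_audio(img, duration=1.0)
```

This is also roughly why the video’s results sounded wrong: if the pixel-row-to-frequency mapping is off, the drawn shapes don’t land on the pitches you expect.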

Speaking of Adam’s class, I’m digging through some inspirations for the sound design, and the telekinesis sounds in Control are quite close to what I have in mind:

I can’t quite pinpoint how they’re made, but I can probably get pretty close with a synth and some processing.

~

The sound HDR candidate meetup on Wednesday was a fascinating and somewhat nerve-racking experience. I say nerve-racking because my impostor syndrome was in full effect; especially since I was the only one in the room not in an HDR program. However, it was awesome to be among other sound obsessives, and I even managed to squeeze my way into a conversation about vaporwave, which somehow led to the discussion moving towards generative music and the use of machine learning to create or assist in the creation of music, which ties into research I’m doing for my Media Cultures essay.

I need to push myself to get involved with more things like this (meetups etc, not Scatman John), because it’s one of the main reasons why I decided to dive into higher education in the first place.

David Thrussell had the complete opposite personality to what I expected, in a good way—his guest lecture was hilarious and very critical of many parts of the industry. Much like Franzke’s talk, I didn’t write anything down (except to re-watch The Hard Word and look out for the scene with the cars—oh, and all of the hillbilly music he recommended) but it was good to get another realistic perspective on the industry. I may have to get in contact with him and talk about Morricone some time..

~

Overwhelming installation

I think this is going really well, and is quite possibly ready to go in terms of my contribution. I’ve spent the past week working on cute little audio events flying around the place, and as mentioned, learned how to set it up as a quadraphonic project in Premiere so we can have a speaker/transducer in the middle of the installation playing some of those embellishments, to give the impression of the sculpture itself making sound. I think it’ll be quite effective if we can manage it. As a contingency (ie. if the surface conduction transducer doesn’t work well) I have a small Minirig speaker which can put out quite a decent volume and can be placed under the sculpture without being noticeable. I’m heading to the black box again tomorrow to run through some things with Pat so it’ll be a good opportunity to test some of the technical stuff.

Pilgrim animation

Again, I didn’t have much time to work on the actual project file for this during the week, but I’ve been writing down some ideas and working out how to adapt the beginning piano melodies / chords into the end section. My current idea is to essentially keep the underwater section the same (except with the melody starting earlier as per Darrin’s suggestion), but have it swell into something more textural (thanks Hainbach) before the ending “reveal”, where a Mellotron choir sound will provide some backing to the return of the piano, playing a new chord progression, with the previous melody altered to fit. I’m hoping to have time to wrap this up in the next few days!

synthwave triangle

The most exciting thing to happen this week was the behind-the-scenes walkthrough of Because The Night on Friday. I’ve never experienced immersive theatre before (or much traditional theatre, at that—I can only think of one play I’ve ever been to as an adult), and I really want to see the production soon.

It was great to get little bits of information from David Franzke about the technical side, including seeing/hearing the surface conduction transducers in action, and how the rooms with live audio effects were set up—I have definitely had ideas for similar experiences and learned a lot from what he mentioned. Plus it was great to tag along and try to weasel my way into his conversations with David Chesworth, particularly one moment when they were talking about a piece of software used to make a part of the music for the production (I can’t remember which now; some Arturia software) which used FFT/spectral processing to turn a field recording into a melodic instrument, as it relates to some of my sound experiments this week.

Going upstairs to the control desks with Brendan was fascinating too, as he walked through a lot of Qlab details that were briefly touched on during the main part of the tour. He was very keen to answer any of our questions, but at the same time told us so much that I couldn’t even think of what to ask.

~

I’m continuing my research into live audio and video degradation processing. Last week, I put a call out on Facebook to see if anyone had experience with the video side, and had a few replies with some very effective solutions. One of these was a TouchDesigner environment plus scripts, which I’m very keen to dig into once the semester is over.

On the audio side, I’m digging deeper into DtBlkFx, an FFT processing plugin which has been around for 15+ years, and has a delightfully obscure interface. I’m pushing myself to learn it properly though, as it not only applies to the future project I have in mind, but to some of the sounds I want to create for the final sound design specialisation assignment.

~

Speaking of which, my work for this week has mostly been on that final sound design assignment. I haven’t created much in the way of sounds yet, but I’ve mapped out all of the actions, in order to create a loose idea of the kinds of sounds I should be experimenting with for the sound design. It seems like a huge undertaking, but I want to push myself to create something interesting, such as my idea to use an MP3 compression style effect to represent disintegration of a ghost. Initial experimentation isn’t working out too well, but I think I just need to change up my source sounds—I was using the sound of scrunching up paper, which I think is too harsh for what I want.
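For reference, here’s a crude numpy sketch of the kind of “MP3 compression style” degradation I’m after: per frame, throw away all but the strongest FFT bins. Real codecs use psychoacoustic models rather than a simple magnitude sort, so this is only a caricature, but it produces a similar swirly, hollowed-out artifact:

```python
import numpy as np

def lossy_degrade(signal, frame=1024, keep_ratio=0.1):
    """Crudely mimic low-bitrate artifacts: for each frame, zero all but
    the strongest FFT bins. (Not a real codec -- no psychoacoustics,
    no windowing/overlap, so expect some frame-edge clicks too.)"""
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal) - frame + 1, frame):
        chunk = signal[start:start + frame]
        spectrum = np.fft.rfft(chunk)
        keep = max(1, int(len(spectrum) * keep_ratio))
        # Indices of the strongest bins; everything else is discarded.
        strongest = np.argsort(np.abs(spectrum))[-keep:]
        culled = np.zeros_like(spectrum)
        culled[strongest] = spectrum[strongest]
        out[start:start + frame] = np.fft.irfft(culled, n=frame)
    return out

# Sweeping keep_ratio from 1.0 down towards 0.01 over the ghost's
# disappearance could give the gradual "disintegration" effect.
```

This also suggests why the scrunched paper didn’t work: broadband noise has no dominant bins to keep, so the culling just sounds like filtering rather than disintegration. A more tonal source should fall apart far more dramatically.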

Another sound I’m very keen to get started with is a “telekinesis” effect in the video piece I chose. I’m thinking of making something inspired by the telekinesis sounds in Control:

It’s an excuse to play the game a bit more, but I think the sounds are great, and similar sounds would fit my sound design pretty well.

~

Overwhelming installation

I had some incredibly positive feedback on the most recent iteration of my musical piece for Pat’s installation, which made me feel like I’m finally getting better at identifying emotions and building music accordingly. I still have some work to do on the additional sounds, but we’re on the right track.

Pilgrim animation

I haven’t had time to work on this since last week, but I had a chat with Grace about the new intro, and she loved it, so we’re going with that. This excites me greatly. I’ve been briefly messing around on piano every few days and have noted down a few chord progressions I’d like to experiment with for the ending.

jazz from hell

I’ve ordered a surface conduction transducer, for some experimentation with the installation I’m creating music for. However, I’ve been advised that it actually isn’t in stock, and will take 10–21 days to arrive.. which may end up cutting it a bit fine in terms of milestone deadlines. So in the meantime, I’ve been researching how to build one myself. It actually doesn’t look difficult at all:

I’m really interested in finding out what they sound like on the various surfaces of the installation (and, if it doesn’t work out there, for my own future projects), so I’ll try to build one in the next week or two.

I’m also spending some time thinking about and researching techniques for the installation idea I had last week. One of these is simulating video compression artifacts in live processing of a video, so that a realtime parameter can be connected to the “compression” amount. The following articles are some useful reading on how these artifacts work:

Understanding Video Compression Artifacts

Development of Application for Simulation of Video Quality Degradation Artifacts

Of course, all of this means I’ll likely need a pretty decent computer running the installation to not only control the “compression” quality, but also run the lo-fi effect filter on the audio (Lossy, by Goodhertz). It’ll be an interesting project once I get the time to apply more of my attention to it.
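As a note to self: the core of those blocking artifacts is just quantised block DCTs. Here’s a numpy sketch with a single `quant` knob that could become the realtime “compression” parameter. It’s greyscale only and nothing like a real encoder (no motion compensation, no entropy coding), but it produces the familiar 8×8 blockiness:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, as used in JPEG/MPEG-style block coding."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

def blockify_degrade(frame, quant=20.0, block=8):
    """Quantise each block's DCT coefficients; larger `quant` means heavier
    blocking artifacts. `quant` would be the live 'compression' control."""
    d = dct_matrix(block)
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=float)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = d @ frame[y:y + block, x:x + block] @ d.T
            coeffs = np.round(coeffs / quant) * quant  # throw away detail
            out[y:y + block, x:x + block] = d.T @ coeffs @ d
    return out

# frame = a greyscale video frame as a float array in [0, 255];
# in the installation this would run per-frame on the GPU instead.
```

Doing this per-frame in Python would never hit realtime, which supports the point below about needing a decent machine (or a shader-based implementation) to run the installation.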

~

I’ve finally started to watch Curtis’ videos on Max/MSP. It’s a bit of a sidestep from my other recent learning, but I think it’ll be useful for future projects. I have a lot of ideas for things that could be easily made in Max, so the more I absorb, the better.

The in-class demonstration of QLab was fascinating. I’m a big fan of automated/process-based audio, so it was great to see software I hadn’t heard of before, and how it can be set up to create elaborate, realtime audio mixes.

My other learning for the week has been through teaching others. I’ve been helping someone learn synthesis and some other audio related techniques, and it’s interesting how I end up learning things in the process as well. I learned some extended Reaper techniques, such as automation and effect chains for specific clips (rather than what I usually do, which is apply effects and automation to the entire track), and some further synthesis techniques in order to create an electric “zapping” sound. I also discovered a great free softsynth—Vital—which I’ve already started using in my collaborative pieces. It looks likely to replace my previous go-to synth plugin, PG8X, which is great but quite limited.

~

Pilgrim animation – Grace Leong

In the continuing search for free piano libraries, I’ve landed on Prism Audio Atmos Piano, which is quite lovely, and a bit less rough than the previous one I’d been using (Wolno). I’ve applied it to the latest iteration of the Pilgrim animatic I’m scoring, which now has a completely different intro:

I think the new piece conveys melancholy a little better than what was there before. Ultimately it’s up to Grace to decide which one she likes best, but I think it works well.

Overwhelming installation – Patricia Summers

I’ve also progressed with the work for the installation piece. On Monday, I visited the area where the installation is being set up, which gave me a good idea of the scale of the piece and what would work for it. It was great to meet Pat finally as well, and show her some of the ambitious ideas I’ve had for the piece.

I tweaked some of the sounds in the piece, and added some sweeps and crashes for Pat to articulate with visuals. There’s a lot of room for more articulations in the first few sections of the piece as well, but I’ll wait to get feedback on what’s in there at the moment before I go adding a lot more stuff. Here’s the current version:

octagonal hiccup

I listened to the recent episode of Mr Bill’s podcast featuring Mick Gordon this week. It ended up being quite informative, mostly for game industry related information, but also some interesting sound concepts that can be applied generally.

Most notably, Gordon mentioned the idea of a sonic identifier, which, in the context of game audio, refers to a distinctive sound the player will associate with certain actions or sequences. An example he gave was the alert sound in Metal Gear Solid—a short “stinger” played when the player is detected by an enemy. This kind of quick, efficient sound builds an association with the action, and communicates an in-game event more effectively than a visual cue. Gordon also pointed out that a lot of these sonic identifiers are musical in nature; his other examples were the Mario jump sound and the Sonic ring collection sound. Using musical sound effects like these has been on my mind for a few years, and is something I attempted in a game project I created last year, but the discussion brought up some interesting points about their implementation and psychological effects that will allow me to refine my use of them in future projects.

Gordon also mentioned Wwise, a common “middleware” application for game audio. I’d known of Wwise before, and briefly used it in a sound design short course, but had never used it extensively, so the conversation was useful in that regard. Essentially, Wwise can handle all audio in a game, from music, to sound effects, and even dialogue. It’s similar to a DAW in that effects can be applied, sounds can be sequenced, and control input can be taken from the game in order to influence certain audio events, for example, a player’s health going under a certain threshold can influence the music, or the speed of a car can affect the pitch parameter of a sound. What I didn’t know is that this control works both ways—markers in an audio file can trigger events in the game, an envelope follower applied to an audio track can affect lighting, etc. This has interesting potential for generative processes in gaming, where the audio and game engine could affect each other; in a more avant-garde interactive piece, this could open up opportunities for interesting performance pieces. It’s definitely an area I’d love to explore.
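To make sure I understood the realtime parameter control (RTPC) concept, here’s a tiny Python sketch of that speed-to-pitch example as a piecewise-linear curve. The breakpoints are made up; in Wwise you’d draw this curve graphically rather than write it:

```python
def rtpc(points):
    """Return a piecewise-linear curve in the style of a Wwise RTPC:
    `points` is a sorted list of (game_value, audio_value) pairs,
    clamped at both ends."""
    def curve(x):
        if x <= points[0][0]:
            return points[0][1]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        return points[-1][1]
    return curve

# Map car speed (0-200 km/h) to playback pitch in cents,
# staying flat until 50 km/h then ramping up:
speed_to_pitch = rtpc([(0, 0), (50, 0), (200, 1200)])
speed_to_pitch(125)  # halfway up the ramp: 600.0 cents
```

The “both ways” part Gordon described would be the inverse: the engine polling values derived from the audio (markers, envelope followers) and feeding them back into game or lighting state.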

~

David Franzke’s guest lecture was awesome. I didn’t really take much useful technical knowledge from it, but it was very good as a realistic portrayal of the industry. Plus he’s generally hilarious; I could listen to his stories all day and be entertained.

I did take away a few ideas though, one of which being to use exciters/transducers to turn structures into speakers—a concept Darrin brought up last week regarding my collaborative work. I’m going to look into this over the next few weeks, and see if I have the budget to acquire some of these transducers.

Another good point was around cataloguing sounds. I’ve done a bit of this, in terms of sorting sounds into folders, but this is often quite one-dimensional, in that it’s not practical for putting one sound into multiple categories. Over the mid-year break, I think I’ll look into some cataloguing software and attempt to categorise my growing collection of field recordings.

~

I already knew much of what Rebecca Rata spoke about in class; mostly from a similar guest lecture last year in the Advanced Diploma program, but also through my work in the publishing sector and my dealings with the rights and permissions team. However, some good points about fair use of third-party media were discussed, and as a result I’m reaching out to the creators of the visual pieces I’ve re-scored in order to seek permission to use their pieces in my folio.

Rata raised an interesting and somewhat alarming point about use of works on social media, which is relevant to my work: anything uploaded to social media transfers some rights to the platform itself. This is potentially concerning, as I occasionally stream song building sessions on YouTube; I’m going to have to do some further research on this in order to know what I should and shouldn’t be sharing.

~

Project work has been slow this week. I’m taking a step back on the animation project—as per Darrin’s suggestion—and will try a different approach. I’ve received generally positive feedback on my latest draft from the animator, but some good points were raised about some sections not quite hitting the emotional cues they’re supposed to hit. I have some ideas for this; one being a potentially more melancholy opening progression, as the current one does seem a little plain. I’d love to convey the feeling of loneliness a little more in the music in that section. I also realised it’s largely all in the same key, which is fine, but perhaps playing with modulation at certain points could result in some effective shifts in mood.

My ideas for the installation project are developing as well. In addition to my thoughts about mechanical percussion and transducers on the frame, I’m starting to develop some ideas for incidental sounds for the visuals to react to. Funnily, this seems like a project where I can go a bit crazy with some chiptune sounds, as a common trope in chipmusic is high-activation, positive-valence sounds.. about as close as you can get to overwhelming positive emotions, really! As a result, I’m actually considering using some chiptune hardware, for example, a Game Boy running LSDJ (or mGB, which turns the Game Boy into a MIDI-controllable synth), in sync with the music I’m currently running in a DAW. This would actually make it quite easy to split out to the transducers I mentioned earlier—the Game Boy elements could be played through the frame, while the rest of the audio plays on the venue speakers.