This week I listened to the recent episode of Mr Bill’s podcast featuring Mick Gordon. It ended up being quite informative, mostly on the game-industry side, but it also covered some interesting sound concepts that can be applied more generally.
Most notably, Gordon mentioned the idea of a sonic identifier, which, in the context of game audio, refers to a distinctive sound the player will associate with certain actions or sequences. One example given was the alert sound in Metal Gear Solid: a short “stinger” played when the player is spotted by an enemy. This kind of quick, efficient sound builds an association with the action, and communicates an in-game event more effectively than a visual cue. Gordon’s other interesting point was that many of these sonic identifiers are musical in nature; his other examples, the Mario jump sound and the Sonic ring-collection sound, are both highly musical. Using musical sound effects like these has been on my mind for a few years, and is something I attempted in a game project I created last year, but the discussion raised some interesting points about their implementation and psychological effects that will let me refine my use of them in future projects.
Gordon also mentioned Wwise, a common “middleware” application for game audio. I’d known of Wwise before, and briefly used it in a sound design short course, but had never used it extensively, so the conversation was useful in that regard. Essentially, Wwise can handle all audio in a game, from music to sound effects and even dialogue. It’s similar to a DAW in that effects can be applied, sounds can be sequenced, and control input can be taken from the game to influence certain audio events: a player’s health dropping below a certain threshold can influence the music, for example, or the speed of a car can affect the pitch parameter of a sound. What I didn’t know is that this control works both ways: markers in an audio file can trigger events in the game, an envelope follower applied to an audio track can affect lighting, and so on. This has interesting potential for generative processes in gaming, where the audio and the game engine could affect each other; in a more avant-garde interactive piece, this could open up opportunities for interesting performance pieces. It’s definitely an area I’d love to explore.
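To get this straight in my own head, here’s a very rough sketch of both directions using Wwise’s C++ SDK, as far as I understand it. The parameter, event, and game-side function names are all invented, and I haven’t tested this against a real project:

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkCallback.h>

// Hypothetical engine-side function; stands in for "affect the lighting"
void TriggerLightingCue(const char* label) { /* flash lights, etc. */ }

static const AkGameObjectID kCar = 100; // assumes a registered game object

// Game -> audio: feed the car's speed into an RTPC ("CarSpeed" is a made-up
// parameter name) that Wwise can map to pitch, filter cutoff, and so on.
void OnCarUpdate(float speedKmh)
{
    AK::SoundEngine::SetRTPCValue("CarSpeed", speedKmh, kCar);
}

// Audio -> game: markers authored into the audio file arrive as callbacks,
// so the engine can react to the sound rather than the other way around.
void MarkerCallback(AkCallbackType type, AkCallbackInfo* info)
{
    if (type == AK_Marker)
    {
        auto* marker = static_cast<AkMarkerCallbackInfo*>(info);
        TriggerLightingCue(marker->strLabel); // label typed into the authoring tool
    }
}

void PlayStinger()
{
    // Request marker callbacks when posting the (invented) event
    AK::SoundEngine::PostEvent("Play_Stinger", kCar, AK_Marker, MarkerCallback);
}
```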
~
David Franzke’s guest lecture was awesome. I didn’t take much technical knowledge from it, but it was very good as a realistic portrayal of the industry. Plus he’s generally hilarious; I could listen to his stories all day and be entertained.
I did take away a few ideas, though, one of which was to use exciters/transducers to turn structures into speakers, a concept Darrin brought up last week regarding my collaborative work. I’m going to look into this over the next few weeks, and see if I have the budget to acquire some of these transducers.
Another good point was around cataloguing sounds. I’ve done a bit of this by sorting sounds into folders, but folders are one-dimensional: it’s not practical to put one sound into multiple categories. Over the mid-year break, I think I’ll look into some cataloguing software and attempt to categorise my growing collection of field recordings.
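Before committing to any software, the underlying idea is simple enough to sketch. Here’s a toy tag-based catalogue in C++; the file names and tags are invented stand-ins for my recordings:

```cpp
#include <initializer_list>
#include <iostream>
#include <map>
#include <set>
#include <string>

int main()
{
    // tag -> files carrying that tag; one file can appear under many tags,
    // which is exactly what a folder hierarchy can't do
    std::map<std::string, std::set<std::string>> catalogue;

    auto tag = [&](const std::string& file,
                   std::initializer_list<std::string> tags) {
        for (const auto& t : tags)
            catalogue[t].insert(file);
    };

    tag("tram_bell_01.wav",   {"transport", "metallic", "melbourne"});
    tag("rain_on_tin_02.wav", {"weather", "metallic", "texture"});

    // Looking up "metallic" returns both files
    for (const auto& file : catalogue["metallic"])
        std::cout << file << '\n';
}
```

Real cataloguing software adds search, metadata, and previews on top, but that tag-first lookup is the part folders can’t give me.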
~
I already knew much of what Rebecca Rata spoke about in class, mostly from a similar guest lecture last year in the Advanced Diploma program, but also through my work in the publishing sector and my dealings with the rights and permissions team. However, some good points about fair use of third-party media were discussed, and as a result I’m reaching out to the creators of the visual pieces I’ve re-scored to seek permission to use their work in my folio.
Rata raised an interesting and somewhat alarming point about the use of works on social media, which is relevant to my work: uploading anything to a social media platform grants that platform certain rights over it. This is potentially concerning, as I occasionally stream song-building sessions on YouTube; I’m going to have to do some further research on this to know what I should and shouldn’t be sharing.
~
Project work has been slow this week. I’m taking a step back on the animation project, as per Darrin’s suggestion, and will try a different approach. I’ve received generally positive feedback on my latest draft from the animator, but some good points were raised about certain sections not quite hitting their intended emotional cues. I have some ideas for this, one being a more melancholy opening progression, as the current one does seem a little plain; I’d love the music in that section to convey the feeling of loneliness a little more. I also realised the piece sits largely in one key, which is fine, but playing with modulation at certain points could produce some effective shifts in mood.
My ideas for the installation project are developing as well. In addition to my thoughts about mechanical percussion and transducers on the frame, I’m starting to develop some ideas for incidental sounds for the visuals to react to. Funnily enough, this seems like a project where I can go a bit crazy with chiptune sounds, as a common trope in chipmusic is high-activation, positive-valence sound: about as close as you can get to overwhelming positive emotion, really! As a result, I’m actually considering using some chiptune hardware, for example a Game Boy running LSDJ (or mGB, which turns the Game Boy into a MIDI-controllable synth), in sync with the music I’m currently running in a DAW. This would actually make it quite easy to split out to the transducers I mentioned earlier: the Game Boy elements could be played through the frame, while the rest of the audio plays on the venue speakers.
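Just to convince myself the mGB route is feasible: driving it from software is ordinary MIDI output. Here’s a rough sketch using the RtMidi C++ library, assuming an Arduinoboy-style interface between the computer and the Game Boy; the port index, channel, and note values are all placeholders:

```cpp
#include <chrono>
#include <thread>
#include <vector>
#include "RtMidi.h"

int main()
{
    RtMidiOut midiOut;
    if (midiOut.getPortCount() == 0)
        return 1;            // no MIDI interface connected
    midiOut.openPort(0);     // assuming port 0 is the Game Boy interface

    // As I understand it, mGB maps MIDI channels 1-5 to the Game Boy's
    // voices; status byte 0x90 = note on, channel 1 (the first pulse voice).
    std::vector<unsigned char> noteOn  = {0x90, 72, 100}; // C5, velocity 100
    std::vector<unsigned char> noteOff = {0x80, 72, 0};

    midiOut.sendMessage(&noteOn);
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    midiOut.sendMessage(&noteOff);
}
```

In practice the DAW would be sending this, clocked against the rest of the arrangement; the point is just that the Game Boy becomes one more MIDI destination, whose audio output I can route to the frame transducers separately from the venue mix.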