piano

Research

I was recommended The KLF: Chaos, Magic and the Band who Burned a Million Pounds somewhat indirectly, through the Sonic Talk podcast. It arrived yesterday, and I’m pretty keen to read it, as it seems to tie in with some of the art movements discussed in the Media Cultures course, notably Dada and Situationism. Music biographies have been the only books I’ve read in the past few years, at least until starting this program, so it’s good to have another to provide inspiration, even if I should really push myself to read other things.

I’m researching piano libraries. Currently high on my list is Lekko, not just because it’s pretty affordable in comparison to some others, but because it has quite a lot of personality, with an emphasis on not being pristine like many other piano libraries. The non-tonal sound of the piano is accentuated, so the sounds of hammers shifting, keys pressing, and pedals springing back up are more prominent than even in a well-recorded live piano. This suits one of my collaborative projects well; I’ll write about that later.

Learning

At the same time, I am learning how to write music using piano again. I’m still not a great player, but it’s quite nice to be able to sit down and experiment with harmonic texture alone, rather than fall back on my trusty timbral texture (TTT) as I usually do. Discussions from Friday’s class helped me to realise (with relief) that timbral texture is as important as harmonic content, but it’s really nice to feel my intuitive sense of music theory growing, simply from messing around on a piano every couple of days.

Speaking of Friday’s class, it was back to more analysis, which I always find useful. We continued to watch Rejected, and from my notes, I’ve developed a couple of key points:

Serve the piece: Many of Hertzfeldt’s sound designs are rough, distorted, and lo-fi. This is appropriate given the style of the animations; it makes sense not to use super-polished sound design when the animations are sketchy and lo-fi. The use of distortion, at least for a sound geek like me, makes things even more hilarious. I’ve noticed similar sound design techniques in Tim & Eric’s Awesome Show sketches, where distorting and accentuating sounds that are usually edited out, such as lip smacking, breathing, and coughing, can add to the comedic value. Such shows/animations simply wouldn’t be as ridiculous with highly polished sound design.

Juxtaposition adding to comedic value: I may have written about this already, but the mixture of, or sudden cuts between, horrible and/or grotesque sounds and ecstatic or relaxing sound design and music can be a key element in comedic perception. This was even shown in a more high-budget clip, from Monsters, Inc., where peaceful jazz music was used to set up what would become a chaotic scene.

The use of music in the Monsters, Inc. scene wasn’t particularly innovative, but giving it the attention of analysis showed how effectively music can be used in subtle (and not so subtle) ways to enhance the comedic effect. The use of Latin American music at a peak chaotic moment really accentuated the ridiculousness and added even more urgency to the action. Returning to jazz at the end of the scene was another, somewhat more subtle way to accentuate the comedy, using unempathetic music to forcefully wrap things up and declare that the scene is over.

I’ve never thought about comedic sound design before (even as someone who regularly makes non-serious music), and the Friday classes of weeks four and five have been incredibly useful in helping me realise that I might be able to give it a go someday and still achieve the desired effect.

Project work

Over the break, I created a rough composition for the aforementioned collaborative project (music starts at 0:35).

My collaborator for this project (and the other one!) is very good at detailed feedback, and has written some notes on how I should proceed, so it’s just a matter of interpreting what they wrote in order to create a suitable piece. I did get a little thrown by the use of some emotional keywords, as I’m not very experienced in interpreting emotions through music, so following Darrin’s advice, I’m going to go back and ask for some examples of where such words were used.

One thing I’m keeping in mind, especially with this project, is the nature of going from rough animatic, to various iterations of draft animation, to final. Watching the duration of the animatic change quite drastically, even between two early versions, has pushed me to make everything quite scalable. This does mean that I can’t concentrate on a tempo-based structure (at least not yet), but it’s teaching me to ensure there’s enough tonal content to stretch things out, or compress certain parts, without affecting the impact of the piece too much.

Overall though, I think it’s going well so far, especially given how much time we have left to complete these projects.

I haven’t had the chance to add anything to the other piece (overwhelming positive emotions), but I’ve received some detailed feedback, which outlines almost exactly the direction I’d like to take the piece; I’m pretty happy that I managed to create something appropriate more or less immediately. I think I’ll play around with some of the ideas in the next couple of weeks.

Final things

I’m learning to feel less guilty about spending time just messing around with gear. Discussions in class have helped me realise that those sessions are an essential part of the creative process as well, and even if nothing comes out of them directly, there is the possibility that they’ll contribute to an understanding of the gear, so that when the time comes to use it in a production, I can more accurately “dial in” an appropriate sound. I’ll be keeping this in mind for the next Sound Design assignment, as I have some field recordings which are currently too obvious to be used directly; instead, I think I’ll process them through the MS20.

That said, the past few assignments for Audiovision (and my current collaboration projects) have been further examples that doing everything in software can be just as effective as using hardware, while allowing for a lot more iteration and adjustment of sounds if needed. So a best-of-both-worlds approach is probably good.
