corporate doom

Research

I’d been meaning to watch Tantacrul’s video on corporate music for quite some time now, so I’m glad I had the push to watch it for the course.

It’s a pretty hilarious look at the clichés of corporate music, and had me cringing at points, remembering the terrible corporate stock music my last full-time employer used in their internal videos. It also contributed to my chord progression and arrangement learning, as it shows certain tropes used a lot in corporate music—tropes that I should probably mostly avoid, unless I want to build up a ton of library music, or write a corporate music generator.

We also watched Tantacrul’s video on TV music. Standout concepts included reification—in this context, strong associations reducing music to a single literal meaning—and “archetype” music—using certain genres or musical elements to represent personality traits. Particularly amusing was the use of pizzicato to imply “dumb” or “cute” traits.

This week’s film viewing was Nobody, which was a choice my partner and I made simply because we had Gold Class tickets and nothing else of interest was showing (plus, we’re both fans of Bob Odenkirk). The film itself is ridiculous and quite over-the-top, but from a sound design perspective, it was quite decent. Much like in Atomic Blonde, there was a musicless fight scene, where the sound design was ramped up to the point of being incredibly brutal. Even the introduction scene of the film was nicely designed; a montage of repetitive everyday life with the sounds of boring activities arranged into a rhythm. Not the most original idea, but well executed.

And, for something fun that came up in my YouTube recommendations, some great insight into the sounds of Doom:

Obviously the sounds are quite dated at this point, but for 1993, the sounds were incredibly immersive, and the video shows that part of the charm is how degraded the stock sounds became once processed, which I assume was a byproduct of space/memory-saving requirements.

While simple, some of the edits and processing of the stock sounds (notably, the animal sounds used for the monsters) are pretty innovative, and these techniques are motivating me to process my own field recordings, as well as library sounds, a little more.

Learning

I’ve been diving deeper into Pro Tools—learning how to apply some techniques that I commonly use in Reaper, as well as getting my head around the keyboard shortcuts and exploring some processes I don’t commonly use. One example is send effects: in Reaper I usually only use insert effects, but my task for this week (Sound Design assignment 1, task 5) ended up being quite layered and necessitated the use of a reverb send, as opposed to applying an individual reverb instance to each channel.

My learning of more extended chord progression theory is continuing, mostly from Darrin’s recommendation of this Jacob Collier video:

It’s still a little mind-melting, mainly because Collier is such a ridiculous musician that I sometimes get distracted by how technically brilliant he is. I have saved it to refer to later though, and have picked up a few things from it, especially the parts about going up and down the circle of fifths and how it relates to major/minor chords, and the explorations of extended chord resolution. The section on chord inversions was also very interesting. I’ve been playing with chord inversions for years now, but it’s good to see practical implementations of certain inversions, as opposed to how I usually work, which involves moving my hand on the piano as little as possible. After the video, I played with some inversions for the following project.
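To make the circle-of-fifths and inversion ideas stick, I sketched the relationships in a few lines of Python. This is purely my own toy illustration (nothing from the video), and note names are simplified to sharps only:

```python
# Walking the circle of fifths and listing triad inversions.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths(start="C", steps=12):
    """Ascend by perfect fifths (7 semitones) from a starting note."""
    i = NOTES.index(start)
    return [NOTES[(i + 7 * n) % 12] for n in range(steps)]

def triad(root, quality="major"):
    """Build a major or minor triad as note names."""
    i = NOTES.index(root)
    third = 4 if quality == "major" else 3
    return [NOTES[i], NOTES[(i + third) % 12], NOTES[(i + 7) % 12]]

def inversions(chord):
    """All rotations of a chord: root position, 1st inversion, 2nd inversion."""
    return [chord[n:] + chord[:n] for n in range(len(chord))]

print(circle_of_fifths())      # the twelve fifths: C G D A E B F# C# G# D# A# F
print(inversions(triad("C")))  # C-E-G, then E-G-C, then G-C-E
```

Seeing inversions as simple rotations of the same pitch set is basically what I’ve been doing by minimising hand movement, just named properly.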

Project work

After what I thought was a quiet period from my collaborator for this project, she emailed me and asked how I was going with the music. Long story short, my email provider was spam filtering her replies to my previous questions, before they were even hitting my Gmail account. I apologised greatly, and got to work on the latest version, which can be seen above. I’m still not 100% sure if I’m hitting the right emotions that she’s looking for, but I think it’s a step in the right direction.

I still haven’t had the chance to work more on the installation project. I’ve emailed my collaborator about it and she doesn’t seem to mind a bit of delay at this stage. Still, I hope to get onto it during the week. I’ve had some ideas for it though, such as using some mechanical percussion to create a more physical element in the music; I’ll see if I can sort this out while keeping the “vibe” positive.

the ecstasy of gold

Research

I’m two chapters into the KLF book I mentioned in the last post. One particular quote from Chris Langham, co-writer of the Illuminatus! theatre production in the late 70s, stood out as particularly interesting, if not hilarious: “If it’s possible, it will end up as some mediocre, grant-subsidised bit of well-intentioned bourgeois bollocks. But if it’s impossible, then it will assume an energy of its own, despite everything we do or don’t do.” In the context of the book, it refers to a number of ambitious choices by director Ken Campbell and Bill Drummond, who designed the sets for the production. It’s a particularly resonant quote, despite being a bit ridiculous. I’ve often created my best works when at the very edge of my ability, so I think it’s a good drive to be a bit more ambitious with my works this semester (and for the remainder of the program).

I’ve also started listening to the podcast The Soundtrack Show after it was mentioned in Friday’s class. Naturally, I started with the Morricone episode, as his work is among my favourite soundtracks in cinema. I’ve only listened to the first part of the podcast so far, but I definitely got what I wanted from it: an analysis of his signature chord progressions. It was amazing to learn that what became his style has roots in American folk music, with his Western film compositions seemingly stemming from a cover version of Woody Guthrie’s Pastures of Plenty, created with American folk singer Peter Tevis. This strangely connects a few of my influences together, as I grew up playing bluegrass and country songs with my father, alongside some songs by The Shadows, who also covered Morricone’s 1971 composition Chi Mai. This connection, as well as discovering that Metallica* opened every show from the mid-80s onward with either a recording or a cover of The Ecstasy of Gold, really shows how important Morricone was for the development of my musical interests—and perhaps in a roundabout way, a reason why my tastes are so eclectic.

* – One of my consistently favourite bands, even if I’m somewhat ridiculed for it in more experimental scenes.

I forgot to mention last week that the following video came up in my YouTube feed:

It came at a very appropriate time, given my focus this semester. Despite being a big ad for Felt Instruments’ product Helenko, it’s interesting not only as a study of how a soundtrack can affect the perception of otherwise neutral visual content, but also as an acknowledgement that sometimes visuals aren’t very interesting, serving more as a transition between scenes, yet may still require the same attention to soundtrack as more engaging content.

It was followed immediately by this, which is almost on the other side of the spectrum in terms of energy and variation:

Even more extreme in its differences (having been created by several different producers), it really showed me how drastically the “mood” of a visual sequence can be affected by the music. For the record, Tori Letzler and Virtual Riot stood out as my favourites, mostly because I’m a sucker for ostinatos and arpeggios, but also because they took very different approaches from each other. In general though, I took away many things from that video, notably the use of chords, as I find that I slip into a few “safe” chord progressions too often, and need to open things up a little.

Learning

The 4 Composers 1 Show video led to another Huang video:

That video, alongside some discussions in class on Tuesday, has motivated me to research chord progressions and their commonly associated emotions. While I realise that theory isn’t everything, even when it comes to soundtrack work, I find it valuable whenever I learn a little more about how to identify certain harmonic patterns.

Friday’s class notes were written down on paper instead of my usual Google Doc, so I don’t have the majority of them, but I did take away a few things, particularly regarding “heroic” themes in music, and some ways to convey them (syncopation, major keys, ascending melodies, and some interesting chord inversion play that I have been experimenting with already).

My other learning for the week has been on the technical side: learning how to use insert effects and automate their parameters in Pro Tools, for the latest assignment in the Sound Design specialty. I’m beginning to feel more comfortable working with it and am starting to lose my initial impression that it was clunky. I also experimented briefly with busses and sidechain compression, having the master track ducked by a muted bass drum track, resulting in a “pumping” sound, or, as I’ve joked about in the past, “implied kicks”. I’ve done plenty of sidechain compression in Reaper, but the process is quite different in Pro Tools. Not too hard to get my head around though.
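The “implied kicks” trick is easy to sketch outside of any DAW: the muted kick never reaches the output, it only supplies a control signal that ducks the master. A toy numpy illustration (my own simplification, nothing to do with Pro Tools’ actual compressor internals; the tiny sample rate is just to keep the arrays readable):

```python
import numpy as np

SR = 100  # toy "sample rate" so the arrays stay tiny

def duck_envelope(kick, amount=0.8, release=20):
    """When the kick exceeds a threshold, drop the gain, then recover linearly."""
    gain = np.ones(len(kick))
    g = 1.0
    for n, s in enumerate(kick):
        if abs(s) > 0.5:                        # kick hit detected
            g = 1.0 - amount                    # duck hard
        else:
            g = min(1.0, g + amount / release)  # release back towards unity
        gain[n] = g
    return gain

master = np.ones(SR)                # a constant pad, for clarity
kick = np.zeros(SR); kick[0] = 1.0  # one (muted) kick hit at the start
pumped = master * duck_envelope(kick)
```

The pad dips sharply at the kick and climbs back to full level, which is the whole “pumping” effect in miniature.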

Alongside these, I’ve been continuing my somewhat unstructured piano learning (i.e. thinking of a piece that I like and trying to remember/learn it by ear). This is a slow process, but I’m already noticing my playing getting better.

Project work

It’s been a slow week for project work, mostly due to other assignments taking priority.

I have, however, kept in contact with my collaborators, and for the projection mapping/installation piece, have received some information on when it will be shown, and some details about the space. We are in the process of coordinating a time to visit the space and evaluate whether my perhaps ambitious proposition of a 5.1 sound experience is possible; I’ve been told that it’s technically a 5.1 system, but it has been configured for stereo. I’m confident that I can repurpose it, as long as I’m not stepping on any toes. A 5.1 mix could help to alleviate the potential annoyance of a repeating one-minute high-energy musical piece, especially if we create several slightly different versions. Perhaps the main differences could be the positioning of sounds in the field?

piano

Research

I was recommended The KLF: Chaos, Magic and the Band who Burned a Million Pounds somewhat indirectly, through the Sonic Talk podcast. It arrived yesterday, and I’m pretty keen to read it, as it seems to tie in with some of the art movements discussed in the Media Cultures course, notably, Dada and Situationism. Music biographies have been the only books I’ve read in the past few years, at least until starting this program, so it’s good to have another to provide inspiration, even if I should really push myself to read other things.

I’m researching piano libraries. Currently high on my list is Lekko, not just because it’s pretty affordable in comparison to some others, but because it has quite a lot of personality, with an emphasis on not being pristine like many other piano libraries. The non-tonal sound of the piano is accentuated, so the sounds of hammers shifting, keys pressing, and pedals springing back up are more prominent than even in a well-recorded live piano. This suits one of my collaborative projects well; I’ll write about that later.

Learning

At the same time, I am learning how to write music using piano again. I’m still not a great player, but it’s quite nice to be able to sit down and experiment with harmonic texture alone, rather than fall back on my trusty timbral texture (TTT) as I usually do. Discussions from Friday’s class helped me to realise (with relief) that timbral texture is as important as harmonic content, but it’s really nice to feel my sense of intuitive musical theory growing, simply from messing around on a piano every couple of days.

Speaking of Friday’s class, it was back to more analysis, which I always find useful. We continued to watch Rejected, and from my notes, I’ve developed a couple of key points:

Serve the piece: Many of Hertzfeldt’s sound designs are rough, distorted, and lo-fi. This is appropriate given the style of the animations; it makes sense not to use super polished sound design when the animations are sketchy and lo-fi. The use of distortion, at least for a sound geek like me, makes things even more hilarious. I’ve noticed similar sound design techniques in Tim & Eric’s Awesome Show sketches, where distorting and accentuating sounds that are usually edited out (lip smacking, breathing, coughing, etc.) can add to the comedic value. Such shows/animations simply wouldn’t be as ridiculous with highly polished sound design.

Juxtaposition adding to comedic value: I may have written about this already, but the mixture of, or sudden cuts between, horrible and/or grotesque sounds and ecstatic or relaxing sound design and music can be a key element in comedic perception. This was even shown in a more high-budget clip, from Monsters, Inc., where peaceful jazz music was used to set up what would become a chaotic scene.

The use of music in the Monsters, Inc. scene wasn’t particularly innovative, but giving it the attention of analysis showed how effectively music could be used in subtle (and not so subtle) ways to enhance the comedic effect. The use of Latin American music in a peak chaotic moment really accentuated the ridiculousness, and provided even more urgency to the actions. Returning to jazz at the end of the scene was another, somewhat more subtle way to accentuate the comedy, using anempathetic music to forcefully wrap things up and declare the scene over.

I’ve never thought about comedic sound design before (even as someone who regularly makes non-serious music), and the Friday classes of weeks four and five have been incredibly useful in helping me realise that I may be able to give it a go someday and actually achieve the desired effect.

Project work

Over the break, I created a rough composition for the aforementioned collaborative project (music starts at 0:35):

My collaborator for this project (and the other one!) is very good at detailed feedback, and has written some notes for how I should proceed, so it’s just a matter of interpreting what they wrote in order to create a suitable piece. I did get a little thrown by the use of some emotional keywords, as I’m not very experienced in interpreting emotions through music, but following Darrin’s advice, I’m going to go back and ask for examples of where such words were used.

One thing I’m keeping in mind, especially with this project, is the nature of going from rough animatic, to various iterations of draft animation, to final. Watching the duration of the animatic change quite drastically, even between two early versions, has pushed me to make everything quite scalable. This does mean that I can’t concentrate on a tempo-based structure (at least not yet), but it’s making me learn about ensuring that there’s enough tonal content to stretch things out, or compress certain parts, without affecting the impact of the piece too much.

Overall though, I think it’s going well so far, especially given how much time we have left to complete these projects.

I haven’t had the chance to add anything to the other piece (overwhelming positive emotions), but I’ve received some detailed feedback, which outlines pretty much exactly the direction I’d like to take the piece; I’m pretty happy that I’ve managed to create something appropriate pretty much immediately. I think I’ll play around with some of the ideas in the next couple of weeks.

Final things

I’m learning to feel less guilty about spending time just messing around with gear. Discussions in class have helped me realise that those sessions are an essential part of the creative process as well, and even if nothing comes out of them directly, there is the possibility that they’ll contribute to an understanding of the gear, so that when the time comes that I’ll need it for a production, I can more accurately “dial in” an appropriate sound. I’ll be keeping this in mind for the next Sound Design assignment, as I have some field recordings which are currently too obvious to be used directly; instead, I think I’ll process them through the MS20.

That said, the past few assignments for Audiovision (and my current collaboration projects) have been further examples that doing everything in software can be not only as effective as using hardware, but also allows for a lot more iteration and adjustment of sounds if needed. So, a best-of-both-worlds approach is probably good.

sonification nord

Week 4

It’s been a good week! I’m feeling inspired.

I’ve noticed there’s a level of excitement that comes from researching a subject for my assignments and finding people have written at length about it, when it’s either something I’m passionate about, or something I can relate back to my work in a somewhat obscure way. The research for this week’s assignment resulted in finding a dissertation titled “Metal Machine Music: Technology, Noise, and Modernism in Industrial Music 1975–1996”, which is incredibly detailed, and I’m sure I’ll be able to cite it in many of my future texts, as I’m obviously very influenced by all kinds of industrial music.

Tuesday was a presentation and constructive criticism session, where we showed off our pieces for assignment 2.1. I received some positive comments about the tightness of my synchresis, which I was happy to hear. Constructive criticism was useful and valid, describing a need to ramp up the stakes a little, and to play with space to even more of an extreme than I attempted already. Injecting more variation into the piece could also assist with engagement, either through layering or simply variation in sound sources. Lately I’ve been somewhat obsessed with having everything sound like it’s being generated by one object, even if the context doesn’t call for it.

Friday was a session mostly dedicated to watching and analysing two visual pieces—the Brothers Quay’s In Absentia, which I hadn’t heard of before, and Don Hertzfeldt’s Rejected, which I had seen before. Even so, I wrote a fairly large amount of notes.

I had no idea that the Brothers Quay had also directed the video for Peter Gabriel’s song Sledgehammer, which is surprising because I’m a huge Peter Gabriel fan. Anyway, In Absentia is a very different mood, especially when accompanied by the Stockhausen soundtrack; it’s quite unsettling. I learned a lot about how to create such unsettling environments in audio, particularly through the use of deliberately inconsistent/incomplete synchresis (cf. Darrin pointing out that a window swings six times, but only the first four swings are articulated with sound).

Analysing Rejected was a pretty fun exercise. It’s such an absurd collection of animations that I’d never really paid super close attention to the sound, but having a deeper look at it reveals even further absurdity due to the choices made. A key takeaway, even if simple, was the idea that the use of birds can subtly/subliminally open up a space. This is something I’d never even thought of before.

I’d also never really thought about comedic audio before, but I’ve come away from that session with some interesting ideas about how to accentuate or even create humour with audio. One example from Rejected that stood out was the perspective cuts of the screams in the “Fat and Sassy” animation. Not only were they cut so that you can only hear the person on-screen, but the screams also sound like they restart at each cut, making it seem like each person is taking turns screaming. This just makes it even more absurd to me!

I decided on a more “music video” approach for assignment 2.2. I wanted to make something heavy, and went with the obvious choice of making something similar to Autechre’s Second Bad Vilbel, given the visuals created by Chris Cunningham had some similarity to the robot entity in the visual work I chose. I took some inspiration from SOPHIE and Gridlock as well, the former being a rather recent, yet important influence, and the latter being an artist I’ve been a fan of since the early 2000s.

Using sounds I created with databending techniques in my piece has got me thinking about it again. I think I’m going to continue feeding my machines corrupt sysex data to see what happens. The corrupt data made the Nord Lead surprisingly glitchy without crashing the synth, and I’m well aware that my other gear is probably not as tolerant of such data. We’ll see though.
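For my own reference, the core of this kind of sysex databending is tiny. A Python sketch of the idea (the dump bytes below are made up, not a real Nord Lead patch; I keep data bytes in the 0x00–0x7F MIDI range so the dump stays technically valid sysex, just with scrambled patch content):

```python
import random

def corrupt_syx(raw, rate=0.05, seed=None):
    """Randomise a fraction of the data bytes between the F0 header and F7 terminator."""
    rng = random.Random(seed)
    out = bytearray(raw)
    for i in range(1, len(out) - 1):        # leave the F0 ... F7 framing intact
        if out[i] < 0x80 and rng.random() < rate:
            out[i] = rng.randrange(0x80)    # new value is still a valid data byte
    return bytes(out)

# A fake patch dump: header-ish bytes followed by filler "parameter" data.
dump = bytes([0xF0, 0x33, 0x7F, 0x00, 0x10] + [0x40] * 32 + [0xF7])
bent = corrupt_syx(dump, rate=0.2, seed=1)
```

Keeping the framing intact is probably why the synth glitches rather than rejecting the message outright, though that’s my guess rather than anything documented.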

A nice video I found while looking for inspiration for 2.2:

It doesn’t apply to what I did for the assignment, and I didn’t reference it in my text, but I think it’s important to save it here.

I got a new computer, just in case my iMac completely dies mid-edit. Switching to a PC is quite a contrast, but its modular/upgradeable nature is reassuring me a bit. Plus, it is absolutely lightning fast when scrubbing through videos in both Premiere and Reaper, so that’s going to help a lot with my future projects.

Speaking of future projects, I had a few emails from potential collaborators during the week. I’ve said yes to two so far:

  1. A music piece and sound mixing for a ~4-minute animation about a fisherman’s psychedelic experience, and
  2. A one-minute music piece synchronised to a projection-mapped installation about overwhelming positive emotions.

I’m pretty excited about both, really. The first is an excuse to get super aquatic and possibly a little experimental, with potential for some heavy synchresis. The second, being an installation, is something I’ve been wanting to do for years; there’s also potential for us to inspire each other, too, which I’m very excited about. I’ve already started creating a very loose sketch for the music:

It will likely end up changing drastically, but it’s good to have the motivation to get started super quickly. I’m a little concerned about a one-minute piece of music getting annoying if it repeats constantly, so I’ve suggested the idea of creating several iterations of the music to accompany the same visuals, so it repeats every 5–10 minutes instead. I don’t think it’d be too much work; it’s mostly about focusing on a different element in the piece to build upon. If I create enough layers it could be a good way to stretch out a bit.

palette/canvas

Week 3

It’s been a busy week.

I’m enjoying the class discussions. I’m getting used to writing more notes and actually remembering what was discussed, and generally leave each class feeling inspired. I do worry that I talk too much and go off on tangents though.

This week has been a continuation of the psychological concepts discussed in the previous weeks, with some interesting thoughts about engagement and emotion, and how certain devices can be used to enhance both. I was particularly intrigued by the idea of removing an element for emotional impact, with an example being a machine hum suddenly becoming silenced; the viewer will be more likely to notice its removal than its presence. I actually noticed this last night as I watched No Country For Old Men—several scenes rely on the sound cutting to almost silence for additional impact/tension.

Another interesting concept discussed was the use of perceptual devices, notably how sound designers deal with shifts in perspective. It’s something I think about a lot, especially when watching videos such as those in the “musicless music video” style.

On a similar note, I was reminded of this the other day; it was particularly relevant/hilarious as I was working on AT2.1 at the time:

Perhaps a mid-semester/mid-year break project could be to re-score a pop video clip with ridiculous IDM or something.

The Sound Design specialty is also proving quite relevant to the Audio Vision studios. Not only am I slowly getting used to Pro Tools, but I’m discovering techniques I’d previously not explored, in particular the “palette/canvas” approach of separate recording and editing sessions—record a long session of just messing around on an instrument and then cut little bits out to edit into a layered composition later. I’ve done it twice this week and it’s worked out really well. I’m definitely keen to explore this method more in the future. As I mentioned in my writeup for AT2.1, it has the benefit of also being scalable, so if I’m working with someone who isn’t finished editing yet, or if the project is an interactive one with non-fixed cue points, the audio has room for adjustment. It’s quite different to my previous method of just linearly editing jams into songs.

Speaking of AT2.1, that was a lot more fun than AT1, from an editing perspective. As I wasn’t trying to use Redux to track out micro percussion along to the video, it ended up being a lot more of a “design” process and allowed for more fluid motion in the sound.

The source material mostly came from processing some old unfinished pieces through Emission Control 2, a free granular sound processor based on Curtis Roads’ original OS9 software. I’d found out about it from an interview with Richard Devine on Mr. Bill’s podcast:
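As a reference for myself, the basic idea behind granular processing is compact enough to sketch in numpy. This is my own toy version, not Emission Control 2’s actual algorithm: scatter short Hann-windowed grains read from random positions in a source buffer into an output buffer.

```python
import numpy as np

def granulate(src, out_len, n_grains=200, grain_len=512, seed=0):
    """Overlap-add randomly placed, windowed grains from src into a new buffer."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)          # smooth each grain's edges
    out = np.zeros(out_len)
    for _ in range(n_grains):
        read = rng.integers(0, len(src) - grain_len)   # where to read a grain
        write = rng.integers(0, out_len - grain_len)   # where to scatter it
        out[write:write + grain_len] += src[read:read + grain_len] * window
    return out / max(1.0, np.max(np.abs(out)))         # crude peak normalisation

src = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)  # 1 s test tone
cloud = granulate(src, out_len=2 * 44100)                  # a 2 s grain cloud
```

Real tools add per-grain pitch, pan, and density envelopes on top of this, but even this skeleton turns a static tone into a shifting texture.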

Sadly, I think my iMac’s graphics card is on its way out, as it crashed quite spectacularly mid-edit (luckily I’d saved!), with the warning signs being some screen glitches before it crashed—something that has happened twice before when streaming. Definitely need to look into that.

Research for the week includes actually reading the Mark Fell article I linked in the previous post, which informed the techniques I used in AT2.1 (and has some parallels with what was discussed in class on Friday). It’s also very relevant to the “gardens” idea from last post—slow-moving, or static, sound that acts as a sonic world rather than a narrative. I’m even more compelled to write a non-moving techno album now.

Another interesting point of research has been looking more into shape-sound correlations in terms of synaesthesia, after realising I’d been instinctively associating certain shapes or textures with sounds in AT2.1. I am definitely interested in exploring this phenomenon (and other synaesthetic relationships) in my future work, even to the point of doing the opposite of what my particular “brand” of synaesthesia tells me.

I’ve been trying to get into the habit of watching more films, because I notoriously haven’t watched many at all (my response to people saying “I can’t believe you haven’t seen *film*!” is always “I can’t believe you haven’t heard *prog concept album*!”). This week, along with the aforementioned No Country For Old Men, I watched the following:

Atomic Blonde – I feel like my impression of this film could have been greatly improved if it had a different title. I wasn’t into it story-wise, but the cinematography and sound design/music were pretty amazing, and almost arty, considering how mainstream it is. I was surprised to hear a Ministry song used in a scene, and one that wasn’t Jesus Built My Hot Rod, at that.

Sound of Metal – My girlfriend brought this up as it’s apparently a pick to win an Oscar for best sound this year. And rightly so—the sound design is so on-point and is incredibly immersive. Disturbingly so, in fact, because as someone who definitely suffered some hearing loss from that one Sunn O))) gig without earplugs in 2007, it made me aware of the horrible reality that awaits if I don’t continue to wear hearing protection at loud gigs. There’s a brief but interesting article here about the process of designing sound for the film (also note to self: listen to that podcast).

the first two weeks

Week 1

Beginning the Bachelor program as an articulating student was something of a “thrown in the deep end” moment, and as such the first week was a blur of emotions, in a bizarre web of impostor syndrome, confidence, confusion, and inspiration. The discussions were lively and dense, however, and despite sometimes feeling overwhelmed, I did learn a few things, notably about Michel Chion’s concept of synchresis; it was good to know that a name existed for this technique.

The first assignment was very much in line with my aesthetic interests. We were to select from one of Gina Moore’s animations and create a sound design piece to accompany it. The brief was left quite open in terms of approach, with the only guideline being that we were to pay attention to audiovisual detail. The animations that stood out most to me were the more “glitchy” pieces; I eventually chose Cafe Figures Moblur:

Naturally, this reminded me of early-00s minimal IDM and the accompanying abstract video clips of the time. I began revisiting some of those clips—in particular, Lucio Arese’s incredible unofficial video for Autechre’s track plyPhon:

While the animation and musical piece are much more complex than both Moore’s animation and my resulting accompaniment, Arese’s animation inspired me greatly to create a tightly synchronised piece of audio.

The process of creating the audio for Moore’s animation could have used some refining. I touched on it in my writeup, but for future reference, I’ll outline a few technical points here. The most frustrating technical issue I experienced was that the software I used didn’t quite allow the degree of previewing of audio events in relation to the video that I expected. If I were using a straightforward DAW it might not have been too difficult, but I wanted to incorporate the kind of detail that only tracker software can efficiently facilitate. Unfortunately, the VSTi I used (Redux) doesn’t synchronise as tightly to the host DAW as I’d hoped; perhaps there is a mode I’m unfamiliar with, but I was only able to review the piece in pattern-by-pattern chunks. Ideally, I’d love to work with a tracker interface where stepping through each grid position would update the video in real time; this is something I’m keeping in mind for if and when my programming abilities extend to software development.

During the research for my piece, it was interesting to see just how much has been written academically about experimental/glitchy electronic music. I found that one of my favourite sound designers, Mark Fell, had written a small blog post on synchresis. I didn’t refer to the post in my writeup, but continued to explore his work, and found that he has written a number of texts, most notably Patterns in Radical Spectra.

I’ve begun assembling a list of bookmarks related to this program, and my creative practice in general, and as long as I can get through the sometimes inaccessible language, I think they’ll be valuable texts for future reference.

Week 2

Gina Moore attended our class in Teams this week, and provided some positive feedback on my final submitted piece which can be seen below:

It was good to get some constructive feedback from classmates and the lecturer as well, particularly in regard to making the sound move in some way as the animation progresses. This seems like a bit of a challenge for such a short clip, but I can understand the need for such progression to keep things interesting.

Moore also has some great ideas and mentioned a potential collaboration between our class and hers, on an abstract VR environment. This is definitely relevant to my recent interests, as I’ve been hoping to get into audio implementation and sound design for VR applications for some time. In the end, I selected and submitted eleven pieces as possible textural inspiration for Moore’s students; possibly overkill, but I have created a lot of abstract musical works over the past twenty years, and it’s time I showed them off.

Friday’s class was quite a detailed discussion, covering many concepts and techniques. I found this session useful even in relation to the music I create and release as standalone pieces—in particular, the idea of gardens vs. ships in plays and film, where a “ship” style production has a definite progression, and a “garden” is more of a world that viewers can explore and find their own meaning. I had been referring to some of my more minimalist and/or unchanging works (my work z0 being an extreme/ridiculous example) as plains or installations, but it’s good to have another metaphorical word to use to describe the concept.

One other personally relevant piece of information from Friday’s class was a discussion about justifying unorthodox ideas. This is something I’ve struggled with when working with other people on collaborative projects in the past, so it was good to talk about it and learn about how to “convince” collaborators to give some of the less obvious ideas a chance. One key solution to this is to show examples of where other works have used similar ideas/styles and describe how it relates to the ideas presented for the new collaborative works. Similarly, learning to talk about my own work in more detail from a less technical, or even non-technical point of view is going to be one of my key goals during this program.