adventures in procedural generation

I’m back again! This time I’ve decided to write independently about my various projects and work, mostly as a record of what I’ve been working on.

Procedural generation of dungeon maps

I’ve been looking into roguelike games recently, and while a lot of them are fairly impenetrable or just not the kind of game I like to play, I do enjoy the procedural generation aspect, particularly in how terrain and maps are generated.

My motivation for exploring these techniques is to expand on my prototype project Dungeon, which is a concept for a sequencer where items in the field can be picked up and placed in a step sequencer at the base of the field. My current procedural generation technique leaves a little to be desired:


Not the most inspiring terrain. Even when scaled up (as in the prototype Dungeon16), this technique isn’t very interesting, as it is still effectively just a rectangular room with obstacles peppered throughout:

These were created by giving each empty space a 50% chance of becoming a wall, and then carving away at the walls using a few rules (e.g. no large clusters, no shapes that frequently result in unreachable areas). It’s not a bad start, but clearly I needed to research some more advanced techniques.
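
For reference, the naive fill is only a couple of lines (a sketch in plain JavaScript; the grid size is a placeholder, and the follow-up carving rules aren’t shown):

```javascript
// Naive generation: every tile independently has a 50% chance of being a wall.
const GRID_W = 16, GRID_H = 16; // placeholder dimensions
const WALL = 1, OPEN = 0;

const grid = Array.from({ length: GRID_H }, () =>
  Array.from({ length: GRID_W }, () => (Math.random() < 0.5 ? WALL : OPEN)));
```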

This video was incredibly useful:

A lot of the techniques are somewhat inappropriate for my use case, but the one that stood out most to me was diffusion-limited aggregation, which involves starting with a small open area and firing “particles” at the walls to extend it. My interpretation of the technique may be slightly different to the one explained in the video, but it was surprisingly simple to implement something with a similar outcome.

As mentioned, the initial state is a small open area, in this case 4×4 tiles:

Next, I implemented code that generates a particle at a random position in the field; if that position contains a wall, it tries again. If the position is empty, the particle starts moving, one tile at a time, in a random orthogonal direction. As soon as it hits a wall, it destroys the wall and stops. A new particle is then generated, and the cycle repeats.
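
In code, my version looks roughly like this (a sketch rather than my exact implementation; I’m assuming a fresh random direction is chosen each step, and the grid size and seed position are placeholders):

```javascript
const GRID_W = 40, GRID_H = 40;
const WALL = 1, OPEN = 0;

// Start fully walled, then open a small 4×4 seed area near the centre.
const grid = Array.from({ length: GRID_H }, () => Array(GRID_W).fill(WALL));
for (let y = 18; y < 22; y++)
  for (let x = 18; x < 22; x++) grid[y][x] = OPEN;

const DIRS = [[0, -1], [0, 1], [-1, 0], [1, 0]]; // orthogonal moves

function fireParticle() {
  // Spawn at a random position; retry if it lands on a wall.
  let x, y;
  do {
    x = Math.floor(Math.random() * GRID_W);
    y = Math.floor(Math.random() * GRID_H);
  } while (grid[y][x] === WALL);

  // Walk one tile at a time until a wall is hit, then carve it and stop.
  while (true) {
    const [dx, dy] = DIRS[Math.floor(Math.random() * DIRS.length)];
    const nx = x + dx, ny = y + dy;
    if (nx < 0 || ny < 0 || nx >= GRID_W || ny >= GRID_H) continue; // stay in bounds
    if (grid[ny][nx] === WALL) {
      grid[ny][nx] = OPEN; // destroy the wall
      return;
    }
    x = nx; y = ny;
  }
}

for (let i = 0; i < 1000; i++) fireParticle();
```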

My first experiments with this, and 100 particles, weren’t very interesting:


I increased the particle count to 1000, and.. still too blobby:


It wasn’t giving me the kinds of obstacles I wanted in the middle of the field. It’s good for large open areas, but I’m not really looking for that.

I wrote a rule that only allows a wall to be removed if the tiles next to the target tile also contain walls (currently checking above/below, left/right, and two diagonal cases). This alone resulted in some much more interesting maps:
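
Extending the sketch above, the rule might look something like this (hedged: the exact neighbour cases are my guess at the rule as described):

```javascript
// Treat out-of-bounds tiles as walls so the field edges behave consistently.
function isWall(x, y) {
  if (x < 0 || y < 0 || x >= GRID_W || y >= GRID_H) return true;
  return grid[y][x] === WALL;
}

// A wall may only be carved if both tiles of at least one opposing pair
// around it are also walls.
function canCarve(x, y) {
  const pairs = [
    [[0, -1], [0, 1]],   // above & below
    [[-1, 0], [1, 0]],   // left & right
    [[-1, -1], [1, 1]],  // one diagonal pair
    [[1, -1], [-1, 1]],  // the other diagonal pair
  ];
  return pairs.some(([[ax, ay], [bx, by]]) =>
    isWall(x + ax, y + ay) && isWall(x + bx, y + by));
}

// The carve step in fireParticle() then becomes:
//   if (grid[ny][nx] === WALL) {
//     if (canCarve(nx, ny)) grid[ny][nx] = OPEN;
//     return; // I'm assuming the particle is discarded either way
//   }
```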


At larger sizes, these tend to look much more like the kinds of maps I’d expect from an effective procedurally generated game:


I could probably tune this algorithm a little more (and there will need to be some tuning in order to ensure access to the sequencer row at the bottom of the original version¹) but I’m very happy with this result, which only took me a couple of hours to develop—including the p5js editor crashing and losing my work last night.

Notes

  1. A possible solution to the sequencer row access issue could be to move it to a separate screen or menu, but this may detract from the experience due to a disconnect between the game world and the sequencer. There’s a very specific implication that the player is picking up and dropping the objects in the current prototype versions, which would be lost if it were treated like an inventory puzzle.

internship update + conclusion

Wednesday 12th October

Honor and I visited the Capitol briefly to be filmed for some content about the inaugural Capitol commission, which was a little terrifying as I’m not great on camera. Filming seemed to go pretty well though. I’m still not quite sure where that will be shown, or if I need to sign a release form, but we’ll see!

We also had some time in the venue before the filming to do some tests of the newer states ahead of the rehearsals. This was very useful as we had the chance to do some proper colour matching between the screen and the ceiling lights. I’m particularly proud of this one as it was quite tricky to match:

After this, we headed back to the studio for more cue preparation. Honor did have a last-minute addition to the show though: a synthpop track that needed some vocoder work. I’d brought my vocoder with me (Roland VT-4 for those playing at home) and we set it up to quickly record the vocals for the chorus. This was pretty fun and unexpected. It made me feel like I was taking on more of a “producer” role, which is something I’ve been wanting to do for years.

We didn’t quite finish the cue preparation by the time I had to leave (7pm), but we decided to use part of the tech rehearsal day (Thursday 13th) to prepare the last two acts. Cutting it fine!

Thursday 13th October

Tech rehearsal day! We met at the studio at 8.30 to pick up the gear that Honor brought along for the show and transport it to the venue. At the venue, we did some partial run-throughs, but mostly worked on the tech side of things—stage lighting, sound, alignment of visuals, etc.

I took a bunch of notes while we did the run-throughs, and realised that there were a few places where things wouldn’t run properly, due to previous graphics not releasing. I think I need some more experience with Qlab in order to make this process more efficient.

This was quite a long day. I had a nap on the couch when we got to the studio while Honor did some guided meditation, and things got a little emotional as we realised how close the show was, and how much we needed to do in order to get it to run smoothly.

We left the studio at 9.30pm, picking up some surprisingly fresh donuts from the 7-11 along the way.

Friday 14th October

This was intended to be a dress rehearsal day, but we still weren’t 100% finished with the cues. We had the first three acts pretty much down pat, with some minor changes, but acts 4 and 5 were a little sketchy. This was mostly fine, as we were able to do some partial run-throughs and work on the cues along the way, but the decision was made to make the last two acts a lot simpler (apart from the massive crescendo in act 4, which had been mostly completed already) in order to save time and our mental health.

I still had some notes for a few of the cues in the first three acts, which I was confident I’d be able to fix by the time we were to do a proper run-through on Saturday, so we concentrated on getting acts 4 and 5 into a playable state. As we’d made the decision to simplify them, this was a smoother operation than the other acts, but we didn’t finish work on them until 8pm.

Around this time, I uploaded the latest version of the lighting design project file to the Capitol system, which I’d been doing regularly during the day (and on previous days). Something strange happened this time, though: the system appeared to crash, becoming unresponsive to network messages from Qlab, the Pharos web interface, Pharos Designer, and the panels around the venue.

An incredibly stressful hour followed as we tried a few things to reset the lighting system, with nothing working, until we called Darren, the Ops coordinator for the venue, and asked where the Pharos controller was so we could reset it. After some instructions that sounded like escape room riddles, we were able to go up to level 5, the area containing the old film projectors, and find the controller box. I made doubly sure with Erik (my supervisor/manager) that it would be ok to try power cycling the controller, and after his blessing, I pulled out the ethernet cable (which was also powering the unit) and re-inserted it. The controller came back to life with all connections working, pretty much instantly. A huge sigh of relief, not only from me but also from Bec and Helen, my colleagues who showed me to the level 5 area and reassured me that everything was going to be ok.

This would have been fun if it wasn’t less than 24 hours before the show! Getting to level 5 of the Capitol felt like we were being let into secret areas, with lots of twists and turns.

We were finally out of the venue at 9pm.

Saturday 15th October

Show day! I was somehow able to get a decent amount of sleep after Friday night’s stress, and returned to the venue at 12pm. I continued working on tightening up the cues before Honor arrived, and then we went into the theatre to do a full rehearsal (minus microphones and cameras). This went well, with some minor things I needed to fix.

I stayed in the control booth, unplugging the sound and putting my headphones in instead, so I could work on the final changes without disturbing anyone while still being connected to the projector screen and lighting. This was a much needed period of relative isolation, as I was able to take in the changes quickly and effectively without interruption.

The next few hours were honestly a bit of a blur, and all of a sudden it was after 4pm and we were running out of time to do a full rehearsal. The rehearsal was almost perfect, except for yet another minor timing and layering issue I needed to fix (this was something only I noticed; I’m not complaining!), as well as the camera not working, which we eventually realised was a Qlab issue rather than anything deeper (thankfully).

By the time we were finished with the rehearsal, it was getting very close to opening time, and I realised I hadn’t had any food since my 11.30 Woolworths sushi lunch. Luckily I had a muesli bar in my bag, and Honor’s partner bought me a 7-11 pie (thanks Graham!), which I promptly devoured mere minutes before doors opened. I made a mental note to eat healthier in the coming weeks.

Doors opened and people started filling up the theatre. I could feel the anxiety building. However, after being very shaky for the first two acts, I started to get in the zone, and was able to launch all of the cues without any timing issues. I even got to have a bit of fun with some live controlled lighting while Honor performed an unsequenced guitar and vocal song.

Honor’s performance was on point, with enough improvised material to keep me on my toes in terms of triggering cues, while still keeping the audience engaged. The fifth act was possibly the most impressive, as it was delivered with almost no lighting cues, and with Honor sitting on the stage lit by a single spotlight—a huge contrast to the flashy, decorative rest of the show. The event was sold out, and the audience gave a standing ovation at the end, so I consider it a great success.

I sadly didn’t get any photos or video during the event itself, but there are some bits and pieces over at Honor’s Instagram page, mostly as stories (which may disappear by the time this gets read/assessed). I’ve been told that the event was filmed though.

Final thoughts

This has been quite a journey. My role expanded from simply creating lighting states and animations to almost a producer role, spending a lot of time working closely with Honor either at her house, in the studio or at the venue. Honor has been an absolute pleasure to work with; she is not only receptive to new ideas, but also very understanding when something can’t be done, which I found important, since this was the first time I’ve had to give that kind of feedback, and it was nice to have someone take it well. We get along very well creatively, and have been talking about potentially working on creative projects again in the future.

The RMIT Culture team has been incredible too. I appreciate that they have welcomed me as a colleague and not just an intern; they too are receptive to my ideas and have put up with both Honor and me taking our time to get things prepared! In addition to Erik providing a lot of support and showing me around the various RMIT Culture areas, I also must acknowledge Simon’s openness and willingness to share knowledge about Qlab and other technical areas.

I’ve already been called upon to create some more lighting states (in fact, I was in today to work on some lighting; thanks again Bec and Darren!) and have spoken to a few people about pitching an event idea, which seems to actually be possible. I’d really love to continue to be involved with the Capitol, as it’s an amazing venue.

internship update

Not too much to report from the last few days. As we’re getting closer to the show (this Saturday!) we’re basically just going into crunch time with Qlab, getting ready for the rehearsals.

Outside of the work on this project, I have also been offered some more lighting design work, which I will be working on in the venue on the Monday after the show (17th), as well as potential music related lighting design projects in the future (likely next year). I’m pretty happy about this, as I’m starting to feel recognised as someone to trust for such projects.

7th October

Started in the morning at Honor’s house, working on getting all of the cues named properly, and inserting some new files. In addition, I prepared a music cue based on a song Honor will play at the end of the show, as a buildup to the song itself. For this, I used Spitfire Labs’ Glass Piano and Vital software synth, both free plugins.

We headed over to the residency studio in the afternoon to set up the projector and sound system and simulate the venue tech setup. It was good to finally get some time in the space to test things at a larger scale. I learned a few things about Qlab as well, such as being able to stop entire cue groups (this was a huge realisation and would have saved us a lot of time had we known about it earlier!) and, similarly, play entire cue groups (which will come in handy if we have any recurring cues throughout the show).

The cue preparation took us a lot longer than expected, and we only managed to get through the first act before calling it a day at 7.30pm.

10th October

Another day of cue preparation and playing around with some new ideas, including some AI-generated subtitles for the “life tape” sections. These will all be taken care of by Honor, but I had some creative input in terms of font selection and some other technical details.

I continued work on the music cue I’d started on Friday, rendering a version with reverb in order to be able to fade to reverb as the show ends. The only work that remains for this is to edit it into loops for the cue triggers.

Speaking of loops, I also took on the task of editing a pre-existing musical cue into a loop for a section where Honor may become overwhelmed and need to take a break. This took some time to get right, but we eventually settled on a gentle looped version of the main chords which can be faded into and out of without too much of a jarring transition.

We managed to complete most of the show’s cues by the end of the day, at least in a draft state. We will be in the venue on Wednesday morning to briefly test some new lighting patterns, but will likely return to the residency studio to do a run-through of the show before the tech rehearsal on Thursday.

internship update

Apologies, I intended to update on Monday, but we were working all day and I had other commitments in the evening.

30th September

This was a full 9am–5pm day at the Capitol, partly as a training day, but also to work through some production details for the show. The training wasn’t as RMIT-focused as I’d originally expected, but it was good to meet a few more people involved with RMIT Culture, as well as some others who will be running various parts of the show.

My knowledge of Qlab is really coming along; I was surprised how easy it was to figure out how to trigger the Pharos system from Qlab, using UDP messages sent over the network. It just required the IP address and port number to be entered, and a plain text message to match the trigger message in the Pharos project. Cues from Qlab trigger the lighting timelines instantly; in fact, it’s so instant that we had to build in a delay in order to synchronise with the projector’s latency of ~200ms.
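
For the curious, there’s nothing exotic about the message itself; a rough equivalent of what Qlab sends would look like this in Node.js (the address, port and trigger text below are placeholders, not our real values):

```javascript
// Send a plain-text UDP trigger, much like Qlab's network cue does.
const dgram = require('dgram');

const socket = dgram.createSocket('udp4');
const message = Buffer.from('act1_intro'); // must match the Pharos trigger text

socket.send(message, 9000, '192.168.1.50', (err) => {
  if (err) console.error(err);
  socket.close(); // fire-and-forget: UDP gives no delivery confirmation
});
```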

Another Qlab technique came from my colleague Simon, who showed me how to create timeline groups:

By default, all cues in a timeline group play simultaneously, but using a pre-wait time, cues can be scheduled, and even dragged around the timeline within the interface. In the above example, lighting cues have been synchronised with the audio; this could have been programmed as a single timeline in Pharos Designer, but for this use case, it’s more convenient to have each state as a separate timeline, triggered from Qlab. This also makes it easier to move things around if we end up re-cutting the audio.

We spent quite some time testing one small group of cues in order to get the aforementioned latency right, and will apply the latency to the rest of the lighting triggers using pre-wait times. Coupled with my new knowledge about timeline groups, this will be much easier to achieve than I initially expected.

3rd October

This was to potentially be a day spent in the residency space, but as the space was unavailable due to filming, I visited Honor’s house, which was more convenient as she only lives one suburb over. This was a productive session, and I managed to use the techniques I learned on Friday to tighten up the Qlab project and start creating timeline groups for all of the cues. I was able to connect a projector to my laptop—which we’ve decided will be the computer controlling the audio, video, and lighting cues for the show itself—in order to test.

5th October

Today was another day spent at Honor’s house working on cues. I created some new lighting timelines, as there are some newly edited voice lines that needed articulation with lights. After this, I resumed cleaning up the Qlab project, reading through the script and making sure cues were named correctly. I didn’t finish this, but it’s mostly for readability / convenience, so our plan to run through the draft cues in the residency space on Friday shouldn’t be affected.

internship update

I had a mystery illness from the 22nd until the 28th, so apologies for the lack of updates in that time. A lot has progressed since then though; I’ll update again on Monday with the results of the training day on the 30th.

21st September

I was invited to attend a work-in-progress version of a Capitol lighting & live music show that students from various VE stage programs created based on Jim Moginie’s album The Colour Wheel.

I didn’t realise it would be a live performance, let alone one consisting of six guitarists (one of whom was playing a Bass VI!). Very relevant to my interests. It was a great mix of avant garde composition and more accessible post-rock/shoegaze music. A highlight of the show was the performance of Opus 1 No.2 BLACK in the middle of the set, which involved the performers unplugging their guitars and rhythmically pressing their thumbs on the bare plugs, resulting in some satisfying minimalist noise more akin to a Ryoji Ikeda piece than anything involving guitars.

Some notes I took on the lighting articulation:

  • Most of the lighting was static, with the colour matching the track titles from the album. This gave the brief moments of movement more emphasis, especially in the aforementioned BLACK where the entire ceiling was flickering randomly with areas of white.
  • The static scenes did use subtle movement, mostly slow random pulsing, but in RED there were some very nice rings of white descending from the ceiling, crossfading back to red. I’m taking some ideas from this for the remaining designs in Honor’s show, in particular a section where I’ll attempt to control the lighting live.
  • The colours in general looked great, especially the yellows. It looked like they’d spent a decent amount of time in the venue in order to test colours properly.
  • Something I hadn’t noticed, despite working on three shows at Capitol, is how the proscenium and wall lights can be treated as one piece if lit consistently. I don’t think I’ll find anywhere in Honor’s show to use the technique, but if I get offered any more shows, I’ll be sure to experiment with it.

Overall I was pretty impressed with the show, and it gave me some good ideas for both Honor’s show and future work. I believe they were using direct DMX control over the lighting, rather than the somewhat clunky Pharos system. Unfortunately I was unable to stay around to talk to anyone about that, but it’s something to look into for sure. It fits in with my ideas regarding generative/procedural control of the system.

25th September

Honor and I had intended to work together in person at some point over the weekend, but due to some delays in receiving audio files, as well as both of us developing different illnesses, it didn’t end up happening. Instead, we met over Zoom and worked on getting a very basic Qlab session together, with the audio cues sent over from Marty (sound designer). This was a productive session, and it was good to have a starting point to which we could attach the required lighting and video cues.

I’m learning a lot more about Qlab and how to schedule, layer and trigger cues. It turns out it can do a lot of very complex scheduling. The aim for this show is a list of cues that can be continuously stepped through using the spacebar, with the complexity hidden away under layers. If we do have a live lighting control portion of the show, the controls can easily be assigned to other keyboard keys, effectively giving me the opportunity to perform.

internship update

The past week consisted of more research, mostly regarding Qlab and whether or not it could trigger Pharos Designer, as I mentioned last week. I emailed Pharos support and they replied that unfortunately there is no way to control the software directly—the actual control from Qlab is for the Pharos lighting controller, which responds to UDP commands in order to trigger certain actions (e.g. start/stop timeline). This is somewhat disappointing, as we won’t be able to rehearse properly outside of the venue, and will have to rely on manual synchronisation (i.e. pressing spacebar on two laptops simultaneously).

~

Today, I met Sarah, who is involved with gallery residency spaces; she showed Honor and me (along with Erik and Simon) around the space we’ve been given access to for developing the show. The space looks great, and will be a good “office” to work from. I’ve found that I’m more productive for this project outside of my home studio, as there is less potential for distraction.

Honor and I stayed back to discuss schedules for the next couple of weeks. We’ll likely be back in the space on Friday or Sunday, in order to create a preliminary Qlab project with all of the audio cues. In the meantime, I’ll be experimenting in Qlab on my own in order to learn some of the more complex features.

After a quick lunch, I visited Building 16—with some minor issues entering the building—and met up again with Simon, who was showing Elliot (a fellow casual Capitol employee) some Qlab features. I joined in and picked up a few techniques that I’ll be experimenting with this week. We also discussed equipment required for the residency space, getting a basic list together based on Honor’s notes. Essentially, we’re aiming to set up a smaller version of all of the gear used in the show, in order to make the rehearsals as realistic as possible in terms of potential failures.

Erik also made another appearance at Building 16 and took me on a whirlwind tour of the RMIT galleries. It was great to see these spaces, and it looks like I’ll have the opportunity to display some work of my own at some point. We took some gear from the storage areas (cables, mixer, speaker, projector) and transported it to the residency space. There will be more to come, but this is a good start.

~

In addition to Honor’s show, I’ve also potentially been offered more lighting design work at the Capitol, this time creating a lighting experience for an album by a fairly well-known Australian electronic musician. I’m very excited about this, as they’ve been involved with a lot of projects that I’m fond of, and it’d be amazing to contribute to their work. Hopefully I’ll have more news on that in the coming weeks.

internship update

I realise I’m only updating this every two weeks, and I should be doing it weekly, but I’ve tried to separate it somewhat here so it reflects the different days I worked on the project. I’ll try to get on top of updating weekly, especially as the day of the show approaches.

~

2nd September

The venue visit went well, and I managed to test some colour values I’ve been programming. For some reason, anything yellow shows up a little too green when displayed on the LEDs, so I’ve shifted all yellow lighting towards a more orange hue. Similarly, many purples tend to look more blue, and thus they should be skewed more pink in Pharos Designer. A minor pain, but it’s solved by keeping a record of the corrected hue values. Saturation and brightness also behave a little strangely: lower saturation values quickly disappear into white while still visibly coloured in the software, and a brightness of 1 is still quite bright rather than barely visible.

Other than colours, we tested some synchronisation I’d created for the first four minutes of Act 1. This was well received and made everyone quite excited for the rest of the show. One part, where I articulated a voice using lighting in a manner similar to a VU meter or visualiser, was particularly spectacular, and made me feel good about the amount of time I spent on the articulations. It would be great if there were a way to automate this; I’ll look into it for when the voice appears again later in the show.

During this session, it was possible to take in some changes while in-venue, which I think is the best way to go, but I also entered some changes into a notebook for later, offsite work. I’ve since transferred these to a Trello board, which makes things easier as they’re now in a checklist format.

~

7th September

I met with Honor, as well as Marty (sound designer) and Simon (RMIT staff) via Zoom to discuss audio cues and how we can sync them up to the lighting and visuals in Qlab. We came to the conclusion that we’ll try building the cues as layers, using the individual elements (pre-recorded voices, music, sound effects) mixed within Qlab itself for the most flexibility. This also means we can route some sounds to the rear speakers of the venue if required (e.g. in the aforementioned part where I’m articulating the voice using light). We will of course run some detailed tests once we’ve got enough files to work with.

Offsite rehearsals were brought up in this meeting, and I began research into if/how Qlab could trigger the lighting cues in Pharos Designer using MIDI loopback or a similar method. I haven’t found anything yet, but I will continue my research in the coming weeks.

~

9th September

I worked through the Trello checklists, taking in changes to the lighting designs that I’d noted during the visit on the 2nd. I’ve been diving deeper into some of the more detailed animation features in Pharos Designer, which is helping to develop some of the effects I have in mind.

I also met again with Honor via Zoom, and we discussed cutting down some of the complexities in the show (e.g. motion design and other video elements) as well as reducing the intensity of most of the lighting, in order to make the final two acts stand out more. During this meeting I took in some changes while sharing my screen with Honor, which helped greatly for being able to instantly iterate and obtain some degree of approval, as opposed to going back and forth via email.

We also discussed the possibility of using one of RMIT’s gallery spaces as a room for development and rehearsal when we aren’t able to fit into the Capitol’s schedule. This would be ideal, as we could meet and work on the show together, and it’d feel more collaborative than working on it in our home studios.

~

12th September

We were booked in for another venue visit from 4.30pm, but I went into the city earlier and sat in a library working on more changes and new sections. Honor joined me for most of this and we worked together on a few parts. This was a very productive session and I’ve decided to work like this more often, free from the distractions of working from home.

Venue visits are becoming more elaborate and I’m starting to retain a lot more knowledge in terms of setting things up. I learned about how to correctly reset the lighting system to the default programs, which is what the cleaners use to turn on all of the lights when they’re working in the venue. Later in the day, Erik also showed me the lighting panels elsewhere in the venue. Another staff member will always be present when I’m there, but it’s good to be across these things.

I’ve been tasked to learn more about the lighting console used in the venue:

.. which looks quite intimidating, but I’m very interested in the possibility of learning to create more complex programs. Currently it’s only being used for basic lighting controls, as nobody has had the time to sit down with it and learn it in depth.

We tested some more animations and sync, including a new part later in the show which becomes more high energy with corresponding animation intensity. This part was created earlier in the day in the library, and it was quite exciting to see it go from the very basic simulator on screen to being articulated in the space. During this session I also took in extensive changes, which made me feel more confident about being able to run the show correctly.

I had the chance to use some of the mechanical controls, which includes opening and closing the curtains, as well as closing the screen down to support different aspect ratios. The curtain controls are somewhat fiddly, and take some mashing in order to open/close correctly, which makes me a bit nervous about potentially operating them during the show, but I’ll try to get some practice at each venue visit.

I also found out more information about the gallery space we’ll be using; I’ll be getting a key and will have access until 11pm every day, which makes the working hours a bit more flexible. I’ll try to not make a habit of late night development sessions, but it’s good to have that flexibility. I’m getting an induction for the space on Monday 19th, and will likely be spending some time there post-induction.

internship update

I’ve now been fully onboarded into the RMIT system, and have access to the Capitol staff Teams group where shifts are posted. This means I’ll be able to experience the production of some other shows in order to get an understanding of how Honor’s show will work from an AV perspective. I’ll be heading in to the venue at some point in the next few weeks to meet the rest of the team.

~

The second venue visit will take place this Friday (2nd September), and for that I’m aiming to have a synchronised version of the first act developed, as well as most of the other lighting cues for the show ready to go in a draft state. The synchronisation will still be a little rough, as the sound isn’t quite finished, but it’ll be useful to evaluate how much more work and/or detail the draft cues require—it’s difficult to get a sense of the intensity of the lighting when viewing it through the simulator.

We’re also meeting sometime this week (hopefully!) with Marty, the sound designer, in order to talk about synchronisation of audio and lighting cues. From my experiments with the Qlab system, as well as some further research, I’ve realised we could run the lighting and audio together in the Pharos Designer files, broken up into the individual cues and triggered as one cue, which would definitely make things easier in terms of sync. Adding projection mapped video to this may be a challenge, but at the very least we can have separate cues sent to MadMapper to (hopefully) trigger the videos in sync with the audio and lighting. The best-case scenario would be to render the MadMapper sessions down into video files we could run in the Pharos session alongside everything else, but I expect that the side screens will need some fine tuning on the night of the show, and we may not have the luxury of being able to run with pre-rendered content.

~

Back to the subject of simulation: a few weeks ago I contacted the Pharos support team requesting details on the file format, in order to possibly pull things apart and build my own simulator of the venue that can load Designer files. Sadly, this has reached a dead end, as I received no further replies after my enquiry was passed to the development team. This is a little disappointing, but I am determined to continue my research into the format, even if it means intercepting the UDP (I think?) communication sent from Designer to the system and using that to control the lighting in a 3D simulation.. or even creating a Max/MSP environment that can procedurally generate lighting designs to send to the system via UDP, which was my initial ambitious idea last year. Of course, I’d have to be quite aware of which commands I’d be able to safely send, but once that is understood, I imagine it’d be easy to create something that allows for procedural lighting control (e.g. true audio reactive lighting).
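
A very rough first step toward the interception idea might look like this (a sketch in Node.js; the port is a guess, and in practice I’d need to work out what the controller actually listens on):

```javascript
// Bind a UDP socket and dump whatever Designer sends, as a starting point
// for reverse engineering the messages.
const dgram = require('dgram');

const listener = dgram.createSocket('udp4');
listener.on('message', (msg, rinfo) => {
  console.log(`${rinfo.address}:${rinfo.port} -> ${msg.toString('hex')}`);
});
listener.bind(9000); // placeholder port
```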


internship update

Shortly after my last blog entry, Honor sent through a full script for me to review and use to create some draft lighting cues. My week was quite busy with note-taking and ideation as a result, and I’ve now created initial lighting cues for around half of the show.

I’m starting to dig deeper into Pharos Designer 2, experimenting with features such as the fade in/out movement settings:

It’s a small thing, but the Skew and Direction settings allow lighting fades to be progressively staggered across all of the lights in a group, which is a great help for several of the cues used in the show. I’ll be experimenting further with these, as well as creating some custom lighting groups, over the coming week as I continue to develop the remaining cue ideas. The existing lighting groups are somewhat fragmented, so as an aside from the main project, I’m hoping to develop some groups where the lights are organised so that the movement-based animations make sense.

~

I had another meeting with Honor to discuss the script and share my draft cues. This was a very productive meeting, and the feedback I received was very positive and encouraging. I’d noticed there were a few segments in the script that would potentially involve some motion design work, so I mentioned that I’m capable of creating those designs if required. I don’t mind taking on the extra work, especially if it’s something I can add to my portfolio!

Another notable discussion in the meeting was about possibly using some hired stage gear for one segment, and I mentioned that I have peers in the music scene who have used similar equipment in their performances. I asked around over the weekend, did a small amount of research myself, and provided Honor with links to a couple of hire companies that may be appropriate.

~

I’m close to being fully onboarded in the RMIT system; I’m just waiting on a security pass. Hopefully this means I’ll be able to visit the venue again soon, and gain some experience by being present during the setup and running of other shows between now and Honor’s show in October.

internship update

Week 3

It’s been a slow couple of weeks as I’ve been waiting for my RMIT staff onboarding to come through. It’s nearly done now, but a large amount of my time for the internship over the past two weeks has been consumed by the various onboarding tasks required. There are some minor hurdles here, such as getting a security pass for the Capitol building, but I’m sure they’ll be worked out soon. Overall I’m pretty excited to be in the RMIT system, and am quite happy to now have my own staff email address instead of having to use my student address for all communication.

Week 4

I viewed some Qlab tutorials and experimented with the software to get a sense of how to operate it. After this, I met with Honor again to discuss some details of how we could use Qlab for the final production. It’s early days of course, but the more we can automate, the better. At some point this week or next, I’ll be meeting with Honor, Marty (sound designer) and Carla (visual artist) to discuss how everything can fit together—I’m hoping we can synchronise audio, visual and lighting cues so everything can be procedurally launched in sequence.