internship update

I’ve now been fully onboarded into the RMIT system and have access to the Capitol staff Teams group where shifts are posted. This means I’ll be able to experience the production of some other shows and get an understanding of how Honor’s show will work from an AV perspective. I’ll be heading into the venue at some point in the next few weeks to meet the rest of the team.

~

The second venue visit will take place this Friday (2nd September). For that, I’m aiming to have a synchronised version of the first act developed, as well as most of the other lighting cues for the show ready in a draft state. The synchronisation will still be a little rough, as the sound isn’t quite finished, but the visit will be useful for evaluating how much more work and/or detail the draft cues require; it’s difficult to get a sense of the intensity of the lighting when viewing it through the simulator.

We’re also meeting sometime this week (hopefully!) with Marty, the sound designer, to talk about synchronisation of the audio and lighting cues. From my experiments with the QLab system, as well as some further research, I’ve realised we could run the lighting and audio together from the Pharos Designer files, broken up into individual cues so that each audio/lighting pair is triggered as a single cue, which would definitely make things easier in terms of sync. Adding projection-mapped video to this may be a challenge, but at the very least we can build separate cues that send triggers to MadMapper, hopefully starting the videos in sync with the audio and lighting. The best-case scenario would be to render the MadMapper sessions down into video files we could run in the Pharos session alongside everything else, but I expect the side screens will need some fine-tuning on the night of the show, and we may not have the luxury of running with pre-rendered content.
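To make that triggering idea a little more concrete, here’s a rough Python sketch of the kind of message I have in mind, using the python-osc library to fire a MadMapper cue over OSC (which MadMapper can receive). The host, port, and cue address are all hypothetical placeholders; they’d need to match whatever is actually configured in MadMapper’s OSC settings.

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

MADMAPPER_HOST = "127.0.0.1"  # machine running MadMapper
MADMAPPER_PORT = 8010         # hypothetical: must match MadMapper's OSC input port

client = SimpleUDPClient(MADMAPPER_HOST, MADMAPPER_PORT)

def trigger_video_cue(cue_address: str) -> None:
    # Send a single OSC message to the given address; the address itself
    # is whatever MadMapper has been set up to listen for on that cue.
    client.send_message(cue_address, 1)

# e.g. fired at the moment the matching Pharos lighting/audio cue starts
trigger_video_cue("/cues/act1/intro")  # hypothetical cue address
```

The same message could just as easily come from QLab or a show-control script, which is what makes OSC a nice neutral bridge between the systems.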

~

Back to the subject of simulation: a few weeks ago I contacted the Pharos support team requesting details of the file format, hoping to pull things apart and build my own simulator of the venue that can load Designer files. Sadly, this has reached a dead end, as I received no further replies after my enquiry was passed to the development team. This is a little disappointing, but I’m determined to continue my research into the format, even if it means intercepting the UDP (I think?) communication sent from Designer to the system and using that to control the lighting in a 3D simulation, or even creating a Max/MSP environment that can procedurally generate lighting designs to send to the system via UDP, which was my initial, ambitious idea last year. Of course, I’d have to be quite aware of which commands I could safely send, but once that’s understood, I imagine it’d be easy to create something that allows for procedural lighting control (e.g. true audio-reactive lighting).
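As a first step toward that interception, the plan would simply be to log whatever datagrams Designer sends, so the protocol can be studied offline. Below is a minimal Python sketch of that capture step; it assumes the traffic really is UDP and is visible to the listening machine (e.g. broadcast, or via a mirrored switch port), and the port number is a placeholder until I’ve confirmed the real one in something like Wireshark.

```python
import socket

LISTEN_PORT = 38008  # hypothetical; whatever port Designer actually uses

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", LISTEN_PORT))  # listen on all interfaces

print(f"listening for UDP on port {LISTEN_PORT}...")
while True:
    data, (addr, port) = sock.recvfrom(65535)
    # hex-dump each packet so recurring patterns (headers, cue IDs,
    # channel values) can be spotted across captures
    print(f"{addr}:{port}  {len(data)} bytes  {data.hex(' ')}")
```

Comparing dumps from identical cues fired twice would be the first clue to the message structure, and from there to which commands are safe to send back.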

 
