Sonic Sanctuary 01 // Live AV Set

Earlier this year my friend LayerZero, who runs local monthly IDM listening sessions through Bay Area Braindance, invited me to play a live set at a party he was hosting for us fellow electronic music enthusiasts. My perfectionist nature had me hesitant at first, but eventually I entertained the idea and started browsing sketches on my Digitakt to see if anything would work as a suitable starting point.

Initially I was thinking of putting together a relatively short set – maybe 20 minutes or so – but I had so much fun exploring the possibilities of live performance on the Elektron boxes that it ended up closer to 50 minutes. I also found the process to be an effective way of bringing together half-baked tracks and ideas that I felt had potential but had been struggling to arrange. The need to have each track flow into the next, combined with the limitations of a dawless setup and a fixed performance date, provided just enough constraints to keep me moving forward at a steady pace.

My initial anxieties about performing live (what if I screw up noticeably? what if they think it’s lame, or even worse, boring?) had me concerned that more controls would mean more potential for error and confusion in a realtime setting, so I thought I’d just stick to the Digitakt for maximum flexibility in sound palette. But my starting track – the first sketch that seemed suitable for a generally braindance-themed party – was an experiment in loading all 72 factory single-cycle waveforms and randomizing them per step, leaving me with just 56 sample slots for the remainder of the set. So eventually I brought in the Syntakt for more variation, figuring that for each track I’d favor one or the other as the primary sound source. But by the time I got to the end I found myself using both of them fully, sacrificing precise memorization of which sound was on which track for more sonic variation.

Additionally, I thought it would be nice to incorporate my Korg minilogue here and there as a way to break some longer lead sequences out of the limitations of the Digitakt and Syntakt’s 64-step patterns. In practice, though, I found it difficult to work patches into the “sweet spot,” and I ran into some other technical issues with MIDI program changes. That was enough motivation to spring for an Arturia MiniFreak, which became an essential part of the set and has thus far proven very enjoyable both to play and to program patches for.

Despite the number of times I practiced the whole thing, including a couple of dry runs, there were some unanticipated hiccups on the day of. We had some gain/clipping issues at the beginning, so the first track loops much longer than intended while we sorted that out. I also didn’t account for people hanging around and talking in the room where I was performing, which was lovely for the party vibe but also meant nudging volume levels I’d carefully balanced beforehand just to hear the changes I’d planned as part of the set. And while I wasn’t too nervous at the time, I still don’t think it was my best take overall. I’ve been mulling over the idea of doing a “studio” recording of it with all of the inputs separated and giving it a proper mix with additional risers, impacts and effects, but that’s a lot of work for something that’s essentially already there, and it would perhaps detract from the spirit of a live set, mistakes and all.

So for now I’m just considering this a prototype for a new method of producing; as I mentioned, the limitations were just enough to keep me motivated and moving forward at a steady pace, circumventing so many of the mental blocks I seem to inevitably throw in front of my own creative process time and time again. I think it could work well as a basis for some initial constraints and framework, which I’d follow up by recording everything into Live for the additional polish I generally aim for.

As much fun as I had with it, I do think it lacks cohesion, jumping somewhat randomly between IDM, techno, breaks and drum&bass. Most of my albums are similarly stylistically diverse, but I’d like my next attempt to start with a broader idea, motif or overall arc and stick with it.

Visuals

Beyond hosting the party and contributing a mind-melting DJ set of his own, LayerZero brought the experience to the next level by setting up projectors with live, interactive, synced visuals via Resolume (not to mention recording the whole thing on multiple cameras and putting together the final video!). When I was pretty much done writing the set, we collaborated on layering VJ loops from his library to accompany each section of it, which he tweaked in realtime while I played.

At some point I was struck by this project’s similarity to my inspirational roots, the PC demoscene, and decided to create some visual loops of my own that would sync tightly to elements of my set. These were, of course, all animated in After Effects rather than generated algorithmically like a proper demo. Nevertheless, an audiovisual project like this has long been a goal of mine, and I’m grateful to have a friend provide not only the motivation and encouragement but also the technical means to glue it all together and make it happen.

I’m going to take a moment here to lament the fact that in 2023 there’s no straightforward way to put a series of 60fps video loops on WordPress, though I’m sure if it helped sell crap from Alibaba it’d be integrated into the next Chromium nightly build. But I digress. Here’s a series of clips as 30fps dithered GIFs using decades-old compression technology.

Some of these are simpler than others as they’re intended to be layered with the existing library loops that we’d selected. Overall I really enjoyed coming up with fairly abstract ideas to visualize aspects of the music I created, and if you watch the YouTube video you’ll probably see where he mixed them in. Ideally (maybe next time?) they’d be 100% original loops, but that is indeed an intimidating chunk of work.

For the fourth clip from the top, I’d be remiss not to mention that I generated those cyberpunk-headphone-girl faces using Stable Diffusion. While I certainly wouldn’t consider myself a proponent of “AI art,” I do find myself conflicted about this controversial topic. On one hand, as a professional artist, I find the ability to effortlessly create beautiful pictures just by typing out what one wants unnerving, especially in the hands of our consistently inhumane corporate overlords. On the other, the application of technology to art has been an inspiration for my career and something I’ve always enjoyed exploring. So from this middle position I’ve been contemplating how, and if, I might ever utilize this unprecedented and powerful new technology in a way that seems ethically sound. This particular application seemed fine to me – small audience, non-commercial, a supporting piece of the whole rather than the end result itself, and not something I could reasonably create on my own without committing a significant amount of time and money.

Anyway, all in all I’d say this project ended up fulfilling some creative goals I’ve had for many years, and even though it didn’t come out perfectly, it was full of invaluable learning experiences. And once again I want to express my gratitude to LayerZero for providing the means, venue, impetus and encouragement to explore some new creative territory.
