Heya, I’ve been really obsessed with the timbre, structure and approach that Visible Cloaks use in their music. Wondering if anyone has any idea how to get a similar approach going within the Norns & Monome ecosystem? I think they use a lot of randomised arpeggiations with non-synced tempos, modulated with LFOs.
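For anyone wanting to experiment before reaching for a script: the “randomised arps with non-synced tempos, LFO-modulated” idea is easy to prototype in plain Python. This is just a sketch of the concept, not their actual setup — all names, tempos and ranges here are made up for illustration:

```python
import math
import random

def arp_voice(scale, bpm, lfo_rate_hz, duration_s, seed):
    """Generate (time, midi_note, velocity) events for one free-running
    arpeggio voice. Notes are picked at random from the scale; a sine
    LFO modulates velocity. All parameters are illustrative."""
    rng = random.Random(seed)
    step = 60.0 / bpm / 2          # eighth notes at this voice's own tempo
    events, t = [], 0.0
    while t < duration_s:
        note = rng.choice(scale)
        lfo = math.sin(2 * math.pi * lfo_rate_hz * t)   # ranges -1..1
        velocity = int(80 + 40 * lfo)                   # ranges 40..120
        events.append((round(t, 3), note, velocity))
        t += step
    return events

# Two voices with deliberately non-synced tempos (no shared clock):
pentatonic = [60, 62, 64, 67, 69]
voice_a = arp_voice(pentatonic, bpm=97, lfo_rate_hz=0.11, duration_s=4, seed=1)
voice_b = arp_voice([n + 12 for n in pentatonic], bpm=123, lfo_rate_hz=0.07,
                    duration_s=4, seed=2)
merged = sorted(voice_a + voice_b)   # interleave the two streams by time
```

Because the two voices never share a step length, their patterns phase against each other the way those non-synced arps seem to on the record.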
I love that album; I can’t offer any useful information, though I recall there is an interview floating around where they discuss a bit about their process. I believe they use some voice-to-MIDI pitch mapping in some of their songs…
Thanks, yeah, that article is great and full of stuff. Are there any Norns scripts that anyone can think of which might be good for non-synced arps, or a multitimbral approach like they mention?
I went to a workshop with them at Moogfest a little after this record came out. A lot of it was about how they would map pitch to other sounds (like someone talking), and then layer these sounds. It’s been a few years so it’s hard to remember more of the details. It was a great workshop; wish it was recorded somewhere.
EDIT: here was the blurb about the workshop
A hands-on workshop to explore MIDI translation, chance operations, and other methods of composing beyond the self. Visible Cloaks (Spencer Doran and Ryan Carlile) will share compositional techniques guiding their work and sketch out historical underpinnings for such compositions.
I only recently discovered this (gorgeous) record.
One thing I noticed is that, in comparison to a lot of ambient music, there is a lot of space. Tracks will fade into silence before reappearing.
There is also no ‘big reverb’ that you hear on a lot of ambient records, and not too many layers/instruments occurring at once (that I can hear anyway).
The space/silence seems to be a big part of the album’s character, and it enhances the tracks for me.
It’s definitely made me reappraise my own music (I have been very guilty of whacking a big reverb and layering stuff so there is always something happening).
I also get the impression they are using FM/Wavetable, rather than traditional analogue, for their synth patches.
You can definitely hear a lot of inspiration from Japanese Ambient music (the compilation they did on Light In The Attic is fantastic, and it shows their influences pretty clearly).
As mentioned above, they do a lot of audio-to-MIDI transformations in Ableton, particularly of improvised acoustic material. I also know that they use generative sequencers (Wotja, if I remember conversations correctly).
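Ableton does the pitch tracking internally, but the arithmetic at the heart of any audio-to-MIDI conversion is just mapping a detected frequency to the nearest MIDI note. A minimal sketch (the `detected` pitches here are invented for illustration):

```python
import math

def freq_to_midi(freq_hz, a4_hz=440.0):
    """Map a detected pitch in Hz to the nearest MIDI note number
    (69 = A4), the core of any audio-to-MIDI step."""
    return round(69 + 12 * math.log2(freq_hz / a4_hz))

# e.g. a voice gliding through these detected pitches...
detected = [220.0, 246.9, 261.6, 329.6]
notes = [freq_to_midi(f) for f in detected]   # -> [57, 59, 60, 64]
```

Once you have note numbers, you can quantise them to a scale, re-voice them on another instrument, or feed them to a generative sequencer, which seems to be roughly the move they describe.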
I recall reading an interview where they described a rather creative use of reverb, where sounds/passages would be fed through a sequence of varying spaces — to me, it really intensifies this feeling of plasticity/elasticity.
I hadn’t heard that in the album; I definitely need to listen more closely (and definitely on headphones).
That is really interesting, and it’s totally opposite to what I think of as the normal/traditional use of reverb (to blend/glue separate sources and give them the illusion of being placed in a uniform space).
Totally. For some reason, I’ve always had an allergic reaction to the application of reverb in my music, but that likely points to personal limitations more than anything. Of course, the main thing I always struggle with is precisely this: the evocation — realistic, experimental or otherwise — of space. Visible Cloaks have given me a lot to chew on in the last year, though.
Amazing track here if you need a good starting point, and you can hear very clearly how they translated the vocal audio into MIDI and then ended up riffing off that in various musical ways. Lead sounds appear to be physical modeling VSTs, as they mention in that article.
@Olivier they are just using Altiverb and running a reverb into another reverb.
The bulk of their older self-titled record was mostly Waldorf Blofeld, microKorg, DSI Tetra, hand mallets, Line 6 multi-fx and some other random studio bits.
Reassemblage onward is almost entirely Ableton. They still use Blofeld controlled by a Korg micro sampler for MIDI. A lot of GRM / IRCAM type VSTs for effects rather than generic filter / chorus types. Their sound sources are Kontakt or Ableton sample VSTs, or samples cut from old recordings. A lot of the voice generation is from a program called Infovox Voice Manager, which lets you generate text to speech with an accent. They also use Ableton’s Operator VST. AFAIK the physical modeling VSTs were used on the newest album, from the company Swam.
Parts of this could be done in hardware, but it’d involve a lot of convoluted signal chains and buying extra gear to do something that takes 15 seconds with a computer. It’s intentionally working with the strengths of modern computer music. While you can get complex with CV control, the specific ways in which Ableton automation and MIDI devices bounce off each other has a unique feel.
SOURCE: I’ve worked with them on events and one of them sold me weed for awhile.