Sway

Also, kinda wanna map an Arc to those 4 parameters…

Could you explain that a bit more? You want to see the values of the parameters on the Arc’s leds? Or somehow control those parameters via Arc?

Personally, I will probably work on modifying the script to allow the Arc to actually override/change the parameters (mostly just the X and Y axis values).

I understand that is probably counter to the original purpose of Sway, but for me, adding that level of control while still having the cycling fx on each axis could make for a compelling compositional tool.

Ah yes, of course. That makes a lot of sense. It’d be great if you could pull request your changes when you do as I don’t have an Arc and wouldn’t be able to test it out.

Arcify makes adding Arc control over parameters super easy. It’s only a matter of a handful of lines of code.

@beepboop I just put together a vanilla SC version tonight. It’s very rough around the edges, but it should work! It’s available as a branch of the norns version.

Oh man, I saw you play a few years back at the library in Santa Monica, and I’ve been thinking about this patch ever since! Don’t have a norns, and my SC chops aren’t up to it, but I’m excited to look into it!

Whoa! Awesome that you were there! The patch didn’t quite work for that performance, haha, but this should be relatively simple to get going. Let me know if you have any questions.

This is so cool, @carltesta! I’ve been experimenting with threshold settings for piano and other keyboard sound inputs. The processed output is often mysterious (but exciting).

Can you provide (or point me to) a little more info about each of the built-in processing types (reverb, delay, amp mod, freeze, pitchbend, filter, textural, and cascade)? Can I access their parameters within Sway, such as delay time, sample capture sizes for textural and cascade, etc., beyond the threshold settings that trigger them? If not, what are their default settings?

I’m especially eager to get a better understanding of the textural and cascade processes, because I’m finding they can spiral out of control in my hands. Thanks for this inspiring creation!

Hi @tnelson! So the immediate averaged values of density, clarity, and amplitude determine the values of parameters like delay time, feedback, etc. You can see how those linkages are connected here: http://carltesta.net/files/analysis_parameters.pdf

Starting at line 229 of Norns_Sway.sc (sway/Norns_Sway.sc at master · carltesta/sway · GitHub) you can see how the analysis values are mapped onto the various parameters. For example, amplitude values between 0 and 30 will change the frequency of amplitude modulation from 1 to 14 Hz, and the same amplitude values will change the reverb size from 0.3 to 1. You can flip the relationship by turning polarity on.
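
Under the hood each of these linkages is just a linear scaling of an analysis value into a parameter range. As a simplified sketch in SuperCollider (illustrative numbers, not the exact code from the script):

```
// Simplified sketch of one mapping (not the exact script code):
// scale an averaged amplitude value (0..30) into an AM frequency (1..14 Hz).
(
var amplitude = 15;       // averaged analysis value from the input
var polarity = false;     // turning polarity on flips the relationship
var amFreq = if(polarity,
    { amplitude.linlin(0, 30, 14, 1) },    // inverted mapping
    { amplitude.linlin(0, 30, 1, 14) });   // direct mapping
amFreq.postln; // -> 7.5
)
```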

I could make up a chart that lists all the default values, but that will take me some time. The best way to experiment is to turn grid analysis off, manually select a processing type, and then play in different ways (with density/sparseness, with pitch/noise, with loudness/quiet) to hear how the processing changes.

I’m happy to answer any other questions. Thanks for the interest.

Thanks, @carltesta!! This is quite a sonic Wonderland to explore!

This is a brilliant, brilliant algorithm.
Thanks for sharing it!
I understand pitch clarity analyses frequency and amplitude analyses volume…
But what is density analysing, exactly?

Thanks!

p.

Thanks for the kind words! Density is measuring the number of onsets in a given time period and then averaging over that length of time. It isn’t precise, but generally an onset triggers at the start of a sound. So a more frequent or dense stream of music (say 16th notes at 120 bpm) would register a higher density value than whole notes at 60 bpm. Definitely play with the density threshold, as different instruments have different definitions of “density”. For example, my level of perceived density as a bass player is generally lower than a pianist’s. :slight_smile:
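
To put rough numbers on that example:

```
// Onsets per second for the two cases above:
(120 / 60 * 4).postln;  // 16th notes at 120 bpm -> 8 onsets/sec
(60 / 60 / 4).postln;   // whole notes at 60 bpm -> 0.25 onsets/sec
```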

Thanks for your reply!
So density tracks ‘transients’, focusing on the attack of the sound (rather than the overall volume) over a specific time period?
Is that time period also based on tempo in bpm, or just ms?
Cheers

It’s not connected to BPM in any way. The density over the last 1000ms determines how the effects parameters change and the density over the last 30 seconds determines whether your position in the grid is moving up (above the density threshold) or down (below the density threshold).
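
So the grid movement is just a comparison against the threshold; something like this (variable names made up for illustration, not from the script):

```
// Hypothetical sketch of the grid-movement rule described above:
(
var densityLast30s = 4.2;     // onsets averaged over the long window
var densityThreshold = 3.0;   // the user-set threshold
var direction = if(densityLast30s > densityThreshold, { \up }, { \down });
direction.postln; // -> up
)
```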

Thanks!
Do you apply any envelopes (attack, decay…) to the ‘density triggers’?

Nope, you can see where the onset detection occurs at line 150 of the Norns_Sway.sc file (sway/Norns_Sway.sc at master · carltesta/sway · GitHub). It basically gets the Onset value and uses OnsetStatistics to track the onsets over a short window and a long window, set to 1 second and 30 seconds respectively. No audio processing happens here; it just puts values into buses that are used later on.
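
For a rough idea of the shape of that code, here’s a toy version (using Onsets plus leaky integrators as a stand-in for OnsetStatistics, so it approximates rather than reproduces what the script does):

```
// Toy sketch: detect onsets on a live input and keep a short- and a
// long-running estimate of onset density on control buses 0 and 1.
(
SynthDef(\onsetDensity, {
    var in = SoundIn.ar(0);
    var chain = FFT(LocalBuf(512), in);
    var onsets = Onsets.kr(chain, 0.3);       // impulse at each detected attack
    // leaky integrators as rough stand-ins for the 1s and 30s windows:
    var short = Integrator.kr(onsets, 0.999);
    var long = Integrator.kr(onsets, 0.99997);
    Out.kr(0, [short, long]);                 // values read elsewhere, no audio out
}).add;
)
```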

Thanks!
I’m curious to see how the statistical data modulates the sound without causing jumps or artefacts… although maybe that’s exactly when it sounds best! :slight_smile:

Is it also possible to override the modulation manually while playing live with your script?

I don’t have a norns (yet) ;(

But I’m now reading:
‘Acoustic and Virtual Space as a Dynamic Element of Music’
Pauline Oliveros (1995)

Thanks!

Thanks for the heads up on the article. I hadn’t come across it, despite Pauline Oliveros’s work being key to the conceptual framing of Sway.

Some folks have asked about overriding the modulation and having more direct control over the processing, but I’ve not yet been able to implement anything. Sway is really meant to be more autonomous: the musician performs while Sway runs automatically in the background, and the musician goes along with the changes it enacts. In this way it is really a combination of George Lewis’s Voyager and Pauline Oliveros’s Expanded Instrument System, because it is autonomous (like Voyager) but focused on live processing (like the EIS). Further, it is less like improvising with other musicians and more like improvising with a changing environment, which is why this article by Pauline Oliveros is such a useful document/framing.

All this being said, I am currently working on combining my Manifold and Sway patches to create a kind of hybrid, so that an instrumentalist can choose which types of processing are being used while retaining the audio analysis control over the effects parameters. So you could layer different effects on top of each other instead of fading in and out from one to the next. We’ll see where it goes! I’m not too good about keeping things updated and I need to make sure Sway is still working well on norns in its current state.

P.S. if you don’t have a norns you can still try it out using the desktop branch of Sway. You just have to install SuperCollider with the SC3 Plugins and the Singleton quark to get this going: GitHub - carltesta/sway at desktop
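
For the quark, the install is one line from within SuperCollider (the sc3-plugins are a separate download, following their own instructions):

```
// Install the Singleton quark, then recompile the class library
// (Language > Recompile Class Library in the SC IDE).
Quarks.install("Singleton");
```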

‘improvising with a changing environment’
i love that! :slight_smile:

Much appreciated for letting me know I can still try Sway on my computer.
I didn’t know that!

I do like your idea of extracting statistical data from the nuances of somebody’s playing to control the overall sound.

I like to revolve between the ideas of:
Agency > Indeterminacy > Aleatory > Chance operation

Nevertheless, one of my favourite things in Ableton is the ability to manually override any automation while playing live. That for me is very important because it lets me steer the sound in a particular direction at any time while performing.

How about using Sway to also automate the orbit of a sound in a spatial panner?

One of the things I liked most about the latest Pauline Oliveros EIS Max patch was that she used VBAP spatializer functions to mix the different delay lines live…

I mention this to go back to your idea of ‘improvising with a changing environment’.

Plus, if we add looping to the performance, then we can greatly improve the overall mix by not letting all the layers get stuck in the same position in the mix…
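
For what it’s worth, the orbit idea could look something like this in SuperCollider (a sketch using the core PanAz panner rather than VBAP, with a made-up density bus driving the orbit speed; not part of Sway):

```
// Hypothetical sketch: an analysis value (density) sets how fast a
// sound orbits a quad speaker ring. Not part of Sway itself.
(
SynthDef(\orbit, { |out = 0, densityBus = 0|
    var in = SoundIn.ar(0);
    var density = In.kr(densityBus, 1);
    var rate = density.linlin(0, 10, 0.05, 1);  // denser playing = faster orbit
    var pos = LFSaw.kr(rate);                   // -1..1 sweeps the full circle
    Out.ar(out, PanAz.ar(4, in, pos));
}).add;
)
```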

Cheers