Random strategies
This section may be a bit meandering. There are so many avenues to go down and side topics to explore when we talk about randomization.
Within music contexts we talk about topics like white, pink, blue, and grey noise. Within the context of jitter as it applies to granular synthesis, it isn’t always clear what randomization strategies are being deployed. Is it pure random (as much as such a thing is possible in digital systems) or is it weighted in some way? For example, the Tasty Chips GR-1 manual discusses ‘spray’, aka jitter, but doesn’t define the behavior or provide control over it. The wonderful Borderlands app is no less opaque. Madrona Labs Kaivo, one of my personal favorites, does give us control over a noise source and hence some jitter structures, but not all.
Before we go much further, let’s define two related concepts when it comes to randomization within our context. Jitter is typically the random variation generated at a sampling point. However, we have another important concept: time-based randomization. This can be expressed as the chance that a trigger should occur at a sampling point. These names may differ across implementations (for example, the aforementioned GR-1).
For our purposes, and to follow Kyma convention: density is the chance a grain will be triggered, and jitter is the amount of variation for a given parameter of the grain.
Jitter
While Kyma has a wealth of CapyTalk expressions for generating random values, we are going to stick with a generalized approach that translates to almost any environment and may be useful in understanding how other systems approach their implementations.
We’ll use white noise as our primary source. White noise generates an even distribution of values over a reasonably high number of sample points. If we only take 10 samples, the distribution may be skewed towards certain values, but as we add more and more samples that skew evens out. After 10,000 samples, any weighting will be extremely small.
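To make that concrete, here is a purely illustrative, environment-agnostic sketch (Python with NumPy assumed) binning uniform white noise into four equal-width bins: lumpy at 10 samples, nearly flat at 10,000.

```python
# Uniform "white" random values binned into 4 equal-width bins.
# At n=10 the bin shares are lumpy; at n=10,000 each approaches 0.25.
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 10_000):
    samples = rng.uniform(-1.0, 1.0, n)
    counts, _ = np.histogram(samples, bins=4, range=(-1.0, 1.0))
    print(n, counts / n)
```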
An easy way to create different weightings is to use a waveshaper, mix multiple noise sources together, raise the value to a power, or use the ‘saturator’ module. The ‘saturator’ module is a bit deceivingly named, to be honest. At its core is an S-shaped (polynomial) waveshaper that can be dynamically adjusted. Feed a triangle wave into a first-order saturator with a curve of 1 and you’ll receive a sine output.
Let’s take a bit of time to explore how white noise can be redistributed using the latter approach.
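The saturator itself is Kyma-specific, so as a stand-in here is a minimal Python sketch of the closely related raise-to-a-power approach from the list above; the exponents are arbitrary choices for illustration.

```python
# Reshape white noise by raising it to a power (sign preserved).
# Exponents above 1 push values toward 0 (many small values, a few
# large excursions); exponents below 1 push values toward the extremes.
import numpy as np

rng = np.random.default_rng(1)
noise = rng.uniform(-1.0, 1.0, 10_000)

toward_zero = np.sign(noise) * np.abs(noise) ** 3     # mass near zero
toward_edges = np.sign(noise) * np.abs(noise) ** 0.33  # mass near +/-1

print(np.mean(np.abs(toward_zero)))   # roughly 0.25
print(np.mean(np.abs(toward_edges)))  # roughly 0.75
```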
As you can see, via this approach we can skew our values so there are a lot of small values and fewer large values, or the inverse. When these values are sorted in order, they can make up a power-law or exponential curve depending on our approach. Power laws are extremely common in the natural world, and as such we recognize and react to them in very different ways than to purely random or sudden state changes.
Our brains love pattern matching. There is a vast amount of research, enough to occupy years of reading and endless exploration, on how pattern matching influences our perception of the world around us, how we make decisions, and ultimately our evolutionary responses. One of the fun aspects as it relates to granular synthesis, and to music in general, is that we can exploit the listeners’ need to pattern match. When we generate values out of order, the brain may or may not reconstruct the distribution curve depending on factors including the length of time between sample points, the number of data points, the deviation range, and what is being modulated. Variations within that expected model can engage the brain and create moments ranging from engagement and understanding to confusion, and emotional responses as varied as calm, delight, and anxiety.
Building on this concept, we can modulate or vary our distribution patterns over time to interesting effect. We can have the jitter slowly drift its center point or narrow its range over time. Or we can layer multiple overlapping distribution patterns and see what macro patterns emerge. Another suggestion is to feed the jitter into the ‘interpolateFrom: to:’ CapyTalk expression and modulate the From and To values, whether by hot parameters or via modulation. Using this method we can easily create a min/max for our jitter boundaries.
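For readers outside Kyma, here is a rough Python sketch of the same idea: a plain linear interpolation standing in for interpolateFrom:to:, with the window’s center and width modulated over time. The rates and ranges are arbitrary.

```python
# Map a raw 0..1 random value into a min/max window, then modulate the
# window itself: the centre drifts slowly while the spread narrows.
import math
import random

def interpolate(frm, to, t):
    # linear interpolation: t in 0..1 picks a point between frm and to
    return frm + (to - frm) * t

for step in range(201):
    time_s = step * 0.05
    centre = 0.5 + 0.3 * math.sin(2 * math.pi * 0.1 * time_s)  # slow drift
    width = 0.4 * (1.0 - step / 200)                           # narrowing
    jitter = interpolate(centre - width, centre + width, random.random())
    if step % 50 == 0:
        print(f"{time_s:4.1f}s centre={centre:+.2f} width={width:.2f} jitter={jitter:+.2f}")
```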
Time based variations and density
Earlier, we defined chance as the likelihood a trigger would happen at our sample point.
Our sample point could be:
- 44,100 times per second (global sample rate example)
- Triggered by an event
- Time based (every 100ms)
- Combinations of the above
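As a minimal sketch of the chance idea (Python, with an arbitrary 20% chance and the time-based 100 ms sample point from the list above):

```python
# Each tick (every 100 ms), draw a random number and fire a trigger
# if it falls under the chance.
import random

chance = 0.2    # 20% likelihood per sample point (illustrative)
tick_ms = 100   # time-based sample point

triggers = [random.random() < chance for _ in range(100)]  # 10 seconds
print(sum(triggers), "triggers in 10 s (expected around 20)")
```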
Density is typically a combination of the maximum number of grains possible and the likelihood they will be triggered.
If we have 8 grains, we may want them spread out so they don’t all trigger at once. We can take a couple of approaches here. The basic approach is to just set the chance low enough that multiple simultaneous triggers are unlikely; over sufficient time, the spacing will even out. At the other extreme is using grains in the context of pitch shifting. In that case, we offset the grains so they are out of phase: as one fades out, the next fades in, providing a smooth amplitude response.
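Here is a hedged sketch of that pitch-shifting case with two grains and triangular envelopes (both assumptions for illustration). Offset by half a grain period, the envelopes always sum to a constant.

```python
# Two grains whose triangular amplitude envelopes are 180 degrees out
# of phase: as one fades out the other fades in, so the sum stays at 1.
def tri_env(phase):
    # triangular envelope over phase 0..1: rises to 1 at 0.5, back to 0
    return 1.0 - abs(2.0 * (phase % 1.0) - 1.0)

grain_period = 0.05  # 50 ms grains (arbitrary)
for i in range(5):
    t = i * 0.01
    a = tri_env(t / grain_period)        # grain 1
    b = tri_env(t / grain_period + 0.5)  # grain 2, half a period later
    print(f"t={t:.2f}s  a={a:.2f}  b={b:.2f}  sum={a + b:.2f}")
```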
There are other times where we may want to cluster our triggers for rhythmic or other effects. There is a handy set of CapyTalk expressions called randomTrigger, randomExpTrigger, normalTrigger, and BrownianTrigger. I highly recommend exploring these expressions; they are useful in a number of situations and tend to be my default choices as they are quick, easy to use, and efficient.
But in the spirit of building some examples from scratch, let’s try some other approaches.
A simple structure is a noise source feeding a threshold detector to create the trigger. Depending on the density we want, we have a few approaches to slow down the rate of triggers: we can either scale everything down so the chance of a grain triggering on each clock cycle is very small, or we can slow down the clock. The latter approach is compelling if we want to work with longer grains or want them to trigger on a clock division, for example, the probability that a grain will fire on each 16th note.
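A rough Python sketch of the slowed-clock variant, with the clock divided down to 16th notes at an assumed 120 BPM and an arbitrary threshold:

```python
# Noise into a threshold detector, sampled only on clock ticks.
# Triggers land on 16th-note divisions with probability set by the
# threshold (here roughly 10% per tick).
import random

threshold = 0.9             # higher threshold = sparser triggers
bpm = 120
sixteenth_s = 60 / bpm / 4  # clock division: one sample per 16th note

events = []
for tick in range(64):      # 4 bars of 16th notes
    if random.uniform(0.0, 1.0) > threshold:
        events.append(round(tick * sixteenth_s, 3))
print(events)
```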
To create clusters of triggers, we can dynamically modify the threshold. For example, add an LFO to the threshold going from 1 (no triggers) down to 0. To see how this clusters triggers, let’s slow the sample rate down to one sample per 50 ms. The triggers cluster around center points (the top of the LFO) every 6 seconds (the period of the triangle LFO). By shaping the noise feeding the detector, we can create different distribution patterns using the methods outlined above in the jitter section.
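To see the clustering outside Kyma, here is a sketch using the same numbers as above (50 ms sample interval, 6-second triangle LFO); everything else is illustrative.

```python
# A slow triangle LFO sweeps the threshold from 1 (no triggers) down
# to 0 and back, so triggers bunch up around the LFO's peaks.
import random

def triangle(t, period):
    phase = (t / period) % 1.0
    return 1.0 - abs(2.0 * phase - 1.0)  # 0 -> 1 -> 0 over one period

period_s = 6.0   # LFO period
dt = 0.05        # one sample every 50 ms

for i in range(int(12.0 / dt)):              # two LFO cycles
    t = i * dt
    threshold = 1.0 - triangle(t, period_s)  # dips to 0 mid-cycle
    if random.random() > threshold:
        print(f"trigger at {t:.2f}s")        # clusters near 3 s and 9 s
```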
There are other approaches to explore here. One easy example is to gate the noise with a pattern generator (step sequencer, logic gates, Euclidean, etc.). Add a gate duration so you can control how long the gate is open, then pass that through to the threshold. You can use this to create a swing feel or simply a gated burst of random triggers. Other examples include random ratchet clusters or oddities like controlling trigger density with an envelope follower.
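A sketch of that gated-burst idea, with a hypothetical 8-step pattern, gate length, and threshold (none of these values come from the text):

```python
# A step-sequencer pattern opens a gate, a gate duration keeps it open,
# and only while the gate is open can the noise-vs-threshold test fire.
import random

pattern = [1, 0, 0, 1, 0, 0, 1, 0]  # gate opens on steps 0, 3, 6
gate_len = 2                         # gate stays open for 2 steps
threshold = 0.5

open_until = -1
for step in range(16):
    if pattern[step % len(pattern)]:
        open_until = step + gate_len
    if step < open_until and random.random() > threshold:
        print("burst trigger at step", step)
```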
Correlated random changes
If you want to correlate your changes so that each new value is a deviation from the prior value rather than completely random, you can use a feedback loop and interpolate it with the input. Here is a rough example modeled on the Buchla 265 stored random voltage with correlation:
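In place of a signal-flow diagram, here is a minimal Python sketch of the feedback-and-interpolate idea (a simplification, not the actual Buchla 265 circuit):

```python
# Each new output interpolates between the previous output (feedback)
# and a fresh random value. correlation=1 holds the old value forever;
# correlation=0 is fully random.
import random

def correlated_random(n, correlation):
    out, prev = [], 0.0
    for _ in range(n):
        new = random.uniform(-1.0, 1.0)
        prev = correlation * prev + (1.0 - correlation) * new  # feedback
        out.append(round(prev, 3))
    return out

print(correlated_random(8, 0.9))  # small steps: a random-walk feel
print(correlated_random(8, 0.0))  # uncorrelated white values
```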
This is very much Brownian motion or a random walk, and there are examples within the prototype library using a lossy integrator, as well as a pre-existing CapyTalk expression for this. I recommend those for most implementations within Kyma. The feedback method may be useful for creating correlated signals or other types of governor systems when those other methods aren’t appropriate.
Wrap-up
In the interest of time and getting this part published, I am going to stop here. We didn’t even get into topics like hotpink noise, chaotic maps, random index distributions, or creating interdependent systems where one set of random values influences other parts of the structure. If there is one takeaway I want to impress from this section, it is to encourage you to explore beyond the standard random distribution choices on offer.
I may have glossed over a lot here, so feel free to ask questions. I’d love to hear your thoughts on your different approaches to building randomization into your structures.