eleventh-hour exploits engage!!

put some basic architecture to the page late last night, but am beginning to rethink in light of @infovore—looks amazing!

1 Like

this project could actually go really quickly, if tweaking drone parameters didn’t create such a trance state. i keep catching myself frozen, listening.

keep that tea nearby @autreland

8 Likes

some good old gaussian noise I just dug up from an old project. should come in handy. tends to be more surprising and interesting than math.random() - I use it a lot for visual stuff

I think I will use it to randomly select neighbors of a given collection index

function randomGaussian()
    local u = 1 - math.random()
    local v = 1 - math.random()
    return math.sqrt( -2.0 * math.log( u ) ) * math.cos( 2.0 * math.pi * v )
end
4 Likes

10 Likes

just a note on visuals: they were not meant to be showing off! I just find myself coming to norns and, when stuck, thinking about visual interface, given there’s a screen, and I greatly like the greyscaliness…

So I wondered what would happen if a) I made the worlds visual and b) made the visuals meaningful, and whilst the planetary bodies were the first thing I did, the thing always at the back of my head was using something like Star Guitar as a sample sequencer.

I considered the y-axis of the sample being significant, but am not sure the sample rate slewing around is the best idea. The drawing code is, to coin a phrase, dumb as a sack of hammers. I will note I’ve not done much very sophisticated with the sound, but this is my first softcut script. Also, my plan was to fit the whole thing into an afternoon, which it just about did.

I do wonder if it should have more control, but I’ve had some nice sessions with it. I shall record one to TAPE as requested.

2 Likes

OK, here is a take of how planetary can be ‘played’:

Judicious use of randomising items before fading them up, and a combination of filter/volume on individual tracks, as well as shifting focus around the worlds, makes it a fun tool for making some collage. No post-processing beyond a little mastering.

(also there’s actually a significant bug in the version of the script I filmed that makes it somewhat underwhelming; the audio version is correct.)

4 Likes

i hate that i’m turning into such a pedant on here but i couldn’t let this one go.

there is a long history of approximations of the gaussian aka normal aka bell curve distribution. (coming up with them is kind of a popular mathematical pastime.)

this is a famous one based on the Box-Muller transform. it has been implemented a lot in hardware.
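for reference, the full transform actually yields two independent samples per pair of uniforms (the snippet above keeps only the cosine half). a minimal sketch, in python just for brevity, using the same 1-u trick to keep the log finite:

```python
import math
import random

def box_muller():
    """Full Box-Muller transform: two uniforms in, two
    independent standard-normal samples out."""
    u = 1.0 - random.random()  # shift [0, 1) to (0, 1] so log(u) is finite
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)
```

if you need samples one at a time, it's common to cache the sine half and return it on the next call.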

your implementation as stated has a few issues. the biggest one is that you can get unlucky and end up with log(0), which in the lua math lib is -inf, so :skull_and_crossbones: :exploding_head:. (it's of course worse in other environments where you will just crash.) (and to be fair, this issue is also present in the C code given in the Wikipedia page on the B-M transform.)

nevermind, my bad. math.random() has exclusive upper bound.

the more minor issues are (1) efficiency and (2) no explicit control over variance/range.

if you do want to add something to control the range, it may help to know that it happens to be [-3, 3] as stated. here’s a surface plot of taking u and v independently in [0, 1]. it’s nice because it clarifies the cos/log relationship in the B-M transform.

(surface plot: cos_log_range)

anyways, i would do the following:
- don’t bother with 1-u, 1-v, it does nothing
- restrict u and v to be nonzero
  (never mind, i am dumb.)
- scale output by 1/3 (or whatever)

somewhere i have a collection of other normal approximations implemented in python. i’ll see if i can dig it up. one of my favorites is based on gamma functions (theory is here but it is crunchy math.)

another common and crude way to do it is to just take the mean of many uniform distributions. (“average of dice.”) this approximation gets better as you average more, but for creative purposes you can use a pretty small number of dice and a pretty crappy PRNG and still have fine results. this is great if you happen to have an existing source of uniform randomness.
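a sketch of the dice idea in python, generalized to N dice: the sum of N uniforms has variance N/12, so centring and rescaling by sqrt(12/N) gives unit variance for any dice count.

```python
import random

def dice_normal(n_dice=3):
    """Approximate a standard normal by summing uniform 'dice'.
    The sum of n_dice uniforms has mean n_dice/2 and variance
    n_dice/12, so centre and rescale to get mean 0, variance 1."""
    s = sum(random.random() for _ in range(n_dice))
    return (s - n_dice / 2.0) * (12.0 / n_dice) ** 0.5
```

with n_dice = 3 the scale factor is exactly 2, which is where the `x*2 - 3` form below comes from.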

another classic transform closely related to box-muller is marsaglia’s polar method. (this is good if you happen to have a fast ln() approximation handy.)
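a sketch of the polar method in python, in case it’s useful — it rejection-samples a point in the unit disc and so avoids the trig calls entirely, at the cost of retrying roughly 21% of candidate pairs:

```python
import math
import random

def polar_normal():
    """Marsaglia's polar method: rejection-sample a point in the
    unit disc, then transform. Like Box-Muller it yields two
    independent standard-normal samples, but without cos/sin."""
    while True:
        x = 2.0 * random.random() - 1.0
        y = 2.0 * random.random() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:  # reject points outside the disc (and the origin)
            f = math.sqrt(-2.0 * math.log(s) / s)
            return x * f, y * f
```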

marsaglia also came up with a brilliant implementation known as the ziggurat algorithm. it is super efficient at runtime but requires some setup.

on topic, i will try to put a script up later this evening, and i am online in case anyone has questions about softcut.

4 Likes

shaky research suggests that the 1 - u / 1 - v bits remove the zero probability, but I shall find out if something randomly explodes

oh, i get it, duh. in both lua and javascript, math.random() with no arguments gives values in the range [0, 1) (non-inclusive upper bound). so 1 - math.random() excludes zero, which is what you want. my bad.


ok @andrew, closing the loop on this dumb tangent, i found some things.

TL;DR:
i’d recommend using this “dice” method instead, which is a bit more performant and has predictable output bounds:

-- output has unit variance and is bounded to [-D, +D]
function randomGaussianDice()
   local D = 3
   local x = 0
   for i=1,D do
      x = x + math.random()
   end
   return x*2 - D
end

  • the choice of scaling in your original code isn’t arbitrary and is useful in a stats context: it gives a standard variance of 1. (that’s actually an interesting result of the approximation.) in a musical context, you are probably interested in both the bounds and the variance.
  • but the downside is that the output bounds do indeed approach [+,-] infinity (i made a mistake earlier.) that also makes it potentially useful for research purposes but probably not for this context.

(here’s why:)

infs are avoided because of the non-inclusive upper bound of random(), but you can still blow up arbitrarily large. one reason i don’t love this technique is that we rely on the random() implementation being non-inclusive, and on whatever detail of that restriction, to determine the output range. as u approaches 0, log(u) approaches -inf (which causes the output to approach +/-inf with the other random term.) how big it actually gets is entirely implementation-defined! (max magnitude is something like sqrt(-2·ln(ε)) where ε is the smallest value u can take. or maybe ε is 1 / 0x7ffffffff… if math.random() uses integer rand and scales it… who knows!)
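to put rough numbers on “implementation-defined”: assuming a typical double-precision RNG with a 2^-53 grid (my assumption, not verified against lua’s implementation), and taking the smallest positive normal double as the absolute floor, the magnitude term sqrt(-2 ln u) tops out surprisingly low:

```python
import math
import sys

# how big can sqrt(-2 ln u) actually get? depends entirely on
# how small u = 1 - random() can be.

# if the RNG grid is 2^-53 (typical for double-precision PRNGs):
grid = 2.0 ** -53
print(round(math.sqrt(-2.0 * math.log(grid)), 2))                # → 8.57

# absolute floor: smallest positive normal double:
print(round(math.sqrt(-2.0 * math.log(sys.float_info.min)), 2))  # → 37.64
```

so “arbitrarily large” in practice means single-digit-to-tens of standard deviations, not literal infinity — but still wildly outside [-3, 3].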

  • i did some stats and benchmarks and found that with the “dice” method in lua, using only 3 dice, you get about the same accuracy (didn’t really bother with confidence intervals, but they are both within 0.1% of unit variance) and about a 20% speed boost, using math.random() to evaluate the dice. (if instead you prefer a better approximation and ~equal performance, increase the dice count to 4 and rescale accordingly.)

  • with the “dice” method, given normalized dice (in [0,1], inclusive or exclusive doesn’t much matter), a sum of N dice has variance N/12. with N = 3 that’s 1/4, so multiplying by 2 and subtracting N gives unit variance, and also restricts our output bounds predictably to [-3, +3] (or i guess [-3, +3)?). for other dice counts the right scale factor is sqrt(12/N).
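the variance bookkeeping can be checked empirically — a quick python sanity check of the 3-dice case (Var of one uniform die is 1/12, so the sum of 3 has variance 1/4, and scaling by 2 multiplies variance by 4, giving exactly 1):

```python
import random

# check the variance bookkeeping for the 3-dice method:
# Var(uniform[0,1)) = 1/12, so Var(sum of 3) = 3/12 = 1/4,
# and scaling by 2 multiplies variance by 4 -> exactly 1.
random.seed(1)
n = 200000
samples = [2.0 * sum(random.random() for _ in range(3)) - 3.0
           for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(abs(var - 1.0) < 0.02)  # → True
```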

  • i also implemented the dice method with a custom, super simple PRNG (linear congruential.) this distribution is not statistically sound enough for, say, cryptography, but for music it’s fine and the final variance is just as good.
    but somewhat weirdly, the performance was far worse in lua (in C it is far better; i guess this is about lua’s odd treatment of integer types under the hood, and the fact that the library just calls down to C’s rand() from <stdlib.h>.)

    this is too bad, because 1) the dice performance boost scales with cheaper random(), 2) i think it is nice to be able to more deliberately manipulate the PRNG guts - re-seeding of course, but also being able to deliberately reduce the randomness in weird ways.

in any case, the not-very-performant lcg.lua:

lcg = {}
lcg.__index = lcg

lcg.new = function(seed, a, c)
   local o = {
      x = 1,
      a = 1597334677,
      c = 1289706101
   }

   if seed ~= nil then
      o.x = seed
   end
   if a ~= nil then
      o.a = a
   end
   if c ~= nil then
      o.c = c
   end
   setmetatable(o, lcg)
   return o
end

function lcg:seed(x) self.x = x end

-- setters are named set_a/set_c so they aren't shadowed by the
-- instance fields `a` and `c` (which __index would never reach)
function lcg:set_a(a) self.a = a end

function lcg:set_c(c) self.c = c end

function lcg:next()
   self.x = (self.x * self.a + self.c) & 0x7fffffff
   return self.x / 0x7fffffff
end

return lcg

i am wondering if it’s worth it to add this and some other related pseudorandom algos to norns via a C library.

if there is appetite for more random flavors/distros in the norns environment, i would be happy to add them (pink, brown, perlin, poisson, skewed gaussian, chaotic maps, etc?) and i would take some care to make them performant, accurate, usable etc. (not my first rodeo here.)

and the speed test:

socket = require 'socket' -- just for `gettime()`

function measure(name, func, n)
   n = n or 10000000
   print("measuring "..name.."...")
   local t = socket.gettime()
   for i=1,n do
      func()
   end
   local elapsed = socket.gettime() - t
   print("elapsed: ".. elapsed .. " seconds")
end


-- output has unit variance in (-inf, +inf)!
function randomGaussianBoxMuller()
    local u = 1 - math.random()
    local v = 1 - math.random()
    return math.sqrt( -2.0 * math.log(u)) * math.cos(2.0 * math.pi * v)
end


-- output has unit variance in [-D, +D]
function randomGaussianDice()
   local D = 3
   local x = 0
   for i=1,D do
      x = x + math.random()
   end
   return x*2 - D
end

x = 0
measure( "B-M", function() x = randomGaussianBoxMuller() end)
measure( "dice", function() x = randomGaussianDice() end)

and for completeness, the stats analysis in python, with result histograms:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats


def gauss_sqrt_log_cos(n):
    u = 1 - np.random.rand(n)
    v = 1 - np.random.rand(n)
    x = np.sqrt(-2 * np.log(u)) * np.cos(2 * np.pi * v)
    return x

# brute-force: sum of uniform "dice", centred and scaled
def gauss_dice(n):
    d = 3
    x = np.random.rand(d, n)
    x = np.sum(x, axis=0)
    return x * 2 - d

# for comparison
def gauss_builtin(n):
    return np.random.normal(0, 1, n)

def evaluate(func, n=1000000, b=100):
    name = func.__name__
    x = func(n)
    desc = stats.describe(x)
    print("{}: {}".format(name, desc))
    plt.figure()
    plt.hist(x, b)
    plt.title('{}, variance = {}'.format(name, desc.variance))
    plt.show()
    # plt.savefig('{}.png'.format(name))
    print("")

evaluate(gauss_sqrt_log_cos)
#evaluate(gauss_builtin, 10000000)
evaluate(gauss_dice)

(result histograms: gauss_dice, gauss_sqrt_log_cos)

(note that the difference in the width of the bell curve is due to the fact that the dice method is predictably bounded and the direct B-M method is not.)


ok. now my brain will let me do something else.

13 Likes

indeed I have liked infinite bounds before for visual stuff but yes hard bounding is much more useful here, thanks !

some dice:


looping through a random array of randoms to get the repetitions. probably cooler ways to make things musical but I also want to maybe sleep tonight : )

5 Likes

Yes, please! I rely on randomness a lot when I’m out of ideas and this would be lovely to play around with.

5 Likes

a snapshot—i wonder if it still qualifies as drone-like . . .

@tehn great minds

6 Likes

PR’d & goodnight

the visuals were all by accident and I’ll leave them a surprise :^)

some late deets now that I'm awake

I stuck with the giant database of loops from the seeker script posted above. evolve and density were the interesting / very time consuming bits (this thing is 600 lines of utter chaos):

evolve chooses a random loop from a collection of loops (start/end/rate/sample) unique to each world & voice - so the sound world is new every time but also pleasantly familiar + distinct from every other wrld

I thought about density as a probability density - so at 0 probability is condensed to a single loop per voice but as you turn it up it starts to let in numbers from a looping random sequence of dice roll value arrays. different numbers there are mapped to swap loops + offset volume, pan & filter cutoffs. auto cheat codes style !

then E1 & E2 are just manual global level & filter freq

the visuals were very last minute but somehow ended up cool & fitting ? in wrld 1 I was trying to get a rotating hexagon mapped to the density values - but I goofed and somehow rotated the sides of the hexagon independently which was good news

for the other two worlds I just randomly swapped out the screen.line() command with arc and curve, and those somehow ended up cooler and fitting to their respective sound wrlds

but damn was this a way cool exercise and an important push to approach norns as a composition tool rather than gear. rlly happy with what I made ! definite juice for a future standalone script

9 Likes

PR’d and tape attached below:

Postscript.
I initially was trying to create a granular drone controlled by “bytebeat” formula.
The sound slices are more macro than micro so I missed the mark in the granular part, but I think the results are more sonically pleasing. I think there is more to explore in this idea so I may return to it in the future.

edit: moved some text to my script intro

6 Likes

I didn’t get near finishing anything, unfortunately. I got as far as finding some nice loops and, like @tehn mentioned, got distracted just listening to them ha.

Looking forward to studying what everyone else did though.

link

I think I’ve submitted a PR correctly and above is dropbox link

3 Likes

Hello worldbuilders.

Here’s my result.

I built this script for non-musicians: let them put in sounds and hear what happens. It gave me a lot of interesting results. The audio file needs some treatment to tame the loudness though; this is what I got from the TAPE.

Edit: uploaded mastered version

4 Likes

holy cow it is super easy to get lost in these samples.
viola + environmental sounds = many joyful hours

(longest single track i’ve ever made!)

some notes on the script
  • K3 cycles through samples to load into voice 1 (muted in the main mix)
  • voice 1 is piped to three separate read/write voices, which are pretty much delays in this scenario
  • K2 switches voices and sets new random loop points for voice 1, which is fed into the selected voice
  • E1’s volume adjusts the level of the selected voice
  • E2’s brightness adjusts the presence of the selected voice’s lo-pass filter
  • E3’s density affects the selected voice’s filter cutoff, the presence of the hi-pass filter, and feedback amount

these are incredible

12 Likes

request submitted—tape enclosed here!

both are titled ‘interpoint.’

it was great fun—looking forward to the next one!

Just PRed :slight_smile:

I used the seeker script as a starting point, and as I lacked time I was not able to work on nice visuals as requested by the first post… But thought I would share my script anyway.

‘scanner’. 3 softcut voices that can scan through the loaded audio file, controlling loop start and length with encoders 1 and 2, as well as playback rate/direction with K2.

A sample:

  • E1: volume of selected voice (starts at 0 for all 3 voices).
  • E2: brightness (loop start point).
  • E3: density (loop length).
  • K2: evolve (select from available playback rates).
  • K3: change worlds (change voice to edit).

I learnt a lot writing the script, looking forward to the next one!

EDIT: uploaded a reasonably sized audio file.

9 Likes