(norns/circle/02) rhythm and sound

a gathering— a group study where participants each create a script according to a prompt. scripts are submitted by an established deadline. discussion and help will be provided to facilitate completion of scripts.

this series will focus on softcut. for an introduction, see softcut studies. for general norns scripting, see norns studies. this edition will also extensively use clocks.

upon completion we will release the pack of scripts as a collection, alongside a bandcamp compilation of captures from each script. see the collected past gatherings.

we’ll be here throughout the process to assist everyone in getting scripts working. please ask questions— this is an ideal time to learn.

future prompts will have different parameters. don’t go overboard building out your script with extra functionality— try to stay close to the prompt.


build a rhythmic, generative sound machine using provided tonal and percussive samples. provide your own textural/field/environmental audio clip to color your creation.

  • 3 samples (two are provided: one tonal, one percussive; you provide a textural sample)
  • 3 voices (keywords: clouded, sharp, clear)
  • 3 clocks per voice which modify playback quality (keywords: location, elevation, presence)
  • no USB controllers, no audio input, no engines
  • map
    • E1 volume
    • E2 velocity (overall tempo)
    • E3 regularity (simple rhythm vs. complex rhythm)
    • K2 shuffle
    • K3 draw
  • visual accompaniment (keywords: data collection, rewritten history, cultural amnesia)

parameters are subject to interpretation. “velocity” could mean playback speed. “shuffle” could mean preserve rhythm and change parameter mappings. “draw” could mean progress to a new sequence, or randomize a quality, letting something new emerge.


  • your included sound clip should be 10 seconds maximum
  • feel free to use additional softcut voices for sonic embellishment, but keep it focused
  • think both about short and long time: short syncopation, phrase-length modulation, microsound, breath
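the 3-voices-by-3-clocks structure of the prompt could be sketched like this — a hypothetical skeleton, not part of the provided template; the voice/quality names come from the prompt keywords, but `modulate` and the sync interval are placeholders:

```lua
-- hypothetical skeleton: 3 voices, each driven by 3 clocks
-- (location / elevation / presence)
local voices = {"clouded", "sharp", "clear"}
local qualities = {"location", "elevation", "presence"}

function modulate(v, q)
  -- change some playback quality of voice v here
  -- (softcut rate, position, filter, level, ...)
end

function init()
  for v = 1, #voices do
    for _, q in ipairs(qualities) do
      clock.run(function()
        while true do
          clock.sync(1) -- resync every beat; vary this per quality for polyrhythm
          modulate(v, q)
        end
      end)
    end
  end
end
```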

deadline: rolling/postponed.

the last couple weeks have been nothing resembling normal. we’d like to see more completed contributions to this project, though not with added pressure. rather than simply extend the deadline we’re going to leave it open. eventually we will close it ahead of a new circle.

when you’re ready:

  • submit your script by creating a PR to github: https://github.com/monome-community/nc02-rs (we will help with instructions when the time comes, or feel free to submit early)
  • capture a short video and post it here. consider using the TAPE feature to get direct audio (which will mean an extra video editing step— ask for help if needed!)

to get started, go to maiden’s project manager, refresh the collection, and install nc02-rs. note, this will take some time to download as it includes some audio files.

if you need a hint getting started, check out innominate.lua.

livestream happened! check it out



apologies for a probably dumb question, but how do i manage 3 samples with only 2 buffers available in softcut?

is it good practice to just load them all in one buffer using audio.file_info for measuring lengths?

Yeah. You only have two buffers, but you have more play/record heads - six, iirc. So you can have one head looking at one section of the tape, another looking at a second, and then load samples into those areas of the buffer.
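a sketch of that idea, assuming the three samples have already been written into the buffers — the voice numbers, regions, and durations here are made-up examples:

```lua
-- point three softcut voices at different tape regions.
-- voices 1 and 2 share buffer 1; voice 3 reads buffer 2.
for v = 1, 3 do
  softcut.enable(v, 1)
  softcut.play(v, 1)
  softcut.loop(v, 1)
  softcut.rate(v, 1.0)
  softcut.level(v, 1.0)
end
softcut.buffer(1, 1) softcut.loop_start(1, 0)  softcut.loop_end(1, 10)  -- tonal
softcut.buffer(2, 1) softcut.loop_start(2, 30) softcut.loop_end(2, 40)  -- percussive
softcut.buffer(3, 2) softcut.loop_start(3, 0)  softcut.loop_end(3, 10)  -- textural
```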


How about an example of this?

i made a voice / buffer management lib if anyone’s into that

kinda handles the sharing buffers thing. but not hard to do it yrself either


check out https://monome.org/docs/norns/softcut/#1-basic-playback

clip loading has an argument for what position in the buffer to put it:

softcut.buffer_read_mono(file, start_src, start_dst, dur, ch_src, ch_dst)

start_dst means where in the buffer to insert the file.

so, for example, mlr has a bunch of files spaced out by 30 seconds or something.

basically, think of each buffer as a long piece of tape; you can put clips all over it.
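for instance, spacing clips out like mlr does could look like this — the filenames are placeholders:

```lua
-- load three mono files into non-overlapping regions of buffer 1,
-- spaced 30 seconds apart (mlr-style)
local files = {"one.wav", "two.wav", "three.wav"}
for i, f in ipairs(files) do
  local start_dst = (i - 1) * 30 -- seconds into the destination buffer
  softcut.buffer_read_mono(_path.audio..f, 0, start_dst, -1, 1, 1)
end
```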


That’s how my Beets 1.0 drum-slicer works. It uses a single softcut voice and buffer, but loads as many drum loops as you like.

It places each loop onto a separate point in the buffer, starting every five seconds. Then it remixes the loops by moving the softcut read head on the beat with position(). For example, to play slice 5 of loop 3, it moves to 5 times the length of a slice plus 3 times the loop offset.
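in rough terms, that position math looks like this — the constants and function name below are illustrative, not Beets’ actual code:

```lua
-- jump a softcut voice to a given slice of a given loop, assuming loops
-- were loaded every LOOP_OFFSET seconds and each loop divides into 8 slices
local LOOP_OFFSET = 5           -- seconds between loop start points in the buffer
local LOOP_LENGTH = 2           -- seconds of audio per loop
local SLICE = LOOP_LENGTH / 8   -- length of one slice

function play_slice(voice, loop_n, slice_n)
  local pos = (loop_n - 1) * LOOP_OFFSET + (slice_n - 1) * SLICE
  softcut.position(voice, pos)
end
```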

Relevant code lines:


Sure. Have a look at how the init function in my nc01 entry works:

(not all visible on the screen, click over to Github to view)

The arrays that supply the buffers and start points are lines 30-32. Basically: samples 1 and 2 go into buffer 1, and I kick off sample 2 around 181s in. Sample 3 goes to the beginning of buffer 2.

Most of the time in softcut, you are addressing the heads, rather than the buffers.


I see! It is starting to make sense. Thank you for the replies everyone.


livestream this friday at 3pm EST!

we’ll make some sounds using the new clock system and show how things fit together. twitch link will be posted ahead of the stream.


Do you have a link for the stream yet?

i’ll be at https://www.twitch.tv/tehnnnn


Awesome! Thanks! Adding this to my calendar event.

getting started with something simple:

i’ve decided to map samples to voices as follows. what are the other possibilities?

tonal -> clear
sharp -> percussive
textural -> clouded

thinking about implementing the simple to complex rhythm transition by creating a few sequences of numbers to sync the clocks to. then maybe interpolating between them or just changing the range of loops inside the coroutines. no ideas for the interface so far :slight_smile:
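one way to sketch that simple-to-complex transition — the sequences, the probability mapping for “regularity”, and `trigger` are all invented here:

```lua
-- hypothetical sketch: blend between a simple and a complex rhythm.
-- "regularity" (E3) sets the chance each step draws from the simple table.
local simple  = {1, 1, 1, 1}            -- steady beats
local complex = {0.75, 0.25, 1.5, 0.5}  -- syncopated intervals
local regularity = 1.0                  -- 1 = fully simple, 0 = fully complex

function trigger(step)
  -- fire a softcut event here (position jump, level envelope, ...)
end

clock.run(function()
  local step = 1
  while true do
    local seq = (math.random() < regularity) and simple or complex
    clock.sync(seq[step])
    trigger(step)
    step = step % #simple + 1
  end
end)
```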


Thanks for posting! I’m new to Lua, so I’m still absorbing and learning disciplines for nice syntax and readable code, and this example shows me I can refactor to make better use of table.key notation :slight_smile: :+1:

Because everything norns is basically new to me, I’ve drifted away from the instruction to stay totally focused on the scope of this exercise re: parameterization and general amount of engine setup. The problem is that as I extended my code in a more immediate and naive manner, I ran into confusion, because I either didn’t understand the whole system and/or was trying to re-invent a wheel somewhere.

One thing I can’t find right now is how to create a good controlspec for something like loop enable in softcut. This works, but I have to spin an encoder a fair amount in the params edit menu, and a stream of values gets printed from my param action, i.e.:

Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 1
Setting perc_voice_loop to: 0
Setting perc_voice_loop to: 0
Setting perc_voice_loop to: 0
Setting perc_voice_loop to: 0

I would have expected the step value for this control spec to force each adjustment to have an increment of 1… but it does feel like some accumulation-to-threshold situation is happening here.

Also, the params menu displays the value as 1.0, can I fix that to be just 1 for on/off parameters?

This may be better suited to the general development thread since, at this moment, I have no specific need to have loop enable controlled by a param… it’s just an example of something I feel fairly sure I’ll want at some point soon for some project or another.

i wouldn’t use controlspec for a parameter to enable/disable a voice. looks like option or number would do a better job here.
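a minimal version of both, for comparison — the param ids and the voice number are examples, not from the script above:

```lua
-- on/off as a number param: integer steps, no float display
params:add_number("perc_voice_loop", "perc voice loop", 0, 1, 1)
params:set_action("perc_voice_loop", function(x)
  softcut.loop(2, x) -- voice 2 is an example
end)

-- as an option param: the action receives the 1-based *index* into the list
params:add_option("perc_voice_loop_opt", "perc voice loop", {"off", "on"}, 2)
params:set_action("perc_voice_loop_opt", function(i)
  softcut.loop(2, i - 1) -- "off" -> 0, "on" -> 1
end)
```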


Ah! I need to re-read the params docs :). add_number works much better, but I can’t find an example of using add_option, and this doesn’t work like I’d expect: on the params screen both enable params show as 0, and changing them has no effect on softcut. However, in maiden I see:

  -- set voice enabled
  -- https://monome.org/norns/modules/softcut.html#enable
  params:add_option(voice_name.."_enable", voice_name.."_enable",
    {0, 1}, -- options
    0 -- default
  )
  params:set_action(voice_name.."_enable", function(x)
    if (PARAMS_DEBUG) then
      print("Setting "..voice_name.."_enable to: "..x)
    end
  end)
  params:set(voice_name.."_enable", 1)

… Dynamically enabling voices is really out of scope, but working with options is useful, so I would like to understand better what’s happening here. This is probably more suited to general development questions, I suppose.

an example of using option can be found in norns itself:

alternatively, you can use the params table provided by softcut to set up parameters per voice; it actually has a number spec for the enable param:

>> tab.print(softcut.params()[1].enable)

action	function: 0xd5f7c0
type	number
name	cut1enable
min	0
id	enable
max	1
default	0

Are polls the only way to check the play position of a voice? I’m trying to do something like the cue feature from w/ where I press a key and the current position is pushed onto an array.

I could set up a poll running “fast enough”, but naively that just seems like overkill for the task.
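besides a free-running poll, softcut has a quantized phase event — a sketch of cueing with it, where the cue table, quant value, and key mapping are assumptions:

```lua
-- receive each voice's play position at most every 0.01s of playback,
-- and push the latest position onto a cue list when K3 is pressed
local positions = {0, 0, 0, 0, 0, 0}
local cues = {}

function init()
  softcut.event_phase(function(voice, pos) positions[voice] = pos end)
  for v = 1, 6 do softcut.phase_quant(v, 0.01) end
  softcut.poll_start_phase()
end

function key(n, z)
  if n == 3 and z == 1 then
    table.insert(cues, positions[1]) -- cue voice 1's current position
  end
end
```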

live in an hour: https://www.twitch.tv/tehnnnn

clocks + softcut = :star::star::star::star::star:

i’ll show how it works!