Norns: crone/supercollider

norns

#41

there are a few issues with the multichannel handling here. have a look at the overview helpfile called Multichannel Expansion; the helpfile for Control is also recommended.

  • .dup makes a new array containing two copies of its operand; use this if you want to send a mono signal to L/R in an Out.

  • .slice returns some array of channels; you then duplicate that array and send it to a stereo output. it’s hard to say exactly what will happen, but you won’t hear everything in the original array.
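to illustrate what .dup does (it’s defined on any object, not just ugens):

```supercollider
2.dup;       // -> [ 2, 2 ]
2.dup(3);    // -> [ 2, 2, 2 ]
// so for a mono ugen `sig`, Out.ar(out, sig.dup) writes the same
// signal to two adjacent busses, i.e. L and R
```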

i think the array manipulations are over-complicated for what you want, and it can get confusing when working with OutputProxys - these are under-the-hood placeholders for ugens that can have variable channel counts / rates. (at synthdef compilation time, we don’t know what will be passed into the Controls that bring synth arguments in - it could be floats or signals.)

anyways, for mono output, this works for me (comments removed for brevity and emphasis) - i am just being a caveman and multiplying the carriers by the “on/off” arguments:

SynthDef.new(\polyFM7, {
	arg out, amp=0.2, amplag=0.02, gate=1, hz,
	hz1=1, hz2=2, hz3=3, hz4=4, hz5=5, hz6=6,
	amp1=1,amp2=0.5,amp3=0.3,amp4=1,amp5=1,amp6=1,
	phase1=0,phase2=0,phase3=0,phase4=0,phase5=0,phase6=0,
	ampAtk=0.05, ampDec=0.1, ampSus=1.0, ampRel=1.0, ampCurve= -1.0,

	hz1_to_hz1=0, hz1_to_hz2=0, hz1_to_hz3=0, hz1_to_hz4=0, hz1_to_hz5=0, hz1_to_hz6=0,
	hz2_to_hz1=0, hz2_to_hz2=0, hz2_to_hz3=0, hz2_to_hz4=0, hz2_to_hz5=0, hz2_to_hz6=0,
	hz3_to_hz1=0, hz3_to_hz2=0, hz3_to_hz3=0, hz3_to_hz4=0, hz3_to_hz5=0, hz3_to_hz6=0,
	hz4_to_hz1=0, hz4_to_hz2=0, hz4_to_hz3=0, hz4_to_hz4=0, hz4_to_hz5=0, hz4_to_hz6=0,
	hz5_to_hz1=0, hz5_to_hz2=0, hz5_to_hz3=0, hz5_to_hz4=0, hz5_to_hz5=0, hz5_to_hz6=0,
	hz6_to_hz1=0, hz6_to_hz2=0, hz6_to_hz3=0, hz6_to_hz4=0, hz6_to_hz5=0, hz6_to_hz6=0,

	carrier1=1,carrier2=1,carrier3=1,carrier4=1,carrier5=1,carrier6=1;
	var ctrls, mods, osc, osc_mix, aenv, chans, chan_vec;
	ctrls = [[ Lag.kr(hz * hz1, 0.01), phase1, Lag.kr(amp1,0.01) ],
		[ Lag.kr(hz * hz2, 0.01), phase2, Lag.kr(amp2,0.01) ],
		[ Lag.kr(hz * hz3, 0.01), phase3, Lag.kr(amp3,0.01) ],
		[ Lag.kr(hz * hz4, 0.01), phase4, Lag.kr(amp4,0.01) ],
		[ Lag.kr(hz * hz5, 0.01), phase5, Lag.kr(amp5,0.01) ],
		[ Lag.kr(hz * hz6, 0.01), phase6, Lag.kr(amp6,0.01) ]];

	mods = [[hz1_to_hz1, hz1_to_hz2, hz1_to_hz3, hz1_to_hz4, hz1_to_hz5, hz1_to_hz6],
		[hz2_to_hz1, hz2_to_hz2, hz2_to_hz3, hz2_to_hz4, hz2_to_hz5, hz2_to_hz6],
		[hz3_to_hz1, hz3_to_hz2, hz3_to_hz3, hz3_to_hz4, hz3_to_hz5, hz3_to_hz6],
		[hz4_to_hz1, hz4_to_hz2, hz4_to_hz3, hz4_to_hz4, hz4_to_hz5, hz4_to_hz6],
		[hz5_to_hz1, hz5_to_hz2, hz5_to_hz3, hz5_to_hz4, hz5_to_hz5, hz5_to_hz6],
		[hz6_to_hz1, hz6_to_hz2, hz6_to_hz3, hz6_to_hz4, hz6_to_hz5, hz6_to_hz6]];

	osc = FM7.ar(ctrls, mods);

/////////////////////
	chan_vec = [carrier1, carrier2, carrier3, carrier4, carrier5, carrier6];
	osc_mix = Mix.new(chan_vec.collect({ |v,i| osc[i]*v }) );
/////////////////////
	
	amp = Lag.ar(K2A.ar(amp), amplag);
	aenv = EnvGen.ar(
		Env.adsr( ampAtk, ampDec, ampSus, ampRel, 1.0, ampCurve),
		gate, doneAction:2);
	Out.ar(out, (osc_mix * aenv * amp).dup);
}).send(s);

you say

mix them into stereo

so then you need to add a Pan2 for each carrier, or something similar. Mix flattens only the outer array, so it correctly mixes an array of 2-channel arrays of ugens down to a single stereo array.

with panning per carrier:

SynthDef.new(\polyFM7_pan, {
	arg out, amp=0.2, amplag=0.02, gate=1, hz,
	hz1=1, hz2=2, hz3=3, hz4=4, hz5=5, hz6=6,
	amp1=1,amp2=0.5,amp3=0.3,amp4=1,amp5=1,amp6=1,
	phase1=0,phase2=0,phase3=0,phase4=0,phase5=0,phase6=0,
	ampAtk=0.05, ampDec=0.1, ampSus=1.0, ampRel=1.0, ampCurve= -1.0,
	
	hz1_to_hz1=0, hz1_to_hz2=0, hz1_to_hz3=0, hz1_to_hz4=0, hz1_to_hz5=0, hz1_to_hz6=0,
	hz2_to_hz1=0, hz2_to_hz2=0, hz2_to_hz3=0, hz2_to_hz4=0, hz2_to_hz5=0, hz2_to_hz6=0,
	hz3_to_hz1=0, hz3_to_hz2=0, hz3_to_hz3=0, hz3_to_hz4=0, hz3_to_hz5=0, hz3_to_hz6=0,
	hz4_to_hz1=0, hz4_to_hz2=0, hz4_to_hz3=0, hz4_to_hz4=0, hz4_to_hz5=0, hz4_to_hz6=0,
	hz5_to_hz1=0, hz5_to_hz2=0, hz5_to_hz3=0, hz5_to_hz4=0, hz5_to_hz5=0, hz5_to_hz6=0,
	hz6_to_hz1=0, hz6_to_hz2=0, hz6_to_hz3=0, hz6_to_hz4=0, hz6_to_hz5=0, hz6_to_hz6=0,
	
	carrier1=1, carrier2=1, carrier3=1, carrier4=1, carrier5=1, carrier6=1,
	pan1=0,pan2=0,pan3=0,pan4=0,pan5=0,pan6=0;
	
	var ctrls, mods, osc, osc_mix, aenv, chans, chan_vec, pan_vec;
	ctrls = [[ Lag.kr(hz * hz1, 0.01), phase1, Lag.kr(amp1,0.01) ],
		[ Lag.kr(hz * hz2, 0.01), phase2, Lag.kr(amp2,0.01) ],
		[ Lag.kr(hz * hz3, 0.01), phase3, Lag.kr(amp3,0.01) ],
		[ Lag.kr(hz * hz4, 0.01), phase4, Lag.kr(amp4,0.01) ],
		[ Lag.kr(hz * hz5, 0.01), phase5, Lag.kr(amp5,0.01) ],
		[ Lag.kr(hz * hz6, 0.01), phase6, Lag.kr(amp6,0.01) ]];
	
	mods = [[hz1_to_hz1, hz1_to_hz2, hz1_to_hz3, hz1_to_hz4, hz1_to_hz5, hz1_to_hz6],
		[hz2_to_hz1, hz2_to_hz2, hz2_to_hz3, hz2_to_hz4, hz2_to_hz5, hz2_to_hz6],
		[hz3_to_hz1, hz3_to_hz2, hz3_to_hz3, hz3_to_hz4, hz3_to_hz5, hz3_to_hz6],
		[hz4_to_hz1, hz4_to_hz2, hz4_to_hz3, hz4_to_hz4, hz4_to_hz5, hz4_to_hz6],
		[hz5_to_hz1, hz5_to_hz2, hz5_to_hz3, hz5_to_hz4, hz5_to_hz5, hz5_to_hz6],
		[hz6_to_hz1, hz6_to_hz2, hz6_to_hz3, hz6_to_hz4, hz6_to_hz5, hz6_to_hz6]];
	
	osc = FM7.ar(ctrls, mods);
	
	chan_vec = [carrier1, carrier2, carrier3, carrier4, carrier5, carrier6];
	pan_vec = [pan1, pan2, pan3, pan4, pan5, pan6];
	osc_mix = Mix.new(chan_vec.collect({ |v,i| Pan2.ar(osc[i]*v, pan_vec[i]) }) );
	amp = Lag.ar(K2A.ar(amp), amplag);
	aenv = EnvGen.ar(
		Env.adsr( ampAtk, ampDec, ampSus, ampRel, 1.0, ampCurve),
		gate, doneAction:2);
	
	//// notice we removed `.dup` because `osc_mix` is now stereo
	Out.ar(out, (osc_mix * aenv * amp));
}).send(s);

here’s a test:


x = Synth.new(\polyFM7_pan, [\gate, 1, \hz, 110]);
x.set(\hz2, 1.25);
x.set(\hz3, 2.0);
x.set(\hz4, 2.25);
x.set(\hz3, 3.0);
x.set(\hz4, 3.5);
x.set(\hz5, 4.5);
x.set(\hz6, 5.25);

Routine { 
	
	10.do { 
		// randomly change carrier amps a few times
		6.do({|i| x.set(("carrier"++(i+1)).asSymbol, 0.2 + 0.5.rand) });
		0.2.wait;
	};
	
	1.0.wait;
	// randomly change carrier panning a few times.
	10.do { 
		6.do({|i| x.set(("pan"++(i+1)).asSymbol, 1.0.rand2); });
		0.2.wait;
	};
}.play;

you could indeed make the synthdef less verbose with array arguments. but i dunno, it’s a tradeoff - maybe more confusing, especially when using control busses. in other words, of dubious benefit unless you can actually send arrays as arguments at runtime. (and i bet your editor has a macro function.)
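for what it’s worth, here is a sketch of the array-argument approach using NamedControl (names are hypothetical, and whether this plays nicely with the norns command layer is exactly the open question above):

```supercollider
SynthDef(\polyFM7_arrays, { arg out = 0, hz = 220;
	// \name.kr([...]) declares a 6-element control with defaults;
	// the whole array can be updated at runtime with .setn
	var ratios = \hzRatio.kr([1, 2, 3, 4, 5, 6]);
	var amps = \opAmp.kr([1, 0.5, 0.3, 1, 1, 1]);
	var ctrls = 6.collect { |i| [hz * ratios[i], 0, amps[i]] };
	// no modulation for brevity: a 6x6 matrix of zeros
	var sig = Mix.new(FM7.ar(ctrls, 0 ! 6 ! 6));
	Out.ar(out, (sig * 0.1).dup);
}).add;

// x = Synth(\polyFM7_arrays);
// x.setn(\hzRatio, [1, 1.5, 2, 3, 5, 8]);
```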


responding to code comments:

 // set all the defaults. Why aren't these values the same as the values for the SynthDef args?
// DRY it up?

the extra dict is there in polysub because 1) i wanted to programmatically make commands without making a command for every synthdef arg, and 2) i couldn’t figure out a way to pull the arg defaults out of the synthdef. you’re totally correct that this makes the synthdef defaults redundant - they could be removed.

    // the output bus, is this multiplication the right way to do this?
   // oscillator times envelope times vca.

works for me

// set the amplitude to 0.2. Didn't we already set this somewhere else?

yes, looks like dbg cruft, my bad

// NodeWatcher informs the client of the server state, so we get free voice information from there?

yes. it is more convenient than setting up a dedicated trigger or something when the env stops. we want to wait until the env has really stopped before updating the voice allocation state.
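a minimal sketch of the pattern (variable names are hypothetical; this is not the actual polysub code):

```supercollider
// register a synth with the NodeWatcher; when the server frees the
// node (doneAction: 2 at env release), the client gets notified
x = Synth(\polyFM7, [\hz, 220]);
NodeWatcher.register(x);
x.onFree { "node freed; this voice slot can be reallocated".postln };
```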


#42

Thanks. I’ll hack on this tomorrow after going through the multichannel expansion tutorial. That part of SC has always made my mind melt.


#43

Any ideas how I could adjust the TAPE mechanism to record the stereo inputs as well as the outputs? I just did a performance where I was live-processing bass, and I didn’t need the input signal passed to the outputs via the monitor fader. So my recording contains only the processed signal instead of both the input and output. I know SuperCollider can record multichannel files, so a 4-channel file with output L/R and input L/R interleaved would seem ideal.


#44

you would have to hack Crone.initTape to use something besides the Recorder convenience class (which can only see a single contiguous range of bus channels - the master outputs by default.)

something like Recorder, but using e.g. { DiskOut.ar(buf, [In.ar(0), In.ar(1), SoundIn.ar(0), SoundIn.ar(1)]) }

but yea, that would be a handy option to have.
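a rough sketch of what that could look like (buffer size, path, and names are all hypothetical - this is not the actual crone code):

```supercollider
// allocate a 4-channel buffer and leave the file open for disk streaming
b = Buffer.alloc(s, 65536, 4);
b.write("/home/we/dust/audio/tape4.wav", "wav", "int24", 0, 0, true);

// record main outputs L/R plus hardware inputs L/R into one quad file
SynthDef(\tape4rec, { arg buf;
	var sig = [In.ar(0, 2), SoundIn.ar([0, 1])].flat;
	DiskOut.ar(buf, sig);
}).add;

// start: x = Synth.tail(s, \tape4rec, [\buf, b]);
// stop:  x.free; b.close; b.free;
```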


#45

That one bit me too. It seems unintuitive that the recording feature doesn’t record some of the inputs. Could be cool to wire up the inputs to that.


#46

a big rework of the TAPE system is about to get started and i’ve likewise been thinking this feature should be added.

but i’m wondering if quad files are weird - not very reusable on norns itself. perhaps it’s better to do split files, output and input, each stereo, which could then immediately be reused within norns itself.

TAPE will then have some settings, but that’s ok.


#47

I’d be fine with split stereo files as long as the sync between them is good. Quad files take care of the syncing, right? So it’s ensured the recording starts and stops at the same time?


#48

i believe supercollider should be able to sync-start two files with decent accuracy, but that’s certainly worth testing!


#49

If I may add one wish: it would be great to have access to REC, PLAY… (i.e. the typical transport functions), record level, and playback level as Lua commands. I would so love to be able to start/stop recordings etc. via e.g. a USB controller.


#50

this was also my intention, thanks for the reminder. made an issue: https://github.com/monome/norns/issues/606


#51

Awesome! That will be very powerful.

One request/question: Is it possible to add a software switch to use the tape playback as input to an engine instead of the hardware inputs?


#52

Love that, but it would be awesome to have a second tape to record the results on…


#53

Oooh, definitely. Iterative composition/processing is my favorite use case for the Morphagene.


#54

this crossed my mind as well.

been considering the routing possibilities, and it’d be great to have a split TAPE player and recorder simultaneously. all of this needs a lot of UI design.


#55

second tape to record first tape

not familiar with morphagene but this is also getting very close to reinventing the op-1 tape/“album” workflow (which i would love…)

but it prompted another thought: can the aux sends/inserts be accessed through Lua? and do they persist between scripts? a potentially easier-to-conceptualize UI would be a delay/looper as a second aux effect.

a user cooks up something in loom and wants to chop it in mlr: they turn on the delay send and capture audio to the delay buffer, then load mlr and set voice one to resampling.

maybe easier than trying to keep track of tapes 1 and 2? maybe only makes sense inside of my sleep deprived brain.


#56

currently there are no audio RAM buffers that persist between engine loads.

i’ve thought that a basic delay/loop (not softcut, just single-speed record and varispeed playback) would make a good addition to the aux effects. and you’re right, it would allow you to loop audio from a previously-loaded engine.

and yes, it’s straightforward to extend the bus matrix in MLR to capture aux busses. (to be clear, it can’t do this now - it can only see inputs from the ADC, and outputs from its own voices.)

as @tehn points out, making features usable is maybe a bigger (/ more controversial) challenge than adding the support on the audio engine side. for example, a simple parameter value list probably isn’t the best way to control a delay/looper.


#57

i don’t think this is an issue. we can trivially play a quad file (with VDiskIn.ar(4) instead of VDiskIn.ar(2) here) and arbitrarily select or mix two channels. this seems easier than worrying about perfect sync between two DiskOut streams. selecting quad/stereo mode could be an additional crone command.
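a quick sketch of the idea (the synthdef name and arg are hypothetical):

```supercollider
// play a quad tape file, selecting which stereo pair reaches the output:
// which = 0 -> channels 0/1 (outputs), which = 1 -> channels 2/3 (inputs)
SynthDef(\tapePlay4, { arg out = 0, buf, which = 0;
	var sig = VDiskIn.ar(4, buf);
	// multichannel expansion pairs sig[0]/sig[2] and sig[1]/sig[3],
	// so this yields a stereo Select
	Out.ar(out, Select.ar(which, [sig[0..1], sig[2..3]]));
}).add;
```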


#58

agreed, the technical part is no problem. again, it’s a UI issue - i.e., for all the stereo-expecting scripts, what happens when we try to load quad files, etc… even more UI.


#59

for all the stereo-expecting scripts, what happens when we try to load quad files

this should be transparent. Glut and SoftCut use Buffer.readChannel (for new buffers) or BufUtil.readChannel (to replace a buffer.) these will work as long as the requested channel exists in the referenced file, regardless of the total channel count of the file.
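for reference, reading a single stereo pair out of a multichannel file looks like this (the path is hypothetical):

```supercollider
// read only channels 0 and 1 of a (possibly quad) soundfile into a
// stereo buffer; this works regardless of the file's total channel
// count, as long as the requested channels exist
b = Buffer.readChannel(s, "/home/we/dust/audio/tape/0001.wav", channels: [0, 1]);
```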

if there is some other engine that expects a certain channel configuration, it should be smart enough to inspect the soundfile and deal appropriately.


#60

I would really prefer if it recorded the inputs and outputs as two separate stereo files with “-in” and “-out” suffixes. I think introducing quad-channel files would bring up a lot of workflow and design annoyances.

EDIT: Just saw the comment about perfect sync above. Honestly, if I recorded both simultaneously, it would be for editing later, not simultaneous playback. I think the time spent on exact disk syncing wouldn’t be worth it.