there is so much possibility with SuperCollider - I really need to dig in now that I’ve figured out how it sits in Norns.

the whole Norns thing feels like a perfect way to unlock its potential, and it’s so exciting that there’s a vibrant community of people who feel the same way about it

2 Likes

I got to a milestone with my Engine_FM7 project. It loads on Norns, it can be called from a script through Maiden, and the Lua REPL works as expected to get and set params. It’s polyphonic, with the voice handling adapted from Engine_PolySub.

https://github.com/lazzarello/dust/blob/fm7/lib/sc/Engine_FM7.sc

It’s not super useful…yet, since I need to figure out what to do with the modulation matrix. There are a lot of params in there.

I’d like to adapt the Earthsea script to work with the FM7 engine. The args and UGen graph are a little different, so it’s not a simple matter of swapping out engine.name.

update: welp, that worked well

From there I think it could be a cool interface to use a grid as a selector for the 6x6 modulation matrix and an encoder to change the value.

Once I get a script that demonstrates its usefulness, I’ll make a PR for the upstream dust repo.

Anyone else working on new polyphonic synths?

5 Likes

Spent the last 5 days working on the FM7. There’s a UI that shows the phase mod matrix and a button (based on the second study) for random phase selection. All modulatable args in the SC engine are exposed as Norns parameters, with control sets in the UI. It’ll play as a 16-voice synth with MIDI or a grid.

Tonight I got stuck.

The FM7 UGen outputs six channels, one for each operator. I don’t fully understand why the code I hacked together works, but it does. I suspect it’s only outputting the first two channels when given as an arg to the Out.ar UGen.

When I use SynthDef graph function arguments to choose which operators to output (these would correspond to the “carriers” in FM parlance), the channel-selection examples from the help file crashed sclang. I’ve commented my source code with the results. It appears args in a SynthDef are instances of OutputProxy, but I want an Integer so I can fill up an array and call some collection methods on it.

The end goal is to select between 1 and 6 channels from the UGen output array, then mix them down to stereo when passing them to the Out.ar UGen. The operators should not be “muted”, because setting an operator’s amplitude to 0 would prevent it from acting as a modulator as well as a carrier.

https://github.com/lazzarello/dust/blob/fm7/lib/sc/Engine_FM7.sc#L57

2 Likes

there are a few issues with the multichannel handling here. do look at the overview helpfile called Multichannel Expansion. i’d also recommend the helpfile for Control.

  • .dup makes a new array with two copies of its operand; use this if you want to send a mono signal to L/R in an Out.

  • the .slice returns some array of channels, then you duplicate that array and send it to the stereo output - dunno exactly what will happen, but you won’t hear everything in the original array. (a quick sketch of the difference follows below.)
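
to make the .dup point concrete, here’s a minimal sketch (not from the engine code - just two throwaway lines to run on a booted server):

{ SinOsc.ar(220, 0, 0.1).dup }.play;                      // mono ugen -> [sig, sig], same signal on L and R
{ Mix.new(SinOsc.ar([220, 330, 440], 0, 0.1)).dup }.play; // mix the 3-channel array down to mono first, then .dup to stereo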

i think the array manipulations are over-complicated for what you want. and it can get confusing when working with OutputProxys - these are under-the-hood placeholders for ugens that can have variable channel counts / rates. (at synthdef compilation time, we don’t know what will be passed into the Controls that bring synth arguments in - could be floats or signals.)
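
you can see this for yourself with something like the following (just an illustration, nothing from the engine):

SynthDef.new(\argCheck, { |freq = 440|
	freq.postln;  // prints "an OutputProxy" at synthdef build time, not the float 440
	Out.ar(0, SinOsc.ar(freq, 0, 0.1).dup);
}).send(s);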

anyways, for mono output, this works for me (de-commented for brevity and emphasis) - i’m just being a caveman and multiplying the carriers by the “on/off” arguments:

SynthDef.new(\polyFM7, {
	arg out, amp=0.2, amplag=0.02, gate=1, hz,
	hz1=1, hz2=2, hz3=3, hz4=4, hz5=5, hz6=6,
	amp1=1,amp2=0.5,amp3=0.3,amp4=1,amp5=1,amp6=1,
	phase1=0,phase2=0,phase3=0,phase4=0,phase5=0,phase6=0,
	ampAtk=0.05, ampDec=0.1, ampSus=1.0, ampRel=1.0, ampCurve= -1.0,

	hz1_to_hz1=0, hz1_to_hz2=0, hz1_to_hz3=0, hz1_to_hz4=0, hz1_to_hz5=0, hz1_to_hz6=0,
	hz2_to_hz1=0, hz2_to_hz2=0, hz2_to_hz3=0, hz2_to_hz4=0, hz2_to_hz5=0, hz2_to_hz6=0,
	hz3_to_hz1=0, hz3_to_hz2=0, hz3_to_hz3=0, hz3_to_hz4=0, hz3_to_hz5=0, hz3_to_hz6=0,
	hz4_to_hz1=0, hz4_to_hz2=0, hz4_to_hz3=0, hz4_to_hz4=0, hz4_to_hz5=0, hz4_to_hz6=0,
	hz5_to_hz1=0, hz5_to_hz2=0, hz5_to_hz3=0, hz5_to_hz4=0, hz5_to_hz5=0, hz5_to_hz6=0,
	hz6_to_hz1=0, hz6_to_hz2=0, hz6_to_hz3=0, hz6_to_hz4=0, hz6_to_hz5=0, hz6_to_hz6=0,

	carrier1=1,carrier2=1,carrier3=1,carrier4=1,carrier5=1,carrier6=1;
	var ctrls, mods, osc, osc_mix, aenv, chans, chan_vec;
	ctrls = [[ Lag.kr(hz * hz1, 0.01), phase1, Lag.kr(amp1,0.01) ],
		[ Lag.kr(hz * hz2, 0.01), phase2, Lag.kr(amp2,0.01) ],
		[ Lag.kr(hz * hz3, 0.01), phase3, Lag.kr(amp3,0.01) ],
		[ Lag.kr(hz * hz4, 0.01), phase4, Lag.kr(amp4,0.01) ],
		[ Lag.kr(hz * hz5, 0.01), phase5, Lag.kr(amp5,0.01) ],
		[ Lag.kr(hz * hz6, 0.01), phase6, Lag.kr(amp6,0.01) ]];

	mods = [[hz1_to_hz1, hz1_to_hz2, hz1_to_hz3, hz1_to_hz4, hz1_to_hz5, hz1_to_hz6],
		[hz2_to_hz1, hz2_to_hz2, hz2_to_hz3, hz2_to_hz4, hz2_to_hz5, hz2_to_hz6],
		[hz3_to_hz1, hz3_to_hz2, hz3_to_hz3, hz3_to_hz4, hz3_to_hz5, hz3_to_hz6],
		[hz4_to_hz1, hz4_to_hz2, hz4_to_hz3, hz4_to_hz4, hz4_to_hz5, hz4_to_hz6],
		[hz5_to_hz1, hz5_to_hz2, hz5_to_hz3, hz5_to_hz4, hz5_to_hz5, hz5_to_hz6],
		[hz6_to_hz1, hz6_to_hz2, hz6_to_hz3, hz6_to_hz4, hz6_to_hz5, hz6_to_hz6]];

	osc = FM7.ar(ctrls, mods);

/////////////////////
	chan_vec = [carrier1, carrier2, carrier3, carrier4, carrier5, carrier6];
	osc_mix = Mix.new(chan_vec.collect({ |v,i| osc[i]*v }) );
/////////////////////
	
	amp = Lag.ar(K2A.ar(amp), amplag);
	aenv = EnvGen.ar(
		Env.adsr( ampAtk, ampDec, ampSus, ampRel, 1.0, ampCurve),
		gate, doneAction:2);
	Out.ar(out, (osc_mix * aenv * amp).dup);
}).send(s);

you say

mix them into stereo

so then you need to add a Pan2 for each output or something. Mix will flatten only the outer array, correctly mixing an array of 2-ch arrays of ugens down to a single stereo array.

with panning per carrier:

SynthDef.new(\polyFM7_pan, {
	arg out, amp=0.2, amplag=0.02, gate=1, hz,
	hz1=1, hz2=2, hz3=3, hz4=4, hz5=5, hz6=6,
	amp1=1,amp2=0.5,amp3=0.3,amp4=1,amp5=1,amp6=1,
	phase1=0,phase2=0,phase3=0,phase4=0,phase5=0,phase6=0,
	ampAtk=0.05, ampDec=0.1, ampSus=1.0, ampRel=1.0, ampCurve= -1.0,
	
	hz1_to_hz1=0, hz1_to_hz2=0, hz1_to_hz3=0, hz1_to_hz4=0, hz1_to_hz5=0, hz1_to_hz6=0,
	hz2_to_hz1=0, hz2_to_hz2=0, hz2_to_hz3=0, hz2_to_hz4=0, hz2_to_hz5=0, hz2_to_hz6=0,
	hz3_to_hz1=0, hz3_to_hz2=0, hz3_to_hz3=0, hz3_to_hz4=0, hz3_to_hz5=0, hz3_to_hz6=0,
	hz4_to_hz1=0, hz4_to_hz2=0, hz4_to_hz3=0, hz4_to_hz4=0, hz4_to_hz5=0, hz4_to_hz6=0,
	hz5_to_hz1=0, hz5_to_hz2=0, hz5_to_hz3=0, hz5_to_hz4=0, hz5_to_hz5=0, hz5_to_hz6=0,
	hz6_to_hz1=0, hz6_to_hz2=0, hz6_to_hz3=0, hz6_to_hz4=0, hz6_to_hz5=0, hz6_to_hz6=0,
	
	carrier1, carrier2, carrier3, carrier4, carrier5, carrier6,
	pan1=0,pan2=0,pan3=0,pan4=0,pan5=0,pan6=0;
	
	var ctrls, mods, osc, osc_mix, aenv, chans, chan_vec, pan_vec;
	ctrls = [[ Lag.kr(hz * hz1, 0.01), phase1, Lag.kr(amp1,0.01) ],
		[ Lag.kr(hz * hz2, 0.01), phase2, Lag.kr(amp2,0.01) ],
		[ Lag.kr(hz * hz3, 0.01), phase3, Lag.kr(amp3,0.01) ],
		[ Lag.kr(hz * hz4, 0.01), phase4, Lag.kr(amp4,0.01) ],
		[ Lag.kr(hz * hz5, 0.01), phase5, Lag.kr(amp5,0.01) ],
		[ Lag.kr(hz * hz6, 0.01), phase6, Lag.kr(amp6,0.01) ]];
	
	mods = [[hz1_to_hz1, hz1_to_hz2, hz1_to_hz3, hz1_to_hz4, hz1_to_hz5, hz1_to_hz6],
		[hz2_to_hz1, hz2_to_hz2, hz2_to_hz3, hz2_to_hz4, hz2_to_hz5, hz2_to_hz6],
		[hz3_to_hz1, hz3_to_hz2, hz3_to_hz3, hz3_to_hz4, hz3_to_hz5, hz3_to_hz6],
		[hz4_to_hz1, hz4_to_hz2, hz4_to_hz3, hz4_to_hz4, hz4_to_hz5, hz4_to_hz6],
		[hz5_to_hz1, hz5_to_hz2, hz5_to_hz3, hz5_to_hz4, hz5_to_hz5, hz5_to_hz6],
		[hz6_to_hz1, hz6_to_hz2, hz6_to_hz3, hz6_to_hz4, hz6_to_hz5, hz6_to_hz6]];
	
	osc = FM7.ar(ctrls, mods);
	
	chan_vec = [carrier1, carrier2, carrier3, carrier4, carrier5, carrier6];
	pan_vec = [pan1, pan2, pan3, pan4, pan5, pan6];
	osc_mix = Mix.new(chan_vec.collect({ |v,i| Pan2.ar(osc[i]*v, pan_vec[i]) }) );
	amp = Lag.ar(K2A.ar(amp), amplag);
	aenv = EnvGen.ar(
		Env.adsr( ampAtk, ampDec, ampSus, ampRel, 1.0, ampCurve),
		gate, doneAction:2);
	
	//// notice we removed `.dup` because `osc_mix` is now stereo
	Out.ar(out, (osc_mix * aenv * amp));
}).send(s);

here’s a test:


x = Synth.new(\polyFM7_pan, [\gate, 1, \hz, 110]);
x.set(\hz2, 1.25);
x.set(\hz3, 2.0);
x.set(\hz4, 2.25);
x.set(\hz3, 3.0);
x.set(\hz4, 3.5);
x.set(\hz5, 4.5);
x.set(\hz6, 5.25);

Routine { 
	
	10.do { 
		// randomly change carrier amps a few times
		6.do({|i| x.set(("carrier"++(i+1)).asSymbol, 0.2 + 0.5.rand) });
		0.2.wait;
	};
	
	1.0.wait;
	// randomly change carrier panning a few times.
	10.do { 
		6.do({|i| x.set(("pan"++(i+1)).asSymbol, 1.0.rand2); });
		0.2.wait;
	};
}.play;

you could indeed make the synthdef less verbose with array arguments. but i dunno, it’s a tradeoff - maybe more confusing, esp. when using ctl busses. in other words, it’s of dubious benefit unless you can actually send arrays as arguments at runtime. (and i bet your editor has a macro function.)
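
for reference, the array-argument style looks roughly like this (purely illustrative - nothing like this is in the engine):

SynthDef.new(\arrayArgDemo, {
	// a literal-array default expands to one control per element at synthdef build time
	arg out = 0, hz = 220, carriers = #[1, 1, 1, 1, 1, 1];
	var oscs = Array.fill(6, { |i| SinOsc.ar(hz * (i + 1), 0, 0.1) });
	Out.ar(out, Mix.new(oscs * carriers).dup);
}).send(s);

// the whole array can then be set at runtime:
// y = Synth.new(\arrayArgDemo);
// y.setn(\carriers, [1, 0, 1, 0, 1, 0]);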


responding to code comments:

 // set all the defaults. Why aren't these values the same as the values for the SynthDef args?
// DRY it up?

the extra dict is there in polysub because 1) i wanted to programmatically make commands rather than write one by hand for every synthdef arg, and 2) i couldn’t figure out a way to pull the arg defaults out of the synthdef. you’re totally correct that this makes the synthdef defaults redundant; they could be removed
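
the pattern is roughly this (a simplified sketch of the idea, not polysub’s exact code - it assumes you’re inside the engine’s alloc method, and the voice list and param names are made up):

var paramDefaults = Dictionary.newFrom([
	\amp1, 1.0, \amp2, 0.5, \hz1, 1.0  // ...and so on for every modulatable arg
]);
// one engine command per dict entry, so a new param only needs a new dict entry
paramDefaults.keysValuesDo({ |name, default|
	this.addCommand(name, "f", { |msg|
		voices.do({ |syn| syn.set(name, msg[1]) }); // hypothetical list of active voice synths
	});
});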

    // the output bus, is this multiplication the right way to do this?
   // oscillator times envelope times vca.

works for me

// set the amplitude to 0.2. Didn't we already set this somewhere else?

yes, looks like dbg cruft, my bad

// NodeWatcher informs the client of the server state, so we get free voice information from there?

yes. it’s more convenient than setting up a dedicated trigger or something for when the env stops. we want to wait until the env has really stopped before updating the voice allocation stuff
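
the basic pattern looks something like this (a sketch of the idea, not polysub’s literal code):

y = Synth.new(\polyFM7, [\hz, 220, \gate, 1]);
y.onFree({
	// onFree registers the node with NodeWatcher; this runs once the server
	// reports /n_end, i.e. after doneAction:2 has actually freed the voice
	"voice really finished - safe to return it to the free-voice pool".postln;
});
y.set(\gate, 0); // release; the callback fires after the envelope completes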

4 Likes

Thanks. I’ll hack on this tomorrow after going through the multichannel expansion tutorial. That part of SC has always made my mind melt.

Any ideas on how I could adjust the TAPE mechanism to record the stereo inputs as well as the outputs? I just did a performance where I was live-processing bass, and I didn’t need the input signal passed to the outputs via the monitor fader. So my recording only captured the processed signal instead of both the input and the output. I know SuperCollider can record multichannel files, so a 4-channel file with output L/R and input L/R interleaved would seem ideal.

2 Likes

you would have to hack Crone.initTape to use something besides the Recorder convenience class (which can only see a single contiguous range of bus channels - the master outputs by default).

something like Recorder but using e.g. { DiskOut.ar(buf, [In.ar(0), In.ar(1), SoundIn.ar(0), SoundIn.ar(1)]) }
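
fleshed out a bit, that could look something like this (a rough sketch with made-up paths and names, not norns’ actual tape code):

// 4-channel disk buffer: engine outputs L/R plus hardware inputs L/R
b = Buffer.alloc(s, 65536, 4);
b.write("/tmp/tape_quad.wav", "wav", "int24", 0, 0, true); // leaveOpen: true is required for DiskOut

SynthDef.new(\quadTape, { |bufnum|
	DiskOut.ar(bufnum, [In.ar(0), In.ar(1), SoundIn.ar(0), SoundIn.ar(1)]);
}).send(s);

x = Synth.tail(s, \quadTape, [\bufnum, b]); // start recording at the tail of the node tree
x.free; b.close; b.free;                    // stop, close the file and free the buffer when done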

but yea, that would be a handy option to have.

2 Likes

That one bit me too. It seems unintuitive that the recording feature doesn’t capture the inputs. Could be cool to wire the inputs up to that.

1 Like

a big rework of the TAPE system is about to get started and i’ve likewise been thinking this feature should be added.

but i’m wondering if quad files are weird - not very reusable on norns itself. perhaps better to do split files, output and input, each stereo, which could then immediately be reused within norns.

TAPE will then have some settings, but that’s ok.

8 Likes

I’d be fine with split stereo files as long as the sync between them is good. Quad files take care of the syncing, right? That way the recording is guaranteed to start and stop at the same time across all channels.

i believe supercollider should be able to sync-start two files with decent accuracy, but that’s certainly worth testing!

If I may add one wish: it would be great to have access to REC, PLAY… (i.e. the typical transport functions), record level and playback level as Lua commands. I would so love to be able to start/stop recordings etc. via e.g. a USB controller.

3 Likes

this was also my intention, thanks for the reminder. made an issue: https://github.com/monome/norns/issues/606

2 Likes

Awesome! That will be very powerful.

One request/question: Is it possible to add a software switch to use the tape playback as input to an engine instead of the hardware inputs?

2 Likes

Love that, but it would be awesome to have a second tape to record the results on…

Oooh, definitely. Iterative composition/processing is my favorite use case for the Morphagene.

2 Likes

this crossed my mind as well.

been considering the routing possibilities, and it’d be great to have a split TAPE player and recorder simultaneously. all of this needs a lot of UI design.

2 Likes

second tape to record first tape

not familiar with morphagene but this is also getting very close to reinventing the op-1 tape/“album” workflow (which i would love…)

but it prompted another thought: can the aux sends/inserts be accessed through Lua? and do they persist between scripts? a potentially easier-to-conceptualize UI would be a delay/looper as a second aux effect.

user cooks up something in loom and wants to chop it in MLR, turns on the delay send, captures audio to delay buffer. loads mlr, sets voice one to resampling.

maybe easier than trying to keep track of tapes 1 and 2? maybe only makes sense inside of my sleep deprived brain.

currently there are no audio RAM buffers that persist between engine loads.

i’ve thought that a basic delay/loop (not softcut, just single-speed record and varispeed playback) would make a good addition to the aux effects. and you’re right, it would allow you to loop audio from a previously-loaded engine.
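
something along these lines, roughly (a sketch of the idea only, with made-up bus and buffer choices - not the actual crone aux fx code):

b = Buffer.alloc(s, s.sampleRate * 4, 2); // 4-second stereo loop buffer

// single-speed record into the buffer
SynthDef.new(\loopRec, { |in = 0, bufnum, run = 1|
	RecordBuf.ar(In.ar(in, 2), bufnum, run: run, loop: 1);
}).send(s);

// varispeed playback out of the same buffer
SynthDef.new(\loopPlay, { |out = 0, bufnum, rate = 1, amp = 0.5|
	Out.ar(out, PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum) * rate, loop: 1) * amp);
}).send(s);

r = Synth.new(\loopRec, [\bufnum, b]);
p = Synth.after(r, \loopPlay, [\bufnum, b, \rate, 0.5]); // half-speed playback of whatever was captured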

and yes, it’s straightforward to extend the bus matrix in MLR to capture aux busses. (to be clear, it can’t do this now - it can only see inputs from the ADC, and outputs from its own voices.)

like @tehn points out, making features usable is maybe a bigger (/ more controversial) challenge than adding the support on the audio engine side. for example, a simple parameter value list probably isn’t the best way to control a delay/looper.

4 Likes

i don’t think this is an issue. we can trivially play a quad file (with VDiskIn.ar(4) instead of VDiskIn.ar(2) here) and arbitrarily select or mix two channels. this seems easier than worrying about perfect sync between two DiskOut streams. selecting quad/stereo mode could be an additional crone command.
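
for illustration, something like this would do it (a sketch only - the file path and the way the channels are folded down are arbitrary):

SynthDef.new(\quadTapePlay, { |out = 0, bufnum, rate = 1, amp = 1|
	var chans = VDiskIn.ar(4, bufnum, rate); // [out L, out R, in L, in R]
	// fold the input pair into the output pair; any other selection/mix works the same way
	Out.ar(out, [chans[0] + chans[2], chans[1] + chans[3]] * amp);
}).send(s);

b = Buffer.cueSoundFile(s, "/tmp/tape_quad.wav", 0, 4);
x = Synth.new(\quadTapePlay, [\bufnum, b]);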

1 Like