Supercollider tips, Q/A


#224

cool. so depending on what OS you are running, you’ll need to grab some drivers.

if you are on mac, you’ll need serialosc.

there’s also a library that you’ll need to download and copy into SuperCollider’s class library folder, which you can find, along with a step-by-step explanation, here.

good luck! let me know if you run into problems / have any questions. my current project uses the monome grid to interface with SuperCollider, so I might be able to help you get started if you are having trouble.


#225

@capogreco thank you so much.

i knew the functions would be simple and it wouldn’t need much code, but i couldn’t quite figure it out.

thanks for referring me to the tutorials. i have studied them once through and am now working on his longer ones.

i have some more questions which i will ask on the other forum.

again, thanks


#226

I’m using a dictionary containing arrays of samples to randomly select buffers for playback in a Pbind. I understand that I can use Pkey to have another element of the Pbind refer to the selected buffer, but is there a way to find out how many frames the currently selected buffer is made up of? I’m trying to randomly select start positions for each buffer, but they vary greatly in length, so using a static value is sort of useless. This seems so simple and I just can’t figure it out!


#227

SoundFile might help:

http://doc.sccode.org/Classes/SoundFile.html#-numFrames

Or normalize start position to 0…1 instead of numFrames
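A sketch of the normalized approach, assuming a hypothetical \sampler SynthDef (the name and arguments are placeholders): the pattern sends a 0…1 position, and the synth scales it by BufFrames, so the pattern never needs to know any buffer’s length:

```supercollider
// hypothetical SynthDef: pos is normalized 0..1, scaled to frames inside the synth
(
SynthDef(\sampler, { |out = 0, buf, pos = 0, amp = 0.1|
	var sig = PlayBuf.ar(
		1, buf,
		rate: BufRateScale.kr(buf),
		startPos: pos * BufFrames.kr(buf),  // 0..1 -> frames, whatever the length
		doneAction: 2
	);
	Out.ar(out, (sig * amp) ! 2);
}).add;
)
// then in the Pbind: \pos, Pwhite(0.0, 0.8)
```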


#228

This sounds simpler and more elegant, but I feel like I’d still need to find out the length of the specific sample in question…


#229

In case anyone stumbles across this and is looking for a simpler workaround, I got this response on the sc forum. You can do this inside a Pbind:

\buf, ...,
\numFrames, Pkey(\buf).collect(_.numFrames)
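Putting it together, a minimal sketch, assuming ~bufs holds your loaded Buffers and a \sampler SynthDef with buf and startPos arguments (both names are placeholders):

```supercollider
// assumes ~bufs is a collection of loaded Buffers and a \sampler SynthDef exists
(
Pbind(
	\instrument, \sampler,
	\buf, Prand(~bufs, inf),
	\numFrames, Pkey(\buf).collect(_.numFrames),
	// random start somewhere in the first 80% of whichever buffer was picked
	\startPos, Pkey(\numFrames).collect { |n| rrand(0, (n * 0.8).asInteger) },
	\dur, 0.5
).play;
)
```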

#230

"Animalation - written in SuperCollider.
Animating the Animal.
For Grid 128 and Arc 4.

Animalation is a four-track live-input live-sampler, with mlr style position
triggering, trigger recording/playback, reverse, low-pass filtering with
cutoff and resonance, fm modulation, and modulatable loop start and length.
It allows you to record whatever is coming into an input channel on your
audio interface (or built-in microphone) and play it back instantly."


#231

@fauveboy, maybe this is what you are after?


#232

“This track contains the results of experimenting with home made resynthesis. A short sung phrase has been analysed in 24 bands for pitch, amplitude and “spectral flatness” (a scale which would have at one end a sine wave, and at the other white noise.) This information is used to play oscillators of various kinds. The control data may be transposed, slowed down or speeded up, slewed, skewed, quantised, thresholded or otherwise altered. The track begins with the original recording, followed by several resynthesised results, played live with SuperCollider and containing no additional processing.”

Anyone think they can make something like this? I can’t do it. Above my pay grade. But it sounds amazing. I can’t forget it. It was on his blog a few years back


#233

It sounds like he took a recording he made, split it into 24 bands somehow

(this makes me think by frequency, à la graphic EQ, but if you’re splitting a “monophonic” signal by frequency, I don’t understand why you’d analyze for pitch! Surely you just get better or worse approximations of whatever note you were singing at that moment? Or some harmonic thereof? Actually maybe that could be useful… hmm)

And then extracted 3*24=72 “control data” values, via a pitch follower, an envelope follower and then some measure of how harmonically dense the band is (i.e. does the 60Hz band sound more like the rumbly bits of white noise or more like one sine within that band) and then used those (with plenty of processing) as “SuperCollider CV” to control the playback of various “oscillators”.

You could think of this as some kind of vocoder—indeed, it sounds like it’s in with that whole “spectral” move in experimental / computer music that I think started in 2009-ish—where instead of using just amplitude data from your voice to control the amplitude of a filterbank that’s letting through an oscillator, you’re letting more parameters modulate various different things.

If you’re interested in making something like that, my guess is that you should start by looking at “phase vocoders” in your favorite platform, be it SuperCollider or what. I wouldn’t know how to make one myself, though.


#234

I once made a Max patch that divided a signal into x amount of bands and processed them separately. After I lost the license, and later the patch, I tried without success to recreate that same patch in Pure Data and SuperCollider. No luck whatsoever.


#235

he analyzed the recording, but the playback uses oscillators, I believe


#236

I have a couple ideas on where to start. What artist/recording is this?


#237

Cylob


#238

Is there a way to hear the track? The Flash player fails with the message “Sorry, this track or album is not available”.


#239

I would use the Wayback Machine. That’s all I can think of.


#240

Wayback doesn’t seem to work. I guess not. Cylob is unreachable too.


#241

re: amp follower on filterbank

easy to cook up something straightforward like this. (i’m using 24 bands on the bark scale, but this is kind of arbitrary.)

[ https://gist.github.com/catfact/288693fa47cbd5d15f04c6452f42a7c7 ]

now, one caveat is that this is not really a “good” filterbank - supercollider doesn’t really have one out of the box. (AFAIK! would love to be corrected.)

by “good” i mean that recombining these subband signals will result in something not much like the original. and subband phase is distorted such that anything but pretty coarse analysis, like this envelope following, will not be super useful. for this it’s fine. for more stringent applications there are more effortful options.


re: spec flatness

“spectral flatness” aka Wiener entropy == geometric mean of the power spectrum / arithmetic mean. so it’s low if the signal is “peaky” and high if it’s flat.
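a quick sclang sanity check of that definition, on toy 4-bin spectra (not real FFT data):

```supercollider
// Wiener entropy = geometric mean / arithmetic mean of the power spectrum
(
var flatness = { |p|
	var n = p.size;
	exp(p.collect({ |x| log(x) }).sum / n) / (p.sum / n)
};
flatness.([1, 1, 1, 1]).postln;            // flat spectrum -> 1.0
flatness.([100, 0.01, 0.01, 0.01]).postln; // peaky spectrum -> ~0.004
)
```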

supercollider has a SpecFlatness ugen to do this calculation on an FFT chain. NB that its output is linear, but in most psychoacoustic literature it is given in -dB.

NB also that in psychoacoustics, it’s common to take the Wiener entropy of a subband for analysis. but the SC ugen won’t let you do this - it always takes the power spectrum of the whole signal. so i don’t think the results from literally feeding SpecFlatness with a bandlimited signal will be very psychoacoustically meaningful (it will mostly be a function of subband total amplitude, width, placement, and of the filter rolloff characteristics.)

that said, this naive “subband flatness” will do something, and can probably be sort of ad-hoc normalized per subband. probably good enough for music.
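for reference, that naive version is just a bandpass in front of the usual FFT chain - something like this sketch (the band frequency and FFT size here are arbitrary):

```supercollider
// naive per-subband flatness: bandlimit first, then analyze the whole spectrum
(
{
	var in = SoundIn.ar(0);
	var band = BPF.ar(in, 1000, 0.3);   // one subband; repeat per band
	var chain = FFT(LocalBuf(1024), band);
	var flat = SpecFlatness.kr(chain);
	Out.kr(0, Lag.kr(flat, 0.1));       // smooth, write to control bus 0
	Silent.ar;
}.play;
)
```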

but if you literally want to get spec. flatness of many subbands, i’d consider trying to do the filtering in the frequency domain:

  • take FFT of signal
  • use PV_Copy for each subband
  • use PV_BrickWall to bandpass
  • (the tricky part) use .pvcalc to compute the Wiener entropy of each subband directly. something approximately like:
{
    arg mags, phases;
    var n, mean, geomean, wiener, y;

    n = mags.size;
    // output arrays (magnitudes, phases), zeroed initially
    y = [ mags.collect({ 0 }), phases.collect({ 0 }) ];
    mean = mags.sum / n;
    // geometric mean via logs; .max(1e-12) guards against log(0)
    geomean = exp(mags.collect({ |m| log(m.max(1e-12)) }).sum / n);
    wiener = geomean / mean;
    // stash the result in the DC magnitude bin
    y[0][0] = wiener;
    y
}

(^ this is totally untested, and i haven’t really tried this specific method. the theory is that placing the calculated value in the DC magnitude bin, then running IFFT, should give you a DC signal corresponding to that value, which can then be written to a control bus. let me know if you’ve tried this or other tricks to get .kr out of a pvcalc!)

the bad news is, this .pvcalc function will be very resource intensive. the good news is, it can be set up to only run on the relevant FFT bins for each subband, saving resources and also giving a mathematically meaningful result.
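the wiring for one subband might look something like this (untested sketch; binLo / binHi are hypothetical bin indices, and the pvcalc function is the one sketched earlier):

```supercollider
// untested: one subband of the frequency-domain approach
(
{
	var in = SoundIn.ar(0);
	var chain = FFT(LocalBuf(2048), in);
	var band = PV_Copy(chain, LocalBuf(2048));  // duplicate the chain per band
	band = PV_BrickWall(band, -0.75);           // e.g. keep only the lowest 25% of bins
	// restrict the flatness calculation to this band's bins:
	// band = band.pvcalc(2048, flatnessFunc, frombin: binLo, tobin: binHi);
	IFFT(band);                                 // DC bin -> a DC-ish signal for a control bus
}.play;
)
```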

but TBH i’ve found supercollider a bit frustrating for this kind of low-level work on STFT data - this is one of many instances where its easier to just make a custom ugen if you have c++ chops. (here max/msp is actually quite nice since these days, you can put gen code in a phase vocoder subpatcher.)


re: control signals

well, once you have stuff on control busses, “recording, playback, transposing, quantizing” &c becomes a technically straightforward matter of using SC’s many tools for buffer and signal manipulation. but the creative possibilities quickly become manifold…


other analyses

spectral flatness on a wide-band signal is very useful i think. especially when combined / triangulated with other metrics like spectral centroid (which is conveniently SpecCentroid in SC) - this is an approximation of brightness, where Wiener entropy is a (rough) approximation of tonality.

SpecPcile gives you a specific designated point on the cumulative distribution of the power spectrum - a measure of spectral “rolloff” that can mean different things depending on where you put that point. it’s less independent from the other two.

be aware that Pitch.kr also exists; it is an autocorrelation pitch tracker that also emits the strength of the autocorrelation as a “clarity” measure. it has its own internal frequency transform and can’t be combined with other FFT chain ugens.

and finally the third-party MCLD ugens include many more spectral analysis tools, like FFTCrest, which is a different kind of “peakiness” measure (max / mean.)
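conveniently, the chain-based measures can all share one FFT - a sketch (FFT size and bus index arbitrary):

```supercollider
// several spectral measures off a single FFT chain
(
{
	var in = SoundIn.ar(0);
	var chain = FFT(LocalBuf(2048), in);
	var flat = SpecFlatness.kr(chain);    // rough tonality, 0..1 linear
	var cent = SpecCentroid.kr(chain);    // rough brightness, Hz
	var roll = SpecPcile.kr(chain, 0.9);  // 90th-percentile rolloff, Hz
	Out.kr(0, [flat, cent, roll]);        // three adjacent control busses
	Silent.ar;
}.play;
)
```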


i guess it’s appropriate to say that i’ve used a lot of musical techniques in this vein. for example the 2nd track on side A of my record at the door consists of several minutes of sines+noise synthesis based on a few seconds of viola sounds, analyzed and resynthesized in a highly arbitrary way. and for a couple of years most of my live pieces were similarly derived (e.g. 2014 SFEMF performance.)


#242

I came up with something similar:

(
{
	var in = SoundIn.ar(0);
	var f0 = 20;                 // lowest band, Hz
	var ff = (12000 / f0).log2;  // octaves spanned, so the top band stays near 12 kHz

	var ugen = LFTri;
	// var ugen = BlitB3Tri;
	// var ugen = SinOsc;

	var out = 24.collect { |n|
		var f = 2 ** (ff * n / 24);  // per-band frequency multiplier
		var b = BBandPass.ar(in, freq: f0 * f, bw: 0.5 * 8 / 24);
		var p = Tartini.kr(Lag.ar(b, 0.2), 0.99);  // [freq, clarity] (sc3-plugins)
		// var p = Pitch.kr(Lag.ar(b, 0.2), ampThreshold: 0.001, peakThreshold: 0.1);
		var a = PeakFollower.kr(b, 0.9);           // or Amplitude.kr(b)
		ugen.ar(Lag.kr(p[0], 0.2), mul: Lag.kr(a * Lag2.kr(p[1], 0.5), 0.2));
	};

	DelayC.ar(Pan2.ar(Mix.new(out)), 2, 1);
}.play;
)

The DelayC is there to mitigate feedback if you aren’t using headphones. If you are, you can get rid of it or set the delay to a small value.


#243

The BandSplitter quark (https://github.com/scztt/BandSplitter.quark) should be a basically lossless crossover / band splitting filter (lossless apart from delay), and has arbitrary order. It only goes up to 8 bands, but you just have to nest the filters further to split into more bands. If someone wants to implement BandSplitter16 (or better yet, a BandSplitterN that allows arbitrary numbers of splits), I am most definitely accepting pull requests :slight_smile:.