(autoslicer, &c): development diary

an experiment. after engaging with this post, i thought it would be nice to record my process for building some supercollider components starting from an “idea” prompt. it’s a good case because it’s not too hard (i can probably get this done rather quickly in limited free time) but also has non-obvious elements and doesn’t overlap much with the usual content of tutorials for norns and/or supercollider.

if you have any questions, feel free to ask them in this thread. i may or may not answer as time permits…

acronyms used:

SC : supercollider (i know that SoftCut is the same letters, oh well)

POC : proof of concept. a version of the thing that demonstrates the validity / basic soundness of a given approach, but is probably janky/hard to use and is missing features. POC code will likely be completely rewritten / refactored.

MVP : minimum viable product. a higher bar than the POC, it may not have everything we could want in the tool, but it does the basic job. for me, an MVP should be well architected and free of obvious problems. (and preferably free of known bugs.) most importantly, it should have a humane and usable interface. MVP code is hopefully good enough to be worth maintaining and building on.


iteration 1.

we start with a pretty high level prompt:

i am thinking that this sounds like a good candidate for a supercollider demo. vanilla SC has pretty nice realtime analysis tools including a flexible onset detector, but plumbing it together with rolling buffer capture and file management is a little non-obvious.

right off the bat i should say: i’m not going to write a complete norns script here, and might not write any lua code at all. what i’ll do is treat the “monitors, samples and slices” part of the prompt as a spec for a SuperCollider component that can be used in a norns engine to fill this “autoslicing” role for any script idea that needs something like it.

i decide to reinterpret part of the prompt: rather than deciding what to save or discard in realtime, i’ll save everything segmented by onset, and each segment will have an associated metadata file with spectral analysis parameters. the “client” of the autoslicer (sampler or whatever) can read the analyses and decide how to sort or arrange the samples.

as is typical with supercollider stuff, i start it away from norns. (to be honest: i don’t have a norns set up right now! i moved house about 5 months ago but am still setting up studio space (basement work :hammer_and_wrench:); i don’t rely on norns and haven’t bothered setting up our LAN to accommodate one. and where did i pack that dongle anyway…)

but fortunately, the norns SC environment isn’t actually very special to the device (by design) and i can usefully put everything together more generically.

pretty soon this should be made into a Class, but my first pass is just interpreter code with global environment variables:

here i’ve answered some of my initial questions about whether/how certain things can/can’t be done in SC:

  • i re-learned that SendTrig cannot send array values, only single floats. bummer. have to use multiple SendTrigs with different IDs, which looks silly but more importantly involves separate OSC messages for each analysis parameter. this leads to some shaky assumptions (like that they will be received in the order they are sent); i don’t have good alternatives right now but it’s something to keep an eye on. (one alternative i discarded: write analysis params to control busses, and have the onset trigger responder read them. the problem there is the basically unknowable latency between writing to the bus and responding to the OSC trigger; here it’s quite important that all the control data be recorded from the same control cycle, and doing it all in the synthdef accomplishes that.)

  • i want analysis parameters to be averaged over the duration between onsets, or maybe over the total non-silent duration between onsets. (especially for the spectral flatness parameter, which will otherwise likely be dominated by near-silence.) so i have to solve the tiny idiomatic problem of building a resettable accumulator in the synthdef (my solution causes me disproportionate amusement) and build some sort of silence detection in there that will probably need tweaking.

  • there are some other little non-obvious things like adding lookahead to the detection circuit.
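to illustrate the first point: the client side has to stitch several single-float trigger messages back into one analysis record. here’s a rough python sketch of that reassembly logic (python purely for illustration — the real receiver is sclang — and the parameter names are invented):

```python
# hypothetical sketch: rebuild one analysis record from a run of
# SendTrig-style (id, value) messages, assuming one message per parameter,
# arriving in send order. parameter names here are made up for illustration.
ANALYSIS_KEYS = ["duration", "ampAvg", "centroidAvg", "flatnessMin"]

class TrigCollector:
    def __init__(self, keys):
        self.keys = keys
        self.pending = {}

    def receive(self, trig_id, value):
        """accumulate one (id, value) pair; return a complete record once
        every key has arrived, else None."""
        self.pending[self.keys[trig_id]] = value
        if len(self.pending) == len(self.keys):
            record, self.pending = self.pending, {}
            return record
        return None
```

note this silently assumes no messages are dropped or interleaved between segments — exactly the shaky assumption mentioned above.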
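and on the second point, the shape of the resettable accumulator is easy to show outside the synthdef. a python sketch of the per-control-frame logic (the threshold value is an assumed placeholder, and the real thing is built from UGens):

```python
# hypothetical control-rate logic: accumulate a running sum of some analysis
# value between onsets, skipping near-silent frames, and reset on each onset
# trigger. the silence threshold is an assumed placeholder.
SILENCE_THRESHOLD = 0.001

class ResettableAverager:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def tick(self, value, amp, onset):
        """process one control frame; on an onset trigger, emit the average
        for the segment that just ended and reset the accumulator."""
        result = None
        if onset:
            result = self.total / self.count if self.count else 0.0
            self.total, self.count = 0.0, 0
        if amp > SILENCE_THRESHOLD:
            self.total += value
            self.count += 1
        return result
```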

so far so good. running the demo script listens to the audio input and creates segmented audio files in the given output directory (nice), alongside many individual analysis files (not great). it keeps doing this forever until you stop it (not great.)

i’ve also switched back and forth in the POC between saving whole segments or just estimated non-silent initial durations of each segment. both seem to work OK but have tradeoffs, so i’d like it to be a selectable option.

since all the many parameters for the Onsets ugen are exposed, there are a lot of tuning variables. this is cool because you can choose parameters that work better for, say, slicing on pitch/timbre changes and not just amplitude spikes.

iteration 1.5

now is the time to evaluate the near-automatic decisions made in the POC implementation and consider alternative decisions for the next refactor/rewrite.

in the meantime we get a more detailed prompt:

i’ve already pretty much decided that i’m working just on steps 1-3 here. higher level categorization like step 4 will rely on the low-level analysis results i’m producing here. the rest of the steps are about sample playback and generative composition, which are very separate topics that are well-covered by existing examples and studies for norns.

(also we have a name: “edward slicerhands.” i feel silly typing this (sorry! lol) so i’ll just say “edward.”)

since i am reconsidering architectural decisions now, i start thinking of other ways this thing might be structured:

    1. the capturing could actually be done by softcut rather than by supercollider. SC engine would just send realtime messages as polls when segments are completed, as well as producing the analysis log on disk.

there are some advantages to that (less code, frees up SC, non-issue to bring samples back to softcut later if we want) and disadvantages (limited by softcut buffer size, can’t use softcut for anything else, can’t easily keep developing off-norns). so i’ll stick with the basic structure of doing all the DSP and buffer manipulation/saving in supercollider.

    2. we could deal with the output differently, for example by recording one big audio file with just a separate file for markers. that also has advantages (code becomes super simple, doesn’t have to manage the lifecycle of individual buffers (just DiskOut and an accurate time counter)) and disadvantages (have to either use maximal disk space, or do something a little sticky to pause recording and reconcile the timeline in the file with the analysis timeline when we excise stuff.)

(hm i realized another strong advantage to the “single file with markers” approach: it’s non-destructive, so in case (say) the lookahead parameter was a bit off and the attacks all got truncated, that can be fixed after the fact. hm…)

on balance, i think the individual output files are fine: it’s closer to the prompt for one thing, and to save disk space (or whatever) we can tweak the logic to save (say) only non-silent things, or only things that hit certain analysis criteria as suggested by the original prompt.

so, i think this architecture is basically OK, but it needs some improvements right away:

  • it needs to be a Class, both because norns wants things that way and because it’s just better and more reusable. (this is my usual flow with SC: proto/mockup in interpreter code, then clean it up as a class. the rewrite/refactor is also a chance to review and revise.)

  • it needs a “session” interface with explicit start/stop commands. that will let it track disk space and organize all the analysis results together in one spreadsheet-like file. (in the POC it’s CSV, but probably lua code defining a table would be nicer.)

  • it should have callbacks for segment completion so that it can be easily plumbed into the Engine “polls” interface and drive script logic in realtime.

  • it should have more optional behavior switches (save silence or don’t, etc)

  • i should take my own FIXME advice and make a more robust buffer-saving job queue / worker

  • thinking it might be a nice option to save the segmented sample Buffers right here in the class, so that other supercollider code (like a sampler) can just access them directly instead of going through disk I/O. (at present, we just treat these cropped buffers as temporaries and free them after writing to disk.)
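on the analysis-log point above (CSV vs. lua): emitting the session log as a lua table literal is simple enough that a sketch fits here. this is python purely for illustration of the serialization — the actual field set and file handling are not pinned down yet:

```python
# hypothetical sketch: serialize a session's analysis rows as a lua table
# literal (so a norns script could read it with dofile()) rather than CSV.
# assumes numeric values only; string values would need quoting.
def to_lua_table(rows):
    """rows: list of dicts with identical keys, one per segment."""
    lines = ["return {"]
    for row in rows:
        fields = ", ".join(f"{k} = {v}" for k, v in row.items())
        lines.append("  { %s }," % fields)
    lines.append("}")
    return "\n".join(lines)
```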

i’m about 50% through implementing these changes so that will be the topic of the next update (iteration 2.)


just stopping in to say thanks
i love this diary style peek into your mind as you work through this


For the rest of us non-developers, it is very insightful to read through your process. Thanks for being so thorough.


iteration 2

i finished cleaning up this demo and putting it in a class. that is now here:

this involved making some things less messy / magic-number driven, and also adding some behavior options and smaller features that were mentioned before:

  • session interface
  • disk/ram size limit on session, stopping automatically
  • ability to save buffers/analysis in RAM, or not
  • ability to write buffers/analysis to disk, or not
  • analysis file format management (i’ve still only really tested with CSV)

there are also more comments now. so if you are interested i think this class is in an okay place to take a look at, and feel free to ask for clarifications or reasonings (there are no dumb questions i promise, this whole exercise is, for me, 98% an experiment in communication and 2% about what the thing actually does.)

another thing i added, and maybe an interesting point about workflow: at this point i am testing the class a lot so i put a little test code in a startup file.

the way i use this is to make a soft-link from that .scd file in my local repo copy, to startup.scd in my supercollider support directory (e.g. ~/.local/share/SuperCollider on linux or ~/Library/Application\ Support/SuperCollider on macOS.) then, after making changes to class code i just hit the “recompile class library” command and it executes my test code right away.

[ed] oh right: i do something similar for classes actually. you may have a different preference but this is how i “install” the classes from the z-sc-demos repo:

(on macOS)

cd ~/Library/Application\ Support/SuperCollider/Extensions
ln -s ~/code/z-sc-demos/classes ./z-sc-demos-classes

(and for completion, as mentioned above for the startup file:)

cd ~/Library/Application\ Support/SuperCollider
ln -s ~/code/z-sc-demos/onset-slicer-startup.scd ./startup.scd

this has a couple benefits:

  • i don’t have to have git repos buried in the support dir
  • if i don’t want some repo’s classes to be included in the SC class lib, i just delete the link

next steps

the next step for me is really to test this more. disk output looks basically reasonable but i need to try more combinations of options.

here is a sample of the output, just making mouth sounds into an RE-20 dynamic mic:

please take a look if you are interested - or better yet, try using the software yourself (these are very early tests still.)

for my own amusement, my next steps will be to make some simple data visualizations, and a very basic sampler in SC that reads the in-memory session results from OnsetSlicer and maps them to MIDI notes and velocity layers according to some sorting/clustering procedure. so that will be the next iteration of this diary.

in the meantime, this could be ready to wrap in a norns engine (of course bugs are possible/likely.)

as far as the overall project of developing a script, i’d be happy to hand this off / collaborate on the next parts that touch norns directly, but i doubt i will do it on my own.

(total personal tangent):

i still haven’t set up a norns. i got a little closer! i have everything but the heavy-gauge usb cable for power.

this dev diary process has forced me to confront/reveal an interesting fact: i am not actually very interested in using norns or writing scripts for it at the moment. and really, i never have been and probably won’t be much in the future.

this is sort of odd but also kind of obvious. the fact is, i have been using supercollider for almost 25 years now (since sc1 in… 1998? 99?) and using laptops for live performance since… 2003? when i got my first one. i’m totally fine using a laptop on stage and it’s certainly more convenient at home (where i spend all my time these days anyway.)

i am very interested in small music computers and dedicated devices (though the consumer angle on them can be disturbing), but the only time i really feel a “need” for one is when i am not only playing live music, but doing so in a context where a laptop would feel really inappropriate. for example playing in someone else’s band or piece, especially something heavy and/or theatrical, and when the role of the software is fixed and limited. that’s when i have made the most use of a NUC/aleph/norns/daisy petal.

whereas when i am actually doing solo computer music stuff, it tends to involve one-off programs that i may well be editing at the show. laptop is far more convenient and ergonomic.

the norns software system in particular is not something i ever needed. i helped put it together because brian is a dear friend and i thought it seemed like a great thing for him and his friends to have. (and of course monome paid me a little, ha)

but for me it would really be more useful to do everything in supercollider and maybe have a simple graphics API for sclang to draw stuff on the norns screen. (basically a “mini-matron” that only handles screen and GPIO, not MIDI or clocks or even lua.) i’ve been sort of meaning to create that on norns, but have just not bothered. maybe some day.

or maybe norns can just be for other people to use, that is fine too.

here are some random notes that i took during this round of development (which took 2 sittings of 1-2 hours each):

  • found a typo in my initial demo involving a magic number (count of analysis parameter keys.) this illustrates how magic numbers are best avoided. they can be eliminated by making them programmatically determined, so i’ll try to do that (by maintaining an explicit list of analysis parameter keys or something…)

  • found another bug in frame offset calculations when the capture buffer has wrapped during a segment

  • i convinced myself that this “ephemeral workers” threading structure is fine for now… for one thing, implementing the necessary thread safety around the main and worker threads is sort of a PITA in supercollider anyway (we don’t have a lot of sync primitives to work with.) and anyway, it seems safe enough since we are using Server.sync liberally and all these (client-side) buffer operations are driven by server task completions.

  • observation: it’s taking me about as long to maintain this dev diary / thread as it is taking to actually write the code. at least same ballpark.

  • a thought: engineers and SC gurus may have some natural distaste for some of the decisions made here about the class structure:

    • it’s a monolith, with no architectural motivation besides convenience. i’m ok with that in this particular context but recognize that it could be broken up.
    • it makes some kinda nasty assumptions, like that it will be constructed inside a Thread or subclass. oh well. the problem there is sort of inherent in SC’s asynchronous design using network protocols between sclang (the interpreter) and scsynth (the audio processing… process,) and being able to use Server.sync really makes some things a lot simpler. (note to self: i should look at the actual implementation of Server.sync. i think it is waiting on a \sync message and may be heavier than i tend to assume.)

. . . and maps them to MIDI notes and velocity layers according to some sorting/clustering procedure.

This is a brilliant idea. From instant sampler to instant performance tool, this concept is so exciting.


iteration 2.5

i added a new classfile called SlicerHands.sc and added a very simple distribution visualizer class called SlicerEyes. (SlicerHands will be the name of the simple sampler class.)

an instance of the visualizer is now created by the startup script after it explicitly stops the OnsetSlicer session.

i started building the sampler part but haven’t yet done the most important bit: constructing a note/velocity map.

still wanted to post this partial update right away because the visualizer immediately revealed a serious bug in the average amplitude computation (it was averaging over the duration in units of seconds instead of control frames, so it was like 4000x too big.) that is fixed now.
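the size of that error follows directly from the units: dividing a sum of per-control-frame values by a duration in seconds inflates the result by the control rate (frames per second). a tiny numeric illustration — the rates here are assumed for the example, not norns’ actual settings:

```python
# the units bug in miniature: dividing a per-control-frame sum by duration
# in *seconds* instead of *frames* inflates the average by exactly the
# control rate. sample rate and block size below are assumed values.
sample_rate = 48000.0
block_size = 16
control_rate = sample_rate / block_size   # 3000 control frames per second

n_frames = 3000        # a 1-second segment, in control frames
total = 1500.0         # accumulated sum over the segment

correct = total / n_frames                  # average per frame
buggy = total / (n_frames / control_rate)   # divided by seconds instead
assert buggy / correct == control_rate      # off by exactly the control rate
```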

spotted some other bugs and issues mostly related to edge cases, which i haven’t fixed yet. all pretty minor:

  • when a session is stopped manually, the last segment is discarded instead of being stopped early. (fixing this could be done by spoofing a final onset, kinda annoying.) not an issue when automatically stopped because of hitting ram/disk limits.

  • speaking of which, the limits are enforced loosely in that either limit is allowed to be busted once before stopping the session. (IOW, it saves the buffer/data for the segment, then checks the usage, then stops the session; to be more robust i guess it would first estimate whether the limit will be busted.) could be an issue if the last segment is really long. (actually now that i think about it, maybe not a big deal since the segment lengths are supposed to be limited to the rolling buffer size anyway.)

  • once, i may have spotted an error message due to race condition because the session was explicitly stopped while the buffer-writing worker was in the middle of something (and the buffer file was still empty.) not sure though; might have been user error.

  • some benign errors occur when the session ends with an empty onset history. not a big deal.

  • oh right: i should add a settable callback for session completion. for example in this context it would allow the visualizer to spawn when the session finishes due to reaching ram/disk limits.
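the loose limit enforcement from the second bullet is easy to see in sketch form (python for illustration; the names here are made up, not the actual class members):

```python
# hypothetical sketch of loose limit enforcement: the segment is saved
# first, usage is checked after, so a single segment can bust the limit
# once before the session stops.
class Session:
    def __init__(self, max_samples):
        self.max_samples = max_samples
        self.used = 0
        self.running = True

    def on_segment(self, n_samples):
        """returns True if the segment was saved."""
        if not self.running:
            return False
        self.used += n_samples          # save first...
        if self.used >= self.max_samples:
            self.running = False        # ...then check usage and stop
        return True
```

a stricter version would estimate `used + n_samples` before saving; but as noted, segment length is bounded by the rolling buffer size anyway, so the overshoot is bounded too.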


iteration 3

added the SlicerHands basic sampler. seems to work ok.

i structured this class so that the sorting heuristic can be defined arbitrarily by the user.

the heuristic is structured as a function which accepts:

  • the session onset data produced by OnsetSlicer,

  • a list of note numbers

and returns:

  • a Dictionary where keys are note numbers, and elements are lists of IDs representing velocity layers. (these IDs are the unique timestamp-derived slugs that are used as keys for both the analysis data structure and the buffer list.)

let’s call this structure the note map.
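for concreteness, a hypothetical note map might look like this (the slug format is invented for the example — only the shape matters):

```python
# hypothetical note map: note number -> list of segment IDs, ordered as
# velocity layers. the timestamp-slug strings are invented examples.
note_map = {
    60: ["230324_112702", "230324_112704"],   # two velocity layers
    61: ["230324_112709"],                    # a single layer
}
```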

SlicerHands does a few things at creation time:

  • accepts a list of notes, which defaults to all MIDI note values (0–127)
  • builds the note map using the mapFunction parameter, or using the defaultMapFunction classvar if none is given
  • creates a Group node for each note number so that it can enforce a “solo” mode per note

and when a note-on is processed:

  • looks up the note number in the note map to get a list of IDs

  • maps the velocity to an ID from this list in a linear fashion

  • gets the buffer index, duration, and (precomputed) gain normalization factor for that ID

  • optionally (there is a behavior flag for this), stops all other synths playing on the note’s group

  • plays a self-freeing synth on the note’s group
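the velocity-to-ID step is the only slightly fiddly part of that list; here’s a python sketch of one linear mapping (the exact formula in the class may differ):

```python
# hypothetical linear mapping from MIDI velocity (1..127) to one of a
# note's velocity-layer segment IDs.
def layer_for_velocity(ids, velocity, max_velocity=127):
    """pick one segment ID from `ids` for a note-on velocity."""
    index = int((velocity - 1) / max_velocity * len(ids))
    return ids[min(index, len(ids) - 1)]
```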

i also added a sessionDoneCallback to OnsetSlicer so that the startup script can automatically spawn a visualizer and sample player when the session is done.

the startup file now uses the sessionMaxBufferSamples limit to stop after 10 seconds of total samples have been captured.

the default mapping heuristic is pretty dumb! it first sorts all the samples by minimum spectral flatness, then clumps them into N groups (where N is the size of the note list), then sorts each sub-group by average spectral centroid.

the idea is that higher notes should therefore play less-tonal sounds, and higher velocities should play sounds with more high-frequency content.

in practice this is pretty crude and has quite chaotic results! oh well, still kinda fun.
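the clumping described above, sketched in python (the key names are assumptions; the real map function is sclang):

```python
# hypothetical sketch of the default mapping heuristic: sort segment IDs by
# minimum spectral flatness, split into one clump per note, then order each
# clump by average spectral centroid to form velocity layers.
def default_note_map(session_data, notes):
    """session_data: {id: {"flatnessMin": f, "centroidAvg": c}} (assumed keys)."""
    ids = sorted(session_data, key=lambda i: session_data[i]["flatnessMin"])
    clump = max(1, len(ids) // len(notes))
    note_map = {}
    for n, note in enumerate(notes):
        group = ids[n * clump:(n + 1) * clump]
        group.sort(key=lambda i: session_data[i]["centroidAvg"])
        note_map[note] = group
    return note_map
```

note this falls over if there are fewer segments than notes — some notes get empty layer lists.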

i can easily imagine many more sophisticated ways of using the data. starting with simple algorithms from the field of data classification, such as k-means and PCA. (there is a k-means quark that is probably worth checking out.)

here is the data and samples from a session with mouth and viola sounds through an RE-20 dynamic mic. (beware, the sounds are dumb.)


[ed: sorry, this link isn’t working from discourse? because it’s a zip? because not https? huh idk; anyway i’ll un-linkify it and you can copy and paste the URL.]

and here is me jamming on that dumb sample set with a drum pad. totally ridiculous of course and i don’t know that i would ever make music this way, but i was amused! (this is with the solo mode off BTW)

[ http://moth-object.com/obj/SC_230324_112702.wav ]

for me, the main purpose of the playable live test is to check that the samples feel good as far as their onset time - not too much space at the beginning (mushy) and not having the attacks cropped. which is to say, the lookahead analysis and rolling capture buffer are working together properly. seems fine to me.

i think i’m pretty much done here. there are certainly still some rough bits; SlicerHands in particular is a bit messy and has a couple known issues:

  • will produce some errors if there are not enough samples in the onset data to distribute at least one to each note in the noteList
  • the solo mode might be a little wonky?
  • uh… shoot i forgot the other one i was thinking of

there are other utility features / classes that would be useful too. in particular it would be nice to be able to rebuild the content of OnsetSlicer.sessionData and .sessionBuffers from the disk output.

but for now, i am going to stop working on this and see if anyone else would like to pick it up for norns. (at which point i’d be happy to assist and/or fix stuff.)


please ask questions here if you want.

if you read this far, congratulations!

for now, over and out.


. . . I couldn’t agree more. Finding sense in chaos is a key part of experimentation. Every instrument should have a modulatable chaos-factor, like a tsunami that introduces more disorder as it rolls in, and then tidies everything up again on its way out.


pushed a tiny update to SlicerHands implementing a min and max input velocity. min velocity is 1 by default, and by default low velocities are ignored (but can be alternatively mapped to the lowest velocity layer.)

this is necessary for weird controllers that send velocity=0 instead of noteoff, and maybe helpful as a controller sensitivity thing. (but be aware that moving the min or max vel changes the mapped layer for all velocities!)
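roughly, the new behavior looks like this (python sketch; the parameter names are guesses at the spirit, not the actual class members):

```python
# hypothetical sketch of min/max input velocity handling: note-ons below
# the minimum are ignored by default (covering controllers that send
# velocity=0 instead of note-off), or optionally clamped to the lowest layer.
def map_velocity(vel, n_layers, min_vel=1, max_vel=127, clamp_low=False):
    """return a velocity-layer index, or None to ignore the note-on."""
    if vel < min_vel:
        return 0 if clamp_low else None
    span = max_vel - min_vel
    index = int((min(vel, max_vel) - min_vel) / span * n_layers) if span else 0
    return min(index, n_layers - 1)
```

since the scaling spans (min_vel, max_vel), moving either endpoint re-maps every velocity — the caveat noted above.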

something i thought of doing but did not just yet: make a little realtime visualizer that plots the incoming onset data as it happens, with X/Y/shape/color mapped to some of the more important params like amp/duration/centroid/flatness. this is easy to do with UserView and Pen… maybe later.

i left this thing running by accident a couple times with music playing in the room, and a MIDI sequence running, and got some weird accidental remixes. so that’s good


added the quick and dirty realtime visualizer to the startup script. it shows segments as greyscale dots. could certainly make more careful decisions about range scaling etc. but it is useful to watch e.g. how well centroidAvg and flatnessMin are working as discriminators. (TBH it is looking a bit wonky, i should probably re-visit the “average over non-silent control blocks” logic.)

i had forgotten that you can define a custom draw function directly on a Window, so UserView is only needed for more complex layouts.

also i had erroneously failed to push the maxVelocity variable.


my final update on this thread i think, for now? used this code with some tweaks, on norns, in a performance this weekend, alongside a modified version of one of my things from dronecaster. it works. it’s not very complicated: some of the potential further developments i’ve hinted at (k-means clustering etc) indeed push us into “flucoma territory” where one might be well served by looking at existing work. but yes, you can (relatively) easily make a useful “realtime autoslicer” with feature analysis out of vanilla SC stuff and run it on norns; this particular small pile of code gets you 95% there and the remaining 5% is creative / individual (for me at least.)

[… at this point i became unmotivated to continue the experiment and considered it to have an essentially negative result.]

in any case, thanks again for reading along!

and particular thanks to @branch for pushing me to perform this coding exercise. always useful to try things. i hope someone picks it up and makes something useful to you.


For what it’s worth, this has entered (days ago) my bucket of projects that might be fun to run with — but that bucket is admittedly big.

I hope someone else has done so before I get a chance to.


I want to thank you for showing interest in this idea, and I’m sorry this thread did not get the engagement it deserves. In the context of communication, this experiment may have yielded a negative result, but as a PoC it surely was a success if you managed to use it on stage. Speaking of which, any chance we can see/hear this performance?

I’m absolutely certain that this real-time autoslicer has quite a few solid use-cases, and that many others would benefit immensely from a tool like this; heck, even as a mod that could interact with other sound-making scripts. So where do we go from here? I would definitely be interested in helping with testing, and with creating audio/video content that may inspire others to continue this project, because ultimately it was born of real-life necessity.

Meanwhile, my hat goes off to you @graymazes. Thank you sincerely for putting your work out there, it certainly ain’t easy!


eh. this is about how i engage in software for music. mostly it is one-off programs. but i self-cannibalize as much as possible because the software is the boring part. you know?

well, to come full circle (and this is long / tangential so it gets the triangle treatment)

i performed a very brief (~4min) accompaniment to my friend Madison Brookshire’s film two suns, for an audience of maybe 150-200, at The Lab in san francisco’s mission district, as part of the Light Field experimental film festival. i’ve posted about this elsewhere. to me, it was a simply astonishing feat of film curation (not to mention technical projection work) - many many rare works over 9 seperate 1 or 2-hour programs. (particularly moving to me was the Amy Halpern programming… but i digress)

two suns is a filmic musical score; two images of the sun are projected through 2 apertures in real time, onto 5 lines drawn on paper. the blobs are kind of large (covering more than one “note”) and move at a rate that is inhumanly slow and slightly nonlinear (because of the angle of projection), as well as non-equal between the two holes. i think it is a great piece of conceptual score making - “conceptual” because it engages the mind in an interpretive fashion and encourages extreme attention to / contemplation of physical detail, as well as engaging with / “problematizing” the whole edifice of western music theory. (“oh i’m gonna read this as mezzo-soprano C clef… wait wtf am i doing, who am i, why?” etc)

i have actually performed this piece before in an acoustic ensemble, but when invited this time i wanted to use non-human processes to implement the “sundial” rate (which in this case translates to avg 1 semitone per minute.) i also wanted it to be two very distinct voices each with a “cloudiness” that could be varied. one voice was the dronecaster graph called UNMEMQUA with more arguments broken out, controlled by my collaborator / duo partner. (there is something nice about this one because of its emphasis on a rather large span of “fundamental” to higher harmonics, without being specifically tonal, plus stacking of noise flavors, plus slow rational-but-chaotic modulation… it’s especially satisfying given my particular form of hearing damage… but i digress again.)

the other voice was a set of chimes that i played into an instance of OnsetSlicer. each captured segment was fed to a kind of multitap/crossfaded ping-pong looper that would bend upward at a rate that was carefully calibrated to yield a specific curve in the pitch domain (not the freq or rate domain, making e.g. softcut less usable.)

so there ya go. i agree, it is most satisfying for something to finally be used in art making… enough so that there is little reason to work on things otherwise. in this case i could have done it with a button, but it was fun to let it go a little wild and (e.g.) allow mic feedback into the mix. sounded nice in the room.

there was an audio recording made with some decent-looking equipment, but no idea if it will be made available. not really something i’d be inclined to share if it were “my piece” personally but idk.


lordy, i hear that.

am considering options… pushing my github sponsors page as a motivator towards setting hours aside to “finish” or “productize” ideas that i am almost positive have substantial value to others? idk. (i have 1 sponsor right now.)

something happened in the last… 3, 4 years? somehow i started counting my life in hours. ( i am (40+x) years old)

ugh blah delete me


Just want to say that I’ve quite enjoyed this thread. Life is very busy for me at the moment, but I hope to one day get back to super collider.
This thread has been a great insight into your process and I’ve enjoyed seeing how you’ve gone about it.
Perhaps sc/norns dev isn’t as daunting as I remember.

Thank you for taking the time to share.


well…musical form is in the ear of the beholder. :stuck_out_tongue:

is there a simple link to install the script into norns somewhere?
(i think i missed it along the way) :crazy_face:


I’d just like to second what @TomFoolery wrote. Thank you very much for putting this together. I am still far away from learning supercollider but it’s something on my list and I believe that threads like this are invaluable for advanced learning. Being able to follow your train of thought and dev decisions helps a lot and makes the experience so much more interesting.

don’t think we’re quite there yet:

as far as I understand it, the supercollider engine is done and now it’s up to us to weave it into a norns script. something I’d love to do, but at the moment I have too many unfinished norns scripts that need attention.


Just wanted to quickly say that I’ve been studying the script and all the notes you shared in here and that they are very helpful for me to understand the more advanced bits of supercollider and the dev workflow in general… Sometimes I try to get everything together in my head before making questions out loud, but happy to know there’s a community willing to hear if I ever decide to ask.
Grateful to you for sharing :pray:


point taken - that part of my comment was unnecessary and snarky. sorry about that.

i’m just exploring how engagement works in the llllllll of today. i think it is a bit product-focused; which is fine really, but i’m allergic to discord. (btw: “product” not implying commercial product, but “usable thing.”)

i’ll try and make some time to wrap OnsetSlicer and perhaps SlicerHands in a norns script that seems fit for semi-public consumption, while tweaking some of the algorithms and behaviors. not sure when that will be but probably within a week. (i have some travel time coming up and that’s a good opportunity for 2-4 hour projects.)

i still hope that this will be essentially a collaborative process and not just me trying to deliver a product. (i don’t see norns as a product platform personally, hence my reluctance to “release” the programs i run on it.)

and re: collaboration, i would be curious about starting a conversation with e.g. @sixolet regarding modularity of supercollider components. like, onset slicing in supercollider logically and functionally doesn’t need to “consume” the entire “engine slot,” it should be combined with sample playback or whatever else.

in the classes shared here, the buffer and analysis data are available directly to other SC components, but the components i’ve added for playback are not very flexible or generic. it would be more sensible to combine it with a softcut-based component and/or other SC-based sample playback or synthesis.

i haven’t done a lot of code-diving on e.g. the nb ecosystem, and wonder if there is time / appetite for perhaps a high-level description of that SC architecture. is this a good place to have it?

BTW! thanks to a link posted here, i have now tripled my number of github sponsors! thank you ever so much for your show of support. it is now enough to justify a regular cadence of development time dedicated to these projects. (not a huge amount but some.) that could include components like this, but also norns core features / improvements and other very different softwares. i am thinking of retitling this thread to diary whatever project i am working on. i really really do hope it becomes a little more interactive but maybe that’s a silly thought.

anyway, the next thing i’d be most interested in doing personally is building out the dowser program in several ways. specifically i’d like to integrate it directly with some of the resynthesis methods i’ve built up around that kind of spectral peak tracking data, and refactoring it into a norns-compatible library and a desktop application layer. this just happens to be a kind of digital sound manipulation that i have found personally compelling since forever, but still do not have a 1-stop-shop kind of tool for executing.