yes exactly forgot to finish the thought – no interpolation between BEES scaler table entries. (the real osc rez is pretty fine)
i think i decided this based on some really weird purist thought - that it was somehow wrong, tuning-wise, to have linear interpolation between points on a logarithmic curve, better to keep things more predictable… this is probably insane
the GH issue i found, though, is that even after lookup, values were not honored - i assumed because of MADD accumulator depth (40b), or something. maybe this is what you fixed
so in other words, that issue is that combining the ‘hz’ and ‘tune’ params effectively quantizes the value of ‘hz’, or something like that
made me remember a random thought though - do you know a way to measure the time before & after running the DSP on bfin? cos we already have working param reads - so mean, modal & median cpu times could get bolted on as params during module dev… then you could read them back on the play screen during module testing to get a sense of CPU load
i’ve only done that by flipping an LED pin, which works fine
there is a cycle counter which you can get at through ASM (i don’t think there’s an intrinsic, but worth searching the toolchain). my attempts to make use of it as a readable parameter over SPI didn’t really pan out, for some reason.
(recent misadventure with CV driver illustrates my reluctance to mess around too much with bfin core programming, blerrg)
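fwiw, here’s roughly how i’d sketch it - a guarded CYCLES/CYCLES2 read plus a little stats struct that could back those cpu-load params. all names here are made up for illustration (nothing from the aleph tree), and the inline asm is from memory of the bfin ISA, so double-check it against the toolchain:

```c
/* sketch: reading the blackfin cycle counter & accumulating simple
   per-frame cpu stats that could be exposed as read-only params.
   hypothetical names throughout - not code from the aleph repo. */
#include <stdint.h>

#ifdef __ADSPBLACKFIN__
/* CYCLES/CYCLES2 form a free-running 64-bit counter; iirc reading
   CYCLES latches CYCLES2, so read CYCLES first */
static inline uint64_t read_cycles(void) {
  uint32_t lo, hi;
  __asm__ __volatile__("%0 = CYCLES; %1 = CYCLES2;" : "=d"(lo), "=d"(hi));
  return ((uint64_t)hi << 32) | lo;
}
#else
/* host stub so the stats logic can be exercised off-target */
static uint64_t fake_cycles = 0;
static inline uint64_t read_cycles(void) { return fake_cycles; }
#endif

/* running min / max / mean of per-frame cycle counts */
typedef struct {
  uint64_t min, max, sum;
  uint32_t count;
} cycle_stats_t;

static void stats_init(cycle_stats_t* s) {
  s->min = UINT64_MAX;
  s->max = 0;
  s->sum = 0;
  s->count = 0;
}

static void stats_add(cycle_stats_t* s, uint64_t cyc) {
  if (cyc < s->min) s->min = cyc;
  if (cyc > s->max) s->max = cyc;
  s->sum += cyc;
  s->count++;
}

static uint64_t stats_mean(const cycle_stats_t* s) {
  return s->count ? s->sum / s->count : 0;
}
```

usage would be something like `t0 = read_cycles(); /* dsp */; stats_add(&s, read_cycles() - t0);` then expose `stats_mean()` &c through the existing param read path. (mode & median need a histogram or sorted buffer, which i left out.)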
yup I think it’s nothing to worry about - linear interpolation between 1/8-semitone steps seems accurate to better than 1 part in 10^4 of a semitone - by my calculation you’d need something like 8810 steps per semitone before the linear interpolation error became comparable to a single step!
Just implemented the linear interpolation for param scaler & improved frequency-phase calculation accuracy. This should now enable extremely fine microtonal control of the digital oscillators in either:
equal-temperament (aka log) paradigm… 256 increments per semitone
just intonation paradigm using LINLIN to convert rational numbers into fract - the resolution is now fine enough to generate something like 30-second beat periods between two 330Hz waves!
Ah, I thought I saw the discussion of that somewhere. Is there more detail somewhere on the process of converting and uploading the files?
Does not sound terribly easy, but perhaps not prohibitive? @bpcmusic is there a step-by-step somewhere?
let me clarify the crazy purist argument a little, just for fun…
yeah, sorry - i was pretty tired last night - the endpoint of the crazy purist thinking is that the ‘tune’ parameter can give you the additional microtonal control that the hz param scaler doesn’t.
without hz param interpolation, if you leave the tuning parameter fixed, then adding N to any hz parameter value will always give you “exactly” the same change in pitch - e.g. one semitone if N=8.
if you add linear interpolation to the hz table entries, this doesn’t hold true anymore - each interpolated step represents neither a fixed change in pitch, nor a fixed change in linear frequency, but rather a fixed fraction of the linear difference between two adjacent 1/8-semitone table entries.
so, given some number N where (N % 32) != 0, adding N to the hz param will change the pitch / linear frequency by a different amount, depending on where between table entries you started. this hurts my brain! (my old teacher james tenney would have called me on the carpet for designing such a system!)
i guess it doesn’t really matter - you can always just ensure the hz parameter is a multiple of 32. but that’s the crazy logic. and you’re probably right that the difference is generally insignificant. it just felt like a complicated caveat to add, easier to forgo interpolation, ensuring that any two hz param values are always “in tune” (that is, known values specified in the table), and allow more arbitrary linear control with ‘tune’
in any case - by all means, certainly add interpolation on the hz table. (but be prepared to Justify it to the Ghost of Jim)
I wonder if this should not be made into its own thread, like “loading Scala scales into Telex” or some such thing. I am feeling like we are drifting off topic in this mainly Aleph-centered conversation.
My definition of ‘too small to matter’ is anything less than 1/100 semitone (ratio = 1.0005778). The max error due to this linear interpolation (occurring halfway between two 1/8-semitone points) is 1.13122165e-4 semitones.
I was reading this http://www.kylegann.com/JIreasons.html, and it opened my eyes to the music-theoretic possibility that there may be some interesting melodic constructs derived from (non-semitone) equal temperaments. So, once the gaps between 1/8 semitones are filled in, we can again use LINLIN to generate any equal temperament scale. The final result is (imo) a really beautiful system for juggling pitches. I guess one could even realise something like the ‘justonic’ sounds of this kranky old video ( https://www.youtube.com/watch?v=6NlI4No3s0M ) by combining Hz & tune.
aleph needs a simple chordal module - it’d be wicked to fork acid & replace one monosynth with a very simple poly synth, maybe replace half the dsyn-ish drum voices with some other computationally cheap digital percussion sounds. Microtonal groovebox!
As a silly aside, assuming the spacing between semitones on an unfretted string instrument is around 1cm, a human player would need finger accuracy on the order of 1 micron to beat the accuracy realised in the latest PR. Likewise a 1V/oct modular system would require better than 10uV of DC offset stability to match the pitch stability.
Hopefully no flaws in the above logic (or in the PR based on the same line of reasoning). This feature would be even better PR if the ‘in-between’ Hz values displayed correctly on the BEES INS screen. Currently there’s no visual feedback to indicate the progression between 1/8-semitone increments, which kinda sucks now that the log scale is no longer snapping to displayed values.
Seems that rawsc is compiling no probs. Too late to actually fire up the aleph right now - do you reckon existing modules would get an appreciable performance kick by simply porting the toplevel module_process_frame to module_process_block? Obv porting all the frame-by-frame DSP object methods seems a little daunting, but I guess it could be done piecemeal, working downwards from the toplevel DSP objects.
Assuming rawsc seems to basically work, I might port acid over to block processing & try to squeeze a very simple polysynth into the cpu budget… Suddenly got curious about block processing - time to find out what all the fuss is about!
reckon existing modules would get appreciable performance kick by simply porting toplevel module_process_frame to module_process_block
yeah, pretty sure there would be some gains there, simply from doing more work in each interrupt (and gcc can inline per-sample functions, perform commoning, &c)
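a toy picture of why that helps - same one-pole smoother both ways (the names echo the module_process_frame / module_process_block split but this is standalone sketch code, not from any module):

```c
/* frame- vs block-based processing of the same one-pole smoother */
#include <stddef.h>

#define BLOCK 64

static float state = 0.f;

/* per-frame: one call (and its overhead) per sample */
static float process_frame(float in) {
  state += 0.1f * (in - state);
  return state;
}

/* per-block: one call per BLOCK samples; the loop body can be inlined,
   common subexpressions hoisted, &c */
static void process_block(const float* in, float* out, size_t n) {
  float s = state; /* keep state in a register across the loop */
  for (size_t i = 0; i < n; i++) {
    s += 0.1f * (in[i] - s);
    out[i] = s;
  }
  state = s;
}
```

the per-block version skips a call per sample, keeps state in a register, and hands gcc a loop it can actually optimize - while producing the same output as the per-frame version.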
one point of concern i have is that my recent attempts to use DMA for CV output driver apparently smashed memory. there’s a chance that my application of DMA for block processing is similarly borked (though it has had other, smarter eyes on it)
sorry, again i’m travelling and this time without aleph (though can get to one if needed)