Tuning, Scala, Aleph waves module

yes exactly forgot to finish the thought – no interpolation between BEES scaler table entries. (the real osc rez is pretty fine)

i think i decided this based on some really weird purist thought - that it wasn't good, tuning-wise, to have linear interpolation between points on a logarithmic scale; better to keep things more predictable… this is probably insane

the GH issue i found, though, is that even after lookup, the values were not honored - i assumed because of MADD accumulator depth (40b), or something. maybe this is what you fixed

so in other words that issue is about how the combined result of the ‘hz’ and ‘tune’ params effectively quantizes the value of ‘hz’, or something like that

yes let me get back to collecting the values to really check it worked. BTW I think there might be a radix error in the tune param! 440Hz/1.00000 on BEES screen reports as 220Hz in jaaa

oh that’s probably true, i can’t hear the difference between octaves sometimes

well… unless waves is out of CPU and downsampling :smiley:

bahahaha you never know! crazy little box!

made me remember a random thought though - do you know a way to measure the time before & after running the DSP on the bfin? cos we already have working param reads - so mean, modal, & median cpu times could get bolted on as params during module dev… then you could read them back on the play screen during module testing to get a sense of CPU load

i’ve only done that by flipping a LED pin, which works fine

there is a cycle counter which you can get at through ASM (i don’t think there’s an intrinsic, but worth searching the toolchain). my attempts to make use of it as a readable parameter over SPI didn’t really pan out for some reason.

(recent misadventure with CV driver illustrates my reluctance to mess around too much with bfin core programming, blerrg)

here’s a thread about enabling cycles count
https://ez.analog.com/docs/DOC-15311
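something like this might work for reading it - a totally untested sketch, with the register & bit names taken from the bfin docs rather than anything in the aleph tree, and the param plumbing left out:

static inline unsigned int read_cycles(void) {
  unsigned int c;
  /* CYCLES is the low word of the free-running cycle counter;
     it only counts once the CCEN bit in SYSCFG has been set
     (that's what the thread above is about enabling) */
  __asm__ __volatile__ ("%0 = CYCLES;" : "=d" (c));
  return c;
}

/* then, somewhere around the audio processing call (names illustrative):
     unsigned int t0 = read_cycles();
     module_process_frame();
     last_dsp_cycles = read_cycles() - t0;   // expose this as a readable param
*/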

measurements of waves’ output frequency vs tune param:

# BEES tune param (16 bit), frequency (Hz)
8192, 219.3
8211, 219.8
8230, 220.3
8248, 220.8

therefore I calculate the resolution in this frequency range is about 0.2%, or 1/25 semitone - that’s very much accurate enough for really good JI sounds, no!?
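Quick sanity check of that arithmetic in plain C, using the first two rows of the table above:

#include <math.h>
#include <stdio.h>

int main(void) {
  double f0 = 219.3, f1 = 219.8;            /* adjacent measured steps */
  double pct   = 100.0 * (f1 - f0) / f0;    /* linear resolution, percent */
  double cents = 1200.0 * log2(f1 / f0);    /* the same step in cents */
  /* prints roughly: 0.23% per step, 3.94 cents (~1/25 semitone) */
  printf("%.2f%% per step, %.2f cents\n", pct, cents);
  return 0;
}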

But I mean really the tune/Hz scheme should constitute massive overkill, not ‘just good enough’. Just looking at the function freq_to_phase I think this part could use some norm_fr1x32 magic…
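For example, something along these lines - just a sketch of the general idea in plain C, not the actual freq_to_phase (the real code works on fract32 and the scaling details differ; 48k sample rate assumed). Keeping the intermediate in 64 bits means no low bits get thrown away before the final divide; on the bfin the same effect could come from pre-normalizing with norm_fr1x32 and leaning on the 40-bit accumulator:

#include <stdint.h>

/* phase increment = hz * 2^32 / samplerate, with hz in 16.16 fixed point */
static inline uint32_t freq_to_phase_sketch(uint32_t hz_16_16) {
  /* (hz_16_16 / 2^16) * (2^32 / 48000) == (hz_16_16 << 16) / 48000 */
  uint64_t num = (uint64_t)hz_16_16 << 16;
  return (uint32_t)(num / 48000u);   /* 48000 = assumed sample rate */
}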

1 Like

Specific scale lookup tables for the CV N?
Or a way to populate the PATTERN with note values from selected scale?
…?

You can change TXo’s scales using Scala files and a utility that generates code for the module’s firmware.

1 Like

yup I think it’s nothing to worry about - linear interpolation between 1/8-semitone steps seems to be accurate to better than 1 part in 10^4 - by my calculation you would need around 8810 steps per semitone before the linear interpolation error grew as large as a single step!

Just implemented linear interpolation for the param scaler & improved the accuracy of the frequency-to-phase calculation. This should now enable extremely fine microtonal control of the digital oscillators in either:

  • equal-temperament (aka log) paradigm… 256 increments per semitone
  • just intonation paradigm, using LINLIN to convert rational numbers into fract - the resolution is now fine enough to generate something like 30-second beat frequencies between two 330Hz waves!
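For the curious, the interpolated lookup is roughly this shape - a simplified sketch rather than the exact BEES code (function name made up; assumes 32 interpolation steps between the 1/8-semitone table entries, so values that are multiples of 32 land exactly on an entry):

#include <stdint.h>

static uint32_t hz_lookup_interp(const uint32_t *tab, uint16_t in) {
  uint16_t idx  = in >> 5;     /* which 1/8-semitone table entry */
  uint32_t frac = in & 0x1f;   /* position between entries, 0..31 */
  uint32_t a = tab[idx];
  uint32_t b = tab[idx + 1];
  /* a + (b - a) * frac / 32, with a 64-bit intermediate to avoid overflow */
  return a + (uint32_t)(((uint64_t)(b - a) * frac) >> 5);
}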
2 Likes

Ah, I thought I saw the discussion of that somewhere. Is there more detail somewhere on the process of converting and uploading the files?
Does not sound terribly easy, but perhaps not prohibitive?
@bpcmusic is there a step-by-step somewhere?

Haven’t done this for the Aleph. My work was for the TXo’s unique lookup table format. Sorry.

In my experience, the SCALA files are pretty easy to deal with and well-documented.

My question is about Scala for teletype / telex…

I just want to be able to experiment with a bunch of them, and am not sure how to go about it.

Oops - sorry about that. My brain has been a bit scrambled the last few days.

It isn’t super complicated - but if you want to add custom scales to the firmware, you need to change the module’s code.

I have an ugly little utility that generates the proper code here:

  1. Put some Scala scales in a subdirectory (I used “scl/” and found them at the Scala site: http://www.huygens-fokker.org/docs/scales.zip).
  2. Reference the scales that you want in the order that you want in a text file; see “items.txt” for an example.
  3. Run the “table_builder.py” script and reference your input file. For example: python table_builder.py -i items.txt
  4. The script will output a “scales.cpp” file in your current directory.
  5. Cut and paste the two parts of the output into the corresponding areas of the “Quantizer.cpp” and the “Quantizer.h” files in your Arduino project directory for the TELEX.
  6. Deploy your updated code to your TELEX’s teensy.

Does that help (and answer your question)? :slight_smile:

3 Likes

let me clarify the crazy purist argument a little, just for fun…

yeah, sorry - i was pretty tired last night - the endpoint of the crazy purist thinking is that the ‘tune’ parameter can give you the additional microtonal control that the hz param scaler doesn’t.

without hz param interpolation, if you leave the tuning parameter fixed, then adding N to any hz parameter value will always give you “exactly” the same change in pitch - e.g. one semitone if N=8.

if you add linear interpolation to the hz table entries, this doesn’t hold true anymore - each interpolated step represents neither a fixed change in pitch, nor a fixed change in linear frequency, but rather a fixed fraction of the gap between two adjacent 1/8-semitone table entries.

so, given some number N, where (N%32) != 0, adding N to the hz param will change the pitch / linear frequency by a different amount, depending on where in between table entries you started. this hurts my brain! (my old teacher james tenney would have called me on the carpet for designing such a system!)

i guess it doesn’t really matter - you can always just ensure the hz parameter is a multiple of 32. but that’s the crazy logic. and you’re probably right that the difference is generally insignificant. it just felt like a complicated caveat to add, easier to forgo interpolation, ensuring that any two hz param values are always “in tune” (that is, known values specified in the table), and allow more arbitrary linear control with ‘tune’

in any case - by all means, certainly add interpolation on the hz table. (but be prepared to Justify it to the Ghost of Jim)

1 Like

Yes, thank you!
We are getting closer.

I have more questions, but:

I wonder if this shouldn’t be made into its own thread, like “loading Scala scales into Telex” or some such thing. I am feeling like we are drifting off topic in this mainly Aleph-centered conversation.

Agree the errors are hard to reason about! So I decided to model the problem instead of fretting about it…

Here’s the lisp code that led me to conclude, in my last post & PR, that the linear interpolation errors are nearly two orders of magnitude too small to matter:

(defun ratio-to-semi (ratio)
  (* (log ratio 2) 12))

(defun semi-to-ratio (semitones)
  (expt 2 (/ semitones 12)))

(defun collect-ratios ()
  (loop for i below 32
        collect (list (semi-to-ratio (/ i 256))
                      (+ 1.0
                         (* (/ i 32)
                            (- (semi-to-ratio 1/8) 1.0))))))

(defun collect-semitone-errors ()
  (mapcar (lambda (el)
            (- (ratio-to-semi (cadr el))
               (ratio-to-semi (car el))))
          (collect-ratios)))

(defun compute-max-semitone-error ()
  (reduce #'max (collect-semitone-errors)))

My definition of ‘too small to matter’ is anything less than 1/100 semitone (ratio = 1.0005778). The max error due to this linear interpolation (occurring halfway between two 1/8-semitone points) is 1.13122165e-4 semitones.

I was reading this http://www.kylegann.com/JIreasons.html, and it opened my eyes to the music-theoretic possibility that there may be some interesting melodic constructs derived from (non-semitone) equal temperaments. So, once the gaps between 1/8 semitones are filled in, we can again use LINLIN to generate any equal temperament scale. The final result is (imo) a really beautiful system for juggling pitches. I guess one could even realise something like the ‘justonic’ sounds of this cranky old video ( https://www.youtube.com/watch?v=6NlI4No3s0M ) by combining Hz & tune.

aleph needs a simple chordal module - it’d be wicked to fork acid & replace one monosynth with a very simple poly synth, maybe replace half the dsyn-ish drum voices with some other computationally cheap digital percussion sounds. Microtonal groovebox!

As a silly aside, assuming the spacing between semitones on an unfretted string instrument is around 1cm, a human player would need finger accuracy on the order of 1 micron in order to beat the accuracy realised in the latest PR. Likewise, a 1V/oct modular system would need better than 10uV of DC offset stability to match the pitch stability.
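Same back-of-envelope in C, plugging in the max error from the lisp model above:

#include <stdio.h>

int main(void) {
  double err_semi = 1.13122165e-4;     /* max interpolation error, semitones */
  double cm_per_semi = 1.0;            /* assumed finger spacing per semitone */
  double v_per_semi  = 1.0 / 12.0;     /* 1V/oct */
  /* prints roughly: finger: 1.13 um, CV: 9.43 uV */
  printf("finger: %.2f um, CV: %.2f uV\n",
         err_semi * cm_per_semi * 1e4, err_semi * v_per_semi * 1e6);
  return 0;
}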

Hopefully no flaws in the above logic (or in the PR based on the same line of reasoning). The PR would be even better if the ‘in-between’ Hz values displayed correctly on the BEES INS screen. Currently there’s no visual feedback to indicate the progression between 1/8-semitone increments, which kinda sucks now that the log scale is no longer snapping to displayed values.

4 Likes

rawsc is supposed to be a simple block-processed polysynth … time time time

yeah i seem to remember KG using 22tet? ([ed] oop, no, erlich and others though…)

anyway of course i’m totally fine with the lin interp error

1 Like

Seems that rawsc is compiling no probs. Too late to actually fire up the aleph right now - do you reckon existing modules would get appreciable performance kick by simply porting toplevel module_process_frame to module_process_block? Obv porting all the frame-by-frame DSP object methods seems a little daunting, but I guess it could be done piecemeal, working downwards from toplevel DSP objects.

Assuming rawsc basically works, I might port acid over to block processing & try to squeeze a very simple polysynth into the cpu budget… Suddenly got curious about block processing - time to find out what all the fuss is about!

reckon existing modules would get appreciable performance kick by simply porting toplevel module_process_frame to module_process_block

yeah, pretty sure there would be some gains there, simply from doing more work in each interrupt (and gcc can inline per-sample functions, do common-subexpression elimination, &c)
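schematically, the difference is something like this (names & sizes illustrative, not the actual aleph API):

#define BLOCK_SIZE 64              /* illustrative block length */
typedef long fract32;              /* 1.31 fixed point on the bfin */

/* per-sample style: the audio ISR calls this once for every sample */
void module_process_frame(fract32 in[4], fract32 out[4]);

/* block style: called once per BLOCK_SIZE samples, so the per-sample work
   lives in a loop the compiler can inline into and hoist parameter reads
   out of, instead of paying the ISR overhead on every sample */
void module_process_block(fract32 in[4][BLOCK_SIZE],
                          fract32 out[4][BLOCK_SIZE]) {
  for (int i = 0; i < BLOCK_SIZE; i++) {
    /* per-sample dsp goes here */
  }
}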

one point of concern i have is that my recent attempts to use DMA for the CV output driver apparently smashed memory. there’s a chance that my application of DMA for block processing is similarly borked (though it has had other, smarter eyes on it)

sorry, again i’m travelling and this time without aleph (though can get to one if needed)

Dang, I got distracted with hardware over the holidays and didn’t touch my Aleph for this subject. I’ll have some more time in January.

On the upside, I disassembled a digital piano and learned to weld acrylic together. That was fun!

1 Like