Aleph - Bees optimization

Hi there,

After re-reading what @zebra wrote about how the numbers of presets, operators, inputs and outputs in Bees are interdependent (I could see the corresponding numbers in the file net.h), I was wondering what steps would be involved if I wanted to add more operators in there.

For instance I could remove some operators which I surely won’t need for now (ARC, HID, SERIAL), or I could bring the number of presets down to 16, etc.

I have now found out how to create my own operators and compile the whole thing, which works fine. But I’d really like to expand on some elements in my programming. So what are the steps to achieve this? I tried to change the figures in net.h but my whole Aleph hated that, obviously…
Thanks for everything!


the limit of 128 is the number of operators that can be simultaneously instantiated. removing operator classes (like ARC or SERIAL etc) will not affect this.

// max operator inputs
#define NET_INS_MAX 256
// max operator outputs
#define NET_OUTS_MAX 256
// max operators
#define NET_OPS_MAX 128
// max DSP parameter inputs
#define NET_PARAMS_MAX 256
// max presets
#define NET_PRESETS_MAX 32

all of these numbers taken together define the amount of RAM and filesystem space allocated for each scene. NET_PRESETS_MAX is the single most significant. basically, each preset allocates NET_INS_MAX input node values, NET_OUTS_MAX output node values, and so on.

so if you really, really need more than 128 operators, you could for example reduce the preset count to 16 and double everything else. but this doesn’t seem to me like a likely necessity. if you have very specific and complex needs for your bees patch, my honest advice is to learn some basic C programming and create your own operator.

by the way i’ll be the first to admit that this is an extremely inefficient use of resources. the whole system could be totally rewritten to dynamically allocate operators in a linked list. it would not even be very difficult but it would take some time and commitment.


Agreed on everything. It’s not so much the number of operators, but mostly the number of inputs, as I made a scene that uses a few LIST16 ops… Anyway I just wanted to know what the possibilities were, and that makes things very clear - thanks for your fast reply!

Actually one more question: if the number of presets were changed, wouldn’t there be a risk of somehow corrupting all previously made scenes?

i would migrate a small part of the system over to linked lists and run a performance test between the two before committing. arrays benefit from cool things like random access, cache locality, and not having to allocate a second pointer for each list element.

hi @murray, thanks.

sorry, i should have been much clearer. in the post above, by “resources” i am exclusively referring to storage and RAM, which are impacted by the parameters we’re discussing. i wasn’t referring to CPU cycles, which basically are not.

i am not suggesting using a LL interface to access op and connection data during the runtime event loop. we don’t need performance metrics to tell us that would be a bad idea; the bees network is arbitrarily interconnected and that would obvs require lots of looping over the list. (as to the point about cache - the avr32 UC3A has a single level of 16k DCACHE which is probably not gonna come into play here.)

i am just talking about converting the data structure of the aleph:bees operator network to be dynamically allocated, rather than the very strange thing that it is now.

  • this looks so crazy because it was written before we had a fully functional malloc()!

it would seem reasonable to me to malloc() each operator and add 4 or 8 bytes to link it in the op list, just for the purpose of making arbitrary editing much, much easier - delete any op at any position by unlinking and free()ing it. but this is really a minor detail.
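a minimal sketch of that unlink-and-free idea (the type and function names here are made up for illustration, not the actual bees structures):

```c
#include <stdlib.h>

/* hypothetical op list node: real operator state would live here,
   plus the extra 4/8 bytes for the link pointer */
typedef struct op_node {
    int id;
    struct op_node *next;
} op_node;

/* push a new op on the front of the list */
op_node *op_push(op_node *head, int id) {
    op_node *n = malloc(sizeof(op_node));
    if (!n) return head;
    n->id = id;
    n->next = head;
    return n;
}

/* delete the op with the given id, wherever it sits in the list:
   unlink it and free() it - no array compaction needed */
op_node *op_delete(op_node *head, int id) {
    op_node **pp = &head;
    while (*pp) {
        if ((*pp)->id == id) {
            op_node *dead = *pp;
            *pp = dead->next;
            free(dead);
            break;
        }
        pp = &(*pp)->next;
    }
    return head;
}
```

the pointer-to-pointer walk means deleting the head, middle, or tail is the same code path, which is exactly the "delete any op at any position" property described above.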

preset content should also be dynamically allocated; this is the biggest restriction in the current structure, and i’d say also the trickiest part of this hypothetical rework.

but in any case, i would certainly continue to maintain regular arrays of pointers to ops, to be used during the event processing loop. possibly these would keep pretty much the same structure (array of op pointers, array of output targets as input indices, array of input value pointers), or possibly something else would be easier / more efficient - this would depend largely on the structure of preset data, and i won’t go into the details now.

the difference is that these regular arrays would be completely rebuilt any time the op list changes, and everything would generally be much smaller and more flexible.
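the rebuild step could look something like this sketch (again using a hypothetical op_node type, not the real bees structures): walk the list once after any edit and refill the flat pointer array that the event loop indexes.

```c
#include <stddef.h>

/* hypothetical list node, for illustration only */
typedef struct op_node {
    int id;
    struct op_node *next;
} op_node;

/* walk the linked list once and fill a flat array of op pointers,
   returning the count; the event loop keeps its random-access,
   cache-friendly behavior by indexing ops[] directly as before */
size_t net_rebuild(op_node *head, op_node **ops, size_t max) {
    size_t n = 0;
    for (op_node *p = head; p && n < max; p = p->next) {
        ops[n++] = p;
    }
    return n;
}
```

the rebuild is O(n) in the op count and only runs on edits, so the runtime loop never touches the list itself.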

maybe you are wondering why this is a priority. there are at least two specific problems to solve:

  • the OP’s issue - accommodate a variety of different patch shapes - a few big, self-contained ops (like WW, or custom things), a lot of small, generic ops with many I/O nodes, many presets, no presets, etc.

  • maybe less obvious, but even more compelling to me: if the serialized scene data were small enough (and many/most scenes would be, if the serialization were more efficient) then it could be stored in the UC3A0512 internal flash. the limit would be (512K - 64K [dsp] - [code size] - [param scaling data]) - call it ~100k.

then, the aleph could run bees without an SDCARD using a single scene and DSP module, which would be really really useful.

this would be a great candidate for a bees-0.7.0 feature. not really that much work but not trivial either. would break scene binary compatibility.


yes, changing any of these constants would break compatibility with scene binaries. but you could use the beekeep utility to save JSON versions of your scenes, which should translate pretty easily to the tweaked firmware version.

basic process is:

  • build beekeep against vanilla bees sources
  • run it with your scene binary to export JSON
  • tweak scene structure
  • rebuild beekeep
  • convert your .json back to .scn

your new .scn files will not be compatible with anyone running vanilla bees.

thanks @zebra for laying out your mind so thoroughly. yeah, that comment was made after a brief scan of the code and my head was in a totally different area. i have a good understanding now of how you mean to implement the linked lists. i find defining constraints a very interesting piece of system design, especially in the context of open-source tools, as you’re never quite sure what the user will want to do with the foundation you’ve laid.
