Monome modular firmware on aleph

Continuing the discussion from Orca op – aleph bees? and elsewhere

so i have been thinking about how to make a unified wrapper layer for monome modular firmwares to run as standalone apps on the aleph.

generally, all the firmwares in [ https://github.com/tehn/mod/ ] hew pretty closely to the aleph app framework already, except instead of sharing a main.c and customizing the event handler tables and timer init routines, each one provides a different main.c in which the entirety of the module functionality resides.

i haven’t fully diffed the contents of skeleton/ and system/ with the aleph’s avr32_lib but they seem quite similar, aside from the obvious differences:

  • they target a different processor (register locations, etc), so different bits of the ASF

  • they take clock input from a GPIO interrupt to generate a clock event (kEventTrigger in system and kEventClockNormal in skeleton, i’m guessing): [ https://github.com/tehn/mod/blob/master/system/interrupts.c#L81 ]

  • they control different output peripherals, of course, to put out CV and stuff. somewhat crazily, to my thinking, instead of having drivers for these peripherals, there are uncommented calls to the low-level SPI driver pasted right into the module functions in main.c! [ https://github.com/tehn/mod/blob/master/kria/main.c#L488 ] (a rough sketch of what a thin driver wrapper could look like follows this list.)

  • there are some additional features like a HID keyboard driver that never made it back ‘upstream’ to the aleph codebase.
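
to illustrate the kind of thin driver wrapper i mean for the CV outputs - a minimal sketch only; the function, chip-select and command-word details here are made up, not the actual module code:

    // dac.c - hypothetical wrapper around the raw SPI calls currently in main.c
    #include <stdint.h>
    #include "spi.h"        // ASF SPI master driver

    #define DAC_SPI       (&AVR32_SPI)   // board-specific; would live in a config header
    #define DAC_SPI_NPCS  0

    // write a 12-bit value to one DAC channel
    void dac_write(uint8_t chan, uint16_t val) {
      spi_selectChip(DAC_SPI, DAC_SPI_NPCS);
      spi_write(DAC_SPI, (chan << 12) | (val & 0x0fff));   // command word format is device-specific
      spi_unselectChip(DAC_SPI, DAC_SPI_NPCS);
    }

the module functions in main.c would then just call dac_write() and the board-specific SPI details stay in one place.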

so, the wrappers i’m thinking of would have to:

  • provide a UI for assigning outputs to blackfin parameters

  • provide a UI for selecting DSP module (and storing it in flash, if there’s room; there should generally be, since the modules are uc3b0256 yeah?)

  • provide clock input from ADC or something

  • abstract some board config stuff out of the library (rough sketch of what i mean just below).
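
i’m imagining the wrapper hands the ported module code a little interface struct, something roughly like this (names entirely hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    // hypothetical i/o interface, filled in per target (aleph wrapper, module, simulator)
    typedef struct {
      void (*set_cv)(uint8_t chan, uint16_t val);          // on the aleph: route to a blackfin parameter
      void (*set_gate)(uint8_t chan, bool on);             // trigger / gate outputs
      void (*nvram_write)(const void* src, uint32_t len);  // preset storage
    } module_io_t;

the ported module code only ever calls through this struct, and each target supplies its own implementation.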

it would maybe be a good opportunity to clean up the module code a bit anyway and cut down on copy-paste; make life easier for future development on the modules.

@tehn, @ngwese, @scanner_darkly, @rick_monster, any comments?

hmmm so my thinking is that one possible way to attack this is to first address clean cross-compilation of existing aleph modules and apps targeting linux. Starting with dacs & mix, just to set a more manageable initial goal. Obviously doesn’t help a jot with the required changes to avr32_lib, porting HID drivers, etc.

Going to resist the temptation to harp on about how much easier it will be debugging modular firmware ports on a linux emulator till there’s some working code to back up this utopian vision…

i quite agree that this would be useful… though of course debugging sometimes gets hardware-specific, plenty of the time it’s just the usual logic problems that would be easy to catch with GDB. (not that you can’t run the aleph with an ICE on either processor, but that’s not so accessible.)

because i agree, the aleph app framework is separate, and app code is basically platform-agnostic. (in fact i just added c++ stuff to bees sources in my fork, so i can build bees in a JUCE app.) so that’s a first step for module code.

it’s a little more daunting to provide emulation for all the hardware bits to make a linux build actually do anything. nothing too challenging in itself, but plenty of moving parts and probably a good number of hours. aleph/utils/avr32_sim/ is there to at least provide something for app code to link against; i’ve actually been cleaning it up, at least to suppress lots of warnings, make sure function bodies are preprocessed out instead of commented out, and bring a few function signatures in sync with more recent changes. so it’s largely a question of getting in there and making the function bodies do useful stuff for things like filesystem, timers, screen, usb, etc.
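
to give a flavor of ‘making the function bodies do useful stuff’: a simulated stub can usually just forward to a posix equivalent. rough sketch, not the actual avr32_sim code - the names just mirror what app code calls on hardware:

    // avr32_sim stubs standing in for the hardware versions (sketch)
    #include <stdio.h>
    #include <unistd.h>

    void print_dbg(const char* str) {
      fputs(str, stdout);        // on hardware this goes out the debug USART
    }

    void delay_ms(unsigned long ms) {
      usleep(ms * 1000);         // on hardware this busy-waits on the cycle counter
    }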

in some cases the hardware emulation is a design problem in itself. like, how do you emulate CV and encoders? an OSC layer? it’s not a bad idea; would be nice to simply have a make target to build the modules as OSC sequencer apps.
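
e.g. with liblo it could be as simple as this (sketch; the host/port and OSC paths are made up):

    #include <lo/lo.h>   // liblo OSC library

    static lo_address osc_out;

    void sim_cv_init(void) {
      osc_out = lo_address_new("localhost", "8000");   // wherever the host app listens
    }

    // stands in for the hardware DAC write in the linux build
    void sim_cv_write(int chan, int val) {
      lo_send(osc_out, "/aleph/cv", "ii", chan, val);
    }

encoders and switches could go the other way: a small OSC server thread that raises the corresponding UI events into the app’s event queue.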

(happy to move this discussion to aleph github if that makes more sense BTW.)

yeah I saved soooo much time prototyping on linux for my module and serial prototype - even with really bad basic tools that wouldn’t link against the full code…

was thinking osc for everything apart from screen - but I see you’re already making some inroads with the JUCE thing so I’ll just focus on offline module dev for now. gradually moving initial hacks from grains module out into utils/bfin_sim right now…

I think one of the big upsides for doing this for the module users is that it opens up the possibility of having multiple apps in a single firmware. E.g. being able to load up kria and orca on a WW, with some sort of UI to switch between them and manage saving and loading.


Also, it’s slightly off-topic, but can I put a vote in for splitting the module firmwares up into separate git repositories (plus some submodule magic to bind things together)? I’m more than happy to do the work for this, and it feels like the kind of thing that might make this task easier, particularly if one of the end objectives is to merge system, skeleton and avr32_lib.


I’m not an expert on bees at all, but is it possible to go the other way… run bees on the modules with a statically configured network? Then you could convert the module firmwares to bees ops, and then run those in bees on the modules. I’ve just had a look in the other thread and it seems like bees might be a bit too ‘heavy’?

multiple apps in a single firmware.

this seems straightforward in some ways. but flash size is a significant limitation.

[sorry, i keep editing this as i peer at the objdump outputs.]

the code and data space in each module’s main doesn’t actually seem that significant, so i think there is plenty of room in flash for lots of different module logics. but they all use different amounts of NVRAM for preset/scales data (WW is ~30k, kria is ~60k), and each one tries to put that data at the top of the nvram section, so there would need to be some wrangling of that. (max nvram looks to be 128k by default for all the modules, which leaves another 128k for code space, if i read it right.)


is it possible to go the other way… run bees on the modules with a statically configured network

i guess so? kind of, but bees assumes a lot of specifics about the available peripherals - as do all the modules, of course, but to a much more involved degree. it is also pretty specifically designed so the patch can be edited with the aleph UI (maybe not ideal, but still.) bees could be stripped down a lot and respun for the other boards, but there are probably better systems for offline program generation if that is the only goal.


submodules

seems like a generally good idea; pretty confusing to me to set this up on github though, particularly with forks and stuff. of course it would mean extra care not to break anything with changes to the lib modules.


one of the end objectives is to merge system, skeleton and avr32_lib.

well at some level of course there will be differences. the modules and aleph have totally different processors for one thing (UC3B0256 vs UC3A0512 - maybe some of the module hardware is UC3B0512? i’m seeing linker scripts for both.) i don’t really know what the functional differences are, but i don’t think they’re profound. of course they have different register maps and whatnot, and they use different parts of the ASF, but that stuff can be customized in the library headers and makefiles.

of course this all means work to do and i don’t expect monome to give it any kind of a high priority.

Teletype uses the 512k flash version.

I’ll have a look at submodules and see if I can come up with some solutions to forks. Mainly I’d just like to give the module source code a bit of a tidy, I’m not really a C coder at all, so the tidier it gets the easier it is for me to fiddle with.

I’d like to merge system and skeleton, and if that’s a bit beyond my abilities then I’d like to make it as easy as possible for others to do so. Splitting the source code up and using submodules to bring in skeleton and the asf might be something that makes that a bit easier. Also there are quite a few binary bits in the repo that shouldn’t be there; if I split them up I can do some history rewriting along the way to get rid of them. I’m finding that git runs slowly every now and then on my computer in this repo.

(I’ve had a quick look and it looks like teletype uses system and everything else uses skeleton.)

That’s my gut feeling; I saw your first reply and started fiddling with nm et al. I do find the outputs of those programs really confusing though!

Scales data could hopefully be common, I know Orca also uses the same scale data as WW.

In the module code a lot of the variables are statically allocated. Would that need changing to stack allocated instead?

[ redacted ]

sorry, i totally misread the linker map. of course .bss is in RAM.

still looks like stack size is only 4k unless it is defined somewhere else (not finding it with fgreps though): https://github.com/tehn/mod/blob/master/system/link_uc3b0512.lds#L85

by contrast, on the aleph we have lots of SRAM. so in bees, there is a giant structure that holds basically all state variables, including the op memory pool (>256K) and it is heap-allocated. [ https://github.com/catfact/aleph/blob/dev/apps/bees/src/net.c#L213 ]

so yeah, kria/orca/etc could be operators pretty easily. (even teletype, except for UI issues.) makes more sense the more i think about it. refactor all the static vars into the op class structure where they belong. they won’t eat memory until the op is instantiated, and i think we have plenty of code space left in bees to accommodate the logic.
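
roughly what i mean, following the shape of the existing bees operator structs (field names here are just illustrative; the state vars would come from whatever is currently file-scope static in kria/main.c):

    // hypothetical op_kria_t - all state lives in the instance, not in statics
    typedef struct op_kria_struct {
      op_t super;                 // common operator header (i/o counts, names, handlers)
      volatile io_t clock;        // input node: external clock, focus, etc.
      op_out_t outs[4];           // output nodes: CV / triggers
      // former statics from main.c move in here, so they only use RAM when instantiated:
      u8 pos[4];
      u8 pattern[16][4][16];
      // ...
    } op_kria_t;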

the operator versions would have to have their preset space significantly reduced though. the main objection i have to putting these things in bees is that it limits flash space; i’d really like to rework scene storage so that we can store a single scene plus DSP module internally; the more flash available the better.


by the way, i think i was wrong about this:

each one tries to put that data at the top of the nvram section. so there would need to be some wrangling of that.

in fact, the nvram assignment uses a gcc section attribute, so i think multiple uses of this idiom would just get placed next to each other.
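
i.e. the idiom is roughly this (struct contents vary per module; i’m paraphrasing, not quoting the actual source - u8 etc. are the lib’s basic types):

    // preset/scale data pinned to the dedicated flash section by the linker script
    typedef struct {
      u8 preset_select;
      u8 glyphs[8][8];
      // ... patterns, scales, etc
    } nvram_data_t;

    static __attribute__((__section__(".flash_nvram"))) nvram_data_t flash_store;

so two firmwares compiled into one image would just get their nvram_data_t blobs laid out one after another in .flash_nvram, as long as the section is big enough for both.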

So could you then take, say, op_life, or the hypothetical op_kria, manually instantiate it:

op_life life = { ... };

along with anything else you need

op_ww_output output = { ... };

wire them up together and run it on a module. i.e. use the infrastructure of bees without actually using bees itself.

By this, do you mean only able to store a single preset (rather than the 8 you get on a module)?

op_life life = { … };

along with anything else you need

op_ww_output output = { … };

wire them up together and run it on a module. i.e. use the infrastructure of bees without actually using bees itself.

you raise an interesting point. refactoring a modular firmware as an operator is basically equivalent to abstracting it from the direct hardware interfaces. so in that sense, actually, i agree that it works to think of the operator classes as being portable to modulars rather than the other way around.

in fact, yes, i think this is a very good idea. input nodes in bees are just function pointers. in setting up a module framework you could just have hardcoded input nodes corresponding to the DAC channels and so on, explicitly instantiate operators and patch their outputs to those nodes.
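
a hypothetical sketch of that (the wiring and field names aren’t the real bees API, just to show the shape; dac_write / GATE_PIN are placeholders):

    // an 'input node' is just a function taking a value
    typedef void (*in_node_t)(s32 val);

    // nodes bound to the module's hardware outputs
    static void node_cv_a(s32 val)   { dac_write(0, (u16)val); }
    static void node_gate_a(s32 val) {
      if(val) { gpio_set_gpio_pin(GATE_PIN); } else { gpio_clr_gpio_pin(GATE_PIN); }
    }

    // at init: instantiate the op statically and patch its outputs straight to the nodes
    static op_kria_t kria;

    void module_init(void) {
      op_kria_init(&kria);
      kria.out_node[0] = node_cv_a;     // CV out
      kria.out_node[1] = node_gate_a;   // gate out
    }

no network, no scene storage - just the op, driven by the clock event and patched to fixed outputs.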


By this, do you mean only able to store a single preset (rather than the 8 you get on a module)?

not really, a ‘scene’ in bees is a collection of operators patched together, potentially a large number. each scene also contains a layer of preset memory (32 presets.) they live in RAM along with the rest of the network state. each preset can affect the connectivity of the operator graph and the values of the input nodes in the graph in a basically arbitrary way. additionally each preset stores up to 256 DSP parameter values.

so a “scene” in bees is quite large, and actually this feature i described has been held up by the need to build a more efficient packed storage format for these things. (their runtime structure is fast, but big and flat, and mostly empty most of the time.) so we write scenes to sdcards, which kind of sucks.

So, compile-time bees basically :)

It seems like an interesting idea to explore. I guess trying to get something like op_life along with some meaningful output running on a WW is the next step then? Proof of concept.

ahh! i have much to contribute here but will not have real computer access until saturday (i can’t do long phone typing, evidently i am too old)

many exciting developments here that i highly support

It might be more productive for all the devs, but I enjoy reading about the progress in threads like this

I vote for at least some discussion to stay here if it’s cool with yall

some thoughts:

i think just separating the engine and abstracting inputs/outputs/events should be enough for 90% compatibility right there. even if it wasn’t done already, it should be very straightforward to find all the instances where peripherals are accessed directly and replace them with proper functions.
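
to be concrete about what that find-and-replace looks like - a before/after sketch, where the replacement function name is made up and the “before” is only roughly the pattern in main.c:

    // before: engine code pokes the SPI driver directly
    spi_selectChip(SPI, DAC_SPI);
    spi_write(SPI, 0x31);
    spi_write(SPI, cv0 >> 4);
    spi_unselectChip(SPI, DAC_SPI);

    // after: engine calls a platform function; each target provides its own version
    outputs_set_cv(0, cv0);   // module build: SPI DAC; aleph build: bfin param; sim: OSC message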

didn’t occur to me this would also facilitate being able to use a simulator - this is a really nice benefit! and yeah, it doesn’t need to be a full-blown sim, just something you can plug the engine into without having to worry about the platform-specific stuff.

another thing to consider - this would be beneficial not only for porting between modules/operators/standalone but also for being able to run the same firmware on different modules, for instance, running WW on Earthsea if you are more interested in using it as a 3 CV sequencer. this, of course, necessitates having this in mind when developing a firmware, so these things aren’t hardcoded and it can adjust dynamically based on how it’s configured.

i would even go further than being able to switch between 2 different firmwares without reflashing - how about being able to run 2 or more simultaneously, and being able to configure inputs/outputs so you could, say, split the grid between orca and kria?

should probably combine skeleton/system at some point. right now any new functions have to be duplicated.

does Aleph have event processing similar to module firmwares? probably another thing that should be abstracted. and i think there is some contamination with interrupts being processed in the firmware instead of using event handlers that would also need to be cleaned up.

[sorry, misread your sentence]


does Aleph have event processing similar to module firmwares? probably another thing that should be abstracted. and i think there is some contamination with interrupts being processed in the firmware instead of using event handlers that would also need to be cleaned up.

yes, the event processing system started in the aleph. certainly each board would have to have a slightly different set of peripheral events; i’m sure we can think of a clean way to do it. an app gets a header defining the events for the board it’s being built for, and sets handlers appropriately. i dunno.

so i’m looking through the firmwares and taking another look at bees, and the only things that i’m seeing that are called from interrupts are the timer callbacks. you’re right that it doesn’t always seem appropriate to have these in the app code. (if that’s what you mean.) some of the timer callbacks in module code are quite heavy.

i was going to say that in bees, all the timer callbacks are just responsible for fetching some data (hopefully without a lot of blocking) and raising events for further processing, which is as it should be. but then i realized, holy hell! the screen refresh is being called directly from a timer callback. that is not the intention at all! @rick_monster no wonder there was no perceptible difference when we disabled app pause/resume in render_update()! so i’m trying that change now (adding a kRefreshScreen event type and a handler, raising it from the timer callback), see if things blow up. maybe there was a good reason for doing it this way. (i could have sworn we had an event like that at some point.)
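
concretely, the change looks about like this (eliding the handler registration; the struct initialization is just a sketch of the event system, not copied from the source):

    // timer callback: just raise an event, don't render here
    static void screen_timer_callback(void* obj) {
      event_t e = { .type = kRefreshScreen, .data = 0 };
      event_post(&e);
    }

    // app event handler: the actual render work happens at event-loop priority
    static void handle_refresh_screen(s32 data) {
      render_update();   // copy dirty regions out to the OLED
    }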

there’s definitely good reason, BTW, for an app to be able to set/unset the soft timers and define their periods. i can’t remember why all the timer callbacks went in the app when i first separated bees and the lib. maybe just being lazy about rewriting the soft-timer interface.

some examples:
https://github.com/tehn/mod/blob/master/skeleton/interrupts.c#L115

clock_pulse is a function in main.c which gets called from the interrupt handler (this is the clock input). also, timer callbacks get called directly from the timer interrupt handler, but in this case i think it’s fine, as the alternative would be to generate timer events, and for timing you want it to be as accurate as possible. some cleaner separation would still be good, though.

another example: skeleton creates an event kEventClockNormal when a cable is plugged or unplugged from the clock input, but the event handler reads the value directly from the port instead of using the event data.
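
i.e. something like this instead - capture the pin state once in the ISR and let the handler use the event data (names partly from the existing code, partly illustrative; CLOCK_DETECT_PIN is a placeholder):

    // interrupt: read the detect pin once and pass the value along in the event
    __attribute__((__interrupt__)) static void clock_normal_irq(void) {
      event_t e = { .type = kEventClockNormal, .data = gpio_get_pin_value(CLOCK_DETECT_PIN) };
      event_post(&e);
      gpio_clear_pin_interrupt_flag(CLOCK_DETECT_PIN);
    }

    // handler: use the value that came with the event instead of re-reading the port
    static void handler_ClockNormal(s32 data) {
      clock_external = !data;   // e.g. cable present pulls the detect pin low
    }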

ok-- hello from arcosanti

i’ll attempt to explain some of the choices made in the source code, but i’m afraid most of them will come down to time pressure to release. lots of good suggestions here and i’m looking forward to improvement (in the form of readability and future extensibility) and also learning some better practices (i’ve already learned a ton from @zebra)

the modules (all but TT) use an AT32UC3B0256. compared to the aleph AT32UC3A0512, the chip itself is physically smaller, cheaper, and has less flash and SRAM. (256k flash, 32k SRAM). for the TT i quickly realized i needed more SRAM (and flash) so splurged on the AT32UC3B0512 which has 512k flash and 96k SRAM.

my initial combing of aleph avr32-lib to create skeleton involved sequentially adding and understanding exactly how everything worked. i made system while trying to work out structural differences for TT, and in hindsight of course didn’t need to. so i’m all for going back and combining the two.


re: aleph ports. i want to do this, and i’m open to a consensus regarding the best method. my initial thoughts prior to this thread were some sort of basic mapping layer between a DSP and grid app. of course, this is what bees does. but i also like the idea of something statically configured via a config file or (ideally) an interpreted language.

re: simulators. i definitely think simulators are important for blackfin DSP. but honestly the grid apps seem pretty straightforward to develop. unless substantial effort was invested to make them usable on linux/mac/etc with some sort of OSC/MIDI mapping (etc), i see these efforts as better spent elsewhere (because there is a lot to do), but that’s just my opinion (perhaps i’ve gotten too comfortable with the reflash USB cable-swap.)


github sub-repos. yes, i am up for this. it’s beyond my git-knowledge. let me know how i can help make something like this happen. separate issue trackers etc would be fantastic. i just added a bunch of you guys as collaborators to the tehn/mod github repo


will continue to process the rest here…
