Heh, just replied on the other thread… pasting my reply from there here. Need to dash now, will try and think about what you’ve written later…
Have got the updated and pruned version of the ASF to build against all the modules and bees.
I had to patch usbb_host.c as Atmel missed out an include; looks like @tehn had to do the same patch, and @zebra dealt with the bug in a different way (see the gist for the patch). That was enough to get all the modules working.
To get bees to build I had to copy over the altered version of print_funcs.[hc] as well as add #include "aleph_board.h" back into board.h. So if you do ever want aleph/bees to build against the modules version of system (a.k.a avr32_lib) then the ASF stuff should be relatively trivial.
Now that I can check that everything compiles, I’m going to try and shrink the ASF down as much as possible. I’ll probably make a repo with the scripts and the ASF zip file, so that it’s easy to recreate the work (e.g. if we decide that some files that were removed are wanted again).
So I was going to say, you could probably just make the repos now for the issues only… but you already have! I might end up having to force push to them as I’m trying to come up with a way to keep as much git history as possible, but I can’t imagine that will be a problem as no-one will have forked them yet.
We can call the system repo either just system or system_lib or libsystem or any other permutation. (I’m quite keen on libsystem at the moment.)
Any particular reason why? I need to get my head around it 100% before I commit, but I think using submodules gives you the ability to bless a particular commit of the system repo as working with a module’s firmware.
Whichever way we go, I’d like to write a travis-ci script for the system repo that checks out each firmware and tries to build it; that way any changes to the system repo that cause problems* show up straight away (esp. if you use pull requests for all changes).
If “system” is in a separate repo from each of the modules then pulling it in via submodules is probably the right way to go. Using submodules would allow each module repo to control the version of system they use.
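For the avoidance of doubt, here’s a toy sketch of what that per-module version control looks like on disk. All the names here (system, whitewhale, libavr32) are stand-ins for illustration, not the real layout:

```shell
#!/bin/sh
# Toy demo: a module repo pinning a shared "system" repo to one exact commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# stand-in for the shared system repo
git init -q system
git -C system -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "system v1"

# stand-in for one module's repo
git init -q whitewhale
cd whitewhale
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -c protocol.file.allow=always submodule add "$tmp/system" libavr32
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "pin libavr32"

# the module repo now records the exact system commit it builds against
cat .gitmodules                 # url + path of the submodule
git submodule status libavr32   # the pinned commit id
```

The key point is that the pinned commit id lives in the module’s own history, so two modules can track two different versions of the shared code without conflict.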
On the downside I’ve never quite wrapped my head around the proper workflow when a particular change spans both the main repo (a module in our case) and the submodule repo (system in our case). I use git nearly every day and I still find submodules confusing. At a minimum we’d probably benefit from putting together a doc which describes “how to work with submodules” in the “system” repo.
I would definitely recommend against forgoing any mechanism to track the version of system a module is built against. I’ve run into that case in other code bases and it made debugging and reproducing old builds very difficult to impossible.
The question which might be worth asking ourselves is whether one tends to:
work on / focus on one module at a time and rarely make sweeping changes across all modules
develop more common functionality and release new builds for all modules around the same time
If the former feels more common then an umbrella repository might be overkill.
If the latter feels more common, or if people want more of a one-stop shop like the mod repo is today, then an umbrella repository probably makes some sense.
Why don’t we just run a single ‘everything’ repo containing all aleph & module-firmware code? Still don’t see any significant disadvantage to that scheme.
Don’t properly know how the submodule song & dance goes - I’ve used it only once, very briefly. But looking in from the outside I only see potential for versioning confusion & a slightly increased barrier to entry for code-curious musicians.
Sorry in advance if I’m missing something crucial and being unnecessarily luddite!
I don’t disagree, if you’re trying to have a bisectable history, the only way I can think to do it is to commit to the submodule repo, then rebase your main repo changes on top of a git add <submodule>; git commit.
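To make that two-step dance concrete, here’s a toy run-through with fixture repos (all names invented; the "fix timer API" change is hypothetical):

```shell
#!/bin/sh
# Toy demo of a change spanning module + submodule: commit inside the
# submodule first, then record the new submodule commit in the module repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q system
git -C system -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "system v1"
git init -q module
cd module
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -c protocol.file.allow=always submodule add "$tmp/system" libavr32
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "add libavr32"

# step 1: the fix lands in the submodule...
(cd libavr32 && git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "fix timer API")

# step 2: ...then the module repo records the new submodule commit, so
# every module commit stays bisectable against a known system commit
git add libavr32
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "bump libavr32 for timer fix"
git log --oneline
```

Every commit in the module’s history then names one exact system commit, which is what makes git bisect usable across both repos.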
Good idea, I’m going to try and write up an intro to submodules post today. Everyone needs to be comfortable with working with them before we go ahead (IMO).
Here’s the downside… let’s say you’ve got one mega repo with aleph and mod in it: every time you make any change to the common code (libavr32) you have to make sure everything compiles properly (and really you should test it on hardware too).
Whereas if you use a submodule for libavr32, you get to decide when the code for that submodule is updated to the latest version (or any other version if it takes your fancy).
Let’s say, hypothetically, that whitewhale is declared perfect and no more changes are going to be made. With the mega repo, you still need to update its source to track any changes made to libavr32, otherwise it will be broken in the master branch. With submodules you don’t have to do anything: the libavr32 submodule will always point to the same commit unless you explicitly change it.
The advantage for the code-curious musician is that we can say to them, don’t go in the libavr32 folder, there be dragons (or more specifically interrupt routines, same difference as far as I can see). Have a look at @scanner_darkly’s orca repo, it’s a lot less scary than being faced with the aleph or mod repo if all you want to do is make a PR to add some small feature.
I think this goes to the heart of it, but I’d suggest the dichotomy is between:
no umbrella repo; each module is its own thing, with a submodule for libavr32
leave things as they are (and maybe make a mega repo with the aleph code in it too)
As @rick_monster says, submodules are going to add confusion, having nested submodules doubly so.
One last thing, I know for a fact that I’m going to start getting very very busy from the summer onwards, till sometime next year. I just want to make sure that I’m not the only person that knows how all of this sticks together, I’d hate to leave people in the lurch.
In other news, I’ve managed to shrink the ASF down below 4mb, which I’ve done by being very aggressive at removing unused files. What I found is that, as I shrunk the ASF down, it became easier to grok; I’m hoping that by keeping it very small it won’t be such a black box of mystery (although I’m not advocating anyone editing any of the files in there). I’m going to create a repo (diet-asf?) with the script used to prune it, to make it easy to re-add any removed files if they’re needed in the future.
If we decide to go ahead with the repo split up, I’ve found this script which should help with keeping as much commit history as possible. I’ve also done some reading up on the git filter-branch command myself.
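As a sketch of what a history-preserving split can look like with git filter-branch’s --subdirectory-filter (the repo and directory names here are invented fixtures, not the real layout):

```shell
#!/bin/sh
# Toy demo: extract one module's history out of a monolithic repo,
# keeping its commits and promoting its directory to the repo root.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1   # skip filter-branch's warning pause
tmp=$(mktemp -d)
cd "$tmp"
git init -q mod
cd mod
mkdir whitewhale earthsea
echo 'ww v1' > whitewhale/main.c
echo 'es v1' > earthsea/main.c
git add -A
git -c user.email=demo@example.com -c user.name=demo commit -q -m "initial"
echo 'ww v2' >> whitewhale/main.c
git add -A
git -c user.email=demo@example.com -c user.name=demo commit -q -m "whitewhale tweak"

# rewrite history so whitewhale/ becomes the repo root; commits that
# touched it are kept, everything else is dropped
git filter-branch --subdirectory-filter whitewhale -- --all
git reset --hard
git ls-files        # just main.c now
git log --oneline   # both whitewhale commits survive
```

In practice you’d run this on a fresh clone of the monolithic repo (filter-branch rewrites history destructively) and repeat it once per module.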
that’s a feature not a bug! If we have a monolithic repo with decent top-level makefile, you can type ‘make apps’ and easily check whether your change breaks the build for other apps, then take steps to make sure the old apps get updated if the old API has to break. If an avr32lib change breaks userspace programs at runtime and we don’t notice immediately this could really suck, but we will eventually catch it, fix it and move on…
umm yeah more debatable but I still think any pull-request affecting avr32lib should be vetted against all the currently working apps, or things will get too fragmented. IMO we can rely on some non-coders on this forum to help test the apps against updates to avr32lib before merging pull-requests modifying that code. I like good ol’ tags for marking known-working versions of, e.g., white-whale.
Just to reiterate, I really think we would benefit from abstract reusable control-blocks - e.g. a control-block called white-whale that can be wrapped in either a bees module, a standalone app or a minimal linux test-bench. Might have to revise this view, depending how I get on with bees refactoring. In my current thinking, the core of the bees network & operators should also migrate to a ‘control-block’ directory, along with the static-compiling of fixed networks, giving a glimpse of something that could provide benefit from aleph back to euro-modules…
so if i understand correctly using submodules ties them to a specific commit? and if something changes in libavr32 i’m not forced to update my app unless i need those changes? going to read up on submodules too…
but if those are breaking changes you’re just trading dealing with it at the point where changes are made to having to do it later, so less work now, but potentially more work later. also, if i’m making potentially breaking changes i would want to make sure everything still builds and works now, as opposed to forcing other devs deal with it later. in any case such changes would go through a PR and have more eyes on them so we should be able to catch any potential issues early on.
i don’t have a preference and trust your expertise, just trying to understand the benefits of one model over another.
not sure it makes a difference for somebody wanting to try writing a new app - they could still simply fork it and build it against that version, any changes in libavr32 they can make in their own fork and only merge from upstream when needed?
abstracting app logic into ‘control-blocks’ or whatever you would call them is definitely something that i’m very interested in, this would depend on having aleph and modular firmwares run off the same library though, so perhaps once that’s done we could try a simple prototype app?
since we need a decision first & foremost I shouldn’t be objecting to this. In fact I believe avr32lib won’t see much action - I rarely had to touch anything in there for bees hacks & tweaks I made were pretty trivial…
The bee in my bonnet is about making the guts of existing modules (including bees!) more explicitly reusable…
You can still have separate repos to just track issues and releases. There are plenty of empty GitHub repos out there just to take advantage of the issue tracker. I wouldn’t let that swing things one way or the other.
Or never… if you don’t want to, but yes that’s very true. But which is better…?
I don’t disagree with this at all. One quick warning, when building the modules, you need to make clean when switching between them. Regardless of what we choose, I think getting a travis-ci script up and running in either the monolithic repo, or in the libavr32 repo would be a good idea. The monolithic case is pretty straightforward, we’d have to be a bit clever with submodules, but it’s definitely doable.
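That travis script could be little more than a clean-then-build loop over every firmware. Here’s a toy version where stub Makefiles stand in for the real module builds (the module names are illustrative):

```shell
#!/bin/sh
# Toy CI driver: "make clean" then "make" in each firmware directory,
# collecting failures. Stub Makefiles stand in for the real builds.
set -e
tmp=$(mktemp -d)
cd "$tmp"
for fw in whitewhale meadowphysics earthsea; do
  mkdir "$fw"
  printf 'all:\n\t@echo building %s\nclean:\n\t@true\n' "$fw" > "$fw/Makefile"
done

fail=0
for fw in whitewhale meadowphysics earthsea; do
  # always clean first: the modules share one build tree, so stale
  # objects from the previous target would otherwise leak in
  if make -C "$fw" clean all >/dev/null 2>&1; then
    echo "$fw: ok"
  else
    echo "$fw: BUILD FAILED"
    fail=1
  fi
done
echo "failures: $fail"
```

A real travis config would just run a script like this and exit non-zero on any failure; the submodule case additionally needs a `git submodule update --init` before the loop.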
So what are we doing here? I guess the final decision is down to @tehn, but it doesn’t feel like we’re at consensus. My own position is that the greatest weight of opinion should be given to those that are willing to do the work and support it, I’m happy to do the work to break up the modules into separate repos, but I can’t really support it for more than the next couple of months… so someone else will have to be the primary advocate for this approach.
I really like the “control-block” approach that @rick_monster advocates, particularly as it will bring us closer to getting multiple apps in a single module firmware, which I’d really like to see, and I’d be as supportive as I can if someone wants to step up and make that happen.
simplify/unify libavr32 (which has been done, thanks!) between all modules
facilitate firmware mods for the modules (ie orca)
on the aleph front:
make aleph and module libavr32 the same
develop a system to wrap the euro code to be run on the aleph
now, realistically, the euro goals can be accomplished with not much effort, and very quickly (or longer if git history is to be preserved). one point i didn’t raise for split repositories, is that it makes it so that monome doesn’t need to “own” the repo of a mod-- ie, scannerdarkly/orca is just fine given it submodules monome/libavr32 yeah? this facilitates less organizational management, though presently it’s not a hassle.
as for the ambitious aleph developments, i must be honest that i’m already swamped and don’t expect to be able to invest real time in the near future. so my priorities are simply to make some changes that will improve the present state of things.
i’m good keeping separate repositories for simply issues/releases, if there’s a general feeling that we should maintain some sort of encompassing repository as now. however, i do not want to mix the entire aleph project (which includes blackfin code, avr8 code, emulators, etc) into the same repo as the modular stuff, resulting in a mega-monome repo-- people shouldn’t need to grab the whole aleph tree just to tweak a WW, etc.
if the eventuality is a shared libavr32 with aleph, i’m still leaning towards submodules.
lastly, i’m not terribly worried about libavr32 breaking old work, as it shouldn’t be modified very much-- and any modifications to it shall be heavily scrutinized-- and as such, attention can be paid to which modules might be affected.
also @sam amazing that you got the ASF down to 4mb. and excellent submodule tutorial!
so-- that’s my vote, but please don’t take it as absolute-- i’ve listened closely to all suggestions and am still up for something different given it makes sense for the broader goals we have.
I’m just going to carry on with splitting the repos up in that case. Even if all the stuff I do gets chucked, it’s still kinda fun. I’ve decided that I enjoy the process more than the product.
So, a status update:
ASF pruning is done.
I’ve got a bash script to split all the repos, keeping all the git history.
Next up, I’ll script some changes to the libavr32 repo, currently it’s only got the skeleton folder in it, I’m going to rename that to src and copy over the asf folder from my diet-asf script. Should I move the conf folder up to the root? (As the Aleph repo does it.)
Then, it’ll be time to submodule libavr32 in and try my hand at some sed magic to update all the config.mk files with the updated directories.
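That sed pass could look something like this. Note the old path (../skeleton), the new path (../libavr32/src) and the SYSTEM_PATH variable name are all my guesses at the layout, not the real values:

```shell
#!/bin/sh
# Toy demo: rewrite the system path in every module's config.mk after
# the split. Variable name and paths are guesses for illustration.
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir whitewhale meadowphysics
for m in whitewhale meadowphysics; do
  printf 'SYSTEM_PATH = ../skeleton\n' > "$m/config.mk"
done

# point every config.mk at the submodule's src directory instead
for f in */config.mk; do
  sed -i.bak 's|\.\./skeleton|../libavr32/src|g' "$f" && rm "$f.bak"
done
cat whitewhale/config.mk
```

Using `-i.bak` and deleting the backup keeps the one-liner portable between GNU and BSD sed, which matters if people run the split script on both Linux and macOS.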
I’ve got some free time on Thursday, so hopefully I’ll get some stuff posted on my GitHub account for people to test and suggest improvements. The ASF version has gone up from 3.17 to 3.30, so it’ll all need testing on hardware too.