I've installed Debian in VirtualBox now; everything works, and I compiled the firmware successfully.

I noticed that the link http://www.atmel.com/tools/atmelavrtoolchainforlinux.aspx in the monome/libavr32 README.md doesn't work anymore; it redirects to microchip.com, where I couldn't find the required files. I downloaded the necessary toolchain directly via http://www.atmel.com/Images/avr32-gnu-toolchain-3.4.3.820-linux.any.x86_64.tar.gz and http://www.atmel.com/Images/avr32-headers-6.2.0.742.zip

Maybe those could just be added as wget calls to make things easier.
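
For reference, the commands would be something along these lines (the extracted directory name and the header destination are from memory of the libavr32 README, so double-check there):

$ wget http://www.atmel.com/Images/avr32-gnu-toolchain-3.4.3.820-linux.any.x86_64.tar.gz
$ wget http://www.atmel.com/Images/avr32-headers-6.2.0.742.zip
$ tar xzf avr32-gnu-toolchain-3.4.3.820-linux.any.x86_64.tar.gz
$ unzip avr32-headers-6.2.0.742.zip -d avr32-gnu-toolchain-linux_x86_64/avr32/include/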

1 Like

It still seems to be the case that Ubuntu under Windows 10 fails to build avr32-tools, with the same errors. Has anyone made progress with the noah95 version or otherwise?

I've managed to get the avr32 toolchain built, and now Ansible building, using the version hosted at https://github.com/denravonska/avr32-toolchain
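
In case it helps anyone else, the build was roughly the following; the install-cross target and PREFIX variable follow the convention of the original monome/avr32-toolchain scripts, so check the repo's README if they differ:

$ git clone https://github.com/denravonska/avr32-toolchain.git
$ cd avr32-toolchain
$ PREFIX=$HOME/avr32-tools make install-cross
$ export PATH=$HOME/avr32-tools/bin:$PATH
$ avr32-gcc --version   # sanity check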

2 Likes

(sad face)

Looks like the Atmel-provided avr32-gcc doesn't work on Arch Linux anymore…

$ make
CC      ../module/main.o
as: loadlocale.c:130: _nl_intern_locale_data: Assertion `cnt < (sizeof (_nl_value_type_LC_TIME) / sizeof (_nl_value_type_LC_TIME[0]))' failed.
cc1: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://www.atmel.com/avr> for instructions.
avr32-gcc: Internal error: Aborted (program as)
Please submit a full bug report.
See <http://www.atmel.com/avr> for instructions.
make: *** [../libavr32/asf/avr32/utils/make/Makefile.avr32.in:405: ../module/main.o] Error 1

$ LANG=C make
CC      ../module/main.o
cc1: internal compiler error: Segmentation fault
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://www.atmel.com/avr> for instructions.
make: *** [../libavr32/asf/avr32/utils/make/Makefile.avr32.in:405: ../module/main.o] Error 1

Unless anyone has any tips or ideas, I'll have a go at compiling my own from one of the myriad avr32-toolchain repos (I'll probably start with @scanner_darkly's fork).

would it be a good idea to look into setting up a virtual machine of some sort for the avr32 toolchain? it'd be great to solve this once and for all, and then we wouldn't need to worry about OS etc… i don't know much about this side of things so hoping you might have the time for it, and i'll be glad to help.

1 Like

It can be done with either Vagrant (virtual machines) or Docker (containers), or any number of similar technologies. But the workflow gets a bit rubbish. Vagrant would probably be best, but it isn't as easy to set up on Linux (e.g. you have to use the AUR on Arch Linux). Docker is much easier to install (on all OSs) and a lot more commonplace, but it's a bit of an odd fit.

I don’t think it’s something to worry about too much at this stage. But at some point in the future it can be done if needed.

The other, more pragmatic, thing to do might be to set up some tooling to build and zip up the compiler and host the archive on the GitHub releases page of monome/avr32-toolchain (a rough packaging sketch follows the list below). It's a pain though, as we have four platforms to support:

  • Windows (WSL)
  • macOS
  • Debian / Ubuntu
  • Arch Linux
  • (Red Hat / Fedora?)
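
Packaging itself would be simple enough, something along these lines (paths and archive name are purely illustrative):

$ cd $HOME
$ tar czf avr32-tools-linux-x86_64.tar.gz avr32-tools/
# then attach the archive to a tagged release on the monome/avr32-toolchain releases page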

Anyway, I've got denravonska/avr32-toolchain built on my computer; I went with that repo as it had been updated recently. It didn't take too long to build (yay hexacore CPU!).

I'll try and put a PR in for the libavr32 README.md in the next few days with updated Linux instructions for when the Atmel-provided tools don't work (and update the links, @x2mirko).

1 Like

I think the #1 priority would be to get some centralized canonical documentation of the build process somewhere – ideally merged to master on the monome-org repos. Every time I set it up on a new computer I have to do some thread digging to remember which branch is the most up-to-date one.

Beyond that, setting up a Docker avr32 build container would be a great idea. It's maybe slightly less elegant than Vagrant for local use, but having that container definition means we could add build + test to PRs, post releases automatically, etc. I have been thinking for a while about setting some stuff up in Travis CI or AppVeyor, but the upcoming GitHub Actions stuff they just announced yesterday will make things even easier.

1 Like

There is documentation in the README.md for libavr32. I do keep it up to date whenever I have issues, but a lot of the time things are reported on here without any corresponding PRs being opened when the fix is found.

The biggest issue IMO is that no one developer has practical access[1] to WSL / Linux / macOS to keep the toolchain building scripts working. I think the Linux and WSL ones can be the same, but the macOS one requires different versions of some of the dependencies.

If we want to track it better we could make some changes to the monome/avr32-toolchain repo, perhaps some new branches:

  • master: just has a README.md explaining the other branches, a link to a thread on lines, and which user is responsible for each branch
  • macOS (or osx): version of the build script for macOS
  • linux: same but for Linux
  • wsl: you get the idea by now

Ideally we’d tag releases on master and then upload zipped avr32-tools directories onto the release page on GitHub for easy download.

I think for the time being just strongly encouraging people to keep the libavr32 README.md up to date is the easiest solution.


It’s definitely doable, but you’re still back with the issue of ensuring that the workflow is good for users of all 3 OSes. The Dockerfile itself would be pretty trivial I think.

There is already a .travis.yml file in most of the module repos, so PRs are already tested. I think build artefact uploading is a bit more complex, at least it was last time I looked at it. Admittedly the Travis CI workflow is horrid[2], and one based on a Dockerfile would be much better.

FYI, for Teletype, running make release in the project root will run all the tests as well as zip up a firmware and all the documentation ready for uploading.
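
For example (assuming a checkout of the teletype repo; the exact contents of the generated zip may vary):

$ cd teletype
$ make release
# runs the test suite, then produces a zip of the firmware hex plus the documentation, ready to attach to a release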


[1] I can run Windows in a VM, but I don't really know what I'm doing…

[2] I really really hate writing .travis.yml files…

it is, but if it could be used on any OS maybe it's worth it?

i’ve pulled the latest changes from denravonska/avr32-toolchain into my fork (which is linked in the readme) so hopefully this brings the windows section up-to-date. although the whole point of forking was to have a working copy that would be protected from other changes, so if that doesn’t guarantee that things won’t break we could probably just link to denravonska/avr32-toolchain directly.

this is one more reason to have a vm (or do what you suggest as a workaround). or is there some other way we could just freeze the toolchain? we don’t really have any reason to keep updating it, the only reason i can think of is being able to pull in bugfixes but at this point i doubt we have to worry about it.

another OS headache… i can't do this as it requires the doc toolchain, which requires python 3+, and the latest wsl ubuntu was only at 2.7 (need to check if this has been updated). another reason for a vm/docker - we have lots of dependencies by now other than avr32.

Have you tried running the build again to check it works?

Most of the updates seem to relate to either updated URLs or compiling on more modern Linux (e.g. texinfo updates, or maybe a glibc update). It's these updates that have caused the schism between Linux and macOS, as the GNU libs on macOS are ancient.

Can you type lsb_release -a in a WSL prompt for me? Assuming it's Debian/Ubuntu-based, you should definitely be able to get an up-to-date Python 3.6 installed.

no (tbh i’m afraid to touch either my desktop or laptop as both work fine right now) but i assume it should be okay as @mugsy reported being able to build on windows using denravonska/avr32-toolchain (my fork was only 3 commits behind btw)

will check WSL when i get home tonight!

1 Like

To be clear, my build worked in Ubuntu for Windows.

2 Likes

yep, the windows section where it's linked is for the WSL setup. it'd be interesting to try setting it up on windows directly (i had it set up on my desktop at some point and it still works, but i had to use the WSL route for my laptop).

I'm sorry @sam, my earlier comment was poorly worded. The current README is great and I didn't mean to ding the hard work that went into it; apologies. Thumbs up on encouraging PRs to the repo and the README for OS support updates.

It’s definitely doable, but you’re still back with the issue of ensuring that the workflow is good for users of all 3 OSes. The Dockerfile itself would be pretty trivial I think.

That’s the great thing about containerizing the build process – everyone is essentially using the same Linux VM at that point. There’s a little work to make sure everyone has instructions on how to install Docker and invoke the containerized build, but hopefully that should be fairly stable. With a containerized build, individual commits could update the compiler version, and everyone would automatically get it when they pull the commit (or if they backtrack to that commit later from a future version with an even newer compiler.)

I will try and hack something up to better explain what I mean.

2 Likes

The prerequisite work for maintainers is that we add a Dockerfile for avr32-toolchain, build an image, and upload it to Docker Hub. This image contains all the dependencies, mounts the current directory, invokes the compiler in its entrypoint, and then exits.
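
As a rough sketch, the maintainer side would be something like this (the image name and tag are made up for illustration):

$ cd avr32-toolchain
$ docker build -t monome/avr32-toolchain:latest .
$ docker push monome/avr32-toolchain:latest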

Once that's in place, compiling any of the libavr32 firmwares on a fresh system boils down to running a single build script.

The build script has to be maintained in bash and Windows flavors, but both contain the same single docker run command (a sketch follows below). That docker run command pulls down the image from Docker Hub, runs the compiler in a container, and produces the .hex file on the host disk. Then you just have to program it.
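
Something along these lines is what I have in mind (the image name is hypothetical, and the mount/working-directory details depend on how the Makefiles expect the tree to be laid out):

$ cd teletype    # or any other libavr32-based firmware checkout
$ docker run --rm -v "$(pwd)":/target -w /target monome/avr32-toolchain:latest make
# the built .hex ends up in the checkout on the host, ready to flash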

I’m handwaving a little here around the host mounting but I believe that’s pretty close to how it would work.

2 Likes
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial

Python 2.7.12

You should be able to upgrade from Ubuntu 16.04 to 18.04.

For Ubuntu, the upgrade command is sudo do-release-upgrade. This is recommended because it can handle system configuration changes between releases.


You can also Google “wsl upgrade to 18.04”.

Once done, you’ll need to:

sudo apt install python3 python3-pypandoc python3-pytoml python3-jinja2

After that you should have Python 3.6.5 installed, along with the dependencies to run the Teletype scripts.

You should probably have the required Latex files already (due to the dependencies of pandoc), but if not let me know and I can tell you what to install.

FYI, you'll need to run python3 to use it; python will still run 2.7. All the Teletype scripts have a #! line requesting python3.

1 Like

FWIW I recently watched a talk that mentioned dockcross, which superficially looks to be a similar layer for cross-compiling with Docker. It seems (again, just from a quick look) to allow relatively easy switching between building with a locally installed compiler and a containerized one.

1 Like

I’m pretty sure it will work. I’ve used Docker for a bunch of similar things. But… make sure you test it on Linux rather than just Docker for Mac. File system permissions work differently (a.k.a. properly) with Docker on Linux. If you’re not careful you’ll end up with the build outputs all being owned by root.
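
For what it's worth, one common mitigation is to run the container as the invoking user so the outputs aren't owned by root (sketch only, using the same hypothetical image name as above):

$ docker run --rm --user "$(id -u):$(id -g)" -v "$(pwd)":/target -w /target monome/avr32-toolchain:latest make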

I think the main issue here is actually a social one: I don't think there is much point in maintaining another build option if no one uses it, as the instructions will bit-rot. As it is, it's hard enough to keep the instructions up to date given how few 'core' Teletype devs there are. I think the real reason the current build scripts are a pain is that so few people end up using them, and I can imagine this will end up happening with whatever solution we choose.

Even though the avr32-toolchain script is a pain to set up, it's probably still my preferred solution. I'm also a little concerned about asking people to install Docker when they don't really know what it is. It does eat disk space, and on macOS/Windows there is also a VM that is always running. On Linux, adding yourself to the docker group is a security issue that's important to understand, and constantly typing sudo if you don't is also problematic.

I’m glad that options like Docker/Vagrant/etc exist for when the current solutions stop working, but for now I’m happy to stay with the status quo. But… I’m not exactly doing a lot of Teletype dev anymore, so I’m also happy to go with the flow whatever is decided.

thanks for the detailed info! started it but it warned it might take a few hours, so i’ll have to try it next week.

not sure i understand: does it mean i'll need to modify my makefiles?

but isn’t the whole point of going with docker to freeze the toolchain and protect it from OS and other tooling changes, which means we shouldn’t ever have to worry about the instructions going stale? i don’t really know docker though, so maybe i misunderstand the benefit of switching to it. at this point my main worry is that at some point either my desktop or laptop toolchain will stop working and i won’t be able to set it up again. it’s already different between the 2 machines (i can build from windows on desktop but only from bash on laptop) and i’m afraid to try anything that might potentially break things.