Super cool, and hopefully I’m understanding the intent of engines correctly, but it’s a Linux box too, so! I’d be excited to try to port astrid/pippi to a norns platform. The idea is for it to run asynchronously in the background and push buffers to a realtime thread.
Astrid, the interactive front end to pippi that I use for performance etc., already works like this and uses JACK, so from the little I have read about norns it shouldn’t be too crazy to integrate directly?
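For what it’s worth, that kind of async-render-to-realtime handoff is commonly done with a single-producer/single-consumer lock-free ring buffer, so the JACK callback never blocks. Here’s a minimal C++ sketch; the names and sizes are purely illustrative, not anything from pippi or norns:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// Minimal single-producer/single-consumer lock-free ring buffer.
// One background thread calls push(); the realtime thread calls pop().
template <typename T, std::size_t N>
class SpscRing {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire))
            return false;              // full: drop or retry later
        buf_[head] = item;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;              // empty: realtime thread outputs silence
        out = buf_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return true;
    }
private:
    T buf_[N];
    std::atomic<std::size_t> head_{0}, tail_{0};
};
```

In this sketch the renderer pushes completed buffers (or buffer indices) and the process callback pops them without locks or allocation, which is the usual constraint for audio callbacks.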
Selfishly, if astrid/pippi ran on norns, I could just run norns on a laptop or NUC or whatever when I don’t care about the GUI or I/O, and that would be even more fun for me.
Anyway, I’m excited about these developments which I am late to the party on!
just voicing some vague interest in these since I don’t think anyone else has yet - I would love a way to write sample-level DSP on norns (C++ or faust would be fine) and I feel like it would be the best platform for building larger modular-type engines.
fwiw, it’s fairly straightforward to build and use custom supercollider UGens at the moment. you can use Faust and the faust2sc helper script if that’s yr jam. (if you are looking to build something ‘modular’ it’s hard to see the benefit of reinventing the graph-management wheel… i guess fun/ownership/learning are reasons.)
I haven’t tested it on a raspberry pi in a long while. It’s slower, for sure. It would work best for async renders. (Eg events that can be delayed by seconds or more, not anything time sensitive.)
Edit: I just realized it’s possible to call Cython code from C, so I assume I could create a Lua interface for the library and skip the Python runtime altogether… that could be fun, and a better way to go about using pippi on a raspberry pi.
This may be out of the scope of this thread, but one thing I would like is a way to declare dependencies on things like node/npm or python/pip, with minimum versions. I’ve got some ideas (and some scripts) sitting on my norns that are more easily implemented in something that makes doing a whole pile of HTTP calls easy and wraps them up in a nice interface. I don’t have a great way of distributing them, though, since I don’t know what people will have installed on their norns.
Specific scripts I have in mind:
radio tuner that just jumps through random mp3 streaming radio
radio tuner that downloads a pile of junk via youtube-dl
one-button upload of the latest tape recording to soundcloud
Many of the things on that list have been pulled into existing releases and others are still in flight, so no, I wouldn’t say from my perspective that 3.0 is on hold indefinitely. That wiki page represents an aspirational target that just doesn’t have a specific timeline attached to it. Much of the energy of late has gone into reacting to component shortages.
no it would not. maybe it seems like the pace is too slow, but that’s how it is for those of us for whom norns is not an actual job - and particularly since the end of 2019 there are often more pressing concerns. we work at it as we are able, and there is always a tendency to respond to the issues and needs of others (the loudest users) rather than to our own desires for using or improving the platform. (i have never really had a chance to use the platform creatively, and i am constantly aware of all the ways it could be redesigned more efficiently if we had even a single full-time audio/DSP/realtime programmer.)
like greg says, many of the topics from that discussion have been folded into other releases. some have been deemed unimportant, from a combination of lack of expressed interest and the fact that even user/developers with the necessary skills have not found time to assist with their development.
sometimes we look at a feature and determine that it is achievable, if inconvenient, with the given tools, and that building out more ergonomic ways to incorporate it is maybe not worth the effort. for example:
there are very few actual limitations on what can be done in runtime sclang code on norns; even though engines must be defined at compile time, there are plenty of options to either (1) use an engine as a trampoline and passthrough, or (2) set up OSC channels outside of the engine structure. not too many people have explored the limits of this, but we see some examples in projects like R and mx.synths.
similarly, there is actually nothing stopping a script from requiring puredata or mod-host to be installed, launching an instance, loading a patch, connecting OSC and connecting JACK. it’s probably not a great way to do things, but it’s possible. and from watching people explore those limits we have a better understanding of what a proper API should support (e.g. we are now preparing some API support for JACK connections, and supporting other DSP hosts directly is not far behind.)
we implemented a system to select screen/IO backends at runtime (matronrc.lua) and to implement new screen/IO subsystems (with SDL as a provided example.) there was a lot of noise from people about wanting to do this, but guess what? AFAICT, zero users have taken advantage of this effortful feature, despite my dropping hints about it in every appropriate thread. (are we not doing a good enough job documenting its usage / declaring its existence? is the UI/API/feature set we provided different from what was needed? i have no idea, since nobody has said anything.)
some things are just challenging:
this is a foundational change that will support a lot of the other ones. it was planned in 2019 and is just now being done. here is my changelist to implement it:
i am absolutely not going to make promises about when/if this will land on stable/beta until the “technical steering committee” has taken plenty more time to poke at it, but it is definitely an area of active work representing many many valuable work hours and many thousands of source edits.
similarly, other puzzle pieces have received some attention (such as parsing LV2 metadata in lua), but norns has become a fairly complex piece of software with a lot of layers to it, and changing how the plumbing works has lots of implications. (like, a big chunk of my work in past months was just about bringing our audio performance metrics up to speed, so that code contributors to audio paths would not be working blindly w/r/t whatever improvement/degradation they are bringing to performance. this has become super extra important since we knowingly split the official hardware base into very different performance buckets with the pi4 - which has other implications that we still need to handle.)
e.g., the recent image and its hotfix release represent a great deal of changes at one low level (kernel and OS.) this of course is one of the items from our 2019 plan (in fact we have skipped an entire linux version!). like greg says, the immediate pressure for working on that release didn’t come because “we” (meaning the 3-5 people who have been making substantive codebase contributions in last 3 years) wanted cool new features, it came because people wanted/needed to run the norns stack on rpi4 with “official” support.
presently we’re very tentatively planning a release between 2.7.x and 3.0 that will probably have a lot more architectural refactoring (this time in userland) and a few features, but will largely be paving the way for more.
if you are particularly concerned about specific features, then speak up and call them out. (for me, just saying “xyz would be great” is some kind of datapoint, but it’s only minimally useful.) for example, saying “i want norns to host LV2 plugs” is nice, but at one level (the audio infrastructure) such support is pretty easy to add. what is totally non-trivial (and actually horrifyingly complex) is specifying and then implementing a means by which users can load and save more-or-less arbitrary chains of plugins and their parameters. (doesn’t sound fun to do this with the norns menu UI, so probably a text-based configuration, but that’s not so accessible. maybe some arbitrary parameters can be exposed? hm. etc.)
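just to make the shape of that problem concrete, here’s a rough C++ sketch of the kind of data model and text round-trip such a feature would minimally need. everything here (the line-based format, the struct names, the example URIs) is hypothetical illustration, not a norns proposal:

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical description of one plugin in a chain: its URI plus
// a set of named parameter values. Real LV2 hosting involves much
// more (ports, presets, required features); this is the bare minimum.
struct PluginInstance {
    std::string uri;
    std::map<std::string, float> params;
};

using Chain = std::vector<PluginInstance>;

// Serialize as one "plugin <uri>" line followed by "param <name> <value>"
// lines per instance - a deliberately trivial, made-up text format.
std::string save_chain(const Chain& chain) {
    std::ostringstream out;
    for (const auto& p : chain) {
        out << "plugin " << p.uri << "\n";
        for (const auto& kv : p.params)
            out << "param " << kv.first << " " << kv.second << "\n";
    }
    return out.str();
}

Chain load_chain(const std::string& text) {
    Chain chain;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "plugin") {
            PluginInstance p;
            ls >> p.uri;
            chain.push_back(p);
        } else if (tag == "param" && !chain.empty()) {
            std::string name;
            float value = 0.0f;
            ls >> name >> value;
            chain.back().params[name] = value;
        }
    }
    return chain;
}
```

even this toy version hints at the hard parts: validating URIs against installed plugins, deciding which parameters get surfaced in the menu UI, and versioning the format so saved chains survive upgrades.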
so what’s really useful to me is actually feedback about existing features, and identified gaps between what’s on offer and what is wanted. sometimes (as is seen above) probing those gaps reveals that they are not so large, and can be addressed in maybe more efficient ways than adding features to the main codebase.
like, OK, to really necro this:
@trickyflemming i have no idea if you retain any interest in this, but who would contribute to this “curated list” of c++ DSP? (would you want to?) i sort of meant the reverb and compressor to be examples for further collaborative expansion all along. they are basically single lines of FAUST. (but also, there is no nice way of integrating them besides a lot of manual glue code.) i don’t exactly want to be a “curator” of the norns effects suite - i already have this kind of job for another company, and i don’t think i’m in the right mindset to predict what “typical” norns users will want. the two FX in there are examples of what i think of as bread-and-butter, end-of-chain effects.
if you or anyone else wants to work on a C++ library, what i am happy to do is clean up the interface API or codegen or whatever.
As the one who has been interested in adding basic support for using an LV2-based synth or effect as part of the output chain, I’d like to emphasize the above. Contributing such a feature is ultimately based on available time; making something “work” is often much different than making it usable and maintainable. I’ve recently returned to thinking about the design of this potential feature; whether it gains momentum at this moment, time will tell.
Does anyone working on things like this need/want any help? I would love to get more familiar with these kinds of codebases and development environments. I know enough C/C++ to be dangerous, but I’m really interested in learning more and I have free time right now. Probably a naively big ask, but figured I’d throw it out there.
i’ll go ahead and be a little blunt about this upfront.
when it comes to using the norns platforms to make scripts, all levels of experience are welcome and we are happy to help. if you need to drop down to a lower level than you’re comfortable with (custom UGen, custom C event plugin, etc) people here will be happy to help with that too.
when it comes to implementing big or even small changes in the C/C++ layers of norns, we cannot afford to approach it with a pedagogical or training mindset. we simply aren’t operating like a software development company with enough cycles to train people up via pair programming or keep up an extensive review process.
so we would absolutely love to accept contributions. but:
i think the first place to look for ways to contribute would be through existing issues, like
when it comes to bigger changes that aren’t on our issue list, idk. first thing is maybe to open an issue (or at least a conversation) so that we can explore how best to approach the change, or whether it’s really needed, before you embark on implementing changes, opening PRs and requesting review.
all that is to say, yes absolutely i’m happy to assist new contributors as long as they respect my time and have a realistic assessment of their own ability to contribute. (answering questions: can you read and understand the syntax of the code modules to which you are contributing? what about the architectural decisions therein?)
and for like, specific milestones i would say that to contribute to the audio c++ side of things i would like you to know:
what is a c++ template, when/why to use one
what is constexpr, …
what is an atomic variable, when/why to use one
what is a mutex, …
what is cache coherence / memory locality…
what is bandwidth expansion
what is aliasing
what are some dangers of floating point roundoff error
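on that last point, here’s a tiny self-contained demonstration of why roundoff matters for long-running float accumulators (the kind that show up in audio code). this is just a generic illustration, not norns code: it compares naive float summation against Kahan compensated summation, which carries the rounded-away low-order bits forward. (note: compile without -ffast-math, which is allowed to optimize the compensation away.)

```cpp
#include <cmath>
#include <cstddef>

// Naive accumulation: once the running sum is large, each added 0.1f
// is rounded to the sum's coarse grid, and the error compounds.
float naive_sum(float x, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i) s += x;
    return s;
}

// Kahan compensated summation: track the low-order bits lost at each
// step in 'c' and feed them back into the next addition.
float kahan_sum(float x, std::size_t n) {
    float s = 0.0f, c = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        float y = x - c;
        float t = s + y;
        c = (t - s) - y;   // the part of y that addition rounded away
        s = t;
    }
    return s;
}
```

summing 0.1f ten million times should give about 1,000,000; the naive loop drifts off by a large margin while the compensated loop stays within a float ulp of the true total.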
in a nutshell: if any of these specific features seem compelling and also approachable to you, please reach out to me directly.
As someone with little programming background but a large amount of love for Norns, I appreciate learning more about the underpinnings of the device, how that comes about and the ways in which you all work towards the future.
I think new engines will open new avenues for the more industrious and accomplished. But until then, continuing to learn how we arrive where we eventually do is interesting, and gives me a better understanding of the system I use frequently in my music and sound making.