Check out the grid tutorials https://monome.org/docs/norns/study-4/ and the API https://monome.org/docs/norns/api/classes/grid.html


I don’t know if this has already been discussed, but I’m guessing I’m not alone in having problems remembering which key/encoder does what in each application?

Obviously this comes with the territory of a universal machine which is designed to be repurposed, and there is a balance to be struck between allowing for abstraction and different interaction models on the one hand, and forcing conventions or standards on the other. It’s important to retain what we find appealing about Norns, and for me that is partly about retaining a sense of fun and wonder.

Having said that, and with the disclaimer that I am a big fan of Norns in its current form (including the community ecosystem), it would be interesting to hear thoughts on how to make using each app easier.

Do people use a wide range of Norns apps or stick to a few core ones?
Do people have strategies for remembering how different apps work, or does it just happen naturally through use?

Some tentative thoughts I had:

  1. In Maiden, in the list of installed apps, include the option to display (unfold/fold) the README file from GitHub and/or the script header with the basic instructions.

  2. Have a conventional key combination on Norns to display the help text from the script header (does this already exist?)

Or, in a hypothetical ‘future Norns’:

  1. Have a dedicated ‘info’ key/button which toggles a help page.

  2. Have a more explicit relationship between display and buttons. E.g. four buttons under or above the screen, with dynamic functions, each of which is labelled on the screen by the current app. Often called ‘soft keys’. (No, I don’t have a Moog One 🙂)

Might require a larger screen I suppose. Might also reduce some of the Norns charm, but then again, app developers wouldn’t be forced to implement apps in this way.


this is what I do when I’m using a new app for the first few times and getting used to the control scheme…

you can navigate to an app and hit key 3 to launch it. most (not all) apps have a small info page (a text file) with basic instructions that shows here. you then need to press key 3 again to actually launch the app.

since a short press of key 1 toggles between the app and the rest of the norns navigation, you can (after launching the app) simply navigate back to that info page for the app you’re using. now key 1 acts as an info toggle.


Thanks for this! I’m missing something obvious in the ‘navigate back to that info page’ part.
EDIT – I see – you mean via SELECT? That is useful, though if you want to then edit settings you have to navigate back again I suppose.

yes, it’s not a permanent solution, but it should serve long enough for some familiarity and muscle memory to develop.

for bigger norns apps, I still think documentation off norns is going to be necessary (I really do like your idea of linking github manuals/script headers/etc. through maiden). it took me a long time to be able to understand and navigate cheat codes and I’m still intimidated by arcologies. I don’t think anything on-device is going to shorten that gap significantly.


In recent days I found myself casually looking for libraries which could render markdown files in the browser. The thinking was to be able to display documentation in a richer fashion within maiden.

While many scripts do have documentation embedded alongside their source code, some place it elsewhere. To make something like this work, there would likely need to be another property added to the script catalog for each entry which points to the docs.
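
To make it concrete, an entry might grow a field along these lines; the field names below are invented for illustration and are not maiden’s actual catalog schema:

```json
{
  "project_name": "example-script",
  "project_url": "https://github.com/someone/example-script",
  "description": "an example catalog entry",
  "docs_url": "https://github.com/someone/example-script/blob/main/README.md"
}
```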

Further suggestions and/or code which helps with accessibility are certainly welcome.


Thanks, going through the tutorials right now, very cool. Do you know if there is a higher-level lib to draw objects on the grid (lines, circles, etc.)?

In recent days I found myself casually looking for libraries which could render markdown files in the browser. The thinking was to be able to display documentation in a richer fashion within maiden.

Sounds good! I’m using markdown-it in a web app, and it works really well, so that would be one option. I can contribute some of my experience with that if it’s useful.


idk how interested most would be, but i would love some editing features for tape recordings: the ability to move the playhead during playback, edit file names, move files, etc. it would simplify and streamline things for me personally at least!

i don’t know exactly what this means, but isn’t it cool how the ai takes the reduced sample artifacts from this audio clip and uses those parts to generate the rest of the track? i have been trying to think of how to extrapolate that idea into some kind of simple script for the past 5 minutes, but i can’t code, even lua, and i’m not as smart as you people, so i’m gonna leave this here (samples 1 & 3 especially highlight this effect)


This thread gets into using AI on Norns, and @dianus started this thread on a neural-network-powered module.

If you’re looking for “immediate” gratification, I suggest looking at something like Magenta Studio, which offers what you’ve posted (albeit for MIDI) as individual apps or an M4L Ableton plugin.


@coolcat I don’t own a grid so I haven’t had a reason to dig in, but this grid recipes thread from @dan_derks might be of interest.
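
As far as I know there’s no higher-level shape library in core, but simple shapes are easy to roll from the documented grid calls (grid.connect / led / refresh); a minimal sketch, where draw_hline is a made-up helper:

```lua
-- minimal sketch using only the core grid API: light a horizontal
-- line of LEDs. draw_hline is a made-up helper, not a core function.
g = grid.connect()

function draw_hline(y, level)
  for x = 1, g.cols do
    g:led(x, y, level) -- level ranges 0 (off) to 15 (brightest)
  end
  g:refresh()
end

draw_hline(4, 15)
```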


Brian Eno, the composer of the Windows 95 startup sound, would be very happy about this, I’m sure.


i saw this question elsewhere but not the answer:

is it possible to map a midi key to a params:trigger?

it would be useful for me - then i could add triggers to one-shot things in scripts (effects, start/stop) that can easily be controlled by other keys on a plugged-in midi keyboard.

The code is here, fwiw:


I haven’t tried to analyze it, but from those demos it sounds just like it is smashing a song through the qualities of the source material. An article mentions the “AI” works on a per-sample level for the analysis.
More useful progress could probably be made with norns, considering the existing norns ecosystem of examples. If you look at something like barcode, imagine if the loopers stretched in such a way that the resulting pitches followed the skeleton of an existing song, and that skeleton was chosen based on the pitches of the recorded sample. Of course the song skeletons would have to be turned into data and preloaded, but simple things like chord progressions (and verse/chorus timing?) would probably be enough, since the complexity of the input material would generate interesting variations while stretched and looping.
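
A very rough softcut sketch of what I mean (all the chord data and names here are invented):

```lua
-- very rough sketch of the preloaded-skeleton idea: a chord
-- progression stored as semitone offsets, mapped to softcut
-- playback rates. all data and names here are invented.
local skeleton = { {0,4,7}, {5,9,12}, {7,11,14}, {0,4,7} } -- I IV V I

local function rate_for(semitones)
  return 2 ^ (semitones / 12) -- equal-tempered transposition
end

-- each "bar", retune loop voice 1 to a random tone of the current chord
local step = 1
local function next_bar()
  local chord = skeleton[step]
  softcut.rate(1, rate_for(chord[math.random(#chord)]))
  step = step % #skeleton + 1
end
```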

i do understand how the openai jukebox works, at least in a very abstract way. but that’s not really the main aspect of that experiment that interested me. i didn’t and still don’t have a fully formed idea of what the potential of that example, specifically the digital artifacts as the skeleton, could be used for, but i think that could be the interesting thing to expand on. i guess we already have noise synthesis, but i was thinking more of something like how we make granular by multiplying pieces of a frequency spectrum and smearing them across the fft algorithm; maybe there could be something (generative, or more just building blocks of a synthesis technique) that focuses on digital artifacts to either create some sort of smart effect or something. i don’t know what i’m talking about. i’m probably misusing this thread for my barely coherent misunderstandings of how these things work

that is a very interesting extrapolation of my take though. i love the barcode-based idea. i am a fan of turing machine-esque rhythm and melody generators and i think that a form of that for sample looping is worth exploring. something like a “smart” chase bliss mood, or marbles & microcosm combined.

is it possible to map a midi key to a params:trigger?

currently no. @andrew also introduced the “binary” type with the prospect of midi mapping.

check out lua/core/menu/params.lua if you’d like to have a go at adding support!
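
in the meantime a script-level workaround is straightforward; a rough sketch (the param id, note number and device index are arbitrary choices, and i’m assuming lookup_param and the trigger’s bang() behave as in current core):

```lua
-- rough script-level workaround: bang a trigger param from a midi
-- note by hand, instead of through the params menu midi mapping.
-- the param id, note number and device index are arbitrary choices.
params:add_trigger("go", "go")
params:set_action("go", function() print("banged!") end)

m = midi.connect(1)
m.event = function(data)
  local msg = midi.to_msg(data)
  if msg.type == "note_on" and msg.note == 60 then
    params:lookup_param("go"):bang() -- trigger params expose bang()
  end
end
```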


thanks, binary looks perfect. will try adding midi support


A question/feature request… no idea how complex this would be to implement (or if it would have any other knock-on impacts), but would it be possible to have more control over the stereo image of incoming audio? Currently “monitor mode” can either be mono or stereo - I’m imagining a “spread” control to move between those two extremes? Not sure. Just throwing it out there.

The use case (at least for me) would be running cocoquantus (which hard-pans the output from its two loopers/delays) direct into norns. Typically I go via a mixer to control the stereo image to taste, but it would be a nice option to go straight in and be able to dial back the hard panning a little…


no, it would not be hard to replace the switch with a “width” parameter.

implementation note: i would be careful to skip any panning computations when width was set to minimum or maximum.
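
for illustration, a lua sketch of the mid/side blend such a width control might use (the real monitor mix lives in the audio layer, not lua; this just shows the early-outs at the extremes):

```lua
-- illustrative mid/side width blend, not the actual norns monitor code.
-- width = 0 -> mono sum, width = 1 -> untouched stereo.
local function monitor_width(l, r, width)
  if width >= 1 then return l, r end       -- full stereo: skip the math
  local mid = 0.5 * (l + r)
  if width <= 0 then return mid, mid end   -- mono: skip the side math
  local side = 0.5 * (l - r) * width
  return mid + side, mid - side
end
```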


re: openAI jukebox: AFAICT, all components of this project are unsuitable for porting to GPU-less computers - e.g., requiring specialized CUDA kernels even for the pre-trained upsampling stage. as for training, forget about it - even sampling 20 seconds of music takes 3 hours using these transformations on a top-tier GPU (a tesla V100 with 16GB RAM).

similar limitations apply to most such projects. personally, i think ML is interesting, and data-driven brute force training/classification/synthesis tasks come up a lot in my day jobs, but i don’t see much overlap with the needs of a box like norns. the purpose of norns is to expose the power of fairly simple audio processes through accessible scripting.

there are other, lower-bandwidth techniques and processes under the big umbrella of ML, many of which we use all the time without much thought. (digital compressors and IR convolution have close analogs in ML.) simple neural networks and component transforms (PCA, RBF) have had interesting applications in synthesis, which have been explored in the academic community for many decades (https://ccrma.stanford.edu/~slegroux/synth/pubs/UCB2002.pdf). the capabilities of a computer like norns have caught up to these kinds of models. they are not yet capable of playing with large-scale DNNs; those explorations are more suitable for the high-level tools and environments available to the other computers on your desk.

if you really want to access these kinds of processes via norns, i suppose the best way would be highly asynchronously through an internet API. (norns would be well equipped to capture audio, upload it, and then download or stream some synthesis artifact - seconds, minutes or hours later.)
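
a rough sketch of that flow (the endpoint and form field are invented; norns.system_cmd runs a shell command without blocking and hands the output to a callback):

```lua
-- rough sketch: post a tape recording to a hypothetical remote
-- synthesis service. the endpoint and form field are invented.
local function upload_tape(name)
  local cmd = "curl -s -F audio=@" .. _path.tape .. name ..
    " https://example.com/synthesize"
  -- norns.system_cmd runs the command asynchronously and passes
  -- its output to the callback when it finishes
  norns.system_cmd(cmd, function(output) print(output) end)
end

upload_tape("my-loop.aif")
```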
