In recent days I found myself casually looking for libraries which could render markdown files in the browser. The thinking was to be able to display documentation in a richer fashion within maiden.

While many scripts do have documentation embedded along with their source code, some place it elsewhere. To make something like this work, there would likely need to be another property added to the script catalog for each entry which points to the docs.

Further suggestions and/or code which helps with accessibility are certainly welcome.

2 Likes

Thanks, going through tutorials right now, very cool. Do you know if there is a higher-level lib to draw objects on the grid (like lines, circles, etc.)?

In recent days I found myself casually looking for libraries which could render markdown files in the browser. The thinking was to be able to display documentation in a richer fashion within maiden.

Sounds good! I’m using markdown-it in a web app, and it works really well, so that would be one option. I can contribute some of my experience with that if it’s useful.

2 Likes

idk how interested most would be, but i would love some editing features for tape recordings: ability to move playhead during playback, edit file names, move files, etc. would simplify and streamline things for me personally at least!

i don't know exactly what this means, but isn't it cool how the ai takes the reduced sample artifacts from this audio clip and uses those parts to generate the rest of the track? i have been trying to think of how to extrapolate that idea into some kind of simple script for the past 5 minutes, but i can't code, even lua, and i'm not as smart as you people, so i'm gonna leave this here (samples 1 & 3 especially highlight this effect)

6 Likes

This thread gets into using AI on Norns, and @dianus started this thread on a neural-network-powered module.

If you’re looking for “immediate” gratification, I suggest looking at something like Magenta Studio, which offers what you’ve posted (albeit for MIDI) as individual apps or as an M4L Ableton plugin.

1 Like

@coolcat I don’t own a grid so I haven’t had a reason to dig in, but this grid recipes thread from @dan_derks might be of interest.

1 Like
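
Not aware of a bundled shape library, but as a rough sketch of how a line could be drawn by hand with only the basic grid calls (plain linear stepping between two points; the coordinates and brightness below are just placeholders):

-- rough sketch: drawing a straight line of leds with the basic grid api
g = grid.connect()

function draw_line(x1, y1, x2, y2, level)
  local steps = math.max(math.abs(x2 - x1), math.abs(y2 - y1))
  for i = 0, steps do
    local t = (steps == 0) and 0 or i / steps
    local x = math.floor(x1 + (x2 - x1) * t + 0.5)
    local y = math.floor(y1 + (y2 - y1) * t + 0.5)
    g:led(x, y, level)
  end
  g:refresh()
end

draw_line(1, 1, 16, 8, 15) -- full-brightness diagonal across a 128-size grid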

Brian Eno, the composer of the Windows 95 startup sound, would be very happy about this, I’m sure.

2 Likes

i saw this question elsewhere but not the answer:

is it possible to map a midi key to a params:trigger?

it would be useful for me - then i can add triggers to one-shot things in scripts (effects, start/stop) that can easily be controlled by other keys on a plugged in midi keyboard.

The code is here, fwiw:


I haven’t tried to analyze it, but from those demos it sounds just like it is smashing a song through the qualities of the source material. An article mentions the “AI” works on a per-sample level for the analysis.
More useful progress could probably be made with norns, considering the existing norns ecosystem of examples. If you look at something like barcode, imagine if the loopers stretched in such a way that the resulting pitches followed the skeleton of an existing song, and that skeleton was chosen based on the pitches of the recorded sample. Of course the song skeletons would have to be turned into data and preloaded, but only simple things like chord progressions (and verse/chorus timing?) would be enough, considering the complexity of the input material would generate interesting variations while stretched and looping.

2 Likes
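
To make the “skeleton as data” idea above a bit more concrete, here is a purely hypothetical sketch of what a preloaded song skeleton could look like and how a detected pitch might be snapped to it (none of this exists in barcode; the names and numbers are made up):

-- hypothetical: a song skeleton preloaded as simple chord data
skeleton = {
  { chord = {0, 4, 7},   beats = 4 }, -- I
  { chord = {5, 9, 12},  beats = 4 }, -- IV
  { chord = {7, 11, 14}, beats = 4 }, -- V
}

-- snap a detected pitch (semitones above the tonic) to the chord at a given step
function snap_to_chord(pitch, step)
  local best, best_dist = pitch, math.huge
  for _, tone in ipairs(skeleton[step].chord) do
    local d = math.abs((pitch % 12) - (tone % 12))
    if d < best_dist then best, best_dist = tone, d end
  end
  return best
end

-- a looper's playback rate could then follow the skeleton, e.g.:
-- rate = 2 ^ ((snap_to_chord(detected_pitch, current_step) - detected_pitch) / 12)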

i do understand how the open jukebox ai works, at least in a very abstract way. but that’s not really the main aspect of that experiment that interested me. i didn’t and still don’t have a fully formed idea of what that example could be used for, specifically the digital artifacts as the skeleton, but i think that could be the interesting thing to expand on. i guess we already have noise synthesis, but i was thinking more of something like how we make granular by multiplying pieces of a frequency spectrum and smearing them across the fft algorithm; maybe there could be something (generative, or more just building blocks of a synthesis technique) that focuses on digital artifacts to either create some sort of smart effect or something. i don’t know what i’m talking about. i’m probably misusing this thread for my barely coherent misunderstandings of how these things work

that is a very interesting extrapolation of my take though. i love the barcode-based idea. i am a fan of turing machine-esque rhythm and melody generators and i think that a form of that for sample looping is worth exploring. something like a “smart” chase bliss mood, or marbles & microcosm combined

is it possible to map a midi key to a params:trigger?

currently no. @andrew also introduced the “binary” type with the prospect of midi mapping.

check out lua/core/menu/params.lua if you’d like to have a go at adding support!

2 Likes
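
Until menu mapping lands, a script-side workaround is possible: define a binary param and fire it from a raw midi handler. A rough sketch, assuming a build that already has the binary type (the param id, note number, and action are placeholders):

-- hypothetical workaround: trigger a binary param from a midi note-on
params:add_binary("one_shot", "one shot", "trigger")
params:set_action("one_shot", function() print("triggered") end) -- replace with the real action

m = midi.connect()
m.event = function(data)
  local msg = midi.to_msg(data)
  if msg.type == "note_on" and msg.note == 60 then -- middle C as an example key
    params:set("one_shot", 1)
  end
end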

thanks, binary looks perfect. will try adding midi support

4 Likes

A question/feature request… no idea how complex this would be to implement (or if it would have any other knock-on impacts), but would it be possible to have more control over the stereo image of incoming audio? Currently “monitor mode” can be either mono or stereo - I’m imagining a “spread” control to move between those two extremes? Not sure. Just throwing it out there.

Use case (at least for me) would be running cocoquantus (which hard-pans the output from the two loopers/delays) direct into norns. Typically I go via a mixer to control the stereo image to taste, but it would be a nice option to go straight in and be able to dial back the hard panning a little…

1 Like

no, it would not be hard to replace the switch with a “width” parameter.

implementation note: i would be careful to skip any panning computations when width was set to minimum or maximum.
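
For anyone curious what that implies, a minimal sketch of the width math (not actual norns code; the mono and full-stereo branches skip the blend entirely, per the note above):

-- hypothetical width control: 0 = mono, 1 = full stereo
function monitor_mix(in_l, in_r, width)
  if width >= 1 then return in_l, in_r end -- full stereo: pass through untouched
  local mid = (in_l + in_r) * 0.5
  if width <= 0 then return mid, mid end   -- mono: just the mid signal
  local side = (in_l - in_r) * 0.5 * width -- scale the side component
  return mid + side, mid - side
end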


re: openAI jukebox: AFAICT, all components of this project are unsuitable for porting to GPU-less computers - e.g., requiring specialized CUDA kernels even for the pre-trained upsampling stage. for training, forget about it - it takes 3 hours to sample 20 seconds of music using these transformations on a top-tier GPU. (tesla V100 with 16GB RAM.)

similar limitations apply to most such projects. personally, i think ML is interesting, and data-driven brute force training/classification/synthesis tasks come up a lot in my day jobs, but i don’t see much overlap with the needs of a box like norns. the purpose of norns is to expose the power of fairly simple audio processes through accessible scripting.

there are other, lower-bandwidth techniques and processes in the big umbrella of ML, many of which we use all the time without much thought. (digital compressors and IR convolution have close analogs in ML.) simple neural networks and component transforms (PCA, RBF) have had interesting applications in synthesis, which have been explored in the academic community for many decades. (https://ccrma.stanford.edu/~slegroux/synth/pubs/UCB2002.pdf). the capabilities of a computer like norns have caught up to these kinds of models. they are not yet capable of playing with large-scale DNNs. those explorations are more suitable for the high-level tools and environments available to the other computers on your desk.

if you really want to access these kinds of processes via norns, i suppose the best way would be highly asynchronously through an internet API. (norns would be well equipped to capture audio, upload it, and then download or stream some synthesis artifact - seconds, minutes or hours later.)

5 Likes
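
As a very rough sketch of that asynchronous route, something like this could fire off an upload from a script and fetch a result back later (the URL is imaginary, no such service exists, and error handling is omitted):

-- hypothetical: push a capture to a remote service, pull the result back later
local audio_path = _path.audio .. "capture.wav" -- assumes a file already recorded here
os.execute("curl -s -F 'file=@" .. audio_path .. "' https://example.com/api/process &")

-- some time later (minutes, hours), fetch whatever was synthesized:
-- os.execute("curl -s -o " .. _path.audio .. "result.wav https://example.com/api/result &")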

I’m having trouble changing just the brightness of one particular line on the screen … My thought is to use something like

screen.level(l)
screen.move(x1, y1)
screen.line(x2, y2)
screen.update()

Am I missing something? This either changes the whole screen or nothing at all.

would have to see more to know why it’s one or the other, but after that line is drawn, be sure to set the screen level to whatever you want the next element to be

1 Like

Also don’t forget to call screen.stroke(), could be part of the issue

4 Likes
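
Putting both suggestions together, a minimal redraw along these lines should give two lines at different brightnesses (coordinates and levels are just placeholders):

function redraw()
  screen.clear()
  screen.level(2)   -- dim line
  screen.move(10, 20)
  screen.line(100, 20)
  screen.stroke()   -- actually render the path
  screen.level(15)  -- bright line
  screen.move(10, 40)
  screen.line(100, 40)
  screen.stroke()
  screen.update()
end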

woke up with a dumb idea and no idea where to start

[image]

6 Likes

you know… for kids!

3 Likes