Trying to figure out the best way to get audio data to create a waveform on the norns display. Ideally (I think) just getting an array of integer values for each sample and returning that to lua.

I tried this in Python last night but the parsing is quite slow.

Would sox be a better solution? And then… any suggestions on how to return that data to lua? (using stat I guess?)

Or - would this be best as an addition to Crone somewhere (not sure where it would belong)?

you mean, thumbnails / waveforms for soundfiles on disk? (easier)

or, thumbnails / waveforms for softcut buffers? (harder, because it’s necessary to keep track of dirty regions; the buffers are pretty big.)

either way, you absolutely should not read every sample into lua.

you could use something like sox or ffmpeg to export a soundfile as a float array, after (massively) downsampling it to an appropriate resolution.

but as a real solution for this in norns, i would do it in C code in matron. we already have the snd_file.h/.c module, which uses libsndfile to inspect a file. from there it is a small step to reading the file as an array, applying some stride / max / average, and returning a table to lua (by adding FFI glue to weaver.c that calls your new routine in snd_file.h/.c).


if you are in fact talking about wanting to get the softcut buffers, we have some of the plumbing in place for doing this (binary poll / event type), but have not implemented anything that tracks “dirty regions” and computes thumbnails. there is a closed GH issue where we discuss some of the stride / averaging options. these options also apply to building waveforms in C from float arrays:

but this was closed because it's frankly a lot of work for an issue in which i have limited interest, compared to working on the audible qualities of the system.

Well the former for sure, but I think the latter was where I was headed eventually.

I was imagining a waveform display on the norns screen that would track with the softcut buffer showing position, loop points, etc. But… even just a “sample inspector” would be a nice addition.

So yeah… (as you say) I get that this is way easier with a sample from disk, and way more difficult with sound coming from the inputs.

I’ll take a look at snd_file.h/.c and see if I can make sense of it.

Hah… yeah, I learned that lesson… this is suuuuuper slow. :

So I’m looking at this and being a huge C noob this is not such a small step (for me) so I’m gonna embarrass myself and post some potentially janky code and ask how it should be done properly.

So far I’ve got something like the following (added to snd_file.c), but don’t understand how to allocate for the array - since that needs to be big enough to hold all the sample data

#define MAX_FRAMES 10000  // made up number
static float samples[MAX_FRAMES];

static float snd_file_readf_frames(const char *path, float *samples) {
    SF_INFO sfinfo = {.format = 0};
    SNDFILE *sndfile = sf_open(path, SFM_READ, &sfinfo);
    sf_read_float(*sndfile, samples, min(sfinfo.frames, MAX_FRAMES));
    return samples;
}

This part really confuses me as I can’t find any examples in weaver.c that return an array

Additional question. Is it a problem that snd_file_inspect() does not call sf_close() after it’s done with the file?

you don’t need to read all the samples at once, and probably don’t want to. libsndfile has a streaming interface.

let’s say you want to render a waveform for the whole soundfile on the norns screen. the screen is only 128 pixels wide.

i would try something like this to generate the waveform (an array of max amplitudes):

#include <stdlib.h>
#include <sndfile.h>

#define SND_FILE_WAVEFORM_PIX 128

// returns a newly-allocated array of size SND_FILE_WAVEFORM_PIX
// caller must free this eventually!
float* gen_waveform(const char *path) {
    SF_INFO sfinfo = {.format = 0};
    SNDFILE *sndfile = sf_open(path, SFM_READ, &sfinfo);
    if (sndfile == NULL) {
        return NULL;
    }

    const int frames = sfinfo.frames;

    // streaming buffer, one stride of frames at a time.
    // this could be large if the soundfile is large...
    // might want to use a smaller streaming buffer and another layer of loop nesting
    const int frames_per_pix = frames / SND_FILE_WAVEFORM_PIX;
    float *buf = malloc(sizeof(float) * frames_per_pix);

    // allocate buffer to return
    float *pxbuf = malloc(sizeof(float) * SND_FILE_WAVEFORM_PIX);

    // compute pixel values (max amplitude per stride)
    for (int px = 0; px < SND_FILE_WAVEFORM_PIX; ++px) {
        // this assumes a mono soundfile (1 sample per frame)
        sf_readf_float(sndfile, buf, frames_per_pix);
        // for each pixel, take the maximum amplitude over the last stride of frames
        float max = 0;
        for (int frame = 0; frame < frames_per_pix; ++frame) {
            float amp = buf[frame] > 0.f ? buf[frame] : -buf[frame];
            if (amp > max) { max = amp; }
        }
        pxbuf[px] = max;
    }
    sf_close(sndfile);
    // remember to free the streaming buffer
    free(buf);
    return pxbuf;
}


This part really confuses me as I can’t find any examples in weaver.c that return an array

i guess there aren’t any. but it’s not too complicated and you can see similar examples.

lua_newtable() to create a table on the stack
then for each element:

  • lua_pushnumber to add the next array value
  • lua_rawseti to set that value's index in the table

once everything is on the stack, that's a good time to free the waveform array. (there's a rough sketch after the link below.)

here’s an example
http://lua-users.org/lists/lua-l/2007-07/msg00277.html
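
and here is a rough sketch of what the glue in weaver.c might look like. the function name and error handling are just illustrative, and it assumes the gen_waveform() above plus the lua headers weaver.c already includes:

// illustrative only: a weaver.c-style glue function wrapping gen_waveform().
// assumes lua.h / lauxlib.h are already included, as they are in weaver.c.
static int _snd_file_waveform(lua_State *l) {
    const char *path = luaL_checkstring(l, 1);
    float *pxbuf = gen_waveform(path);
    if (pxbuf == NULL) {
        lua_pushnil(l);
        return 1;
    }
    lua_newtable(l); // table to return
    for (int i = 0; i < SND_FILE_WAVEFORM_PIX; ++i) {
        lua_pushnumber(l, pxbuf[i]); // push the next array value
        lua_rawseti(l, -2, i + 1);   // t[i+1] = value (lua tables are 1-indexed)
    }
    // everything we need is on the lua stack now, so free the C array
    free(pxbuf);
    return 1; // one return value: the table
}

(the function would still need to be registered alongside the other C functions that weaver.c exposes to lua.)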


ha! well… yes, that is a pretty major problem and a memory leak. woops! thank you for catching it.

this could be behind recent crash reports for e.g. MLR, and any other script that uses fileselect with audio files - i think that hits this function a lot. we need to do some memcheck runs on matron.


Oh cool! glad I could find something useful in my otherwise confused state. :slight_smile:

Thanks very much for the example. I’ll go through this stuff tomorrow and may likely have an additional question or two then.

Hey everyone,
Just got my Fates yesterday. I have a question about regular keyboard mapping. I'm in France and I want to play with Orca. The problem is that the default norns mapping is qwerty. I tried to change the mapping via ssh and sudo raspi but it doesn't change anything.
Do you guys have an idea to make it work?

Def a question for the Orca thread.

(This is likely a script level thing not a system level thing)

i’ll move this to the Orca thread (or somewhere), though it is sort of a general limitation in norns.

the problem is sort of on the norns system level, but indeed it is not particular to Fates.

norns isn't using a linux system layer (like a virtual console, Xorg, or Wayland) to get symbols from the kernel; it reads keycodes directly from the HID event stream. that's why setting the keyboard layout for linux virtual consoles / window managers will not help.

the mapping of keycodes to symbols does indeed happen here, in a library specific to the Orca script:

but is making use of the named enumeration of HID codes that we made here, in the norns system HID layer:

these enumeration names do assume a US keyboard layout, i think. that would probably be the best place to define other layouts for different regions. (because then scripts like Orca could continue to use the named enums without tracking the keyboard layout.)
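
purely as an illustration of what i mean (this is not actual norns code, and the names are made up): the layout data could be a table keyed by the same evdev/HID keycodes, with one column per layout, so the symbol lookup changes but the named enums stay put:

#include <linux/input-event-codes.h>

// sketch only: per-layout symbol table keyed by evdev keycodes.
typedef struct {
    unsigned int code;  // evdev keycode, e.g. KEY_Q
    char us;            // symbol under a US (qwerty) layout
    char fr;            // symbol under a french (azerty) layout
} key_symbol_t;

static const key_symbol_t key_symbols[] = {
    { KEY_Q, 'q', 'a' },
    { KEY_W, 'w', 'z' },
    { KEY_A, 'a', 'q' },
    { KEY_Z, 'z', 'w' },
    { KEY_M, 'm', ',' },
    // ...and so on; shifted / AltGr symbols would need more columns
};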

contributions would definitely help here. IIRC the tool that made these mappings from libudev was rather ad-hoc, but i could try and clean it up and make sure it is available in the norns repo for anyone who would like to contribute other layout data.


opened GH issue:

Oh excellent. Thanks for the followup.

I was on my phone earlier and was about to look all this up in the code to confirm. My comment above should have been “it’s not a linux system level thing”.

I’ll do some reading later about how the international keyboards map the keycodes/enumerations/etc.

Let's continue it on GH, where I've already put a couple more thoughts… hoping someone more familiar with virtual console integration can chime in. (if possible, it would be more elegant to let the linux user change the layout instead of reinventing the wheel and putting layout selection stuff in the norns menu or something…)


It’s definitely something i’m not able to do on my own. I wasn’t ready for this

sorry, i didn’t mean to imply that you should.

i just meant to let you know that international keyboard support is not working for now, but also to put it on the radar as something we can (and should) add

Ahah don’t worry it was a bit ironic :slight_smile: I’m curious though. I’ll follow the lead


I’m curious if there’s been thought or recommendations for mixing an additional audio input stream into norns.

Plugging in a USB ADC dongle will cause it to appear in ‘Devices’ as a ‘USB PnP Audio device’, but it’s unclear what the most expected behavior is for managing the resulting audio stream.

It isn’t too hard to modify the jackd config to define additional input ports for the sc engine and wire the additional audio inputs there, which seems like the natural way to expose this, but it seems like a bit heavier of a lift to try to do that automatically in response to device change events. possibly it could happen when configuring in system devices though?

there's been some thought. if you have a solution allowing multiple capture devices in JACK2 without clock drift, and know how to plumb it into supercollider, then an engine that allows this (and does all mixing in SC) is probably achievable. in designing such an engine i would expose the device management stuff to lua through the "commands" system.

for cards that aren’t at the same rate as the onboard device, the easiest would be to set up a jack 2 audioadapter to resample as the stream gets into jack. Are you hoping to avoid the need for a buffer/resample there?

avoiding resampling would be better, but it’s probably the most feasible way to end-run around possible problems with drift.

so, i don't know much about doing this, and documentation seems pretty sparse; here is a thread:
https://linuxmusicians.com/viewtopic.php?p=95025#p95025

i should clarify a little:

i doubt this is something we are going to want to put norns “core development” cycles into. one obvious problem is that having more I/O channels encourages fundamentally different applications.

what i'm saying is, if you know your way around this use-case with JACK2 (and it sounds like you do), then it would probably be most helpful for the norns community if you could investigate the changes needed to our systemd service scripts to allow it with an arbitrary device. with that in place as a POC, we could think about whether/how to plumb it into the norns system (likely just at the supercollider level, i think) or simply provide some clean documentation for people who want to customize norns in that way.

any application that assumes something besides 2x2 audio I/O is going to be basically “non-canonical” though.


Is it possible to add a shortcut to record to tape? Sometimes I’d like to quickly record something without having to go to the menu.

I’m wondering if I could add something like a key shortcut (something like K3 + K2 + K1) to a script.


https://monome.org/norns/classes/audio.html you could probably whip it together pretty easy! something like:

local f = io.open(norns.state.audio..'tape/index.txt','r')
local filename = 0
if f ~= nil then
  -- use the current tape index as the filename, falling back to 0
  local a = tonumber(f:read("*line"))
  filename = a or 0
  f:close()
end
filename = string.format("%04d",filename) .. ".wav"
-- these are plain module functions, so call them with '.' rather than ':'
audio.tape_record_open(filename)
audio.tape_record_start()

then later

audio.tape_record_stop()

if i’m reading the docs correctly :smile:

Edit: this doesn’t increment the tape index.txt so you’d want to add that so you don’t overwrite recordings the second time you use it
