that would work. but you could also just process your files directly with ffmpeg or sox?

Oh right, that would be simpler. Can I rely on either or both of those being preinstalled on Norns so that I can run them dynamically from my app? I’d like to be able to support arbitrary WAV files for users of Beets without requiring them to do the transcoding themselves.

I think you could just include them in your script’s folder and call them from lua.
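For illustration, here is a minimal Python sketch of shelling out to a bundled ffmpeg binary to normalize arbitrary WAVs before loading them. The `ffmpeg` path, the function names, and the 48 kHz / 16-bit target (norns' usual native format) are my assumptions; point `ffmpeg` at the copy shipped in your script's folder, and the same flags work from Lua via `os.execute`:

```python
import subprocess

def build_transcode_cmd(src, dst, ffmpeg="ffmpeg"):
    # -ar 48000: resample to 48 kHz
    # -acodec pcm_s16le: force 16-bit PCM WAV output
    # -y: overwrite the destination if it already exists
    return [ffmpeg, "-y", "-i", src, "-ar", "48000", "-acodec", "pcm_s16le", dst]

def transcode(src, dst, ffmpeg="ffmpeg"):
    # runs ffmpeg; raises CalledProcessError if the conversion fails
    subprocess.run(build_transcode_cmd(src, dst, ffmpeg), check=True)
```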

Dumb question(?) - for softcut.position (voice, value) what are the units for value?

Seconds?

ye 'ol seconds is correct

Dreaming of a softcut function to get audio sample data as an array/table. (That could then be drawn on display)

I hacked the following together in Python tonight, but it’s pretty slow to load and barfs on some samples.


Have you seen Reaper’s ReaPeaks? It’s a pretty simple format for storing peak information. It zooms easily, can be saved alongside a sample, and if you use Reaper, it can generate those for you.
http://reaper.fm/sdk/reapeaks.txt

yes, but coming up with a mipmap format is not really an issue. we have different requirements than Reaper, which produces mipmaps of entire media files on import or record. the issue is deciding when to build the waveforms. (or mipmaps, though i’m not convinced they are needed or even helpful in the norns context - requiring interpolation for drawing as well as for mapping.)

have given this a good amount of thought.

softcut client has an offline buffer processing class, which would be a natural place to render waveform data. my current thinking is that these renders should happen by request over arbitrary regions, without any attempt to “optimize” re-rendering. thus, we need not place any more computational burden on the audio thread; but since computation is still in the jack-client process, we don’t need to share the buffer memory directly with the client (which could also be a performance issue.)

passing the data back to the client is not really a problem. we already have a data pipeline for 128-byte packets from crone to lua. that’s as many pixels as can fit horizontally on the norns screen, and more than enough resolution per X position. (exactly enough, if we pack signed 4-bit min/max values.)
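To make that packing concrete, here is a sketch (Python, for illustration; function names are mine) of squeezing a signed 4-bit min/max pair into each byte of a 128-byte packet. The quantization step from float samples down to [-8, 7] is left out:

```python
def pack_column(min_val, max_val):
    """Pack one pixel column's (min, max) pair, each in [-8, 7], into one byte."""
    assert -8 <= min_val <= 7 and -8 <= max_val <= 7
    return ((max_val & 0x0F) << 4) | (min_val & 0x0F)

def unpack_column(byte):
    """Recover the signed 4-bit (min, max) pair from one packed byte."""
    def s4(v):
        # sign-extend a 4-bit value
        return v - 16 if v >= 8 else v
    return s4(byte & 0x0F), s4(byte >> 4)

def pack_peaks(pairs):
    """128 (min, max) pairs -> one 128-byte packet, one byte per screen column."""
    return bytes(pack_column(mn, mx) for mn, mx in pairs)
```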

another consideration is what tradeoffs to make in the peak computation itself, for efficiency, vs. accuracy, vs. the possibility of missing peaks altogether. reaper is a DAW and its peak rendering is designed for offline computations on fast machines - it finds the true peak over each interval. we are dealing with a more constrained environment, and our peaks can be lower resolution. &c.

it’s possible for the softcut processing classes to keep track of dirty regions, but i’m not sure it’s realistically feasible.


more explicitly, i’d propose:

  • client requests a rendered waveform with N points, over a region [a, b] in seconds
  • (we can assume N = 128, the horizontal resolution of the norns screen)
  • samplerate sr
  • duration d = b - a
  • audio buffer is a discrete-time signal x = {x[n]}, n ∈ [a*sr, b*sr]
  • waveform is another signal y = {y[m]}, m ∈ [1, N]
  • then y[m] = max{ x[w] : w = sr*(a + (m-1)*d/N), ..., sr*(a + m*d/N) }
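The proposal above can be sketched directly, assuming x is a plain list of samples (Python for illustration; index rounding is my choice):

```python
def render_peaks(x, sr, a, b, n_points=128):
    """y[m] = max of x over the m-th of n_points equal slices of [a, b] seconds."""
    d = b - a
    y = []
    for m in range(1, n_points + 1):
        lo = round(sr * (a + (m - 1) * d / n_points))
        hi = min(round(sr * (a + m * d / n_points)), len(x) - 1)  # clamp to buffer
        y.append(max(x[lo:hi + 1]))
    return y
```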

the main little issue being the execution time to compute the true peak depends on d. (how many terms in ellipses above.) that’s why precomputing mipmaps is helpful in the first place - each lower resolution can use the peaks from the next-higher resolution. but i’m not really seeing a way to do this that is suitably dynamic and performant.
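For reference, the precomputed-mipmap idea amounts to a peak pyramid: each lower-resolution level is the pairwise max of the level above it, so zooming out never re-scans the raw samples. A sketch (Python, illustrative only):

```python
def build_peak_pyramid(samples):
    """Each level halves resolution by taking pairwise maxima of the previous one."""
    levels = [list(samples)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:
            # pad odd-length levels by repeating the last value
            prev = prev + [prev[-1]]
        levels.append([max(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels
```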

anyways, i’m happy to provide the functionality described above, and leave it up to the script to set performance-appropriate limits on usage. we could even have the peak-finding index w advance by some stride that is >1 for long regions; a crude downsampling which will certainly add error to peak estimation, but i’d guess not significantly so if the SR divisor is kept fairly small.
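The stride idea can be sketched like so; since it only skips samples, the estimate can miss the true peak but never exceed it:

```python
def strided_peak(x, lo, hi, stride=1):
    """Peak over x[lo..hi] inclusive, examining only every stride-th sample."""
    return max(x[lo:hi + 1:stride])
```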


dumb question of the day:

Is the softcut.position(voice,value) value the position in the entire softcut buffer, or just within the segment for that particular voice?

I guess another way to ask that - is the position value relative to the voice or looking at the whole buffer?

nope, just two big buffers for 6 voices

if you like segments per voice though, i made a thing


relative to the voice or looking at the whole buffer?

looking at the whole buffer. multiple voices addressing the same audio data, opens up various use cases.

(i resisted even adding two buffers for a while, restricting the addressable duration of each voice, but i lost that one.)


quick question: is it possible to simultaneously route adc to softcut voice 1 and route engine to softcut voice 2?

from what I can tell, the answer is no since audio:level_eng_cut doesn’t specify voice.

why would I do this? I’m interested in making a pitch follower and playing an engine synth simultaneously with the micced instrument. I don’t necessarily need to have them routed to different softcut voices but i thought it’d be neat to control their individual panning.

if two’s all you need you can maybe use the left channel for one, the right for the other (softcuts can be panned after the fact)

might mess up your dry signals though since they’d need to be hard panned


brilliant. that works for me. thanks @andrew!!

in general this is true (if we assume stereo signals.) softcut is a 2x2 jack client. ADC, tape and engine are individually mixed to a stereo bus which in turn feeds all the softcut voices.

there are three jack clients on norns: softcut, scsynth, and a 6x6 mixer.

routing inside the softcut client looks like this:
softcut-routing.pdf (50.9 KB)

routing in the mixer client looks like this:
crone-process-routing.pdf (49.3 KB)


Hello,

Norns scripting newbie here.

Am I right to say that we can’t pipe softcut output to a SuperCollider engine?

I’ve seen lazzarello’s PR that didn’t get merged.
Also, the crone process routing diagram in the last post seems to confirm this assumption.

I want to apply a 3 band EQ (such as the one from pedalboard) to a sped up in-memory recording.

What are my other options? Forget using softcut and do the recording / speed-up in pure SuperCollider?

My use-case is an app to use norns as a companion to a hardware sampler.


One common technique to get more grit and punch is to speed up the recording, apply some EQ on this sped up signal, record it in the sampler and then slow it down to the original speed/pitch.

The old school way to do this is by playing a 33RPM vinyl at 45 or 78RPM and use the EQ of a DJ mixer.

My idea is to use a norns as a speed up + EQ effect to reproduce this technique on any input signal.


yes. in any case, using softcut is kind of overkill if you don’t need the SOS recording capabilities it offers. you might be looking for something closer to timber.

we would also take a PR that made a more complete implementation of the feedback path. (e.g., adding the ability to turn it off, which is quite important.) this PR is a good example of the scope of changes needed.

Wow, thanks for the super quick reply.

Thanks for pushing me in the direction of timber. As far as I understand, it only supports sample playback, not recording, so I’d need to find some examples of how to record into a “variable” in SuperCollider.

There is another approach I just thought of. Since in my use case recording and playing back sped up is a two-step process, I could do the recording with softcut, save it to a tmp file (ideally in a tmpfs location) and play it back with a SuperCollider engine such as timber. Does that seem “sane”?

Regarding the feedback path, I understand that the proper implementation is not just adding jackd routing but adding mixer channels and registering them at different levels of API abstraction. It doesn’t seem super hard, but I’m too much of a newbie with norns (in general, and especially its core) to undertake this right now :sweat_smile:.

I’ve done this recently (until I have time to learn how to do it on the SC side). not ideal but works for now.

fileselect = require 'fileselect'

function init()
  -- make the tmp directory on init
  if not util.file_exists(_path.audio..'tmp/') then util.make_dir(_path.audio..'tmp/') end
end

function save_cut()
  -- clock.sleep only works inside a clock coroutine, so wrap the save/load in clock.run
  clock.run(function()
    local saved = "tmpfile-"..string.format("%04.0f", 10000 * math.random())..".wav"
    local cutsample = _path.audio.."tmp/"..saved
    -- buffer_write_mono(file, start, dur, ch)
    softcut.buffer_write_mono(cutsample, 1, 4, 1)
    clock.sleep(2) -- or however long you want to allow for the save/load
    load_cut(cutsample)
  end)
end

function load_cut(smp)
  -- use the engine-specific command; in this case glut
  params:set("1sample", smp)
end

Thanks for the snippet, it really helps a lot.

I guess I will use this trick as well.

EDIT: as the file we’re saving is temporary, I would suggest writing to "/dev/shm/" instead of _path.audio. This will write in-memory instead of on disk and help preserve the NAND chip.
