actually i would call this a mistake; it doesn’t match the intended routing and doesn’t seem particularly useful… thanks for pointing it out.

simple change, here it is

i’m trying to set up automatic looping of multiple samples loaded into the same buffer with the following code (using bundled samples in the example below):

local samples = {
  first = {
    path = _path.audio .. "/tehn/drumilk.wav",
    duration = nil,
    start = nil,
  },
  second = {
    path = _path.audio .. "/tehn/drumilk.wav",
    duration = nil,
    start = nil,
  },
}

local voices = {
  second = 1,
  first = 2,
}

local function load_samples()
  local start = 0

  for key, sample in pairs(samples) do
    local channels, frames, samplerate = audio.file_info(sample.path)
    -- read the whole file (dur = -1) into buffer 1, offset 'start' seconds in
    softcut.buffer_read_mono(sample.path, 0, start, -1, 1, 1)
    
    sample.duration = frames / samplerate
    sample.start = start
    
    print('loaded', key, 'at', start, 'duration', sample.duration)

    start = start + util.round_up(sample.duration)
  end
end

local function setup_voices()
  for key, voice in pairs(voices) do
    local sample = samples[key]

    softcut.enable(voice, 1)
    softcut.buffer(voice, 1)
    softcut.level(voice, 1)
    softcut.play(voice, 1)
    
    softcut.loop(voice, 1)
    softcut.loop_start(voice, sample.start)
    softcut.loop_end(voice, sample.start + sample.duration)
    
    print('setting voice', key, 'loop', 'to', sample.start, sample.start + sample.duration)
    
    softcut.position(voice, sample.start)
    softcut.level_slew_time(voice, 0)
    softcut.rate_slew_time(voice, 0)
    softcut.level(voice, 0.25)
  end
  
  softcut.level(voices.first, 0)
end

local function phase(voice, phase)
  if voice == voices.first then
    -- print('phase', voice, phase)
  end
end

function init()
  load_samples()
  setup_voices()

  softcut.event_phase(phase)
  softcut.phase_quant(voices.first, 1/10)
  softcut.poll_start_phase()
end

function redraw()
  screen.clear()
  screen.update()
end

the following output is produced in maiden, which seems correct (with the exception that util.round_up rounds 4.8 up to 6 for some reason).

loaded	first	at	0	duration	4.8
loaded	second	at	6	duration	4.8
setting voice	first	loop	to	0	4.8
setting voice	second	loop	to	6	10.8

but i hear an audible pause just before the sample loops. am i doing something wrong here?

UPD: nvm, figured it out. i’ve been using 44.1kHz samples in my tests, and after resampling them to 48kHz i got nice seamless playback. this doesn’t seem to be a documented point, though.
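
for reference, a rough sketch of the rate-compensation alternative (instead of resampling the file; it reuses the file_info call from the code above):

-- sketch: a 44.1kHz file read into the 48kHz buffer occupies
-- frames / 48000 seconds of buffer time, not frames / samplerate,
-- which is presumably where my loop_end overshot the audio and
-- caused the pause. playing at a scaled rate restores the pitch.
local channels, frames, samplerate = audio.file_info(sample.path)
sample.duration = frames / 48000
softcut.rate(voice, samplerate / 48000)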

you beat me to it. you’re right, the 48kHz requirement needs to be more prominently displayed. (or of course we could just resample on import.)

perhaps it’d be best under the API docs? currently, it’s under all the usability docs (tape, fileshare > audio, and the general faq).

You just made me realize that my code would be much simpler if I resampled my drum loops (which can be a mix of sample rates depending on the file) on import for Softcut playback, rather than modifying the rate on a per-loop basis.

Is there a simple way to do this right now? I can only think of loading a file into a temporary spot on a buffer, then playing back at a modified rate to record onto the destination buffer.

that would work. but you could also just process your files directly with ffmpeg or sox?

Oh right, that would be simpler. Can I rely on either or both of those being preinstalled on Norns so that I can run them dynamically from my app? I’d like to be able to support arbitrary WAV files for users of Beets without requiring them to do the transcoding themselves.

I think you could just include them in your script’s folder and call them from lua.
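
e.g. something like this from lua (just a sketch: it assumes a sox binary is reachable on the path, and the destination path here is made up):

-- sketch: shell out to sox to resample an arbitrary wav to 48kHz
-- before loading it into a softcut buffer.
local function resample_to_48k(src, dst)
  os.execute("sox '" .. src .. "' -r 48000 '" .. dst .. "'")
end

local tmp = _path.data .. "beets/tmp-48k.wav" -- hypothetical location
resample_to_48k(sample.path, tmp)
softcut.buffer_read_mono(tmp, 0, 0, -1, 1, 1)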

Dumb question(?): for softcut.position(voice, value), what are the units for value?

Seconds?

ye ol’ seconds is correct
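
e.g.:

softcut.position(1, 2.5) -- jump voice 1 to 2.5 seconds into its buffer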

Dreaming of a softcut function to get audio sample data as an array/table. (That could then be drawn on the display.)

I hacked the following together in Python tonight, but it’s pretty slow to load and barfs on some samples.

Have you seen Reaper’s ReaPeaks? It’s a pretty simple format for storing peak information. It zooms easily, can be saved alongside a sample, and if you use Reaper, it can generate those for you.
http://reaper.fm/sdk/reapeaks.txt

yes, but coming up with a mipmap format is not really an issue. we have different requirements than Reaper, which produces mipmaps of entire media files on import or record. the issue is deciding when to build the waveforms. (or mipmaps, though i’m not convinced they are needed or even helpful in the norns context - requiring interpolation for drawing as well as for mapping.)

have given this a good amount of thought.

softcut client has an offline buffer processing class, which would be a natural place to render waveform data. my current thinking is that these renders should happen by request over arbitrary regions, without any attempt to “optimize” re-rendering. thus, we need not place any more computational burden on the audio thread; but since computation is still in the jack-client process, we don’t need to share the buffer memory directly with the client (which could also be a performance issue.)

passing the data back to the client is not really a problem. we already have a data pipeline for 128-byte packets from crone to lua. that’s as many pixels as can fit horizontally on the norns screen, and more than enough resolution per X position. (exactly enough, if we pack signed 4-bit min/max values.)
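
(for illustration only, packing one pixel’s min/max pair into a byte could look like this; the exact quantization scheme is just an assumption:)

-- illustrative sketch: quantize a pixel's min/max (floats in [-1, 1])
-- to signed 4-bit and pack both into one byte, so a 128-byte packet
-- covers the full 128-pixel width of the screen.
local function pack_pixel(min, max)
  local lo = math.max(-8, math.min(7, math.floor(min * 8)))
  local hi = math.max(-8, math.min(7, math.floor(max * 8)))
  return ((lo & 0xF) << 4) | (hi & 0xF) -- two's-complement nibbles
end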

another consideration is what tradeoffs to make in the peak computation itself: efficiency vs. accuracy vs. the possibility of missing peaks altogether. reaper is a DAW and its peak rendering is designed for offline computation on fast machines - it finds the true peak over each interval. we are dealing with a more constrained environment, and our peaks can be lower resolution. &c.

in principle the softcut processing classes could keep track of dirty regions, but i’m not sure it’s realistically feasible.

more explicitly, i’d propose:

  • client requests a waveform render with N points, over the region [a, b] in seconds.
  • (we can assume N = 128, the horizontal resolution of the norns screen.)
  • samplerate sr
  • duration d = b - a
  • the audio buffer is a discrete-time signal x = {x[n]}, n in [a*sr, b*sr]
  • the waveform is another signal y = {y[m]}, m in [1, N]
  • then y[m] = max({ x[w] : w = sr*(a + (m-1)*d/N), ..., sr*(a + m*d/N) })

the main little issue being that the execution time to compute the true peak depends on d (how many terms are in the ellipsis above). that’s why precomputing mipmaps is helpful in the first place - each lower resolution can use the peaks from the next-higher resolution. but i’m not really seeing a way to do this that is suitably dynamic and performant.

anyways, i’m happy to provide the functionality described above, and leave it up to the script to set performance-appropriate limits on usage. we could even have the peak-finding index w advance by some stride that is >1 for long regions; a crude downsampling which will certainly add error to peak estimation, but i’d guess not significantly so if the SR divisor is kept fairly small.
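
to make that concrete, a minimal lua sketch of the peak pass (names here are hypothetical; the real render would live in the softcut client, not in lua):

-- sketch of the proposed peak pass over region [a, b] (seconds), given
-- samples as a 1-indexed table and samplerate sr. stride > 1 gives the
-- crude downsampling mentioned above, trading accuracy for speed.
local function render_peaks(samples, sr, a, b, n, stride)
  n = n or 128
  stride = stride or 1
  local d = b - a
  local y = {}
  for m = 1, n do
    local w0 = math.floor(sr * (a + (m - 1) * d / n)) + 1
    local w1 = math.floor(sr * (a + m * d / n))
    local peak = 0
    for w = w0, w1, stride do
      local v = math.abs(samples[w] or 0) -- abs() as a simple one-sided peak
      if v > peak then peak = v end
    end
    y[m] = peak
  end
  return y
end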

dumb question of the day:

Is the softcut.position(voice,value) value the position in the entire softcut buffer, or just within the segment for that particular voice?

I guess another way to ask that - is the position value relative to the voice or looking at the whole buffer?

nope, just two big buffers for 6 voices

if you like segments per voice though, i made a thing

relative to the voice or looking at the whole buffer?

looking at the whole buffer. multiple voices addressing the same audio data opens up various use cases.

(i resisted even adding a second buffer for a while, which restricted the addressable duration of each voice, but i lost that one.)

quick question: is it possible to simultaneously route adc to softcut voice 1 and route engine to softcut voice 2?

from what I can tell, the answer is no, since audio.level_eng_cut doesn’t specify a voice.

why would I do this? I’m interested in making a pitch follower and playing an engine synth simultaneously with the micced instrument. I don’t necessarily need to have them routed to different softcut voices, but I thought it’d be neat to control their individual panning.

if two’s all you need you can maybe use the left channel for one, the right for the other (softcut voices can be panned after the fact)

might mess up your dry signals though since they’d need to be hard panned
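
(roughly, using softcut.level_input_cut(ch, voice, level) to split the two input channels across two voices; this assumes the engine output is hard-panned right and the mic arrives on the left input:)

-- rough sketch of the hard-pan workaround:
softcut.level_input_cut(1, 1, 1.0) -- input L (adc) -> voice 1
softcut.level_input_cut(2, 1, 0.0)
softcut.level_input_cut(1, 2, 0.0)
softcut.level_input_cut(2, 2, 1.0) -- input R (engine) -> voice 2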

brilliant. that works for me. thanks @andrew!!

in general this is true (if we assume stereo signals). softcut is a 2x2 jack client. ADC, tape and engine are individually mixed to a stereo bus, which in turn feeds all the softcut voices.

there are three jack clients on norns: softcut, scsynth, and a 6x6 mixer.

routing inside the softcut client looks like this:
softcut-routing.pdf (50.9 KB)

routing in the mixer client looks like this:
crone-process-routing.pdf (49.3 KB)
