Approaching: norns

norns

#2164

here’s es-passersby.lua updated


#2165

Thanks @tehn! I just updated the other two gists:


#2166

So, as the next round of grid and norns availability is fast approaching, I've gathered some questions from fellow non-programming creatives who are intrigued by monome but have yet to take the plunge. I'm not sure if this is the best thread to ask them, but if anyone can shed light on these queries I will share the information and continue to evangelize this amazing community in hopes of helping it grow.

  1. While running MLR on norns, can it resample itself? It's understood that there is recording to hard disk, or “tape” as it's been referred to, but what can be done in real time? i.e. having access to incoming audio and/or prerecorded samples, plus resampling what is currently playing?
    (Surprisingly, there is very little information about MLR in general, and no tutorials online outlining its workflow concerning where/how samples are sourced or edited, only videos of people playing their grids.)

  2. Another MLR question: Can norns set audio recording to be triggered by an incoming audio signal? Can you use a decibel threshold to trigger the recording of a live audio sample on an “armed” track? Hands-free recording of samples is of interest for a variety of live instruments that need both hands to play.

  3. Can you send OSC out of norns? For instance, to affect real-time visual animations on a laptop running an openFrameworks app or something else listening for OSC?

  4. What is the monome return policy? Do they have one?

  5. What is the sonic range of norns? Can you get Moog-ish dirty growls out of it? Almost all the demos/examples are of very clean, sine-like sounds. It's understood that the sounds are created in SuperCollider, but what is possible with the engines currently available?

  6. Is automation possible? It's understood that parameters are exposed for manipulation via scripting in Lua and then adjusted with the encoders and keys. If you want a parameter to change over time, for a simple example the sweep of a filter, is this something that can be coded in Lua? Will modulation be mappable in roughly the same fashion as a modular workflow, with LFOs and VCAs controlling parameters and/or effects?

A huge thanks in advance to anyone willing to tackle any of these questions; it would really help a lot of people get a better understanding of these tools and how they might augment their play styles.


#2167

I use the passersby engine, which growls.


#2168

Oh wow, yeah that's dirty! Can you use this with the sequencers? Are the sequencers “tied” to specific synth engines, or can you mix and match?
I really appreciate the example you gave, it helps to hear it instead of just getting a written reply.


#2169

That synth can be controlled by a MIDI keyboard, or if you look a few posts up, it has also been modified to work with the Earthsea sequencer, which is a pattern-recorder type of sequencer. But essentially engines are tied to specific sequencers or are MIDI-controlled. Selectable engine switching for sequencers has been requested and discussed, but it's not implemented yet.


#2170

These are 8 band-pass filters on very short looping samples at high BPM, using 1 step on a Euclidean sequencer called Foulplay.


#2171

Thanks for clarifying. Do the other synth engines have adjustable wave shapes? Like the basics: sine, square, saw, triangle? Theoretically you could “dirty up” the synths that are hard-coded into the other sequencers by adjusting filters and waveforms?… Just saw the last post; that's done in the “foulplay” script, so a different synth engine from the previous one, but you're manipulating the filters to get these crazy sounds?


#2172

Yes, there's a “parameters” menu where you can change the synth characteristics and also assign MIDI CC. That Foulplay sequencer is a sample-based drum sequencer where you import your samples per track - no synth engine (it's a sample engine), but you can pitch, filter (LP, BP, HP, etc.) and bend each sample. Some of these sequencers now have a MIDI out, so you can play your external synths with them.
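Under the hood, those menu entries come from the params system that the studies cover. A rough, untested sketch of how a script exposes one control into that menu (the controlspec numbers and engine.cutoff are just stand-ins for whatever the engine actually provides):

-- register a "cutoff" control so it appears in the parameters menu
-- and can be midi-mapped; engine.cutoff is a hypothetical engine command
params:add_control("cutoff", "cutoff",
  controlspec.new(50, 5000, 'exp', 0, 555, 'hz'))
params:set_action("cutoff", function(x) engine.cutoff(x) end)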


#2173

Very interesting! I understand that this is in the very early stages and functionality will continue to evolve over time. How beginner-friendly do you feel scripting in Lua is? Say I want to make a “macro” knob that manipulates a couple of parameters? I know that's not an easy question to answer without knowing someone's skill set or background. In conversations it's debated whether norns is basically in “beta” and not really consumer-friendly yet. Hence some of the questions I posed in the earlier post.


#2174

I am not a coder, and for me it's a steep learning curve - even trying to do the most basic stuff. There are many coders on this forum who are really helpful, and it's second nature for them, but not for me :slight_smile: . I wouldn't say it's beginner-friendly on the coding side. I tend to stick with other people's scripts. I'd never used SuperCollider or Lua before, so it's tricky. The best advice I can give is to follow the studies. It's early days, and as more studies are released (there are 5 so far) I'm sure it will become easier.

https://monome.org/docs/norns/study-1/

helpful post here on SuperCollider and Lua basic setup


#2175

You've been really helpful, @mlogger, I really appreciate it! Thanks for the blog post link; I hadn't come across that yet. Will def be doing the studies as well.


#2176

No, currently MLR can't resample, but it's a feature I believe will be added, as it did have that ability in the original laptop edition.
When playing live instruments in real time into MLR, think of it as playing into the RAM of a looper pedal. Currently the RAM is lost on power-down and your recordings are lost. Again, a feature for saving your individual live track recordings has been requested within MLR but not implemented yet.
You can prerecord samples to Tape offline using other engines/sequencers and import them into MLR, or import samples from a laptop and load them into a channel of MLR. Generally MLR works better on short samples, not whole songs.
MLR has assignable MIDI CC for setting input/record level, overdub and speed levels, but not for triggering. You clear buffers and set up play/record channels by pushing buttons on the grid.
In MLR, no, recording can't be triggered by incoming audio. I believe there's currently nothing for hands-free stuff yet.

Here is me playing live into MLR from scratch - I work down, recording into 4 channels, then start playing with the track speeds and pattern recorders. Current MLR has 6 channels, but I believe it's going to be reworked back to 4 channels again.


#2177

This clears up a lot, thank you. I was very unclear about how MLR functions on norns and how similar it would be to the laptop version. I feel like norns is such a pliable device that in the coming months all sorts of things could be possible. Thank you for helping me know what's currently available and what has yet to be implemented.
Love the video; samples with objects is such a great way to get original sounds!


#2179

Can you send OSC out of norns? For instance, to affect real-time visual animations on a laptop running an openFrameworks app or something else listening for OSC?

yes, see https://monome.org/docs/norns/study-5/

What is the monome return policy? Do they have one?

7 days if in new condition. this is unofficial, as monome is basically two people (i'm one of them), and we do accommodate various situations.

norns is still in early development and is growing quickly. but keep in mind this also means the system is not fully featured in the way most commercial products feel the necessity to match all of the features of their competitors.


#2180

the features you describe are generally supported by the system and more or less easily “enabled” in the scripting environment.

(NB: code examples are not tested/complete, only for illustration)

While running MLR on norns, can it resample itself?

mlr.lua doesn’t expose this by default, but the engine routing matrix supports it:

engine.play_rec(2, 1, 1.0)

output of voice 2 is now sent to input for voice 1.
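for illustration (untested, like everything here; key() is the standard norns key handler), a toggle that enables and disables that route might look like:

local resampling = false

function key(n, z)
  if n == 3 and z == 1 then
    resampling = not resampling
    -- route voice 2's output into voice 1's input, or silence that route
    engine.play_rec(2, 1, resampling and 1.0 or 0.0)
  end
end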

since you mentioned panning on another thread: in MLR the panning of each voice is defined by

engine.play_dac(1, 1, 1.0) -- voice 1 -> L output level
engine.play_dac(1, 2, 1.0) -- voice 1 -> R output level
-- &c

and certainly this could be modulated.
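e.g. (untested; util.clamp is from the norns util library), putting voice 1's stereo position on encoder 3:

local pan = 0.5 -- 0 = hard left, 1 = hard right

function enc(n, d)
  if n == 3 then
    pan = util.clamp(pan + d * 0.01, 0, 1)
    engine.play_dac(1, 1, 1 - pan) -- voice 1 -> L level
    engine.play_dac(1, 2, pan)     -- voice 1 -> R level
  end
end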

Can norns set audio recording to be triggered by an incoming audio signal?

sure, something like

local isRecording = false
local threshDb = -20

-- watch the left input level; start recording when it crosses the threshold
local p = poll.set("amp_in_l", function(x)
  if x > threshDb and isRecording == false then
    isRecording = true
    engine.reset(1)
    engine.start(1)   -- assuming that voice 1 is "armed" already
  end
end)
p:start()

Can you send OSC out of norns?

yes: OSC.send(host, path, args)

(of course the other machine needs to be on the same network… that aspect is potentially more annoying)
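e.g. (untested; the destination address and path here are made up - substitute whatever your openFrameworks app listens for):

local laptop = "192.168.1.12" -- hypothetical address of the machine running oF
OSC.send(laptop, "/norns/amp", {0.75})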

If you want a parameter to change over time, for a simple example the sweep of a filter, is this something that can be coded in Lua?

sure, though fast modulations are better handled on the SC side. many engines provide slew, ramp, and/or envelope elements for many or all synth parameters. “automation” could mean a lot of things; recording sequences is pretty straightforward since you can access the system time from any event handler.

certainly,

local param1value = 1
local param2value = 10

function enc(n, delta) 
  if n == 1 then 
    param1value = param1value + delta
    param2value = param2value + (delta * 10)
    engine.param1(param1value)
    engine.param2(param2value) 
  end
end

(param1 and param2 are both swept by encoder 1, param2 sweeps 10x faster. you could go crazy and have an array of 100 arbitrary mapping functions triggered on each encoder update.)
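and for time-based rather than hand-driven automation, something like (untested; assumes the norns metro library, whose API details may differ by system version, and a hypothetical engine.cutoff parameter):

-- a slow LFO sweeping a (hypothetical) filter cutoff
local lfo = metro.init()
lfo.time = 1/30 -- update ~30 times a second
lfo.event = function(stage)
  local phase = (stage % 300) / 300 -- 10-second cycle
  engine.cutoff(200 + 1800 * (0.5 - 0.5 * math.cos(2 * math.pi * phase)))
end
lfo:start()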

passersby for example is a script with some LFO and control routing abstractions. Ack is a fully modular engine. but modular patching by menu-diving on a 128x64 screen is not most people's idea of fun (even by the standards of synthesis nerds). strong argument for text as the better interface.

i know you say you aren't interested in programming. but this is a small object with limited UI real estate and expansive functionality. so the strategy is to provide a powerful scripting environment for UI customization. IMHO the system lends itself better to small focused things than to kitchen-sink applications. (my kitchen sink may not be your kitchen sink, and the limited UI bandwidth means dense functionality comes at the cost of intuitive use.)

it's a computer; “yes” for some value of “-ish” :slight_smile:

i'm admittedly surprised to find the demos using “polysub” described as sine-like. they sound juno-ish to me. the engine uses antialiased primitive oscillators (xfade between sine, triangle, saw, pulse, with variable pulse/saw width), plus noise, through a modelled moog filter, in classic(/boring) subtractive style.

i guess the “cleanness” is a musical choice. (though also, there is no attempt in polysub to model drift, saturation, &c - this could be a fun experiment for someone.)

other engines use FM, feedback, wavetables, SVFs, gendys, &c. the available palette in supercollider is as varied as it gets. (but again palette != style)


#2181

Thank you @tehn, I appreciate the clarification. I think there is a consensus among most of us who have been waiting in the wings and watching monome grow over the years that the trajectory of this community is a direction we all find incredibly exciting. Is there a thread for brainstorming and proposing new functionality in norns?

@zebra BOOM! Laying down some knowledge! Much appreciated, bro. Exactly what I was hoping to see: a few lines of code to customize the scripts. Just to clarify, I am not opposed to learning programming, but am voicing a shared concern among friends of varying experience about just how steep the learning curve would be. The studies help for sure. I think you are absolutely spot on with approaching norns as a focused tool, leaving the “kitchen sink” on the laptop so to speak, and letting norns augment the process.

I was deemed the spokesperson to get these questions out, and I think everyone will be pleased to have some answers. I will make sure my friend checks out an example of polysub, in case they missed it.


#2182

meta: i’m enjoying the spokesperson role here because the book I’m currently reading is told in the first-person plural (as a wandering ambient band - https://www.goodreads.com/book/show/11241026-ambient-parking-lot)


#2183

Oh nice! Will have to check that one out; the only book I've read with that perspective is Anthem by Ayn Rand.


#2184

I'm a fan of Anathem by Neal Stephenson.