Beautiful update, thanks mucho!
I’ve been following the updates on instagram and I’m very glad to see this out in the world! Can’t wait to try it out for myself.
Best news I’ve heard all day! Thank you!
I’m new to the iOS software world but one of the first things I tried was Borderlands, and I love it. I’m even more stoked with the update. It’s so intuitive to use that it’s got me thinking about potentially replacing my Morphagene, using my iPad for this and other effects. I’ve already been posting on the iOS thread, but since this is Borderlands specific, I thought I’d ask here: does Borderlands work with external interfaces for the iPad? I’ve been using it with a crappy passive guitar interface to get sound in and out, and it works fine, but the AD converter on the iPad doesn’t sound that great. I don’t see any options in the prefs to select an audio device. When I use Borderlands in AUM, I can of course select a device out, but there’s no way to select an input to route to Borderlands. If I just try to pull the audio in while I have AUM running, it actually distorts the incoming audio drastically; it almost sounds like there’s a ring mod on it. Before I go down the rabbit hole of buying an interface, I’d like to know if it even works with Borderlands. My goal is to use it sorta like Clouds and Morphagene: processing live audio on the fly from the modular.
Look at the MOTU M-4!
Yes. I use a Zoom U24 and Borderlands in an FX node with Rec In running to wonderful results. The update (waterfall recording) has taken it to a new level.
And then add Spectrum for your Clouds.
This may be a dumb question, but with the M4, since there’s only a USB-C connection on the back, if you aren’t using bus power from a laptop, what do you do with an iPad? Use the Lightning to USB 3 Camera Adapter, then plug that into the USB power supply?
Yes. Cords galore, but it works. Plus the Zoom can run on batteries.
thanks, nick! hope you enjoy it!
yes! it should work with any class compliant usb audio interface. it’s possible that whatever apps you are using are bogging down performance and the noise you are hearing is audio dropouts. what model ipad do you have? what apps are you using at the time? if you want to follow up via dm or email (carlsonc at ccrma.stanford.edu), i’d be happy to help troubleshoot!
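The dropout explanation above can be made concrete with a quick budget calculation: an audio callback glitches when the DSP work for one buffer takes longer than the buffer’s real-time duration. A minimal sketch (the numbers and function names are illustrative, not from Borderlands):

```python
# Rough audio-callback budget check: dropouts happen when per-buffer DSP
# time exceeds the buffer's real-time duration. Numbers are illustrative.

def buffer_budget_ms(frames_per_buffer: int, sample_rate_hz: int) -> float:
    """Time available to process one audio buffer, in milliseconds."""
    return 1000.0 * frames_per_buffer / sample_rate_hz

def will_drop_out(dsp_time_ms: float, frames_per_buffer: int,
                  sample_rate_hz: int) -> bool:
    """True if the processing load cannot meet the callback deadline."""
    return dsp_time_ms > buffer_budget_ms(frames_per_buffer, sample_rate_hz)

# A 256-frame buffer at 44.1 kHz leaves about 5.8 ms per callback.
print(round(buffer_budget_ms(256, 44100), 1))   # ~5.8
print(will_drop_out(7.0, 256, 44100))           # heavy load: dropout likely
```

Stacking several heavy apps shrinks the headroom until the deadline is missed, which is exactly when the crackly, ring-mod-like artifacts show up.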
Thanks so much.
I don’t want to bother you too much, as I’m still in the discovery phase here. But so far I have no official audio interface; I’m using essentially a passive splitter that breaks out the audio jack on my iPad into a headset input and the stereo output. I’m just trying out what it would be like to have my iPad as part of my modular setup before I jump into purchasing a dedicated interface. I have a 3rd gen iPad Air, from 2019. So no real processing issues there.
I’m only really running into an issue when I run Borderlands in AUM. When I open it up as an Inter-App Audio app, I can get sound out, but Borderlands doesn’t want to process the sound in. It’s drastically distorted. It’s probably a shortcoming of my current setup, I’m guessing.
I’m guessing that would be alleviated with a dedicated interface.
Right now my modular setup is pretty lean and tight, just 7U 104HP. I’m weighing the idea of adding the iPad because on one hand I can drastically expand my effects processing without too much more investment, but on the other I’m introducing more cables and more potential points of failure. But I really like the Borderlands app, a lot. So just asking questions before I jump in.
I downloaded Stria because of this thread. It sounds very bonkers. The inclusion of a “randomize” function wins my heart every time. Hitting the “randomize” button has produced future-proof sonics which I have rarely heard rivaled by man or machine.
For bonkers sonics, it rivals Animoog and perhaps even Nave. When one considers how bonkers many iOS synths are, one sees in what high esteem I hold Stria even having only spent a few hours concocting future-proof sonics with it.
I’ve got 6Ux104, and I use the ipad for effects (AUM, eventide blackhole, spectrum plugins etc). I was using a zoom u-24, but have moved to the expert sleepers es-8 for greater flexibility, and fewer cables; I can route ipad synths through the modular and back, or modular out to fx; with full control over panning and levels, as well as multi channel recording/looping. It all works quite well.
I’ve used it with the MOTU M2 (requires a powered hub with my old Air 2 iPad). It works rather quickly and easily. With the new waterfall feature you can create all sorts of interesting delay sounds. I do wish I could separate the input and output routing, which might be easier on a 4 track system. As is, I get cello into chan 1, out of chan 1&2, then all the grains also out on chan 1&2. It works, but I can’t mix them separately after the fact.
Hey there, what are you using for looping on the iPad?
Is there a way to sync those loops to clock signals in the modular realm?
If it’s OK with the moderators I’d like to start a thread dedicated to this amazing granular synthesizer iOS app.
I’ve spent a couple of days going as deep as I can this last week, and the deep is pretty deep.
It has given me a new way of looking at music and playing music (I know I’m late to this party). I’m very excited to go further and deeper.
There is no manual, so to speak, and some aspects/elements of the UI aren’t entirely intuitive (to me, and I know I’m not alone), but most of it is, once you get past the first day or two.
I checked with the creator of the app before posting this and he’s happy to be part of the discussion, here. Welcome Chris Carlson @modulationindex, and congratulations on Borderlands!
I now see there is already a Borderlands 2.1 thread. Maybe these two threads can be merged so that with each new update we won’t need a new thread?
thanks, lloyd! looking forward to discussing / answering any questions here!
a - is recording of automation undo-able or editable?
b - is there a limit to the length of the automation loops?
c - is every single recorded automation discrete from all others?
d - if the cloud is tempo synced, can the automation loops be quantized?
a: unfortunately not at this point but would love to get there in the future. only way to undo is to re-record. there are two ways to cancel out automation for individual items - 1 - touch and hold the automated cloud / param / sound and the automation record button turns into “delete automation” 2 - arm the automation button and tap the item you want to clear. unarm and the automation is gone (any time the system detects fewer than 1 automation “frame” has been recorded, it clears out the automation for that item). oops… i lied. 3 ways - it’s also possible to clear all param automation for a single cloud by opening the cloud for editing (double tap), holding the center of the cloud so the automation button turns into delete, and pressing the delete automation button.
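The “arm, tap nothing, unarm” clearing rule above can be sketched as a tiny state machine: if a recording pass captures fewer than one automation frame, the stored automation is wiped. This is a hypothetical illustration of that rule, not code from the app; all names are made up.

```python
# Sketch of the clearing rule: unarming with fewer than 1 newly captured
# frame wipes the item's automation. Illustrative only, names hypothetical.

class AutomationTrack:
    def __init__(self):
        self.frames = []        # stored automation frames
        self._pending = None    # frames captured while armed

    def arm(self):
        self._pending = []

    def capture(self, value):
        if self._pending is not None:
            self._pending.append(value)

    def unarm(self):
        if self._pending is not None and len(self._pending) < 1:
            self.frames = []            # armed, nothing recorded: clear
        elif self._pending:
            self.frames = self._pending  # replace with the new recording
        self._pending = None

track = AutomationTrack()
track.arm()
for v in (0.1, 0.5, 0.9):
    track.capture(v)
track.unarm()            # three frames stored
track.arm()
track.unarm()            # armed, tapped nothing: automation cleared
print(track.frames)      # []
```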
b - no limit! just limited by your system memory. the parameters, however, do instantly start looping as soon as you release. this was a change i made shortly after version 2 because i didn’t like the extra frames added to the automation in the time it took to move from param to automation button…
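The loop-on-release behavior amounts to this: whatever frames were captured up to finger-up become the loop, and playback just wraps around them, so no trailing “reach for the button” frames ever get appended. A minimal sketch with made-up values:

```python
# Sketch of loop-on-release playback: the captured frames become the loop
# and playback wraps with modular indexing. Illustrative values only.

def loop_value(frames, step):
    """Value of a looping automation track at playback step `step`."""
    return frames[step % len(frames)]

frames = [0.0, 0.25, 0.5, 1.0]   # captured up to finger-up, nothing after
print([loop_value(frames, s) for s in range(6)])  # wraps back to the start
```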
c - yep. totally asynchronous. HOWEVER, if you record a bunch of automation for one cloud, open that cloud for editing, and then double tap on the screen to duplicate the cloud, all of the automation is copied and in phase in the new cloud (including the phase of the ring mod and vibrato, which will remain in sync with the source cloud).
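The phase-preserving duplication described above boils down to copying both the automation frames and the current playback position, so the copy reads the same frame as the source on every tick. A hypothetical sketch (class and method names are mine, not the app’s):

```python
# Sketch of phase-preserving duplication: the copy gets the same frames
# AND the same playback index, so both clouds stay in sync. Hypothetical.

import copy

class Cloud:
    def __init__(self, frames, phase=0):
        self.frames = frames
        self.phase = phase      # current position in the automation loop

    def tick(self):
        value = self.frames[self.phase % len(self.frames)]
        self.phase += 1
        return value

    def duplicate(self):
        # deep-copy the frames and carry over the phase, as described above
        return Cloud(copy.deepcopy(self.frames), self.phase)

a = Cloud([0.0, 0.5, 1.0])
a.tick(); a.tick()           # advance two steps
b = a.duplicate()
print(a.tick() == b.tick())  # True: both read the same frame
```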
d - not currently, but this is also an area i hope to improve.
all of these q’s get right at a bunch of stuff i hope to work on in the future around more options for fine tuning automation. adjusting automation phase, quantizing and trimming, adding retriggering options like trigger from ADSR or trigger per grain, individual automation speed controls, and “linked” automation - making one param adopt the automation assigned to another, etc… basically an “automation” menu is on the list down the road. at the moment everything is asynchronous and dependent on user timing. useful to a point but not as powerful as it could be.
one more tip - i snuck this in at the last minute for this version, but haven’t shared it anywhere yet - if you have a bunch of clouds on screen, you can move through the automation queuing (when things turn white) to start recording all of their position / ADSR automation (turning red) simultaneously if you hold down the automation button. this is how the Borderlands Presets > “BG - Scale Played” example was created… enabling the same recorded automation duration for all clouds on screen / keeping the chord progression from drifting each loop.