Monobright 64 ideas


#21

I agree with @Andrew_Sblendorio.

For building future-proofed apps, nothing beats well-adopted open-source tools.

I consider Pd very well adopted. SuperCollider is a niche environment for people willing to invest time in learning its language and mindset.

I was into Max/MSP and Pd before moving to SuperCollider. I’m a programmer, and while dataflow languages are nice for audio patching, IMO they do not scale well for non-audio stuff (persistence, logic, UI). Refactoring in dataflow languages is a nightmare, at least for me, coming from a programming background.

Recently, however, as my Ruby port of Grrr has matured, I’ve considered Pd as an audio backend for Ruby apps with some kind of asynchronous UI approach (there is a lot of work to get there).

As a side note: the SuperCollider language (SCLang) is basically just Ruby with a C-style syntax. SCLang compiles its classes up front, though, and is more suited to realtime work.
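
To show what I mean (a rough sketch, not a rigorous comparison), the same loop reads almost identically in the two languages; the Ruby version is in the comment:

```supercollider
// Ruby:   (1..4).each { |i| puts i * 2 }
// SCLang:
(1..4).do { |i| (i * 2).postln };
```

The practical difference is that SCLang class definitions live in .sc files and are compiled when the interpreter starts, whereas Ruby classes can be opened and changed at runtime.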

Now, to get back to the original question: “How are people using their old grids nowadays?”

A work-in-progress app I’m using is mono-bright even though the device I’m using is vari. This is due to me using Grrr. I’ve found that instead of varying led brightness I’ve had to flash leds (shut them off and then on after a short delay) to provide necessary feedback.
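
For anyone curious, here is a minimal SuperCollider sketch of that flash trick. `~setLed` is a hypothetical stand-in for whatever call your app already uses to light a led (in my case a Grrr view refresh; with raw serialosc it would be a led-set OSC message):

```supercollider
// Minimal sketch: flash a led instead of dimming it on a mono-bright grid.
// ~setLed is a hypothetical placeholder for your app's own led-setting call.
(
~flashLed = { |x, y, offTime = 0.1|
	fork {
		~setLed.(x, y, 0);   // blink the led off...
		offTime.wait;        // ...for a short moment...
		~setLed.(x, y, 1);   // ...then turn it back on
	}
};
)
```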

I think mono-bright grids are great. There is even a charm to mono-brightness. I still hook up my monome 40h to a 9-year-old MacBook running Snow Leopard from time to time. If I get the time to add vari-brightness support to Grrr I will definitely keep mono-bright support.


#22

Vanilla Pd’s GUI is terrible. I have hopes for the improved Pd GUI in Purr Data.

SuperCollider is just fine, I think. What are you missing in SuperCollider’s GUI framework? It’s Qt-based, cross-platform, and allows rapid scripting once you get the hang of SCLang.
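
To give an idea of what I mean by rapid scripting, here is a tiny made-up example using the standard Qt GUI classes: a window and a button in a handful of lines.

```supercollider
// Minimal Qt GUI sketch: a window with one button that prints
// to the post window when clicked.
(
var win = Window("hello", Rect(100, 100, 200, 80));
Button(win, Rect(10, 10, 180, 60))
	.states_([["press me"]])
	.action_({ "pressed".postln });
win.front;
)
```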


#23

Could be that I’m simply ignorant about it!


#24

I’m currently using my 64 walnut with Pages.
Setting up stable configurations has honestly been very difficult, and Pages rules out using the tilt functionality altogether. But the pattern recorders are rad. I can run re:mix and dj64fx via an external app and trigger MIDI stuff similarly to a Push.
I only use the 64 and my laptop. I play acoustic guitar and sing, and I use the monome for live sampling into re:mix. I’ll record a pattern of button presses on re:mix, then move to my next page and lay down maybe drums, then keys, and I can get into my dj64fx page at any time to do some random fx.
Something that may fit into your workflow: I put the input of re:mix on a resampling track. Great for familiar craziness.
I haven’t really figured out how to run multiple M4L apps without using Pages…
Ideally I’d be able to run Rodrigo’s party van before everything, but we don’t live in a perfect world. I’m just a guy that uses stuff other people have already made, and I’m still turning this into a live performance setup. I’m open to suggestions and ideas. This community is a vast well of knowledge. HMU!


#25

Sure. In my opinion there are a couple of key issues. Just as background, I created the app @igormpc referred to in Max, Pure Data, and SuperCollider (Animalation, a live-sampler for Grid, Arc, and SuperCollider).

The biggest thing that worried me about PD was the lack of a unified distribution. There’s PD vanilla, PD-extended (which is no longer maintained), and PD-L2Ork, and within PD-L2Ork there are two separate versions. If I wanted to send my application to someone who has never used a programming environment before, I believe getting a working version of Pure Data would be very difficult for them. Getting a working version of SC is easy: you just download and install it on Windows/Mac, or on GNU/Linux you can either download an older packaged version or compile it from source (which someone on GNU/Linux is more likely to be comfortable with).

Another reason would be (my perceived) ease of long-term maintenance, as @jah started to bring up. I’ve read, and seen myself, that when you look back on something you programmed after time has passed, there is a certain amount of rediscovery about how and why you made the decisions you made in your code. That rediscovery process can be made a lot more difficult (in my opinion) by having to search around in a sub-patch of a sub-patch of a sub-patch (like in PD). Not that it’s impossible, but I’ve noticed it’s a lot easier for me to come back to SC code than PD code.

I can’t comment on backwards compatibility because I haven’t been in the game long enough.

As an aside, are grid apps out of style? It seems like most development focus is on eurorack and, to a lesser degree, aleph.


#26

When I was working on an interactive installation a little while ago, I encountered this very situation: which PD version to use if I wanted the system to be easily reproducible? In the end I chose PD vanilla to be sure that no external would be missing when rebuilding the same system on a different machine. But then I missed the improved ergonomics of the PD-L2Ork distributions…


#27

Whenever I find the time, I try to document my not-so-obvious patching decisions as much as possible. This is not the most exciting part of the work, but I’m so glad to have done it when I come back to the system a few months later… Reverse-engineering your own work can be so frustrating!

Also, I tend to use a lot of “Comment” objects (at least in Max and Reaktor 6; I can’t remember if there is such an object in PD vanilla, maybe only a “text” object?) and try to name subpatches so that I don’t have to open them to know what they’re there for, at the expense of a lot of horizontal space taken up by these “pd” objects.


#28

I’d say that most development focus has been on Teletype and Ansible lately, and I can understand why! The customizable-apps-within-generic-hardware concept is very appealing in my opinion.


#29

Only to play devil’s advocate here: I’d say a computer is generic hardware! Now we have to talk about the spirit of the modules versus the computer… oh brother.

Yes, commenting is huge, as I’m learning more and more. Thinking back on some stuff I’ve already learned, it seems redundant to comment on how something works, because the clearest answer to the “how” question is the code itself. It seems more important to comment on why you did this or that, or what a piece of code does.

I would argue that, in this way, SC is a little better at self-documenting, because values aren’t just stored in generic message boxes; they’re stored in named variables. With named variables, any time a value gets used you know where it came from and, if the code is clear, what it’s doing, which leaves space to comment on why it’s doing whatever it’s doing. That, to me, is easier than tracing patch cables around.
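
A tiny made-up SC example of what I’m getting at: the names say what the numbers are, so the comments only have to cover the why.

```supercollider
// Made-up illustration: named values instead of anonymous message boxes.
(
var flashTime = 0.1;   // why 0.1: shorter flashes are easy to miss on a 40h
var pageSize = 16;     // steps per page in this (hypothetical) sequencer
pageSize.do { |step|
	// every use of flashTime is traceable back to its declaration above
	("step % flashes for % s".format(step, flashTime)).postln;
};
)
```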

I think that’s oversimplifying a lot.

Either way, both have strengths and weaknesses, and I’m just expressing my opinion based on my experience with the two platforms.