The zrna “software defined analog” unit looks pretty interesting. I hadn’t heard of an FPAA before. This project created a Python library to program it. I haven’t found a circuit drawing or KiCad file, so I’m not sure if it’s open hardware.
This is fascinating tech! It has been around for a long time, but I’ve only found practical applications recently. I guess it is a combination of miniaturization and simpler frameworks. I know a few Eurorack companies are developing on similar FPGA platforms, like Intellijel’s Rainmaker and LZX’s Memory Palace. A hybrid approach paired with an ARM MCU seems to be the special recipe.
I think this will really take off when FPGAs and FPAAs become even slightly more accessible to program. I envision something like a combination of PD/EESchema/SPICE for designing and modeling analog circuits in schematic mode. I know Arduino has been working on a visual scripting language for their Vidor boards.
Here is another FPGA polyphonic synth released recently called XFM. Its code is open source and it’s built on a more ubiquitous board, the Xilinx Spartan 6.
Not sure if I fully understood this. In terms of analogue audio, the board has a mono in and a mono out, and then USB out? What are the actual benefits of it over digital?
Isn’t the value of using analog circuits a) that the underlying devices don’t behave linearly, adding character to the signal; and b) that the choice of components and circuit topology used to implement a function makes a big difference?
In the zrna, the available modules are specified as perfect functions. If the modules are meant to implement something akin to conventional analog circuits, where are the controls and parameters for them?
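As a toy numerical illustration of point (a) above: a tanh saturator (a rough stand-in for a transistor or op-amp stage driven hard; my own assumption, nothing zrna-specific) turns a pure sine into a signal with odd harmonics, while a plain linear gain adds nothing:

```python
import cmath
import math

def dft_bin_magnitude(samples, k):
    """Magnitude of the k-th DFT bin of a list of real samples."""
    n = len(samples)
    acc = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
              for i in range(n))
    return abs(acc)

N = 64
drive = 3.0  # push the sine hard into the tanh saturation region

# One cycle of a sine, and the same sine through a tanh saturator.
linear = [math.sin(2 * math.pi * i / N) for i in range(N)]
saturated = [math.tanh(drive * s) for s in linear]

# The linear signal has energy only at the fundamental (bin 1);
# the saturated signal picks up odd harmonics (bins 3, 5, ...).
h3_linear = dft_bin_magnitude(linear, 3)
h3_saturated = dft_bin_magnitude(saturated, 3)
```

Back the drive off toward zero and the tanh behaves almost linearly, which is one way of seeing why component choice and operating point matter so much for “character.”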
As I understand it the main advantage of an FPGA is that threading and priorities are not an issue like on a traditional CPU/MCU, because every “circuit element” on an FPGA works as an independent and discrete unit. This allows for some very complex and computationally intensive applications. But you shouldn’t expect it to have the sound of an analog circuit, “analog warmth” or whatever you call it.
Totally agree. I work with some people who use LabView to program all kinds of scientific measuring hardware and it looks like a version of MAX/MSP from the 90s.
Hey @lazzarello, I appreciate you posting a link here. I’m the developer behind the project. I’ll try to address some of the comments above:
- My intent is to open source as much as possible. Schematics first, firmware later. What’s available right now about the hardware is on the shop and hardware quick reference pages. The firmware depends in a complicated way on manufacturer code that may not be directly GPL-compatible; I’m working to resolve that legal question before open-sourcing it.
- The signal chain is purely connected to the FPAA hardware. USB is for power and data. By default there is no DAC/ADC happening in the signal chain at all.
- @mzero The specified module transfer functions are ideal. Nonlinearities will occur in the analog signal chain just as they would with conventional components if you create the conditions for them: saturation will occur, the underlying op amps are not ideal, etc. Each module has options and parameters that can be controlled in software. See https://zrna.org/docs/module/filter-biquad-lowpass for example. The hardware attempts to realize the parameters you request, but doesn’t do so in a perfectly ideal way. You can think of that realization process as finding values for underlying analog components rather than evaluating a function and computing samples, etc.
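That last point can be made concrete with ordinary first-order filter math. A minimal sketch of the “find component values” idea, using the standard RC lowpass formula (this is illustration only, not zrna’s actual solver):

```python
import math

def rc_lowpass_resistance(cutoff_hz, capacitance_f):
    """Resistance realizing a first-order RC lowpass with the given
    cutoff: fc = 1 / (2*pi*R*C)  =>  R = 1 / (2*pi*fc*C)."""
    return 1.0 / (2.0 * math.pi * cutoff_hz * capacitance_f)

def rc_cutoff(resistance_ohm, capacitance_f):
    """Inverse check: the cutoff frequency realized by a given R and C."""
    return 1.0 / (2.0 * math.pi * resistance_ohm * capacitance_f)

# Request a 1 kHz cutoff with a 10 nF capacitor:
r = rc_lowpass_resistance(1000.0, 10e-9)   # ~15.9 kOhm
realized_fc = rc_cutoff(r, 10e-9)          # recovers ~1000 Hz
```

The device-level version of this is harder (component values are quantized, op amps are shared, etc.), which is why the realized parameters aren’t perfectly ideal.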
Let me know if you have other questions.
In this case it is an FPAA, so reconfigurable on the fly, but with analog components. Whereas FPGAs have thousands of gates, FPAAs have relatively few components: in the unit used by zrna, just 8 op amps, 32 capacitors, and 4 comparators.
I think the point of using a FPAA is indeed to get “analog warmth”, since even a Raspberry Pi Zero can emulate at the circuit level far more than 8 op amps, etc…
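For comparison, “emulating at the circuit level” in software is just numerically integrating the circuit’s differential equation. A minimal sketch of one digitally emulated RC stage, the kind of thing a Pi Zero can run many instances of (my own illustration, unrelated to zrna):

```python
def rc_lowpass_step(y_prev, x, dt, rc):
    """One forward-Euler step of the RC lowpass ODE: dy/dt = (x - y) / RC."""
    return y_prev + (dt / rc) * (x - y_prev)

# Step response: after one time constant (t = RC) the output of a real
# RC filter sits near 1 - 1/e (~0.632) of the input step.
dt, rc = 1e-5, 1e-3           # 10 us steps, 1 ms time constant
y = 0.0
for _ in range(100):          # 100 * 10 us = 1 ms = one time constant
    y = rc_lowpass_step(y, 1.0, dt, rc)
```

The digital version is exact only in the limit of small steps and ideal math, though, which is exactly where the “warmth” argument for real analog components comes in.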
I notice in the hardware specs you mention audio, ultrasonic, and differential I/O; I’m guessing that of these only the differential channels are DC-coupled? As a CV source/processor this kind of thing seems really amazing, though definitely as a synthesizer as well. Having a Python interface for all this is awesome, though I wonder if you can comment on what, if any, “bare metal” programming interfaces are available or planned? Am I reading too much into the ConfigurationByteStream message type by guessing you can bang a raw reconfiguration packet at the FPAA over this gRPC interface? Because that sounds rad.
A key part of the appeal for me is definitely that you can reconfigure the circuit at runtime. Not all FPAAs support this, but it looks like the AN231E04 does, and I assume this functionality is how ZRNA works. This suggests all sorts of interesting possibilities that I haven’t the slightest clue of the feasibility of: self-restructuring generative sequencers, structure-variable filters, … which are of course all things you can do in software (or field-programmable digital hardware), but software is like, kind of a different medium?
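Purely as a software toy, with no zrna API involved, the self-restructuring idea could be sketched as a patch graph that periodically rewires one of its own connections (module names here are made up):

```python
import random

def restructure(patch, modules, rng):
    """Rewire one connection: pick a destination module and give it a
    new (possibly unchanged) source module."""
    dest = rng.choice(sorted(patch))
    patch[dest] = rng.choice(modules)
    return patch

rng = random.Random(0)
modules = ["osc", "lfo", "filter", "vca"]
# patch maps each destination module to the module feeding its input
patch = {"filter": "osc", "vca": "filter"}
for step in range(4):
    patch = restructure(patch, modules, rng)
```

On an FPAA the interesting twist is that each such rewrite would reconfigure an actual analog signal path rather than a data structure.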
I love FP*As and I’ve wanted an excuse to get a dev board for one of these chips for years just to fart around with it. I feel like despite making it through numerous electronics classes I never got any kind of intuition for Ye Analogue once nonlinear elements were involved. Breadboarding stuff in my free time always seemed like such a pain, and I feel like I’d gotten so used to the safety of software that blue smoke rites of passage seemed unappealing. Also now that I think of it, maybe some level of angst about whether integrated circuits feel pain? I guess probably I was supposed to figure out what the hell I was doing with EDA tools. What I think I’m saying is electrical engineering pedagogy could benefit a lot from modular synthesizers. Anyway, that’s a definite appeal of such a device for me, is having a sandbox where I can stumble around with software-defined circuits without worrying as much about, um, hurting it.
I am thrilled to be able to say that I have this device, it is extraordinary. Besides the fact that it means the old TV in my studio picks up The Acid Channel, I am incredibly excited about the platform. You have 5 inputs and 4 outputs, in minijack, that will do DC to >10 MHz! That device is theoretically capable of stimulating, like, a respectable fraction of my entire sensory input surface. LZX has said they’ve considered open-sourcing the firmware, but the VHDL would remain fixed. It’s possible this is also Intellijel’s position on Rainmaker, not sure. Which I get I guess, that stuff represents a lot of NRE costs, but as an on-again-off-again wannabe-pro digital hardware hacker it’s disappointing. FPGAs are exciting to me intrinsically, maybe even more so than things they can do, because massively parallel thinking is exciting, and dynamic reconfiguration (the body of the machine rearranging itself??) is really exciting.
For a while now they’ve been putting the CPU on the same die as the gate array; this is the case with the Zynq system-on-chips used by the Memory Palace, but not the Cyclone series used on the Rainmaker’s DE0 board. SoCs like the Zynq will usually also include some peripherals like gigabit transceivers (for Ethernet, PCIe, …).
I believe LabView is able to do this if you also buy their (pricey) FPGA board, but I personally really can’t stand LabView, and that stuff is in a pretty unattainable price range for (most, I suppose) bedroom hackers. It looks like Anadigm (the company that made this FPAA) also has some drag-and-drop functionality in their design tools. Not sure what their licensing costs or whatever are like either. I’m also not sure what kind of other programming options are available for this part in particular, but high-level synthesis has been a really cool active research area for some time. Lots of these approaches try to use C (or MATLAB? eesh); that is probably more practical but perhaps less Neato than research like embedding a hardware description language in Haskell with the intent of compiling declarative high-level circuit/DSP descriptions to register-transfer logic.
@csboling Thanks for the feedback. None of the IO is explicitly AC-coupled by the hardware. You have to arrange for that in your design or with external components. See https://zrna.org/demo/midi-to-cv-converter which sends DC out of the main output in response to USB MIDI messages.
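For reference, the arithmetic behind a MIDI-to-CV conversion like that is usually just a linear mapping. A sketch assuming the common 1 V/octave convention (an assumption on my part; check the linked demo for what zrna actually does):

```python
def midi_note_to_cv(note, reference_note=60, reference_volts=0.0):
    """Map a MIDI note number to a control voltage at 1 V/octave,
    i.e. 1/12 V per semitone relative to a chosen reference note."""
    return reference_volts + (note - reference_note) / 12.0

# With C4 (note 60) at 0 V: C5 (72) lands at 1 V, A4 (69) at 0.75 V.
cv_c5 = midi_note_to_cv(72)
cv_a4 = midi_note_to_cv(69)
```

The DC-coupled output is what makes this usable: an AC-coupled path would drift the steady voltage back toward zero.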
You’re spot on with ConfigurationByteStream. It’s definitely possible to bang configuration data over gRPC directly to the device. That message is generally used only for debugging though. Zrna’s firmware generates configuration data as needed in response to API calls coming in and doesn’t usually send it back out to a client unless that’s requested. The problem is: how do we get valid configuration data? That’s the magic of the device; it’s worth pondering how the problem might be solved.
As for low-level stuff, can you elaborate on what you’d like to see? Bindings for the API can be built for proto compatible languages. The Python client right now is just a bit of convenience stuff around the bindings. Internally the firmware uses nanopb for C bindings (edit: so with these bindings you could easily make requests from other embedded systems). The plan is to expose more and more low-level stuff through the API as we go. Also, I’m planning to do some higher-level stuff like a patcher-style GUI, etc.
That’s awesome! If you can do raw device interactions over the API then I reckon the sky’s the limit, sounds like quite a neat design. I don’t know what specifically I’m looking for because I know very little about actually programming these things, I worked a few years ago with an Actel chip that had some programmable analog bits but I don’t believe I interacted with them directly. Guess I’m mostly just trying to get a rough sense of what the layers of the stack are like, which I think your comments have given me. Quite interested to see this develop, reckon I’ve got some documentation to read, including that MIDI/CV example I missed. Thanks a bunch for showing up here to answer questions.
I put up a discord server here https://discord.gg/xeztjzh
Jump in if you want to track what’s happening with the Zrna projects. It’s a good place to reach me if you want to chat. Currently working on an enhanced Axoloti-compatible DSP board in the downtime while we wait for more Zrna modules to arrive from manufacturing.