GuitarML Project

I’d like to share a project I’m involved with:

GuitarML is an open-source effort whose goal is to advance machine learning technology to accurately reproduce the tone of guitar amplifiers and pedals.

our GitHub: GuitarML (Keith Bloemer) · GitHub

the models are mathematically indistinguishable from the originals
you can train your own digital models of amplifiers and pedals with the tools we provide (for free, of course)

if you want to have a broader overview of the problem, consider watching the interview with NeuralDSP researcher Lauri Juvela: Neural Networks as Guitar Amps (with Neural DSP interview) - YouTube

lastly, I want to emphasise that we’re a fully free, open-source, community-driven effort and rely on donations. If you’d like to see more cool things from us in the future, consider supporting us!

do you think software could ever replace tube amplifiers? let’s have a conversation!
Mish, contributor

we also have a YouTube channel with demos: GuitarML - YouTube

interesting fact: the neural network runtime we use (RTNeural) was created by Jatin Chowdhury

you may know him from the Analog Tape Model and the Surge Synthesizer, and if you’re lucky enough to drive a Tesla, he’s worked on audio there too :wink:

Fantastic work. I’m looking forward to test driving these in Colab.

Would it be viable to explore larger models that could include a range of effects/simulations arranged into classes for training, like reverb, amp sim, distortion, etc? Having individual simulations or audio character traits saved as vectors via pkl would potentially offer morphing between unique effects in real-time.

Thanks for your feedback!

Yes, Keith is currently working on exactly that and has added the ability to train conditional parameters using what’s called Transfer Learning

you can learn more about the process from his Medium post: Transfer Learning for Guitar Effects | by Keith Bloemer | Oct, 2021 | Towards Data Science

in short: once a model is fully trained, you can create versions of that model without having to start from scratch

the advantage of this method is that we can cut the training time, but most importantly, it gives you the ability to accurately model the knobs, whereas other solutions only take a snapshot and then use traditional DSP methods to mimic the knobs
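to make the warm-start idea concrete, here’s a toy sketch in plain numpy: the “network” is a single drive parameter g in y = tanh(g·x), trained by gradient descent. Everything here is illustrative only (it is not GuitarML’s actual training code), but it shows why starting from an already-trained model is cheaper than starting from zero:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 4096)  # probe signal

def fit_gain(target_gain, g0, lr=1.0, tol=1e-4, max_steps=5000):
    """Fit the toy model y = tanh(g * x) to a 'device' y = tanh(target_gain * x)
    by gradient descent on the mean squared error, starting from g0.
    Returns the fitted gain and the number of steps used."""
    y = np.tanh(target_gain * x)
    g = g0
    for step in range(1, max_steps + 1):
        y_hat = np.tanh(g * x)
        if np.mean((y_hat - y) ** 2) < tol:
            break
        grad = np.mean(2.0 * (y_hat - y) * (1.0 - y_hat**2) * x)  # d(MSE)/dg
        g -= lr * grad
    return g, step

# base model: the amp captured with the drive knob at "3"
g_base, _ = fit_gain(3.0, g0=0.0)

# new knob setting "4": training from scratch vs. warm-starting
# from the already-trained base model (the transfer-learning idea)
_, steps_scratch = fit_gain(4.0, g0=0.0)
_, steps_transfer = fit_gain(4.0, g0=g_base)

print(steps_transfer, "<", steps_scratch)
```

the warm-started fit should reach the new knob setting in fewer steps than the from-scratch fit, which is exactly the time saving described above.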

coming back to your question: it is possible to capture the whole signal chain

the approach used is black-box modelling, meaning we don’t need any information about the electrical circuit and simply learn which input causes which output, 44,100 times a second at CD quality (you can adjust the sample rate)
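a minimal sketch of that black-box idea, with a tanh soft-clipper standing in for the real device and a least-squares polynomial standing in for the neural network (again illustrative only, not our actual training code):

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 44100  # one second of audio at the CD-quality sample rate

# the "unknown" device: we never look inside the circuit, we only
# observe input/output pairs (here a tanh soft-clipper plays the pedal)
def device(signal):
    return np.tanh(3.0 * signal)

x = rng.uniform(-1.0, 1.0, SR)  # probe signal
y = device(x)                   # recorded device output

# black-box fit: learn which input causes which output, with no
# circuit knowledge at all (polynomial least squares stands in
# for the neural network GuitarML actually trains)
coeffs = np.polyfit(x, y, deg=9)
y_hat = np.polyval(coeffs, x)

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"reconstruction RMSE: {rmse:.4f}")
```

the fit only ever sees the samples, yet it reconstructs the device’s response closely — that’s the whole black-box principle.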

now, let’s say, you record a signal of your amp connected to a pedal with a cabinet

then you can train a conditional parameter, for example one that adjusts how much of that pedal effect is coming through
once you have a couple of samples, the model is able to morph from 0% to 100% in real time, like you would expect from real hardware
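here’s a toy sketch of that morphing behaviour in numpy. note that the real conditioned models feed the knob value to the network as an extra input; here I just interpolate the parameters of a few snapshot fits, which shows the same effect (the “pedal” and all names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 2048)  # probe signal

# hypothetical pedal: the knob c blends the dry and the driven signal
def pedal(signal, c):
    return (1.0 - c) * signal + c * np.tanh(5.0 * signal)

# capture and fit snapshots at a few knob positions
# (polynomial least squares stands in for the neural model)
knob_positions = np.array([0.0, 0.5, 1.0])
snapshots = [np.polyfit(x, pedal(x, c), deg=11) for c in knob_positions]

def morph(signal, c):
    """Render an unseen knob position by interpolating the parameters
    of the two nearest trained snapshots."""
    i = np.clip(np.searchsorted(knob_positions, c) - 1,
                0, len(knob_positions) - 2)
    t = (c - knob_positions[i]) / (knob_positions[i + 1] - knob_positions[i])
    coeffs = (1.0 - t) * snapshots[i] + t * snapshots[i + 1]
    return np.polyval(coeffs, signal)

# the knob at 25% was never recorded, yet the morphed model
# still matches the device closely
rmse = np.sqrt(np.mean((morph(x, 0.25) - pedal(x, 0.25)) ** 2))
print(f"RMSE at unseen knob position: {rmse:.4f}")
```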

note that time-based effects such as reverb and delay cannot be modelled at this time