I want to be excited about Google's NSynth, but it appears to be essentially a sample pack:
"Rather than generating sounds on demand, we curated a set of original sounds ahead of time. We then synthesized all of their interpolated z-representations. To smooth out the transitions, we additionally mix the audio in real-time from the nearest sound on the grid."
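For my own understanding, here's a rough sketch of what that workflow seems to amount to: interpolate the latent codes offline, decode each grid point to audio ahead of time, then just crossfade the nearest precomputed samples at playback. This is my reading of the quote, not the actual NSynth code; all of the names are hypothetical, and the `decode` callback stands in for the heavyweight WaveNet decoder.

```python
import numpy as np

def bilinear_z(z_corners, u, v):
    """Interpolate four corner latent codes at position (u, v) in [0, 1]^2."""
    z00, z10, z01, z11 = z_corners
    return ((1 - u) * (1 - v) * z00 + u * (1 - v) * z10
            + (1 - u) * v * z01 + u * v * z11)

def build_grid(z_corners, n, decode):
    """Offline step: decode every interpolated z on an n x n grid to audio."""
    positions = np.linspace(0.0, 1.0, n)
    return {(i, j): decode(bilinear_z(z_corners, u, v))
            for i, u in enumerate(positions)
            for j, v in enumerate(positions)}

def play(samples, n, u, v):
    """Real-time step: crossfade the four nearest precomputed samples."""
    i = min(int(u * (n - 1)), n - 2)
    j = min(int(v * (n - 1)), n - 2)
    fu, fv = u * (n - 1) - i, v * (n - 1) - j
    return ((1 - fu) * (1 - fv) * samples[(i, j)]
            + fu * (1 - fv) * samples[(i + 1, j)]
            + (1 - fu) * fv * samples[(i, j + 1)]
            + fu * fv * samples[(i + 1, j + 1)])
```

If that reading is right, all of the neural-network work happens offline in `build_grid`; `play` is just sample mixing, which is why the whole thing reads more like a sample pack than a synthesizer.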
The synth itself is trained around the concept of a note, which seems like an oddly boring place to start from…
On the subject of synthesizers based on neural networks, David Tudor's Neural Synthesis experiments are super cool:
https://davidtudor.org/Articles/warthman.html