i assume you’re already familiar with the architecture for NSynth (gah, for all i know you’re a Googler), but as a short recap (like, for others):
- the “NSynth Super” is basically a “dumb client” that does multisampled playback. it is built on openFrameworks. you could certainly build an analogous instrument in SuperCollider with ease. (oF is probably not a great fit alongside the existing norns stack - it’s quite heavy if all you wanna do is a bit of sample playback.)
- the “special sauce” in NSynth is not in the client, it’s in the audio creation pipeline that is implemented in TensorFlow and run on a phat GPU server. basically you give it some number of training sounds and it spits out a big matrix of interpolated sounds.
personally, i find it a crazy use of resources [*], an unimaginative application of ANNs [**], and not particularly interesting musically. (sorry to be the naysayer.) but if people are into it then i think it would be pretty straightforward to adapt the instrument interface glue on NSynth (GPIO, touchscreen, patch format) to OSC, and re-implement the fairly trivial sample playback engine in SC (it’s an ADSR volume envelope and a scanning wavetable oscillator: VOsc.ar(buf, hz, mul: EnvGen.ar(Env.adsr(...))) and that’s about it.) oop, i take it back. it’s not just single-cycle, but xfaded and looped sample playback. similar deal. here’s the source for the synth guts.
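to be clear, i haven’t actually ported anything - this is just a minimal sclang sketch of that kind of voice (looped playback, crossfaded between two neighboring sample buffers, under an ADSR), with all the arg names and buffer handling made up, not the actual NSynth Super engine:

```supercollider
// rough sketch of a looped-sample voice with an ADSR volume envelope,
// crossfading between two neighboring sample buffers.
// (not the NSynth Super code, just the general idea; ~bufA / ~bufB are hypothetical)
SynthDef(\loopedSample, {
	arg bufA, bufB, xfade = 0, rate = 1, gate = 1, amp = 0.2,
	attack = 0.01, decay = 0.1, sustain = 0.8, release = 0.5;
	var env, sigA, sigB, sig;
	env = EnvGen.kr(Env.adsr(attack, decay, sustain, release), gate, doneAction: 2);
	// looped playback of each buffer at the requested rate
	sigA = PlayBuf.ar(1, bufA, rate * BufRateScale.kr(bufA), loop: 1);
	sigB = PlayBuf.ar(1, bufB, rate * BufRateScale.kr(bufB), loop: 1);
	// equal-power crossfade between the two samples (-1 = all A, 1 = all B)
	sig = XFade2.ar(sigA, sigB, xfade.linlin(0, 1, -1, 1));
	Out.ar(0, (sig * env * amp).dup);
}).add;

// usage, assuming ~bufA and ~bufB hold mono samples loaded elsewhere:
// x = Synth(\loopedSample, [\bufA, ~bufA, \bufB, ~bufB, \xfade, 0.5]);
// x.set(\gate, 0); // release
```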
(i guess you would also want to make some kinda client for the audio pipeline, i dunno)
what i would like to see is maybe some interesting applications of all the stuff that SC already has to do arbitrary signal and sequence generation, including some with ANNs and other reinforcement-learning structures. e.g., brian heim made a simple client-side ANN class and there is a server-side demand-rate ANN UGen as well. meanwhile nick collins has been making SC interfaces to ML audio stuff for decades.
i have not played with the two ANN classes above, but i’ve certainly used SC to make music with markov chains and weird dynamical systems, it’s easy and fun.
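fwiw the markov part really is only a handful of lines. here’s a toy first-order chain over scale degrees driving a Pbind - the states, transition weights and rhythm are all arbitrary, just to show the shape of it:

```supercollider
(
// toy first-order markov chain: states index into ~notes,
// ~trans[i] is the weight vector for transitions out of state i
~notes = [ 0, 2, 4, 5, 7 ];           // scale degrees (arbitrary)
~trans = [
	[ 0.0, 0.4, 0.3, 0.1, 0.2 ],
	[ 0.3, 0.0, 0.4, 0.3, 0.0 ],
	[ 0.2, 0.3, 0.0, 0.2, 0.3 ],
	[ 0.1, 0.0, 0.5, 0.0, 0.4 ],
	[ 0.4, 0.1, 0.3, 0.2, 0.0 ]
];
~state = 0;

Pbind(
	\scale, Scale.major,
	\degree, Pfunc({
		// weighted choice of the next state, then map it to a degree
		~state = (0..4).wchoose(~trans[~state].normalizeSum);
		~notes[~state]
	}),
	\dur, 0.25
).play;
)
```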
of course it’s all about what you do with it - training and output mapping. and these are primitive and small ANNs, not like the many layers of embedding that Magenta uses.
[*] e.g., for the demo video it took 8 NVidia K80 GPUs working for 36 hours to produce the table of audio files between which the playback interface interpolates. that’s a lot of energy investment for something that sounds and plays kinda like a Korg Wavestation. 
[**] why do i say unimaginative? well, cause it’s just playing back short looped samples. (not single-cycle but still.) there is already a tremendous palette of synthesis techniques for generating interesting timbres, in sample form or algorithmically. short-loop playback isn’t by itself interesting, surprising or expressive regardless of what’s in the loop (IMHO, obviously); music happens elsewhere…
er, on a more positive note, i would like to see a really nice multisampled scanning wavetable synth done in SC. i don’t think it would necessarily require making a plugin…
by this, i mean that it would interpolate in two dimensions, one of them pitch. you could then convert single-cycle waveforms (captured or generated) into arrays of successively brickwall-lowpassed waveforms for antialiased playback. (i believe the Waldorf Wave has a feature that can do this on the fly with captured audio.) i’d be happy to contribute the brickwalling part. with that building block, you could start doing some pretty interesting stuff, with fewer resources, that doesn’t “sound digital” in the way that non-antialiased WT scanning does.
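i don’t have the 2-d (pitch × position) version lying around, but here’s the general shape of the brickwalled-table stack in plain sclang, using a sawtooth spectrum as a stand-in for the analysis of a captured cycle. table count / size / harmonic limit are arbitrary, and it assumes a booted server:

```supercollider
(
// sketch: one "mip" wavetable per octave, each a brickwall-lowpassed
// version of the same spectrum (sawtooth 1/n amplitudes standing in for
// a captured single cycle). VOsc needs consecutive bufnums, all the same
// size, in wavetable format.
~numTables = 8;
~tableSize = 2048;
~buffers = Buffer.allocConsecutive(~numTables, s, ~tableSize);

// table i keeps only the first (~maxHarm >> i) harmonics
~maxHarm = 256;
~buffers.do({ |buf, i|
	var nHarm = max(1, ~maxHarm >> i);
	var amps = (1..nHarm).collect({ |n| 1 / n });   // sawtooth spectrum
	buf.sine1(amps);                                  // fills in wavetable format
});

SynthDef(\mipSaw, { |out = 0, freq = 110, amp = 0.2, gate = 1|
	var nyq = SampleRate.ir * 0.5;
	// pick the table whose highest harmonic stays below nyquist
	// (ceil = strictly no aliasing; a fractional index would crossfade tables)
	var oct = log2((~maxHarm * freq / nyq).max(1)).ceil;
	var bufpos = ~buffers[0].bufnum + oct.clip(0, ~numTables - 1.001);
	var sig = VOsc.ar(bufpos, freq) * EnvGen.kr(Env.asr(0.01, 1, 0.3), gate, doneAction: 2);
	Out.ar(out, (sig * amp).dup);
}).add;

// x = Synth(\mipSaw, [\freq, 220]);
// x.set(\gate, 0);
)
```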