i assume you're already familiar with the architecture for NSynth (gah, for all i know you're a Googler), but as a short recap (like, for others):
- the "NSynth Super" is basically a "dumb client" that does multisampled playback. it is built on openFrameworks. you could certainly build an analogous instrument in SuperCollider with ease. (oF is probably not a great fit alongside the existing norns stack - it's quite heavy if all you wanna do is a bit of sample playback.)
- the "special sauce" in NSynth is not in the client, it's in the audio creation pipeline, which is implemented in TensorFlow and run on a phat GPU server. basically you give it some number of training sounds and it spits out a big matrix of interpolated sounds.
personally, i find it a crazy use of resources [*], an unimaginative application of ANNs [**], and not particularly interesting musically. (sorry to be the naysayer.) but if people are into it then i think it would be pretty straightforward to adapt the instrument interface glue on NSynth (GPIO, touchscreen, patch format) to OSC, and re-implement the fairly trivial sample playback engine in SC (it's an ADSR volume envelope and a scanning wavetable oscillator: VOsc.ar(buf, hz, mul: EnvGen.ar(Env.adsr(...))) and that's about it.) oop, i take it back. it's not just single-cycle, but xfaded and looped sample playback. similar deal; rough sketch below. here's the source for the synth guts.
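(a minimal sketch of that engine shape in SC - this is my own approximation, not the actual NSynth code; the buffer, loop points, and envelope values are placeholders, and the loop-point crossfade is left out, so the naive loop jump will click:)

```
(
// rough sketch, not the NSynth Super engine: looped sample playback
// with an ADSR volume envelope. loop points are in frames and made up.
~buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

SynthDef(\loopSample, { |out = 0, buf, rate = 1, loopStart = 0, loopEnd = 1e4, gate = 1, amp = 0.2|
	var phase = Phasor.ar(0, BufRateScale.kr(buf) * rate, loopStart, loopEnd);
	var sig = BufRd.ar(1, buf, phase, interpolation: 4); // cubic interpolation
	var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.8, 0.5), gate, doneAction: 2);
	Out.ar(out, sig * env * amp ! 2);
}).add;
)

x = Synth(\loopSample, [\buf, ~buf]);
x.release; // gate off -> release stage, then the synth frees itself
```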
(i guess you would also want to make some kinda client for the audio pipeline, i dunno)
what i would like to see is maybe some interesting applications of all the stuff that SC already has to do arbitrary signal and sequence generation, including some with ANNs and other reinforcement-learning structures. e.g., brian heim made a simple client-side ANN class and there is a server-side demand-rate ANN UGen as well. meanwhile nick collins has been making SC interfaces to ML audio stuff for decades.
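(i haven't checked either of those APIs, so just to show the flavor rather than their interfaces: a feedforward net is a few lines of plain sclang. toy example with hand-picked weights, nothing trained:)

```
(
// toy illustration in plain sclang (not the classes mentioned above):
// a fixed 2-input, 2-hidden, 1-output feedforward net with sigmoid units.
// the weights and biases are arbitrary placeholders, not trained values.
var sigmoid = { |x| 1 / (1 + exp(x.neg)) };
var layer = { |input, weights, biases|
	weights.collect { |row, i| sigmoid.((input * row).sum + biases[i]) }
};
var net = { |input|
	var hidden = layer.(input, [[0.8, -0.4], [0.3, 0.9]], [0.1, -0.2]);
	layer.(hidden, [[1.2, -0.7]], [0.0]).first
};
// map two control values (e.g. knob positions) to one output in (0, 1)
net.([0.5, 0.25]).postln;
)
```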
i have not played with the two ANN classes above, but i've certainly used SC to make music with markov chains and weird dynamical systems; it's easy and fun.
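(e.g. a first-order markov chain is just a dictionary of transition choices and a Pbind - the transition table here is made up for illustration:)

```
(
// minimal sketch: first-order markov chain over scale degrees.
var transitions = Dictionary[
	0 -> [0, 2, 4, 7],
	2 -> [0, 4],
	4 -> [2, 7, 9],
	7 -> [0, 4],
	9 -> [7]
];
var state = 0;
Pbind(
	\note, Pfunc { state = transitions[state].choose; state },
	\dur, 0.25
).play;
)
```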
of course it's all about what you do with it - training and output mapping. and these are primitive and small ANNs, not like the many layers of embedding that Magenta uses.
[*] e.g., for the demo video it took 8 NVIDIA K80 GPUs working for 36 hours to produce the table of audio files between which the playback interface interpolates. that's a lot of energy investment for something that sounds and plays kinda like a Korg Wavestation.
[**] why do i say unimaginative? well, 'cause it's just playing back short looped samples. (not single-cycle, but still.) there is already a tremendous palette of synthesis techniques for generating interesting timbres, in sample form or algorithmically. short-loop playback isn't by itself interesting, surprising or expressive, regardless of what's in the loop (IMHO, obviously); music happens elsewhere…
er, on a more positive note, i would like to see a really nice multisampled scanning wavetable synth done in SC. i don't think it would necessarily require making a plugin…
by this, i mean that it would interpolate in two dimensions, one of which is pitch. you could then convert single-cycle waveforms (captured or generated) into arrays of successively brickwall-lowpassed waveforms for antialiased playback. (i believe the Waldorf Wave has a feature that does this on the fly with captured audio.) i'd be happy to contribute the brickwalling part (rough sketch below). with that building block, you could start doing some pretty interesting stuff with fewer resources that doesn't "sound digital" in the way that non-antialiased WT scanning does.
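(roughly what i have in mind, as a sketch - table count, base frequency and the sawtooth spectrum are arbitrary, sine1's truncated harmonic fill stands in for the brickwalling, and only the pitch dimension is shown; a second, timbral dimension would just offset the index into parallel table banks:)

```
(
// sketch: a bank of progressively band-limited ("brickwalled") wavetables,
// crossfaded by VOsc so higher pitches read duller tables. server must be booted.
var numTables = 8, baseFreq = 55;
~tables = Buffer.allocConsecutive(numTables, s, 2048);
~tables.do { |buf, i|
	// drop harmonics that would alias at this table's highest fundamental
	var maxHarm = (s.sampleRate / 2 / (baseFreq * (2 ** (i + 1)))).floor.max(1).asInteger;
	buf.sine1((1..maxHarm).collect { |n| 1 / n }); // truncated sawtooth spectrum
};
SynthDef(\awt, { |out = 0, freq = 110, amp = 0.2, gate = 1|
	// one table per octave above baseFreq; clip just short of the last table
	var index = log2(freq / baseFreq).clip(0, numTables - 1.001);
	var sig = VOsc.ar(~tables.first.bufnum + index, freq);
	var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.8, 0.5), gate, doneAction: 2);
	Out.ar(out, sig * env * amp ! 2);
}).add;
)

x = Synth(\awt, [\freq, 220]);
```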