no, it would not be hard to replace the switch with a “width” parameter.
implementation note: i would take care to skip the panning computation entirely when width is set to its minimum or maximum.
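a minimal sketch of that early-out, in python. the mid/side formulation and the `widen` name are just illustrative assumptions - the point is only that the two extremes need no per-sample panning math:

```python
def widen(left, right, width):
    """mid/side width control: width=0.0 -> mono, width=1.0 -> unchanged."""
    if width >= 1.0:
        # maximum width: signal passes through, skip all per-sample math
        return list(left), list(right)
    if width <= 0.0:
        # minimum width: plain mono sum, no mid/side arithmetic needed
        mono = [0.5 * (l + r) for l, r in zip(left, right)]
        return mono, mono
    # general case: scale the side component by width
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)
        side = 0.5 * (l - r) * width
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```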
re: openAI jukebox: AFAICT, all components of this project are unsuitable for porting to GPU-less computers - e.g., even the pre-trained upsampling stage requires specialized CUDA kernels. as for training, forget about it - merely sampling 20 seconds of music with these models takes about 3 hours on a top-tier GPU (a tesla V100 with 16GB of RAM).
similar limitations apply to most such projects. personally, i think ML is interesting, and data-driven brute-force training/classification/synthesis tasks come up a lot in my day jobs, but i don’t see much overlap with the needs of a box like norns. the purpose of norns is to expose the power of fairly simple audio processes through accessible scripting.
there are other, lower-bandwidth techniques and processes under the big umbrella of ML, many of which we use all the time without much thought. (digital compressors and IR convolution have close analogs in ML.) simple neural networks and component transforms (PCA, RBF) have had interesting applications in synthesis, which the academic community has explored for decades (https://ccrma.stanford.edu/~slegroux/synth/pubs/UCB2002.pdf). the capabilities of a computer like norns have caught up to these kinds of models; it is not yet capable of playing with large-scale DNNs. those explorations are better suited to the high-level tools and environments available to the other computers on your desk.
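to make that concrete, here is a toy numpy sketch of PCA over spectral frames, in the spirit of the linked paper. random data stands in for real analysis frames; the interesting part is that a handful of component weights becomes the synthesis control surface:

```python
import numpy as np

# toy corpus: magnitude spectra of analysis frames (rows), e.g. from an STFT.
# random data here is a stand-in for frames of real audio.
rng = np.random.default_rng(0)
frames = np.abs(rng.normal(size=(256, 513)))  # 256 frames, 513 bins

# PCA via SVD on mean-centered spectra
mean = frames.mean(axis=0)
u, s, vt = np.linalg.svd(frames - mean, full_matrices=False)

k = 8                        # keep a handful of components
weights = u[:, :k] * s[:k]   # per-frame component weights
basis = vt[:k]               # spectral basis shapes

# a resynthesized spectral envelope: mean plus weighted basis vectors.
# driving these k weights from knobs or LFOs is the playable interface.
recon = mean + weights @ basis
print("relative reconstruction error:",
      np.linalg.norm(frames - recon) / np.linalg.norm(frames))
```

this scale of computation - a few hundred multiply-adds per frame - is exactly what a machine like norns handles comfortably.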
if you really want to access these kinds of processes from norns, i suppose the best way would be highly asynchronous, through an internet API. (norns would be well equipped to capture audio, upload it, and then download or stream some synthesis artifact - seconds, minutes or hours later.)
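something like this shape, sketched in python against an entirely hypothetical endpoint and response format (on norns itself this would more likely be lua shelling out to curl, but the pattern is the same):

```python
import time
import requests

API = "https://example.com/synthesize"  # hypothetical service, for illustration only

# 1. upload a captured buffer
with open("capture.wav", "rb") as f:
    job = requests.post(API, files={"audio": f}).json()

# 2. poll occasionally; this could take minutes or hours
while True:
    status = requests.get(f"{API}/{job['id']}").json()
    if status["state"] == "done":
        break
    time.sleep(60)

# 3. fetch the synthesis artifact
result = requests.get(status["result_url"])
with open("artifact.wav", "wb") as f:
    f.write(result.content)
```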