Which is not the case with modular, since the instrument is designed to your own specifications. That's what I was getting at.
If I'm not mistaken, there's a contradiction in what you wrote. On the one hand you point out that the modular synthesis architecture offers complex control possibilities; on the other, you consider the plug-and-plug format arduous and not amenable to real-time performance. I see this as contradictory: the open architecture is exactly what provides so many opportunities for performability.
My main point from my first post on the subject was that this is very much personal. The options are there, and the way you approach a modular is so customized that any limitations you've found by watching videos or listening to music made on modular systems are not inherent in the instrument, but in the approach of that particular performer/musician/composer. (I hesitate to call modular an instrument type at all, rather than a category, precisely because of those limiting associations.)
In practice it is quite easy to create a modular instrument with a great deal of expressivity, one that would still require practice to master.
I get that the format lends itself to sound exploration more than playability, but the number of options categorized as Controllers is a testament to the demand for haptic control of the sounds. Let's not forget that a common answer from modular users, when asked why they transitioned from software, is the physical aspect of the format.
Not all UIs encourage hands-on control, but one of the biggest perks of the modular-as-instrument approach is the option of meta-controls, following the paradigm of acoustic instruments.
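To make the meta-control idea concrete, here is a minimal sketch (all parameter names are made up for illustration, not from any specific module): one performer-facing macro fans out to several underlying parameters at once, much like a single bowing gesture on a cello changes pitch, timbre, and loudness together.

```python
def apply_macro(intensity: float) -> dict:
    """Map one 0..1 macro value onto several hypothetical module parameters.

    One knob becomes a musical gesture: pushing "intensity" simultaneously
    opens the filter, deepens FM, and raises the output level.
    """
    intensity = max(0.0, min(1.0, intensity))  # clamp to the macro's range
    return {
        "filter_cutoff_hz": 200 + intensity * 7800,  # filter opens linearly
        "osc_fm_index": intensity ** 2 * 4.0,        # FM depth rises faster near the top
        "vca_level": 0.2 + intensity * 0.8,          # louder as you push
    }
```

The design choice worth noting is the curve per destination: a squared response on FM depth keeps the low end of the knob subtle while the top end gets aggressive, which is exactly the kind of tailoring a patch-programmable instrument allows.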
I can heartily suggest taking a look at the Hardware Physical Modelling thread for more inspiration. Physical modelling as a synthesis method can be seen as built around this very premise.
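Not from that thread, just a textbook illustration of the premise: the Karplus-Strong plucked string is about the simplest physical model there is. A burst of noise circulates in a delay line tuned to the pitch, and a gentle averaging each pass damps high frequencies the way a real string does; the "excitation then resonator" structure is exactly what makes these models feel playable.

```python
import random

def karplus_strong(freq_hz: float, sample_rate: int = 44100,
                   n_samples: int = 8000) -> list:
    """Plucked-string model: noise burst in a delay line with damping."""
    period = int(sample_rate / freq_hz)       # delay length sets the pitch
    random.seed(0)                            # fixed seed: repeatable "pluck"
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        out.append(buf[0])
        damped = 0.5 * (buf[0] + buf[1])      # two-point average = string damping
        buf = buf[1:] + [damped]              # recirculate through the delay line
    return out
```

Every pass through the loop loses a little high-frequency energy, so the tone starts bright at the pluck and mellows as it rings out, with no envelope generator needed.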
My personal experience in this area is that it is rife with possibilities.