This is a bit of a pointless discussion, since subjective opinions about sound can’t be meaningfully debated, but I do want to point out that your posts are not completely coherent on the subject of digital and analog. What companies put their names on has nothing to do with whether something sounds different.
I am writing my dissertation on analog simulation, and, again, there are objective differences between analog and digital devices. You can see them on a scope, you can hear them on a recording, etc. They will never be the same, because they are not constructed in the same manner.
With all of that said, we have had over half a century of improvements in digital audio, and every year this difference is going to decrease. I would say the difference in oscillators only matters in certain circumstances. As analog technology has improved (greater stability, purer waveforms, improved tracking), the difference between the two can be truly minimal. The problem is that, of course, the best analog synths are not simply reproducing waveforms, or reproducing the transfer functions of ideal filters.
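To make the oscillator point concrete, here is a rough sketch of my own (not code from any particular synth) comparing a naive digital sawtooth with a PolyBLEP-corrected one. The naive version folds energy back below Nyquist at non-harmonic frequencies, which is exactly the kind of objective difference you can see on an FFT; band-limiting tricks like this are a big part of why modern digital oscillators have closed the gap.

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def poly_blep(t, dt):
    """Two-sample polynomial correction applied around each phase-wrap discontinuity."""
    if t < dt:                      # just after the wrap
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                # just before the wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(freq, seconds, band_limited=True):
    """Naive digital sawtooth, optionally smoothed with PolyBLEP at the discontinuity."""
    n = int(SR * seconds)
    out = np.empty(n)
    phase, dt = 0.0, freq / SR
    for i in range(n):
        out[i] = 2.0 * phase - 1.0
        if band_limited:
            out[i] -= poly_blep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

# Measure how much energy lands away from the true harmonics -- that is aliasing,
# and it is exactly the kind of difference that shows up on an FFT.
f0 = 2500.0
for bl in (False, True):
    sig = sawtooth(f0, 1.0, band_limited=bl)
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig)))) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / SR)
    near_harmonic = np.abs((freqs + f0 / 2.0) % f0 - f0 / 2.0) < 30.0
    ratio = spec[~near_harmonic].sum() / spec.sum()
    print("PolyBLEP" if bl else "naive   ",
          "fraction of energy off the harmonics:", round(float(ratio), 5))
```

Getting the raw waveform right is the comparatively easy part, though; the filters, distortion, and component interactions are where the real work is.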
This brings us back to the work being done with simulation. It doesn’t only matter for digital synths, either. Much of the sound of the Prophet-5 Rev 4 comes from the digital control of the analog circuits, which is modeled on the behavior of the old systems.
I listen to my digital Hydrasynth and find that it sometimes sounds more “analog” than my Prophet 12, which is a hybrid digital/analog poly, but there are sounds my Hydrasynth will never be able to reproduce, simply because of the behavior of the analog filters/distortion. The various filter simulations it includes sound vaguely similar to what they are trying to simulate (I recognize the character of the MS-20 filter, even if it doesn’t subjectively sound the same).
Point being: analog means a variety of things and some of them are more complicated than others.
It is obviously theoretically possible to completely simulate the characteristics of an analog synth, but it currently requires a lot of programming skill, a lot of attention to detail, access to a lot of analog equipment to reference, and extensive knowledge of the components involved. You also need quite a bit of computing power to reproduce these circuits. The person I work with has to use supercomputing clusters for days to render a few hours of output from modeled analog chaotic circuits, to give one example. Things like filters that use feedback are very sensitive to minute rounding errors, etc.
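As a toy illustration of that sensitivity (nothing like my colleague’s actual models, just a sketch): run the same resonant feedback filter in single and double precision and compare the outputs.

```python
import numpy as np

def resonator(x, dtype, freq=1000.0, r=0.9995, sr=48_000):
    """Two-pole resonator y[n] = 2r*cos(w)*y[n-1] - r^2*y[n-2] + x[n],
    with every operation carried out in the requested precision."""
    w = 2.0 * np.pi * freq / sr
    a1 = dtype(2.0 * r * np.cos(w))
    a2 = dtype(-(r * r))
    x = x.astype(dtype)
    y = np.zeros_like(x)
    y1 = y2 = dtype(0.0)
    for i in range(len(x)):
        y0 = a1 * y1 + a2 * y2 + x[i]   # the feedback path: output re-enters the loop
        y[i] = y0
        y2, y1 = y1, y0
    return y.astype(np.float64)

rng = np.random.default_rng(0)
noise = rng.standard_normal(48_000)       # one second of input, shared by both runs

y32 = resonator(noise, np.float32)
y64 = resonator(noise, np.float64)

# The loop keeps recycling its own rounding error, so the two runs differ by
# noticeably more than a single rounding step.
diff = np.max(np.abs(y32 - y64)) / np.max(np.abs(y64))
print(f"max relative difference between float32 and float64 runs: {diff:.1e}")
print(f"one float32 rounding step is only about {np.finfo(np.float32).eps:.1e}")
```

A linear resonator like this only drifts; once the feedback path is nonlinear or chaotic, as in the circuits mentioned above, small numerical differences can push the model onto a completely different trajectory.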
Anyway, if you record an analog sound into your computer, it’s obviously possible to reproduce that, since every part of that waveform has been quantized; it’s just that getting from the theory and abstract modules to something that sounds convincing is not that simple.
There’s a reason my fairly powerful MacBook Pro (6 cores, 2.6 GHz, hyper-threading, etc.) starts to chug when I run VCV Rack.
Also, unrelated, but there’s a lot of work being done with analog co-processors, and I suspect that computers will begin to integrate more analog processing power for faster differential equations, etc. Ironically we might be able to simulate analog circuits more easily with integrated analog computing.
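To put “faster differential equations” in context, here is a rough sketch (purely illustrative, not based on any shipping analog co-processor) of the stepping a CPU has to do to integrate even a trivial circuit-like ODE that an analog computer would solve continuously with integrators.

```python
import numpy as np

# A lightly damped oscillator x'' + 2*zeta*w0*x' + w0^2 * x = 0 -- the kind of ODE an
# analog computer solves continuously with op-amp integrators. Digitally we have to
# march it forward in small steps.
w0, zeta = 2.0 * np.pi * 440.0, 0.0005

def deriv(state):
    x, v = state
    return np.array([v, -2.0 * zeta * w0 * v - w0 * w0 * x])

def rk4_step(state, h):
    """One classic Runge-Kutta (RK4) step of size h."""
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])      # start displaced, at rest
h = 1.0 / 96_000                  # oversampled step to keep the integration accurate
for _ in range(96_000):           # one simulated second = 384,000 derivative evaluations
    state = rk4_step(state, h)
print("displacement after one simulated second:", round(float(state[0]), 4))
```

Multiply that stepping by every reactive component and nonlinearity in a real circuit and the appeal of doing the integration in the analog domain becomes clearer.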