Something that has always struck me about implementations of generative systems for composition is how loosely coupled the algorithm or signal source is from the representation of that data in sound.

If one quantizes a signal (in time or value) from some system, are we even “listening” to that system any more? Arguably the results matter more than the question.

I wonder how the 2D state of a CA could be represented in sound in a way that doesn’t abstract its information away into other structures that no longer represent the process.
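One direct mapping I can imagine, as a rough sketch: treat the grid like a spectrogram, so each row drives the amplitude of one harmonic partial and each column occupies one time step. Every cell then corresponds to exactly one (frequency, time) region, so no 2D information is collapsed away. This is just additive synthesis under assumptions I’m choosing here (the base frequency `f0`, step duration, and harmonic spacing are all arbitrary parameters of mine, not anything canonical):

```python
import numpy as np

def sonify_grid(grid, sr=44100, step_dur=0.125, f0=110.0):
    """Sketch: sonify a 2D CA state via additive synthesis.

    Row r controls the amplitude of partial (r + 1) * f0;
    column c occupies time step c. Each cell maps to one
    distinct (frequency, time) region of the output, so the
    full 2D state survives into the sound.
    """
    grid = np.asarray(grid, dtype=float)
    rows, cols = grid.shape
    n = int(sr * step_dur)          # samples per time step
    t = np.arange(n) / sr
    out = np.zeros(cols * n)
    for c in range(cols):
        seg = np.zeros(n)
        for r in range(rows):
            if grid[r, c]:
                # live cell -> one sinusoidal partial for this step
                seg += grid[r, c] * np.sin(2 * np.pi * (r + 1) * f0 * t)
        out[c * n:(c + 1) * n] = seg
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```

Of course this already begs the original question: choosing harmonic spacing and a fixed step duration is itself a quantization of the system into someone’s idea of pitch and meter.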
