I’ve been thinking more about the ways in which the modern coding paradigm, rooted in 17th- and 18th-century notions of the clockwork universe, is actually regressive, and about how the older “analog” style of computing surfaces more holistic and ecological ways of thinking. My purpose is not to propose one way of thinking as necessarily superior to another, just to promote awareness that there are different approaches, and that to have one approach dominate (in the sense of what counts as engineering) risks crowding out all the rest.
[For an extreme example, which is totally beyond the scope of this post, imagine if Descartes’ idea of ‘meditation’ had been replaced with the completely opposite idea put forth in the 14th-century Cloud of Unknowing treatise, and how that would have led to entirely different conceptions of technology and so forth. The point is always to be aware of, and to question, the fundamental narratives and conceptual oppositions that give rise to predominant technological paradigms.]
Anyway, as an exemplar of systems thinking, take a look at this article from 1966, which focuses on building comparators and hysteresis elements out of ideal saturation circuits using positive and negative feedback.
http://www.philbrickarchive.org/1966-07_v14_no1&2_the_lightning_empiricist_01.htm
This article appeared in George Philbrick’s Lightning Empiricist, a journal meant for a lay audience (basically his customer base), not academic in any sense. Philbrick’s business from the early 1950s was based on system modeling through self-contained, modular circuit blocks, apparently a form of modular analogue computing distinct from monolithic “analogue computers” – this no doubt had a strong influence on Moog’s synthesizer.
Anyway, turning to the article itself (it’s worth a slow read): purely symbolic approaches don’t actually explain the circuit behavior; they lead only to equations that have either no solution or multiple solutions. Thinking like a programmer, in other words, results either in head-scratching or in complete nonsense – nothing shows up at all. One must, at the very least, think creatively like a mathematician, for instance by introducing an infinitesimal time delay, an element nowhere to be found in the logical or symbolic reduction, in order to arrive at an accurate prediction of circuit behavior. Basically, as is often the case with feedback, the unmodelable, non-ideal part (the ‘gremlins’ in the non-idealities of circuit physics) completely overtakes the ideal parts and wholly determines what the circuit will do.
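To make that concrete, here is a minimal numerical sketch of the situation (my own toy illustration, not code from the article, with gain and feedback values chosen arbitrarily): an ideal saturation element wrapped in positive feedback. Written as a memoryless fixed-point equation, y = sat(G·(x + β·y)), the algebra admits three solutions for small inputs and cannot say which one the circuit settles on; only by inserting a small delay (here, a single simulation step standing in for the infinitesimal lag) does the loop become a recursion whose history selects the branch, which is precisely the hysteresis one observes.

```python
# Toy sketch (not from the Philbrick article): an ideal saturation element
# in a positive-feedback loop,  y = sat(G * (x + beta * y)).
#
# As a purely symbolic fixed-point equation this is ambiguous: for small |x|
# and G*beta > 1 there are three solutions (roughly y = +1, y = -1, and an
# unstable one in between), so algebra alone cannot say what the circuit
# "does".  A one-sample delay turns it into a recursion whose history picks
# the branch, i.e. hysteresis.

import numpy as np

def sat(v, limit=1.0):
    """Ideal saturation: linear through the origin, clipped at +/- limit."""
    return np.clip(v, -limit, limit)

def feedback_loop(x_samples, gain=10.0, beta=0.5, y0=-1.0):
    """Iterate y[n+1] = sat(gain * (x[n] + beta * y[n])).

    The single-sample delay stands in for the 'infinitesimal' lag that the
    purely symbolic account leaves out.
    """
    y = y0
    out = []
    for x in x_samples:
        y = sat(gain * (x + beta * y))
        out.append(y)
    return np.array(out)

if __name__ == "__main__":
    # Sweep the input slowly up and then back down.
    up = np.linspace(-1.0, 1.0, 500)
    down = np.linspace(1.0, -1.0, 500)
    y = feedback_loop(np.concatenate([up, down]))

    # The output snaps from -1 to +1 at a different input level on the way
    # up than where it snaps back on the way down: the hysteresis loop.
    up_switch = up[np.argmax(y[:500] > 0)]
    down_switch = down[np.argmax(y[500:] < 0)]
    print(f"switches up   near x = {up_switch:+.3f}")
    print(f"switches down near x = {down_switch:+.3f}")
```

Running this, the output snaps high near x ≈ +0.4 on the upward sweep and snaps back low only near x ≈ −0.4 on the way down; the static equation by itself can predict neither threshold, because it has no notion of “on the way up” or “on the way down.”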
(Sure, at a lower level the infinitesimal delays or integrator-lags do admit some explanation in terms of the physics. But the Turing paradigm with its multiple realizability – its independence from physics, in other words – elides such considerations. The point is to think simultaneously at the level of parts and wholes, and to negotiate in the moment precisely what even “counts” as a symbol – this is the hallmark of ecological thinking.)
Designing with such technologies requires one to be open to these conceptually unassimilable parts, which are often at first simply discovered through practical experience (through hacking, in other words) – and only then rationalized in ways that require the full creativity of mathematicians, for instance by introducing infinitesimals, epsilons that locate themselves precisely nowhere, signifiers without a signified, entities that in the end must be taken to vanish. But it could be argued as well that the circuit simply performs the reframing; the circuit itself is what acts like the creative mathematician. The circuit on its own solves what 60+ years of AI research still cannot: the dreaded “frame problem”. The idea that a self-driving car should suddenly give up on its plan to get from A to B because there’s a funny smell coming from the engine – in other words, that it may explode if it doesn’t reframe what counts as inputs, outputs, and objective function in light of what is threatening to surface from the background – this still lies beyond the scope of computational thinking, for which there is only foreground.
Purely rational thinking, indeed, admits no openness. It’s what reduces all ethics to trolley problems in which someone somewhere will always get killed. Virtue ethics, in this light, becomes invalid because it’s uncomputable. One must already know what the program will do; one must already have unit tests and so on according to current best practices. Otherwise one is simply hacking, not coding.
In fact the very concept of “lightning empiricism” was couched in opposition to what Philbrick and Paynter saw as an excess of rationality, a hypertrophy of reason emerging in the immediate postwar culture – one that was also being challenged by the late-1940s Macy conferences on cybernetics, which helped bring forth an entirely new, holistic, system-oriented way of thinking.
By “lightning” it seems Philbrick and Paynter simply meant what we now mean by terms like enactive, embodied and so on. Curiously, it seems that in the 1950s and 1960s everyone, including engineers, understood this. Where is this understanding today?