The Potential in a Century of Uncertainty


Just a few years from now, in 2027, we will enter the 100th anniversary of a series of discoveries in physics and mathematics which describe limits on human — or any! — knowledge and ability. This decade-long centenary is a carrier wave that will be loaded with opportunities in the form of celebrations, panels, writings, and more: a predictable series of chances to accelerate the broader Western culture’s understanding of limits. I believe that fully establishing this understanding in our worldview would dramatically improve the odds for humanity, and the living systems we depend on, to survive our current conflicts & crises and thrive going forward.

The cover of Douglas Hofstadter’s book, Gödel, Escher, Bach: two intricately carved blocks hang suspended, one above the other. Lights shining through them cast shadows of a G and E to the left, an E and G to the right, and a B below, from a light shining straight down through both blocks.

First, credit where credit is due — it was from Douglas Hofstadter that I learned to see these limitative theorems as a set. He brought them together for all of us in his 1979 book, Gödel, Escher, Bach:

All the limitative Theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Problem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that ‘To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on a map, will never halt, cannot be described.’

Part II, Chapter XX (p. 697), Gödel, Escher, Bach

In the late 1920s and the 1930s, a series of discoveries in physics, mathematics, and logic ended any hope for a single, coherent model which could explain everything — even in theory. Before these discoveries, many science-minded folks imagined that once we reached our ultimate goal of an all-encompassing model, only limits in our ability to measure things would keep our predictions from being perfect. After these discoveries, all such hope was lost. (As with nearly all things, there are detractors from this interpretation. Some of them are worth engaging — after all, we can never be fully certain we are right. ;-)

The most well-known, and the first, of these discoveries was Heisenberg’s Uncertainty Principle, a 1927 result from quantum physics, which says that at any given moment, the more precisely you know a particle’s position, the less precisely you can know its momentum (mass × velocity, where velocity is speed + direction). And vice-versa — the better you know something’s momentum, the more uncertainty there will be in your measurement of its location. That is, Heisenberg showed that it is impossible, not just in practice but in principle, to have complete knowledge of these two seemingly simple attributes.
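For readers who like to see the claim in symbols: the principle is usually stated as a bound on the product of the uncertainties (standard deviations) in position and momentum, where ℏ is the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Shrinking the position uncertainty Δx forces the momentum uncertainty Δp to grow, and vice versa — the bound is a feature of the physics itself, not of any particular measuring device.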

Less well-known is the next, Gödel’s Incompleteness Theorems of 1931, which establish that any sufficiently complex model (where “sufficiently complex” notably includes any model capable of representing itself) cannot be both complete and accurate. That is, if it is 100% accurate, then it will be incomplete, unable to describe some aspects of the system. Or, if it is 100% complete, addressing every aspect of the system, then some of its statements about that system will turn out to be incorrect. (I’m using theory and model more or less interchangeably here. The second theorem Gödel proved that year establishes that a self-consistent theory cannot even prove its own consistency.)

Timeline (and please let me know of others!):

1927 – Heisenberg’s uncertainty principle
1931 – Gödel’s incompleteness theorems
1933 – Tarski’s undefinability theorem
1936 – Church’s undecidability theorem, and Turing’s halting problem
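The last entry above, Turing’s halting problem, can be made concrete with a short sketch. The code below is hypothetical scaffolding, not from any library: given any claimed halting-decider, it constructs a program that the decider is guaranteed to misjudge — the heart of Turing’s diagonal argument.

```python
def make_contrarian(halts):
    """Given a claimed halting-decider `halts(program) -> bool`,
    build a program the decider must get wrong."""
    def contrarian():
        if halts(contrarian):
            while True:       # decider said "halts", so loop forever
                pass
        return "halted"       # decider said "loops forever", so halt
    return contrarian

# A toy "decider" that predicts every program loops forever:
never_halts = lambda program: False

c = make_contrarian(never_halts)
print(c())  # prints "halted" -- exactly what the decider said could not happen
```

With the opposite toy decider (one that always answers True), the contrarian program would instead loop forever, so the prediction fails either way. Turing showed that no fully general `halts` can escape this trap.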

I suggest that, even if only indirectly, the implications of this shift in understanding have been unfolding across academia (and society more generally) over the past century in manifold ways. Without having conducted a thorough search, I would count among these Margaret Mead’s work as a cultural anthropologist, and specifically her influence on the emergence of second-order cybernetics in the 1960s — notably her presentation on the “Cybernetics of Cybernetics” at the First Annual Cybernetics Symposium of the American Society of Cybernetics. Also consider reflexive awareness emerging more broadly in the social sciences in the 1980s, with anthropologists, for example, considering how their presence affects the studies they are undertaking, and turning their critical eye to their own methodologies.

More recently, the replication crisis shook first the social sciences, as attempts to reproduce many lauded studies of the past failed to replicate their findings. Those from the hard sciences may have initially chuckled and shaken their heads, only to see the crisis extend to their own fields, as experimenters attempted — and sometimes failed — to replicate “well-established” findings there as well. I would expect further research to turn up quite a few more pearls of related developments strung all through the past century.

Why do I see it as a big deal how quickly and deeply we incorporate uncertainty into the foundations of our world view? I believe it gives us a better chance to have effective working understandings of any situation. It brings openness and humility, which are both key to learning. And we have a lot of learning to do, indeed an endless amount. I’m far from being the only one with this take on things. Being able to hold one’s understandings (or “paradigms”) lightly is Donella Meadows’ highest leverage point. And Tom Atlee and the Co-Intelligence Institute understand it as perhaps the most important truth, which applies in all situations: “There’s more to it than that.”

If you would like to do something with this upcoming decade-long carrier wave, please reach out! It’s not too early to start doing research on who has events or is likely to have big events in 2027. Or if this stirs anything in you at all, I’d love to read it. You can comment publicly below or on Mastodon (or twttr), or email me privately at


3 Responses to “The Potential in a Century of Uncertainty”

  1. Great essay, John. I’m so happy you’ve put it in writing. I’ve added links in the Resources lists for the wise democracy patterns “Wise Use of Uncertainty” (94) and “Dancing Among Clarity, Inquiry, Mystery…” (23)

  2. Thanks, Tom. Curious also for your thoughts on the possibilities with this carrier wave for amplifying co-intelligent perspectives and practices?

  3. Claude Shannon’s master’s thesis, which proved that it is possible to use arrangements of relays (the going thing at the time, before being replaced by vacuum tubes and, later, transistors) to solve Boolean algebra problems.
    He went on to become Vannevar Bush’s (of the 1945 “As We May Think”) graduate student and continued to write ground-breaking work on information theory, develop error correction, coin “bit,” etc.
