Fighting entropy
Posted: 2023-06-19
Jonathan Blow is a game designer and occasional curmudgeon who gave a 2019 talk on technological decline. The crux is an observation that software is getting worse for reasons that might eventually cause secular stagnation and, in the worst case, societal collapse. It’s a pretty heady topic for a game development conference!
I found myself thinking about the talk a lot after I first watched it. It’s a concrete way of framing a vague feeling that I had (and maybe some of you share!) about the way we should design software systems.
Entropy increases
I was briefly a technology consultant for Accenture. Part of the initial training was an overview of the business. The instructor mentioned, almost offhand, that they had some people working with large mainframe systems. The training was aimed at recent college graduates, so after the session I asked how Accenture staffed those clients. The gist of the answer was that they didn’t: nobody new ever joined those projects. The people who knew how to manage critical older systems could name their price, but it wasn’t worth it to Accenture to train any of us to replace them.
I found that disappointing and interesting. Disappointing because I had hoped to learn old systems as well as new ones through my work. Interesting because, while I was still too inexperienced to concretely identify the poor incentives, I could tell that both Accenture and the clients were in a local maximum.1
What if, once the greybeards retired, nobody was left who knew how to support the clients’ applications? What if a client chose to migrate to more supportable hardware, but nobody could answer Chesterton’s fence questions along the way?
Probably, nothing. Old businesses with old ideas may have run their useful course. But some of those clients were probably government agencies.2 What if some socially important task got stuck or collapsed?
It is worth watching Blow’s talk in its entirety. After some opening anecdotes of ancient and classical technologies that were lost, he says that the story of software is one of “local technological improvements… with overall inertia or degradation in the rest of the field.”
That degradation comes from quality-reducing complexity in the support infrastructure of the software we care about, because too little effort is put into keeping it simple, understandable, and effective. Entropy increases in a closed system. By analogy, if too little attention is paid to infrastructure, software and otherwise, it becomes disordered. For sufficiently important infrastructure, we will all suffer if it fails.
Entropy and forgetting
Knowledge is lost all the time. It’s harder to learn something than to ignore it, and it’s harder to do something than to hope someone else will. There are pop-science examples of lost technology like the Antikythera mechanism, and more practical examples like ancient Roman construction techniques. It’s epistemologically difficult to know what we’ve forgotten in the recent past,3 but it seems likely that knowledge is being lost right now: even in the 20th century, we regularly lost films.
Life is engaged in a never-ending fight against entropy.4 Stepping back from the cosmic scale, technology systems become more disordered when we stop asking questions and teaching people how to ask questions. We need knowledge to solve new problems, and we need to remember old knowledge to avoid starting from a Platonic or Socratic or Cartesian “nothing” state for each challenge.
Fighting entropy and remembering
One problem with ever-increasing complexity is that every topic feels overwhelming. When you start to dig into how, for example, keyboard input handling works, you learn how many layers there are and how complex all the machinery involved is. So you move on to something more straightforward, like saving a file, only to learn that “specifications of how disk state is mutated in the event of a crash are widely misunderstood and debated.” But it’s worth persevering! Computers can be understood, and only by forcing ourselves to understand5 can we design robust systems with sound underpinnings.
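To make the file-saving example concrete, here is a minimal sketch of the write-then-fsync-then-rename pattern that POSIX applications commonly use to replace a file’s contents without risking a torn file after a crash. Treat it as an illustration of why this area is debated, not a guaranteed recipe: the exact crash guarantees depend on the filesystem and mount options, and `save_atomically` is a hypothetical helper name of my own.

```python
import os

def save_atomically(path: str, data: bytes) -> None:
    """Replace path's contents so that, after a crash, a reader sees
    either the old file or the new one, never a partial write.
    (A common POSIX pattern; real-world guarantees vary by filesystem.)"""
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush the new contents to stable storage
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic replace on POSIX filesystems
    # fsync the containing directory so the rename itself is durable
    dfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Even this “simple” version hides judgment calls (should errors during the directory fsync be fatal? what about non-POSIX platforms?), which is exactly the kind of layered complexity the talk is about.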
The existing pool of knowledge would decay on its own, but it’s not a closed system. Every time we humans make the effort to maintain or improve our understanding of a topic, we make that knowledge more stable for society. But we have to make the effort. A stylized example6 from software is that after Internet Explorer won the browser wars, it got worse and worse until Google put a lot of resources into Chrome, which then got worse and worse until Apple put a lot of resources into WebKit. One can guess what will happen next, absent a paradigm shift. For the lower-level systems that worry Blow, the incentives are, if anything, worse. Browsers are targeted at people who are not inclined to make excuses for an overcomplicated interface. Not so for proprietary GPU APIs.
Foundation
As Blow notes, widely shared knowledge is more robust against decay than narrowly held knowledge. Simpler systems are easier to understand than complex systems, so they are more likely to be widely shared. It’s therefore important that fundamental systems are simple and documented enough to be widely shared and robustly remembered.
Each of us can practice understanding important systems and concepts. Society allows us to outsource some of that knowledge to others, but as a whole we cannot let critical knowledge disappear without consequences. Simplifying only requires will.
--Chris
Appendix: disagreements
While I like this talk a lot and agree with its thesis, I don’t agree with all the problems Blow identifies. The following two jumped out at me, in no particular order.7
He claims the OS is something we “mostly don’t want.” That is specific to games or other applications that think of themselves as the “full system.” The user of the computer probably wants a filesystem and network stack and privilege isolation, or they would have bought a PS5. There is debate on the best way to write a kernel and there is growing space for domain-specific machines and chips, but most people want their computer to be flexible and general-purpose.
He also has a “kids these days” section lamenting that newer programmers only learn Unity and Unreal, so they can’t craft artisanal pixel sequences to the frame buffer. I think that Unreal and similar systems are papering over excessive complexity in the layers below. To me it’s more interesting to look past the developers to the hardware and kernel designs. If we can incentivize those lower layers to simplify and document the reasons behind their choices, it becomes tractable for generalists to learn them in detail. Many software engineers like to take systems apart, so if it’s possible, someone will crack open Unity.
Appendix: the messenger’s motives
Finally, let’s review the messenger. Why is Jonathan Blow, game designer and developer, talking about fundamental technology knowledge?
Blow is someone who tells interactive stories. Fundamentally, there isn’t a reason that the specifics of how computers work are important to that task. Computers are just the medium he chose for telling his stories. His personality demands that he be able to take full advantage of the tools he uses, and to maximally control the experience of his players. In other words, he wants to reduce entropy in his environment. The strength of that personality trait is the difference among choosing an expedient tool, understanding the fundamental technology, and advocating that others also understand the fundamental technology.
In particular, I wasn’t yet primed to see collective action problems and path dependence everywhere. ↩︎
I don’t remember if we were told who those clients were. This is a guess based on the type of organization which both hires lots of consultants and has outdated infrastructure. ↩︎
Examples of lost knowledge tend to focus on physical objects which we still have, or on rediscovered techniques which we can use in the present. If we forget even more thoroughly, we may not even know if something was once known. ↩︎
Is it ultimately futile? Or do we hope that the universe is not a closed system? ↩︎
I also think it’s a mistake to “stay in your lane” with a narrow scope of expertise. You can know things strives to explain medical details to a non-medical audience, and in my opinion does a pretty good job. I suspect that exploring cross-domain concepts in detail helps people both better understand existing knowledge and better develop new ideas. ↩︎
Yes, this turn of phrase is from Matt Levine. ↩︎
I also quibble with a segment on availability, where I agree with his overall point that complex systems are unreliable. But he brings up “5-nines of availability” which triggers my grumpy opinion that N-nines targets are mostly an excuse for mischief. ↩︎