Tom MacWright

tom@macwright.com

Recently

Some founders are conspicuously absent. This is a theme of The Great Google Revolt, Twitter Owner Wants Full-Time CEO, and Larry and Sergey say goodbye.

Jack Dorsey

Page and Brin’s Google was a historical triumph. But there’s little that feels triumphant about their sudden departure. The heat turned up on Google, and they decided to head for the exits. Google’s outsized success will dominate stories about their legacy. But the way they left — bored and mostly absent in a time of crisis — is part of their legacy, too.

And

Twitter’s chief executive officer, Jack Dorsey, is also the CEO of another public company and plans to move to Africa for a while, apparently mostly to work for the other company.

I can see how some CEOs want to withdraw from the public eye in this age of Twitter and instant analysis of their every word. But their withdrawal from company-internal matters doesn’t have a ready explanation. This generation of leaders were people who I admired early on because they seemed to have character and principles. But now, they’re either absent, or they’re recruiting seasoned executives who focus on a far narrower vision of the future.


The idea that technology is stalling out is pretty popular in my intellectual bubble.

Haddam Neck Nuclear Reactor

There’s Tyler Cowen’s viewpoint, which he lays out in The Great Stagnation and which lines up pretty closely with Patrick Collison’s. Cowen clues you in to things like Eroom’s law (Moore’s law, reversed), about declining returns in drug discovery. There are Peter Thiel’s ideas about The End of the Future. Then there are Alan Kay’s ideas, which are pretty well summarized in his lunch with Steve Krouse. Alex Danco’s Progress, Postmodernism, and the Tech Backlash is an incisive take about how today’s ‘innovations’ are just different combinations of the same ingredients. Even Bret Victor’s latest talk is a running joke about how the interfaces of the 1970s were corrupted in the decades that followed.

Okay, so a lot of people I respect buy into the core premise. Steven Pinker is maybe the opposite, and I’ve seen him speak: his explanation starts with the graphs and extrapolates from there. He shows economic expansion through certain measures and then hopes that those measures have something to do with someone’s lived reality. It was unconvincing then, and after another year of an economic melt-up rooted in no clear innovation, it’s even less convincing now.

But from the premise that technology is stagnating, everyone draws their own conclusions. Some sloppy summarizations:

  • Cowen thinks that it’s about low-hanging fruit. We made processors faster and faster, and are now hitting hard physical limits that make our computing power stagnate. Breaking through will require some big new thing.
  • Alan Kay thinks that people aren’t thinking hard enough, creatively enough, or broadly enough, all at the same time. They need to try harder and do better, like the late greats.
  • Thiel thinks that the concentration of ‘productivity’ in a few companies is driving investors to make smaller bets on startups that are mostly aiming for acquisition or smaller prizes.

So, this month I read Michael Hanlon’s Aeon essay on the same topic, and it came closer to my preferred explanations: talking more about the concentration of wealth, the reduction of risk, and, crucially, the role of public funding, which was the main driver of a lot of 1950s-1970s technology.

Sidenote that at least Cowen, Hanlon, and Thiel are some flavor of ‘conservative’, and harbor some alarming views besides these. Until 2010, Hanlon denied global warming. Cowen has some unsettling views about meritocracy and ultra-wealth. And you can google Thiel.

But anyway, I think the effective use of government money was a crucial part of the hyper-productive 1950s. The yawning income gap and the increasing financial precarity of most Americans explain a sharp decrease in the kind of creativity that requires expensive equipment and time.

There’s also something to be said about what the big advancements of the 1950s were. For example: they were hugely limited. Programs were written for a single computer, in a language specific to that computer, geared toward its processor and hardware. Portable languages (what we use today) were a huge advance that also added complexity everywhere. In exchange for that complexity, there can be many computers running the same software.

Computers today are also expected to work in the ‘real world’, so they have many incredibly deep concepts that you only need if you care about the larger world. Consider character encoding: computers like the UNIVAC used character encodings like Fieldata, which supported only the English language. Modern computers use UTF-8 and similar encodings, capable of representing over a million different characters. There are committees to decide how Thai and Devanagari are represented, and remarkable input methods for languages like Japanese. All told, typing a character in English, in 1950, could probably be implemented in a few short instructions. Typing a character in 2020, not so much. But in exchange for that complexity, everyone can use computers.
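To make that gap concrete, here’s a minimal Python sketch (mine, not anything from the pieces linked above). The six-bit table is a made-up stand-in for Fieldata - the real code values differ - but the contrast with UTF-8’s variable-length bytes is the point:

```python
# A made-up, Fieldata-like table: one small fixed code per character,
# uppercase English plus a space, and nothing else.
FIELDATA_LIKE = {ch: i for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ ")}

def encode_fixed(text):
    # One table lookup per character; anything outside the table
    # simply can't be typed.
    return [FIELDATA_LIKE[ch] for ch in text]

print(encode_fixed("HELLO WORLD"))
# [7, 4, 11, 11, 14, 26, 22, 14, 17, 11, 3]

# UTF-8 is variable-length: one to four bytes per character, with room
# for over a million code points - Thai, Devanagari, emoji, all of it.
for ch in ["a", "é", "ท", "क", "😀"]:
    print(ch, ch.encode("utf-8"), len(ch.encode("utf-8")), "byte(s)")
```

The fixed table is simple precisely because it excludes almost everyone; the UTF-8 side is where the committees and the input methods come in.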

So through one lens, you can think of the stagnation (in terms of computers) as people expecting a continuation of the 1950s - that computers would be flexible, powerful, hackable things for people to realize their greatest dreams. The people who loved computers in their early days - teenagers in their bedrooms learning BASIC, professionals who had picked up some programming to make a database - never got a much-improved version of what they had before. Computers today are less hackable, less flexible, less interesting than they used to be.

But if you think about people first, well - huge percentages of the developed world have access to the internet and a computer or a phone, which is now a computer. This is enabled by things that are completely unrelated to how impressive the technology is, or whether it unleashes human creativity. This is about compatibility, availability, and price - the Internet, IP, operating system compatibility, text compatibility, file formats - internationalization, and ease of use. There are a lot of other technologies that have increased in capability but not in reach.

I do think that culture plays a role. Being severely limited in programming efficiency by long compile cycles, punchcards, shared resources, and so on must have inspired people to be rigorous with their first efforts. The fact that most programmers were mathematicians, not yet infected by flawed Computer Science programs, probably helped.

Or is it something else? I wondered about this on Twitter a year ago and a Stanford Computer Science professor called it incredibly clueless, so there’s that. Wasn’t going to get a Ph.D. anyway.

Of all the explanations, the idea that the “programmers of the golden age just had some pizazz” is the one that I like the least. As a millennial, I grew up with so much respect for specific generations of elders that we called one the ‘Greatest Generation.’ I’ll just reiterate what’s now a pretty common cultural refrain: younger generations are inheriting their forebears’ mistakes, and doing a damn good job of existing given the circumstances.


Oh, and the usual. I’ve been staying at home, reading The Dreamt Land and trying to watch current shows but falling back to Seinfeld and episodes of Rainbow Quest on the Internet Archive.

Hang in there, folks.