Tom MacWright

Web technology optimism hour


It’s too easy lately to get into a very pessimistic mood about technology. Between the developer energy wasted on crypto, which has produced negative real-world value, the wider downturn in tech stocks, and the often-antagonistic interactions between developers on Twitter and elsewhere, the vibes can be bad.

But, amongst the chaos, there are some positive trends right now. Here are a few things about software development for the web that are improving.

The backend adopted web standards

In the not-so-distant past, when I was writing server-side code, I’d have to use a module like request to handle HTTP requests, the querystring builtin or the qs module for decoding query strings, url.parse() for parsing URLs, and moment to format dates in English.

Today, none of those problems require, or even really benefit from, Node.js-specific modules. We have fetch() on the backend, URLSearchParams for query strings, the URL object for URL parsing, and the Intl system for formatting dates, numbers, and lists.
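To make that concrete, here's a small sketch of those standard APIs. Everything below runs on modern Node.js (18+), Deno, Bun, or Cloudflare Workers with no npm packages at all (the URLs and values are just illustrative):

```javascript
// URL parsing: what url.parse() used to do
const url = new URL("https://example.com/search?q=maps&page=2");
console.log(url.pathname);              // "/search"
console.log(url.searchParams.get("q")); // "maps"

// Query strings: what querystring/qs used to do
const params = new URLSearchParams({ q: "maps", page: "2" });
console.log(params.toString()); // "q=maps&page=2"

// English date formatting: covers many former uses of moment
const date = new Intl.DateTimeFormat("en-US", {
  dateStyle: "long",
  timeZone: "UTC",
}).format(new Date(Date.UTC(2022, 9, 12)));
console.log(date); // "October 12, 2022"

// List formatting, with correct English conjunctions
console.log(new Intl.ListFormat("en").format(["dates", "numbers", "lists"]));
// "dates, numbers, and lists"

// HTTP requests: fetch() is global in Node 18+, so there's no
// need for the request module:
//   const res = await fetch("https://example.com");
```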

These platform APIs are really good: they handle almost every need while also being user-friendly. And their standardization is a win overall: it means that, whether you’re writing code for a Cloudflare Worker, Bun, or Deno, you can use a lot of the same classes and methods.

Web frameworks fixed some of their performance problems

In the two and a half years since I wrote “Second-guessing the modern web”, framework developers have been chipping away at those problems.

Both Remix, which went from a paid-product indie startup to a VC-backed startup to an acquisition by Shopify, and Next have rolled out techniques to render more of the application on the backend. Next uses React Server Components, a very confusing but promising technology. Remix is pushing people to use patterns with React that can work without client-side JavaScript, in a way that mirrors some of the techniques from Ruby on Rails, like “progressively enhancing” form submissions to use AJAX instead of full-page refreshes.

Astro is focusing on the “much less interactive” kinds of websites, like CMS-driven blogs or marketing sites, for which React has always been absolute overkill. It lets those sites use sprinklings of React or other frontend framework tech. Mostly-static websites have always suffered from being implemented in React, and they make easy targets for the anti-JavaScript-framework folks, but if people still want to use React or a similar tool to build them, better to do it in Astro than not.

There are also many React alternatives, like Solid, Svelte, and Qwik, which aim to improve on React’s performance and bundle size, but it’s hard to say how that’ll end. So far, I don’t think any of the alternatives has enough momentum to become the new standard. They’re great technology, but they form a long tail of solutions outside of React and Vue.

Rust- and Go-based tools are making TypeScript development faster

Modern web applications are written in TypeScript, but the tooling around their development is increasingly written in Rust and Go. We have esbuild (Go), which replaces Babel and Webpack for Remix and other frameworks; SWC (Rust), which replaces Babel for Next.js; and Turbopack (Rust), which might eventually also replace Webpack. Rome (Rust) is trying to replace Prettier for formatting files and, eventually, the rest of the stack. The creator of SWC is also working on a Rust replacement for the TypeScript compiler.

Developing TypeScript applications is getting better in basically every way: cycles are faster, the tools are more helpful, and less configuration is needed to get a good result. The Webpack and Babel era was an essential stepping stone to getting here, but this era is better.

Remix, RPC, and better approaches to CRUD in web apps

What I think bugged me most back when I wrote about the problems with single-page apps was the absence of a plan for data: how frontend code would interact with the backend and do basic CRUD operations. It seemed like the stock answer was that there was a “backend team” that produced APIs that somehow the frontend team was able to “consume” in frontend applications, but in most cases this just never worked. The two pieces could never be fully isolated, and a lot of times this backend/frontend distinction inspired a waterfall development methodology of designing APIs and hoping that they were the right ones, with expensive and slow cycles to update them when they inevitably didn’t fit the application’s needs.

This dynamic is certainly still happening, but we’re seeing some movement away from it. Remix integrated data loading and updating into each route in a way that, again, distinctly reminds me of Rails’s controller system. We’ve also got a resurgence of RPC systems for loading and updating data, with Blitz and tRPC. I’m rooting for both, and I’m using Blitz for Placemark, though tRPC is the one gaining steam.
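The appeal of the RPC approach is that the server’s procedure definitions become the single source of truth for the types on both sides. This isn’t tRPC’s actual API — just a minimal, in-process sketch of the idea, with invented names throughout:

```typescript
// A hand-rolled sketch of the typed-RPC idea: the server defines plain
// functions, and the client's calls are type-checked against them. Real
// systems like tRPC serialize these calls over HTTP; this one dispatches
// in-process so the sketch stays self-contained.

type Procedures = Record<string, (input: any) => unknown>;

// "Server side": the whole API surface is just an object of functions.
const api = {
  getTodo: (input: { id: number }) => ({ id: input.id, title: "Write post" }),
  createTodo: (input: { title: string }) => ({ id: 1, title: input.title }),
};

// "Client side": a caller whose argument and return types are inferred
// from the server's definitions -- no hand-written API client to drift
// out of sync.
function createClient<T extends Procedures>(procs: T) {
  return function call<K extends keyof T>(
    name: K,
    input: Parameters<T[K]>[0]
  ): ReturnType<T[K]> {
    return procs[name](input) as ReturnType<T[K]>;
  };
}

const call = createClient(api);
const todo = call("getTodo", { id: 7 }); // inferred: { id: number; title: string }
console.log(todo.title); // "Write post"
```

If the server renames a procedure or changes an input type, every client call site fails to compile — which is exactly the waterfall-avoiding feedback loop the backend-team/frontend-team split never gave you.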

PaaS is back in fashion

I think there was a recent era when startups were strongly encouraged to use AWS, Google Cloud, or Azure, even though those cloud services were an exceptionally bad fit for most startups. Building your platform with ClickOps (clicking around the AWS console), Kubernetes, CloudFormation, or whatever was negative ROI.

Companies were spending precious time building custom server setups that were worse than Heroku’s defaults. Heroku, a Platform as a Service, just let you run applications without worrying about networking or operating systems. It was a good level of abstraction, but it was nearly abandoned by its creators and wasn’t keeping up with web technology.

Thankfully, right as Heroku eliminated their free tier and got hacked, a new group of companies entered the ring to offer a similar level of abstraction but updated for 2022. Render might be the most similar to Heroku, then there’s Railway and Fly with more emphasis on “edge compute”, and Flight Control offering a thinner layer on top of AWS.

I think it turns out that configuring security groups and making decisions about regions and Ubuntu versions is not a good use of developers’ time, and that the AWS/Kubernetes standard was only really attractive to large companies with dedicated devops teams.

The Apple M1 fixed computers

One more update. In 2019, I wrote “Something is wrong with computers”, about stagnating CPU, disk, and memory specifications for laptops. At least for Apple laptops, the subject of that post and the kind that I use, this has really changed.

In 2019, the maximum RAM ever configurable in a MacBook was 16GB. It’s now 64GB in the MacBook Pro with an M1 Max chip. The most storage ever configurable had been 750 gigabytes, and by 2019 that number had decreased to 512GB. You can now get a MacBook Pro with an 8TB disk. Clock speeds are still around 3GHz, but the M1 chips are dramatically faster than 2019’s, with better battery life.