On not using copilot
Thorsten Ball wrote a really thoughtful piece about LLM-based coding assistants. A lot of accomplished, productive engineers use them. Adaptability is important for software engineers. Why not dive in?
I'm not passionately against coding assistants, but neither do I use them daily in my code editor. I use Claude occasionally to write a shell script or explain how awk works. A lot of folks I respect use these coding assistants all day. There's obviously something there.
Without laying out the full cheesy analogy to recent political events, needless to say we should all be vividly aware that people are bound to make decisions based on a cloud of emotions, prejudices, assumptions, cultural narratives, and reckons.
So, taking Thorsten's well-argued piece as true, which I think it is - LLMs are pretty good at a lot of things and are a legitimate labor-saving device used by professionals - let me poke around at some of the narratives.
LLMs as acceptance and lubricant for the crushing weight of complexity
One of the primary vibes of modern-day engineering is the absolutely overwhelming complexity of everything. If you work on servers, it's the sprawling mess of Kubernetes. Or you work on client-side applications and it's React. Or you swear off React and embrace the web platform, which relies on the gigantic Chromium codebase. Did you know that V8 has three different optimizing compilers within it? If you've held on to the simplicity of CSS, well, it's now a much more complex language: CSS has nesting and variables, and it's getting the masonry layout thing soon, too. Even if you're working on low-level compiler and language design, you might have to use LLVM, which is gigantic and would take a lifetime to fully understand.
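On the CSS point, for anyone who left the language behind years ago: here's a small sketch of what plain, framework-free CSS looks like now. The selectors and values are made up for illustration.

```css
.card {
  /* a custom property ("variable"), inherited by everything inside .card */
  --accent: #0066cc;
  border: 1px solid var(--accent);

  /* native nesting: this targets .card .title, no preprocessor required */
  .title {
    color: var(--accent);
  }

  /* media queries can nest inside rules, too */
  @media (min-width: 40rem) {
    padding: 2rem;
  }
}
```

None of this needs Sass anymore. It's just CSS, which is both a convenience and one more layer of language to hold in your head.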
I mean, are we skidding toward a Butlerian Jihad or is this more of a 'late capitalism' feature? I don't know. A topic for a different micro-blog post.
But LLMs are of a piece with all of this. They write both English and code with no shame or attempt at concision. So much of it, too much! You might want to use another LLM to understand the code that you've written. And another to refactor it. Say what you will about the mountains of technical debt at your average startup, but that technical debt was hard-won, written by hand over the course of years. With modern tools, we can greatly accelerate that process. Do we want to accelerate that process?
And not only do LLMs make it easy to write complicated software, putting LLMs into the software lets you write fantastically more complicated user-facing products as well. "The computer telling users to commit self-harm" is a bug that we had not even considered until recently, but now we have to. Traditional software had bounded abilities and limitations, whereas LLM-integrated software is truly unlimited in both directions.
LLMs as another reason to wonder about where productivity gains go
Some people who are very hyped up about LLMs are excited about what a force multiplier they are. They want to automate their jobs and finish everything 5x faster.
This raises a lot of questions:
- If you are able to work a lot faster, can you work less?
- For the few folks whose wages are based on their output rather than the time they spend at a job, will those wages go up when their output goes up? What about after everyone else adopts the same labor-saving device?
The answers seem iffy here. There's lots of interesting research, and a lot of headlines, about how automation might cut labor demand. At the very least, once a tool like an LLM becomes ubiquitous, it's no longer a unique advantage, by definition.
Remember how John Maynard Keynes thought that we'd work 15 hours a week because productivity would make long hours unnecessary?
LLMs as GPS units of the future
My sense of direction, already weak, has atrophied because I constantly use GPS. I use it when I'm walking around my own neighborhood. It's silly.
What else is like this? Freeform writing: thanks to this blog, I write for the "public" (hello ten people who see this in RSS) pretty often. If I didn't have this, I probably wouldn't be nearly as comfortable with writing. The skill would atrophy.
What would I learn if I used LLMs often? Probably something. What would I forget? Probably a bunch, as well. The kind of refactoring that I do by hand on a daily basis requires mental exercise that I'd probably lose if I were coding-by-instruction.
LLMs make you think like a manager
Programming is antisocial. Not entirely, but it's sort of a solitary exercise. It's nice to have focus time, to face the blank sheet of paper and figure out solutions.
LLM assistants are chat interfaces. You beg the computer to do something, it does something wrong, you ask it to fix the problem. You're not coding; you're managing, and chatting.
Sometimes that's a useful arrangement. Heck, ELIZA was occasionally useful even though it was a tiny, scripted experience. Talk therapy can be good, and chat interfaces can be productive. But at least for some of us, the need to keep asking for something, trying to define the outputs we want, clarifying and re-clarifying: this is managing, and sometimes it's nicer to write in solitude than to beg the computer like a micromanager.
So, in summary: maybe people shy away from copilots because they're tired of complexity, tired of productivity gains that never translate into shorter hours, afraid of forgetting rote skills and basic knowledge, and eager to feel like writers, not managers.
Maybe some or none of these things are true - they're emotional responses and gut feelings based on predictions - but they matter nonetheless.