On LLMs and programming

By now (Apr 2026) most software engineers are using LLMs in one way or another. The transformation of our craft is becoming evident, but what the consequences will be is not yet clear.

These are some thoughts on the subject, recalling some interesting readings I have collected over time.

Brittleness, job replacement, slop, danger

Some argue that LLM-generated code is brittle and too verbose (slop), and that the time spent reviewing it offsets the speed gained from using it. A counterpoint is that human-produced code, when not properly tested and without proper tracking of the requirements, will always be brittle, incorrect, or even dangerous[ ].

The same techniques used to review and sanction human-written code can be applied to the code that LLMs generate. Just now, the guidelines for LLM-assisted contributions to the Linux kernel state that humans are fully responsible for the contribution.

That puts LLMs in a different place from the one they are being sold as: a tool, not a worker (or a worker replacement).

Lost skills

Another concern is that the current skillset needed for programming is going to disappear[ ]. In answer to that: many crafts of the past have already been lost (we no longer optimize for drum memories like Mel, and most of us do not dig into Intel microarchitecture details to squeeze out a 50% improvement with assembly language).

But there we are talking about techniques being replaced completely by others. That is, in systems programming you get 100% of what you would do in assembler by programming in C, so the art of assembly language is almost lost except for very specific needs. Some techno-optimists (the self-named accelerationists) think this is the way it will go: no longer used, no longer needed. Even direct human-to-machine-code[ ] translation is proposed.

With LLMs you can get good results most of the time, but some percentage of the time you will have to step in and debug. That fraction is not cleanly partitioned or easily avoided; it comes from the very non-deterministic nature of LLMs.

Non-determinism

The lack of determinism in LLMs is something software engineers are not used to in their tools, but it is something human-facing workers have to deal with every day. Even non-software engineers, dealing with the "analog world", have to deal with reality statistically, using tolerances and the like. The 'emergent' (a euphemism for unexplainable) properties of LLMs are to be analyzed the way physiologists study the human body.

In my opinion, people expecting LLMs to work like compilers -- which have strong semantics to follow -- rather than like 'organisms' or 'analog machinery' are looking at them wrong. And to people arguing that determinism is needed for an engineering tool or technique to work: look around at the physical -- as in not numerical, not discrete -- world. Nothing there is exact; everything is analog, not repeatable, decaying, rusting, on the verge of falling down or crashing, and none of that prevents engineers from doing their work with (or around) it.
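The compiler-versus-organism contrast can be made concrete with a toy sketch of temperature sampling, the mechanism behind most LLM non-determinism. Everything here is illustrative: the logits and the helper function are made up for the example, not any real model's API.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from raw logits, softmax-scaled by temperature.

    At temperature 0 we fall back to argmax, which is deterministic;
    any temperature > 0 makes repeated calls diverge.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy next-token distribution: the model slightly prefers token 2.
logits = [1.0, 1.5, 2.0, 0.5]

greedy = [sample_with_temperature(logits, 0, random.Random(i)) for i in range(5)]
sampled = [sample_with_temperature(logits, 1.0, random.Random(i)) for i in range(5)]

print(greedy)   # [2, 2, 2, 2, 2]: temperature 0 is deterministic, compiler-like
print(sampled)  # depends on the seed; other tokens show up, organism-like
```

The point of the sketch: the randomness is not a bug bolted on top, it is the sampling step itself, which is why "just make it deterministic" is not a free option.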

Danger

If all deployments are done by an LLM, and given that our brains optimize for not thinking, will complacency leave us unprepared when that 1% comes? Using the earlier analogy: if we were writing 99% of the code in C but still always needed to drop down for that 1% of assembly, would we still have the skills for it[ ]?

https://aphyr.com/posts/418-the-future-of-everything-is-lies-i-guess-work

Not learning

How will new software engineers be trained if they can just hand their assignments to an LLM and get A's? In my experience, there were already people coming out with Computer Science degrees who somehow were not ready to develop software before LLMs, and myriads of current software engineers have no formal degree. Are self-motivation and self-learning being hindered by LLMs? Not in my opinion. There will always be people looking to learn and not just to pass (and thus, not to cheat). Will it devalue CS degrees in the eyes of recruiters? Will it push the hands-on evaluation techniques we had during our formation back to classic in-person exams?

Not fun

I recall an interview with Linus Torvalds in which he remembered the first time he realized that he could tell a computer what to do, and the computer would do that and nothing else. Most of us can relate to that epiphany, and to the joy of learning how to "make the machine do things". Some people (*) argue that LLMs have taken the fun out of the craft of programming, that reviewing code is not as fun as writing it, and that the non-deterministic (again) nature of LLMs makes the human-computer interaction 'not the same'.

Of course the industry is not concerned with how much fun we get from writing our paid code. Contrarians would talk about workers' motivation, turnover (but if LLMs are expected to be used everywhere, turn to where?) and well-being. I am going to argue something else here.

If you are like me, you get a boost from learning something by yourself, even better if you learn by doing (maybe the only proper learning, as Feynman[ ] would say). In my opinion there are two sides to this. Not being as fun would mean not learning as much, and that is dangerous. Previously, programming something meant that you understood how that something worked; when things went sideways, there was always an implementor who could analyze what was going wrong. That is the untold expectation that LLMs may break. Now we face the danger of building systems without properly grokking how they work. Reviewing the code may not be enough. In that sense, the lack of fun may be a canary in the coal mine. [to the AWS failures attributed to LLMs]

Speed disruption

Many things have changed since I started learning to program back in 1996, after I somehow "chose" John Carmack as my teenage pop idol. The 10x speedups promised by tools are well known in this industry (and well known not to deliver in the end). Are LLMs just a tool in that continuity of changes, or are they a true disruption, with a before-and-after moment? Programming productivity is very difficult to measure. Since by nature everything a computer programmer produces can be copied for free, there is no incentive to repeat anything: good programmers automate everything they have to do more than twice[ ]. So most of the time, goalposts are set and times estimated for things that were never done before. If you can't measure the distance covered, you can't measure speed. How can we know whether a speedup comes from the LLM if we can't measure speed in the first place? That's why we see "speedup" numbers all over the place.

Arguments against LLMs usually say that the speed of writing code has nothing to do with the speed of development -- meaning development of a finished product. While this is true, experimentation and checking different paths are easier when writing code is faster. A recurring pattern among good software engineers[ ] is that, when faced with two approaches, they implement both and test. We already know that the right abstraction can save a lot of time and simplify the understanding of a system, so being able to experiment faster can lead to better systems. This, of course, requires much more than vibe-coding or just passing tests.
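The "implement both and test" pattern can be sketched as two candidate implementations of the same small task run against one shared test suite, so the comparison is grounded rather than guessed. The task and both functions here are hypothetical examples, not taken from any real project.

```python
def dedupe_with_set(items):
    """Deduplicate preserving first-seen order, tracking seen items in a set."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def dedupe_with_dict(items):
    """Deduplicate using dict key ordering (insertion-ordered since Python 3.7)."""
    return list(dict.fromkeys(items))

# One shared suite sanctions both implementations equally, so the choice
# between them can be made on clarity or measured speed, not intuition.
CASES = [
    ([], []),
    ([1, 1, 1], [1]),
    ([3, 1, 3, 2, 1], [3, 1, 2]),
    (["b", "a", "b"], ["b", "a"]),
]

for impl in (dedupe_with_set, dedupe_with_dict):
    for given, expected in CASES:
        assert impl(given) == expected, impl.__name__
print("both implementations pass the same suite")
```

Once both pass, timing them or simply reading them side by side settles the choice; the cost of writing the second version is exactly what faster code writing brings down.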