AI resistance considered harmful (for programmers)

I can't reconcile "Imagine having a pair programmer mentor with infinite patience and deep knowledge" with "Look at this buggy code the LLM produced!"

Well, I can imagine many things which aren't true - but that's not the intent of the essay.

eesmith, 11 hours ago

So, I am probably the sort of person the author is talking about when they mention "the AI dunker". I worked on neural networks in the mid-1990s, and have looked in on them every 2-3 years since then, and the current wave of hype looks very similar to the hype cycle I went through in my own mind in the late '90s. I think neural networks, including LLMs, are useful algorithms for certain specific tasks, but I think those tasks are a lot less common than AI enthusiasts imagine.

I don't think there's anything wrong with programmers using AI to help code, but it's not usually going to help with the actual rate-limiting step in creating good software. Getting the syntax right, or getting the general shape of a method right, is not usually the rate-limiting step, just as a programmer's typing speed is not usually the rate-limiting step.

Figuring out what actually needs doing is the rate-limiting step.

Now, occasionally typing faster or using some other speed-up-the-coding tool can help with that, because it lets you get through a bunch of wrong approaches faster and reach the right one. But typically not. In fact, piling on tools that help you quickly Do The Wrong Thing most often makes it take longer to realize what actually needs to be done. To make a contrived example: suppose you have made a field a string. You can quickly generate code to make sure it converts smoothly to a numeric value, then more code to check that the number falls within a certain minimum and maximum, and also that it is either a whole number or a whole number plus one half. All of this is just distracting you from realizing that the field should be an int, where the int is the number of half-values (i.e. 5 -> 2.5).
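To make the contrived example concrete, here is a minimal Python sketch of the half-value representation it describes. The function names are hypothetical, not from any real codebase: once the field is an int counting half-units, the string parsing and range/half-step validation largely disappear.

```python
def to_display_value(half_units: int) -> float:
    """Convert a count of half-units to the value it represents (5 -> 2.5)."""
    return half_units / 2


def from_display_value(value: float) -> int:
    """Convert a value to half-units, rejecting anything that isn't
    a whole or half value."""
    half_units = value * 2
    if half_units != int(half_units):
        raise ValueError(f"{value} is not a whole or half value")
    return int(half_units)


print(to_display_value(5))      # prints 2.5
print(from_display_value(2.5))  # prints 5
```

The point of the representation is that invalid states (a value like 2.3, or a non-numeric string) simply cannot be stored, so the generated validation code was solving a problem the type choice created.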

Now this may seem too stupid for a human to actually miss, and perhaps it is, but only slightly more complicated problems of this type happen all the time. The more tools you have for quickly and efficiently generating masses of code, the bigger the obstacle in the way of writing code that is easy to read (and thus to debug or otherwise modify later). That usually requires figuring out the thing you really need to be coding for, which is not exactly the same thing as what you set out to do. AI is not going to help you with that, and that is the rate-limiting step.

None of this is necessarily a reason to never use AI tools, but it is a reason to be skeptical that they will actually improve the velocity of your software development, if you measure that in working systems rather than lines of code.

I mean, Microsoft has had earlier and more extensive access to ChatGPT-related tools than any other software company. Do they seem, to you, to be producing higher-quality software lately?

rossdavidh, 11 hours ago