
AI should only run as fast as we can catch up

It's nice to see a wide array of discussions under this! Glad I didn't give up on this thought and ended up writing it down.

I want to stress that the main point of my article is not really about AI coding, it's about letting AI perform arbitrary tasks reliably. Coding is an interesting one because it seems like a place where we can exploit structure, abstraction, and approaches (like TDD) to make verification simpler - it's like spot-checking in places with a very low soundness error.

I'm encouraging people to look for tasks other than coding to see if we can find similar patterns. The more of these cost asymmetries we can find (where verifying is easier than doing), the more we can harness AI's real potential.

8 hours agoyuedongze

Thinking about the relationship between creation and verification is a good way to develop productive workflows with AI tools.

One that works particularly well in my case is test-driven development followed by pair programming:

• “given this spec/context/goal/… make test XYZ pass”

• “now that we have a draft solution, is it in the right component? is it efficient? well documented? any corner cases?…”

6 hours agofelipeerias

Note that in the case of coding, there is an entire branch of computer science dedicated to verification.

All the type systems (and model-checkers) for Rust, Ada, OCaml, Haskell, TypeScript, Python, C#, Java, ... are based on such research, and these are all rather weak in comparison to what research has created in the last ~30 years (see Rocq, Idris, Lean).

This goes beyond that: some of these mechanisms have been applied to mathematics, but also to some aspects of finance and law (I know of at least some mechanisms for formally proving implementations of banking contracts and tax management).
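For a flavor of what these proof assistants actually check, here is a minimal Lean 4 sketch (purely illustrative, unrelated to the banking/tax work above): the file only compiles if the machine accepts the proof.

  -- Trivially machine-checked: the checker, not a human reviewer, verifies it.
  example : 2 + 2 = 4 := rfl

  -- A reusable lemma, discharged by a theorem from Lean's core library.
  theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b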

So there is lots to do in the domain. Sadly, like every branch of CS other than AI (and in fact pretty much every branch of science other than AI), this branch of computer science is underfunded. But that can change!

7 hours agoYoric

Considering how useful I've found AI at finding and fixing bugs in proportion to the effort I put in, I question your claim that it's underfunded. While I have learned things like Idris, in the end I was never able to practically use them to reduce bugs in the software I was writing, unlike AI. It's possible that the funding towards these types of languages is actually distracting people from more practical solutions, which could mean that it is actually overfunded with regard to program verification.

4 hours agocharcircuit

All these engineers who claim to write most code through AI - I wonder what kind of codebase that is. I keep trying, but it always ends up producing superficially okay-looking code while getting nuances wrong. It also fails to fix them (it just changes random stuff) when pointed at said nuances.

I work on a large product with two decades of accumulated legacy, maybe that's the problem. I can see though how generating and editing a simple greenfield web frontend project could work much better, as long as actual complexity is low.

12 hours agoblauditore

I have my best successes by keeping things constrained to method-level generation. Most of the things I dump into ChatGPT look like this:

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
     //TODO: Return the normalized Levenshtein distance between the 2 byte sequences.
     //... any additional edge cases here ...
  }
I think generating more than one method at a time is playing with fire. Individual methods can be generated by the LLM and tested in isolation. You can incrementally build up and trust your understanding of the problem space by going a little bit slower. If the LLM is operating over a whole set of methods at once, it is like starting over each time you have to iterate.
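For illustration, one plausible shape of the completed method (a sketch assuming the classic two-row dynamic-programming Levenshtein algorithm, normalized by the longer input; any given run of the model may of course produce something different):

  public static double ScoreItem(Span<byte> candidate, Span<byte> target)
  {
     // Classic two-row DP for Levenshtein distance.
     int n = candidate.Length, m = target.Length;
     if (n == 0 && m == 0) return 0.0;  // assumed edge case: two empty inputs score 0

     var prev = new int[m + 1];
     var curr = new int[m + 1];
     for (int j = 0; j <= m; j++) prev[j] = j;

     for (int i = 1; i <= n; i++)
     {
        curr[0] = i;
        for (int j = 1; j <= m; j++)
        {
           int cost = candidate[i - 1] == target[j - 1] ? 0 : 1;
           curr[j] = Math.Min(Math.Min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
        }
        (prev, curr) = (curr, prev);
     }

     // prev now holds the final row; normalize to [0, 1] by the longer length.
     return (double)prev[m] / Math.Max(n, m);
  }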
11 hours agobob1029

I do this but with copilot. Write a comment and then spam opt-tab and 50% of the time it ends up doing what I want and I can read it line-by-line before tabbing the next one.

Genuine productivity boost, but I don't feel like it's AI slop; sometimes it feels like it's actually reading my mind and just preventing me from having to type...

10 hours agosamdoesnothing

I've settled in on this as well for most of my day-to-day coding. A lot of extremely fancy tab completion, using the agent only for manipulation tasks I can carefully define. I'm currently in a "write lots of code" mode which affects that, I think. In a maintenance mode I could see doing more agent prompting. It gives me a chance to catch things early and then put in a correct pattern for it to continue forward with. And honestly, for a lot of tasks it's not particularly slower than the "ask it to do something, correct its five errors, tweak the prompt" workflow.

I've had net-time-savings with bigger agentic tasks, but I still have to check it line-by-line when it is done, because it takes lazy shortcuts and sometimes just outright gets things wrong.

Big productivity boost, it takes out the worst of my job, but I still can't trust it at much above the micro scale.

I wish I could give a system prompt for the tab complete; there's a couple of things it does over and over that I'm sure I could prompt away but there's no way to feed that in that I know of.

9 hours agojerf

It's architecture dependent. A fairly functional modular monolith with good documentation can be accessible to LLMs at the million line scale, but a coupled monolith or poorly instrumented microservices can drive agents into the ground at 100k.

12 hours agoCuriouslyC

I think it's definitely an interesting subject for Verification Engineering. The more precisely we can task AI to do work, the easier it is to check its work.

12 hours agoyuedongze

Yup. Codebase structure for agents is a rabbit hole I've spent a lot of time going down. The interesting thing is that it's mostly the same structure that humans tend to prefer, with a few tweaks: agents like smaller files/functions (more precise reads/edits), strongly typed functional programming, doc-comments with examples and hyperlinks to additional context, smaller directories with semantic subgroups, long/distinct variable names, etc.

11 hours agoCuriouslyC

Aren't those all things humans also tend to prefer to read?

I like to read descriptive variable names, I just don't like to write them all the time.

9 hours agolukan

I've tried it extensively, and have the same experience as you. AI is also incredibly stubborn when it wants to go down a path I reject. It constantly tries to do it anyway and will slip things in.

I've tried vibe coding and usually end up with something subtly or horribly broken, with excessive levels of complexity. Once it digs itself a hole, it's very difficult to extricate it even with explicit instruction.

9 hours agofreedomben

I think your intuition matches mine. When I try to apply Claude Code to a large code base, it spends a long time looking through the code and then it suggests something incorrect or unhelpful. It's rarely worth the trouble.

When I give AI a smaller or more focused project, it's magical. I've been using Claude Code to write code for ESP32 projects and it's really impressive. OTOH, it failed to tell me about a standard device driver I could be using instead of a community device driver I found. I think any human who works on ESP-IDF projects would have pointed that out.

AI's failings are always a little weird.

12 hours agohathawsh

Have you tried having AI build up documentation on the code first, correcting it where its understanding is wrong, and then running code changes with the docs in the context? You can even separate it out for each module if you are daring. AI still takes a lot of hand-holding to be productive with, which means our jobs are safe for now, until they start learning about SWE principles somehow.

2 hours agoseanmcdirmid

In large projects you need to actually point it to the interesting files, because it has no way of knowing what it doesn’t know. Tell it to read this and that, creating summary documents, then clear the context and point it at those summaries. A few of those passes and you‘ll get useful results. A gap in its knowledge of relevant code will lead to broken functionality. Cursor and others have been trying to solve this with semantic search (embeddings) but IMO this just can’t work because relevance of a code piece for a task is not determinable by any of its traits.

10 hours agomanmal

But in the end, do you feel that it has saved you time?

I find hand-holding Claude a permanent source of frustration, except in the rare case that it helps me discover an error in the code.

7 hours agoYoric

I‘ve had a similar feeling before Opus 4.5. Now it suddenly clicks with me, and it has passed the shittiness threshold, into the „often useful“ area. I suspect that’s because Apple is partnering with Anthropic and they will have improved Swift support.

Eg it‘s great for refactoring now, it’s often updating the README along with renames without me asking. It’s also really good at rebasing quickly, but only by cherry-picking inside a worktree. Churning out small components I don’t want to add a new dependency for, those are usually good on first try.

For implementing whole features, the space of possible solutions is way too big to always hit something that I‘ll be satisfied with. Once I have an idea on how to implement something in broad strokes, I can give a very error ridden first draft to it as a stream of thoughts, let it read all required files, and make an implementation plan. Usually that’s not too far off, and doesn’t take that long. Once that’s done, Opus 4.5 is pretty good at implementing that plan. Still I read every line, if this will go to production.

42 minutes agomanmal

Are you using it only on massive codebases? It's much better with smaller codebases where it can put most of the code in context.

Another good use case is to use it for knowledge searching within a codebase. I find that to be incredibly useful without much context "engineering"

12 hours agoqudat

I've generally had better luck when using it on new projects/repos. When working on a large existing repo it's very important to give it good context/links/pointers to how things currently work/how they should work in that repo.

Also - claude (~the best coding agent currently imo) will make mistakes, sometimes many of them - tell it to test the code it writes and make sure it's working. I've generally found it's pretty good at debugging/testing and fixing its own mistakes.

7 hours agowubrr

<1 year old startup with fullstack javascript monorepo. Hosted with a serverless platform with good devex, like cloudflare workers.

That’s the typical “claude code writes all my code” setup. That’s my setup.

This does require you to fit your problem to the solution. But when you do, the results are tremendous.

5 hours agorprend

> I work on a large product with two decades of accumulated legacy, maybe that's the problem.

I'm in a similar situation, and for the first time ever I'm actually considering if a rewrite to microservices would make sense, with a microservice being something small enough an AI could actually deal with - and maybe even build largely on its own.

8 hours agobojan

If you're creating microservices that are small enough for a current-gen LLM to deal with well, that means you're creating way too many microservices. You'll be reminiscing about your two decades of accumulated legacy monolith with fondness.

8 hours agovanviegen

> as long as actual complexity is low.

You can start there. Does it ever stay that way?

> I work on a large product with two decades of accumulated legacy

Survey says: No.

9 hours agothemafia

> I work on a large product with two decades of accumulated legacy, maybe that's the problem

Definitely. I've found Claude at least isn't so good at working in large existing projects, but great at greenfielding.

Most of my use these days is having it write specific functions and tests for them, which in fairness, saves me a ton of time.

12 hours agosilisili

You need to realize when you’re being marketed to and filter out the nonsense.

Now I use agentic coding a lot with maybe 80-90% success rate.

I’m on greenfield projects (my startup) and maintaining strict Md files with architecture decisions and examples helps a lot.

I barely write code anymore, and mostly code review and maintain the documentation.

In existing codebases pre-ai I think it’s near impossible because I’ve never worked anywhere that maintained documentation. It was always a chore.

9 hours agomoomoo11

Yes, unfortunately those who jumped on the microservices hype train over the past 15 years or so are now getting the benefits of Claude Code, since their entire codebase fits into the context window of Sonnet/Opus and can be "understood" by the LLM to generate useful code.

This is not the case for most monoliths, unless they are structured into LLM-friendly components that resemble patterns the models have seen millions of times in their training data, such as React components.

12 hours agotuhgdetzhh

Well structured monoliths are modularized just like microservices. No need to give each module its own REST API in order to keep it clean.

10 hours agomanmal

I guess that the benefit of monoliths in this context is that they (often) live in distinct repositories, which makes it easier for Claude to ingest them entirely, or at least not get lost looking at the wrong directory.

7 hours agoYoric

Honestly, if you've ever looked at a claude.md file, it seems like absolute madness. I feel like I'm reading affirmations from AA.

12 hours agocogman10

It’s magical incantations that might or might not protect you from bad behavior Claude learned from underqualified RL instructors. A classic instruction I have in CLAUDE.md is „Never delete a test. You are only allowed to replace it with a test that covers the same branches.“ and another one is „Never mention Claude in a commit message“. Of course those sometimes fail, so I do have a commit-message hook that enforces a certain style of git messages.

10 hours agomanmal

> „Never mention Claude in a commit message“. Of course those sometimes fail,

It’s hardcoded into the system prompt, which is why your CLAUDE.md approach fails. I ended up intercepting it via a proxy.

5 hours agoHavoc

Thanks for this idea!

12 minutes agomanmal

Can you prove it in a blog post and share it here, showing that you write better code than AI? If you claim it depends on "what kind of codebase", you should be able to use some codebase from GitHub to prove it.

9 hours agojunkaccount

The proliferation of nondeterministically generated code is here to stay. Part of our response must be more dynamic, more comprehensive and more realistic workload simulation and testing frameworks.

13 hours agogradus_ad

I disagree. I think we're testing it, and we haven't seen the worst of it yet.

And I think it's less about non-deterministic code (the code is actually still deterministic) but more about this new-fangled tool out there that finally allows non-coders to generate something that looks like it works. And in many cases it does.

Like a movie set. Viewed from the right angle it looks just right. Peek behind the curtain and it's all wood, thinly painted, and it's usually easier to rebuild from scratch than to add a layer on top.

12 hours agoOptionOfT

Exactly that.

I suspect that we're going to witness a (further) fork within developers. Let's call them the PM-style developers on one side and the system-style developers on the other.

The PM-style developers will be using popular loosely/dynamically-typed languages because they're easy to generate and they'll give you prototypes quickly.

The system-style developers will be using stricter languages and type systems and/or lots of TDD because this will make it easier to catch the generated code's blind spots.

One can imagine that these will be two clearly distinct professions with distinct toolsets.

7 hours agoYoric

I actually think that the direct usage of AI will reduce in the system-style group (if it was ever large there).

There is a non-trivial cost in taking apart the AI code to ensure it's correct, even with tests. And I think it's easy to become slower than writing it from scratch.

6 hours agoOptionOfT

I just wanted to say how much I like that simile - I'm going to nick it for sure

9 hours agoAngostura

Agreed. It's a new programming paradigm that will put more pressure on API and framework design, to protect vibe developers from themselves.

9 hours agoglitchc

Code has always been nondeterministic. Which engineer wrote it? What was their past experience? This just feels like we are accepting subpar quality because we have no good way to ensure the code we generate is reasonable and won't mayyyybe rm -rf our server as a fun easter egg.

10 hours agowasmainiac

Code written by humans has always been nondeterministic, but generated code has always been deterministic before now. Dealing with nondeterministically generated code is new.

9 hours agomort96

Determinism vs. nondeterminism is not, and has never been, the issue. Also, all LLMs are 100% deterministic; what is non-deterministic are the sampling parameters used by the inference engine, which by the way can easily be made 100% deterministic by simply turning off things like batching. This is a matter for cloud-based API providers, since you as the end user don't have access to the inference engine; if you run any of your models locally in llama.cpp, turning off some server startup flags will get you deterministic results. Cloud-based API providers have no choice but to keep batching on, as they are serving millions of users and wasting precious VRAM slots on a single user is wasteful and stupid. See my code and video as evidence if you want to run any local LLM 100% deterministically: https://youtu.be/EyE5BrUut2o?t=1

6 hours agonowittyusername

That's not an interesting difference, from my point of view. The black box we all use is non-deterministic, period. It doesn't matter where on the inside the system stops being deterministic: if I hit the black box twice, I get two different replies. And that doesn't even matter, which you also said.

The more important property is that, unlike compilers, type checkers, linters, verifiers and tests, the output is unreliable. It comes with no guarantees.

One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non deterministic. All true, but the rate of failure, measured in orders of magnitude, is vastly different.

2 hours agonazgul17

My man, did you even check my video, did you even try the app? This is not bug related; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turned off batch processing, all LLM API calls would be 100% deterministic, but it would cost them more money to provide the service, as now you are stuck with one API call per GPU. "if I hit the black box twice, I get two different replies" - what you are saying here is 100% verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are non-deterministic. LLMs are stateless: their weights are frozen, you never "run" an LLM, you can only sample it, just like a hologram. And the inference sampling settings you use are what determine the outcome.

2 hours agonowittyusername

Nondeterministic isn't the right word because LLM outputs are deterministic and the tokens created from those outputs can also be deterministic.

7 hours agoenergy123

I agree that non-deterministic isn't the right word, because that's not the property we care about, but unless I'm strongly missing something LLM outputs are very much non-deterministic, both during the inference itself and when projecting the embeddings back into tokens.

7 hours agoYoric

I agree it isn't the main property we care about, we care about reliability.

But at least in its theoretical construction the LLM should be deterministic. It outputs a fixed probability distribution across tokens with no rng involvement.

We then sample from that fixed distribution non-deterministically for better performance or we use greedy decoding and get slightly worse performance in exchange for full determinism.
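A toy sketch of that distinction (made-up numbers, nothing model-specific): greedy decoding is a pure argmax and returns the same token every time, while sampling draws from the same fixed distribution and can differ between runs.

  // Toy per-step token distribution; in a real system this comes from the model.
  double[] probs = { 0.05, 0.70, 0.20, 0.05 };
  var rng = new Random();

  Console.WriteLine($"greedy: {Greedy(probs)}, sampled: {Sample(probs, rng)}");

  // Deterministic: same distribution in, same token out.
  int Greedy(double[] p)
  {
     int best = 0;
     for (int i = 1; i < p.Length; i++)
        if (p[i] > p[best]) best = i;
     return best;
  }

  // Non-deterministic: the chosen token depends on the random draw.
  int Sample(double[] p, Random random)
  {
     double r = random.NextDouble(), acc = 0;
     for (int i = 0; i < p.Length; i++)
     {
        acc += p[i];
        if (r < acc) return i;
     }
     return p.Length - 1;
  }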

Happy to be corrected if I am wrong about something.

6 hours agoenergy123

I've seen a lot of startups that use AI to QA human work. How about the idea of using humans to QA AI work? A lot of interesting things might follow.

13 hours agoyuedongze

This feels a lot like the "humans must be ready at any time to take over from FSD" that Tesla is trying to push. With presumably similar results.

If it works 85% of the time, how soon do you catch that it is moving in the wrong direction? Are you having a standup every few minutes for it to review (edit) its work with you? Are you reviewing hundreds of thousands of lines of code every day?

It feels a bit like pouring cement or molten steel really fast: at best, it works, and you get things done way faster. Get it just a bit wrong, and your work is all messed up, as well as a lot of collateral damage. But I guess if you haven't shipped yet, it's ok to start over? How many different respins can you keep in your head before it all blends?

7 hours agohn_acc1
[deleted]
12 hours ago

A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that AI does. If a median software developer job paid $125k before, it'll shift to $65k-$85k type AI babysitting work after.

12 hours agoadventured

It's funny that I heard exactly this when I graduated university in the late 2000s:

> A large percentage (at least 50%) of the market for software developers will shift to lower paid jobs focused on managing, inspecting and testing the work that outsourced developers do. If a median software developer job paid $125k before, it'll shift to $65k-$85k type outsourced developer babysitting work after.

12 hours agomjr00

No thanks.

13 hours ago__loam

Sounds inhuman.

13 hours agoAldipower

As an industry, we've been doing the same thing to people in almost every other sector of the workforce, since we began. Automation is just starting to come for us now, and a lot of us are really pissed off about it. All of a sudden, we're humanitarians.

13 hours agoquantummagic

> Automation is just starting to come for us now

This argument is common and facile: Software development has always been about "automating ourselves out of a job", whether in the broad sense of creating compilers and IDEs, or in the individual sense that you write some code and say: "Hey, I don't want to rewrite this again later, not even if I was being paid for my time, I'll make it into a reusable library."

> the same thing

The reverse: What pisses me off is how what's coming is not the same thing.

Customers are being sold a snake-oil product, and its adoption may well ruin things we've spent careers de-crappifying by making them consistent and repeatable and understandable. In the aftermath, some portion of my (continued) career will be diverted to cleaning up the lingering damage from it.

11 hours agoTerr_

Nah, sounds like management, but I am repeating myself. In all seriousness, I have found myself having to carefully rein in some similar decisions. I don't want to get into details, but there are times I wonder if they understand how things really work, or if people need some 'floor' level exposure before they just decree stuff.

13 hours agoA4ET8a8uTh0_v2

Yes, but not like what you think. Programmers are going to look more like product managers with extra technical context.

AI is also great at looking for its own quality problems.

Yesterday on an entirely LLM generated codebase

Prompt: > SEARCH FOR ANTIPATTERNS

Found 17 antipatterns across the codebase:

And then what followed was a detailed list, about a third of them I thought were pretty important, a third of them were arguably issues or not, and the rest were either not important or effectively "this project isn't fully functional"

As an engineer, I didn't have to find code errors or fix code errors, I had to pick which errors were important and then give instructions to have them fixed.

12 hours agocolechristensen

> Programmers are going to look more like product managers with extra technical context.

The limit of product manager as "extra technical context" approaches infinity is programmer. Because the best, most specific way to specify extra technical context is just plain old code.

12 hours agomjr00

This is exactly why no code / low code solutions don’t really work. At the end of the day, there is irreducible technical complexity.

10 hours agoLPisGood

Yeah, don‘t rely on the LLM finding all the issues. Complex code like Swift concurrency tooling is just riddled with issues. I usually need to increase to 100% line coverage and then let it loop on hanging tests until everything _seems_ to work.

(It’s been said that Swift concurrency is too hard for humans as well though)

10 hours agomanmal

I don't trust programmers to find all the issues either and in several shops I've been in "we should have tests" was a controversial argument.

A good software engineering system built around the top LLMs today is definitely competitive in quality with a mediocre software shop, and 100x faster and 1000x cheaper.

7 hours agocolechristensen

> “AI always thinks and learns faster than us, this is undeniable now”

No, it neither thinks nor learns. It can give an illusion of thinking, and an AI model itself learns nothing. Instead it can produce a result based on its training data and context.

I think it important that we do not ascribe human characteristics where not warranted. I also believe that understanding this can help us better utilize AI.

37 minutes agoaryehof
[deleted]
44 minutes ago

I remember a junior dev who thought he was done when his code compiled without syntax errors.

5 hours agoadxl

> Maybe our future is like the one depicted in Severance - we look at computer screens with wiggly numbers and whatever “feels right” is the right thing to do. We can harvest these effortless low latency “feelings” that nature gives us to make AI do more powerful work.

Come to think about it... isn't this exactly what syntax coloring and proper indentation are all about? The ability to quickly pattern-spot errors, or at least smells, based on nothing but aesthetics?

I'm sure that there is more research to be done in this direction.

7 hours agoYoric

Our future is going to be like the one in Severance: no creation, no craftsmanship, just grooming the grids of numbers that do the actual work.

4 hours agobitwize

This feeling of verification >> generation anxiety bears a resemblance to that moment when you're learning a foreign language, you speak a well-prepared sentence, and your correspondent says something back, of which you only understand about a third.

In like fashion, when I start thinking of a programming statement (as a bad/rookie programmer) and an assistant completes my train of thought (as is default behaviour in VS Code for example), I get that same feeling that I did not grasp half the stuff I should've, but nevertheless I hit Ctrl-Return because it looks about right to me.

9 hours agokristjank

> because it looks about right to me

This is something one can look into further. It is really probabilistically checkable proofs underneath: we are naturally looking for the places where it needs to look right, and using that as a basis for assuming the work was done right.
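To make that concrete: if a single random spot-check catches a given flaw with probability s, then k independent spot-checks all miss it with probability (1 - s)^k, so even fairly weak checks compound quickly into a low soundness error.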

8 hours agoyuedongze

I've been thinking about something like this from a UI perspective. I'm a UX designer working on a product with a fairly legacy codebase. We're vibe coding prototypes and moving towards making it easier for devs to bring in new components. We have a hard enough time verifying the UI quality as it is. And having more devs vibing on frontend code is probably going to make it a lot worse. I'm thinking about something like having agents regularly traversing the code to identify non-approved components (and either fixing or flagging them). Maybe with this we won't fall further behind with verification debt than we already are.
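Roughly what I have in mind, as a plain static-analysis pass (the approved list, directory layout, and file extension here are made up for illustration; an agent version would just be this with judgment attached):

  // Flag any JSX component used in the frontend that isn't on the approved list.
  using System.Text.RegularExpressions;

  var approved = new HashSet<string> { "Button", "Card", "Modal", "TextField" };
  var jsxTag = new Regex(@"<([A-Z][A-Za-z0-9]*)");   // JSX component tags start uppercase

  foreach (var file in Directory.EnumerateFiles("src", "*.tsx", SearchOption.AllDirectories))
     foreach (Match m in jsxTag.Matches(File.ReadAllText(file)))
        if (!approved.Contains(m.Groups[1].Value))
           Console.WriteLine($"{file}: non-approved component <{m.Groups[1].Value}>");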

6 hours agopglevy

That’s just static code analysis with extra steps

3 hours agocousinbryce

[dead]

6 hours agohuflungdung

Make it do TDD. That'll slow it down.

2 hours agogijoeyguerra

Make it do scrum with sprint planning, retrospectives and sprint demos. And then add another AI as product owner and scrum master. Ideally this AI has only a vague idea of what the product needs to do or of the technology, but still has decision power. That should really slow it down.

an hour agovjvjvjvjghv
[deleted]
8 hours ago

I think there's a lot of utility to current AI tools, but it's also clear we're in a very unsettled phase of this technology. We likely won't see for years where the technology lands in terms of capability or the changes that will be made to society and industry to accommodate.

Somewhat unfortunately, the sheer amount of money being poured into AI means that it's being forced upon many of us, even if we didn't want it. Which results in a stark, vast gap like the author is describing, where things are moving so fast that it can feel like we may never have time to catch up.

And what's even worse, because of this industry and individuals are now trying to have the tool correct and moderate itself, which intuitively seems wrong from both a technical and societal standpoint.

12 hours agoyannyu

The verification asymmetry framing is good, but I think it undersells the organizational piece.

Daniel works because someone built the regime he operates in. Platform teams standardized the patterns and defined what "correct" looks like and built test infrastructure that makes spot-checking meaningful and and and .... that's not free.

Product teams are about to pour a lot more slop into your codebase. That's good! Shipping fast and messy is how products get built. But someone has to build the container that makes slop safe, and have levers to tighten things when context changes.

The hard part is you don't know ahead of time which slop will hurt you. Nobody cares if product teams use deprecated React patterns. Until you're doing a migration and those patterns are blocking 200 files. Then you care a lot.

You (or rather, platform teams) need a way to say "this matters now" and make it real. There's a lot of verification that's broadly true everywhere, but there's also a lot of company-scoped or even team-scoped definitions of "correct."

(Disclosure: we're working on this at tern.sh, with migrations as the forcing function. There's a lot of surprises in migrations, so we're starting there, but eventually, this notion of "organizational validation" is a big piece of what we're driving at.)

12 hours agotrjordan

Verification is key, and the issue is that almost all AI generated code looks plausible so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios that you want to ensure work to be confident in the results. This can be preview deployments or other AI generated end to end tests that produce video output that you can watch or just a very good test suite with guard rails.

Without such automation and guard rails, AI generated code eventually becomes a burden on your team because you simply can't manually verify every scenario.
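One small sketch of what such a guard rail can look like (xUnit assumed; the PREVIEW_BASE_URL variable and the /signup and /login routes are placeholders, not a real system): the point is that the scenario runs against the deployed thing, not against a reading of the diff.

  using System.Net;
  using System.Net.Http.Json;
  using Xunit;

  public class SignupScenario
  {
     // Points at a preview deployment if one is configured, else a local run.
     private static readonly HttpClient Client = new()
     {
        BaseAddress = new Uri(Environment.GetEnvironmentVariable("PREVIEW_BASE_URL")
                              ?? "http://localhost:5000")
     };

     [Fact]
     public async Task NewUserCanSignUpAndLogIn()
     {
        var signup = await Client.PostAsJsonAsync("/signup",
           new { email = "test@example.com", password = "hunter2!" });
        Assert.Equal(HttpStatusCode.Created, signup.StatusCode);

        var login = await Client.PostAsJsonAsync("/login",
           new { email = "test@example.com", password = "hunter2!" });
        Assert.Equal(HttpStatusCode.OK, login.StatusCode);
     }
  }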

12 hours agojascha_eng

Indeed, I see verification debt outweighing traditional tech debt very, very soon...

12 hours agoyuedongze

I would rather write the code and have AI write the tests :)

And I have on occasion found it useful.

8 hours agojopsen

And with any luck, they don't vibe code their tests that ultimately just return true;

12 hours agobigbuppo

I can automatically generate suites of plausible tests using Claude Code.

If you can make as a rule "no AI for tests", then you can simply make the rule "no AI" or just learn to cope with it.

12 hours agocatigula

AI can really only be as good as the data it's trained on. It's good at images because it's trained on billions of them. Lines of code, probably hundreds of millions, but as you combine that code into concepts, split by language, framework, formatting, etc., you lose the numbers game. It can't tell you how to make a good enterprise app because almost nobody knows how to make a good enterprise app - just ask Oracle… ba-da-bum!

6 hours agodiddid

AI: urgh, sick of these escort missions

3 hours agodarylteo

So, we're giving up on the Singularity, then?...Good.

6 hours agoambicapter

It’s called TDD: ya write a bunch of little tests to make sure your code is doing what it needs to do and not what it’s not. In short, little blocks of easily verifiable code to verify your code.

But seriously, what is this article even? It feels like we are reinventing the wheel or maybe just humble AI hype?

10 hours agowasmainiac

It's like a buffered queue: if the producer (AI) is too fast for the consumer (dev's brain) then the producer needs to block/stop/slow down, otherwise data will be lost (in this analogy the data loss is the consumer no longer having a clear understanding of what the code is doing).
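A minimal sketch of that analogy as an actual bounded queue (illustrative only; the sleep stands in for the human actually reading the change):

  using System.Collections.Concurrent;

  var queue = new BlockingCollection<string>(boundedCapacity: 3);

  // Producer (the AI): blocks as soon as 3 unreviewed changes are pending.
  var producer = Task.Run(() =>
  {
     for (int i = 0; i < 10; i++)
        queue.Add($"change {i}");
     queue.CompleteAdding();
  });

  // Consumer (the dev's brain): slower, but nothing is lost while it catches up.
  foreach (var change in queue.GetConsumingEnumerable())
  {
     Thread.Sleep(200);
     Console.WriteLine($"reviewed {change}");
  }

  await producer;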

One day, when AI becomes reliable (which is still a while off because AI doesn't yet understand what it's doing) then the AI will replace the consumer (IMO).

FTR - AI is still at the "text matches another pattern of text" stage, and not the "understand what concepts are being conveyed" stage, as demonstrated by AI's failure to do basic arithmetic

10 hours agoawesome_dude

> AI should only run as fast as we can catch up

Good principle. This is exactly why we research vaccines and bioweapons side by side in the labs, for example.

13 hours agoCGMthrowaway

Appealing, but this is coming from someone smart/thoughtful. No offence to 'rest of world', but I think that most people have felt this way for years. And realistically in a year, there won't be any people who can keep up.

13 hours agorogerkirkness

> And realistically in a year, there won't be any people who can keep up.

I've heard the same claim every year since GPT-3.

It's still just as irrational as it was then.

12 hours agodontlikeyoueith

You're rather dramatically demonstrating how remarkable the progress has been: GPT-3 was horrible at coding. Claude Opus 4.5 is good at it.

They're already far faster than anybody on HN could ever be. Whether it takes another five years or ten, in that span of time nobody on HN will be able to keep up with the top tier models. It's not irrational, it's guaranteed. The progress has been extraordinary and obvious, the direction is certain, the outcome is certain. All that is left is to debate whether it's a couple of years or closer to a decade.

12 hours agoadventured

And there's the same empty headed certainty, extrapolating a sigmoid into an exponential.

6 hours agodontlikeyoueith

I can tell you don't control any resources relating to AI from your contempt alone

5 hours agorogerkirkness

People claimed GPT-3 was great at coding when it launched. Those who said otherwise were dismissed. That has continued to be the case in every generation.

12 hours agoArainach

Are you saying the current models are not good at coding? That is a strong claim.

7 hours agoesafak

For brand new projects? Perhaps. For working with existing projects in large code bases? Still not living up to the hype. Still sick of explaining to leadership that they're not magic and "agentic" isn't magic either. Still sick of everyone not realizing that if you made coding 300% faster (which AI hasn't) that doesn't help when coding is less than half the hours of my week. Still sick of the "productivity gains" being subsidized by burning out competent code reviewers calling bullshit on things that don't work or will cause problems down the road.

an hour agoArainach

A bit reductive.

9 hours agodwaltrip

> People claimed GPT-3 was great at coding when it launched.

Ok and they were wrong, but now people are right that it is great at coding.

> That has continued to be the case in every generation.

If something gets better over time, it is definitionally true that it was bad for every case in the past until it becomes good. But then it is good.

That's how that works. For everything. You are talking in tautologies while not understanding the implication of your arguments and how it applies to very general things like "A thing that improves over time".

8 hours agostale2002

Why is the outcome certain? We have no way of predicting how long models will continue getting better before they plateau.

10 hours agoumanwizard

They continue to improve significantly year over year. There's no reason to think we're near a plateau in this specific regard.

The bottom 50% of software jobs in the US are worth somewhere around $200-$300 billion per year (salary + benefits + recruiting + training/education), one trillion dollars every five years minimum. That's the opportunity. It's beyond gigantic. They will keep pursuing the elimination of those jobs until it's done. It won't take long from where we're at now, it's a 3-10 year debate, rather than a 10-20 year debate. And that's just the bottom 50%, the next quarter group above that will also be eliminated over time.

$115k + $8-12k healthcare + stock + routine operating costs + training + recruitment. That's the ballpark median two years ago. Surveys vary, from BLS to industry, two to four million software developers, software engineers, so on and so forth. Now eliminate most of them.

Your AI coding agent circa 2030 will work 24/7. It has a superior context to human developers. It never becomes emotional or angry or crazy. It never complains about being tired. It never quits due to working conditions. It never unionizes. It never leaves work. It never gets cancer or heart disease. It's not obese, it doesn't have diabetes. It doesn't need work perks. It doesn't need time off for vacations. It doesn't need bathrooms. It doesn't need to fit in or socialize. It has no cultural match concerns. It doesn't have children. It doesn't have a mortgage. It doesn't hate its bosses. It doesn't need to commute. It gets better over time. It only exists to work. It is the ultimate coding monkey. Goodbye human.

7 hours agoadventured

Amazing how much investment has gone mostly towards eliminating one job category; ironically, what was meant to be the job of the future: "learn to code". To be honest, on the current trajectory I'm always amazed how many SWEs think it is "enabling" or will be anything other than this in the long term. I personally don't recommend anyone into this field anymore, especially when big money sees this as the next disruption to invest in and has bet in the opposite direction investment/market-wise. Amazing what was just a chatbot 3 years ago will do to a large number of people w.r.t. unemployment and potential poverty; didn't appreciate it at the time.

Life/fate does have a sense of irony it seems. I wouldn't be surprised if it is just the "creative" industries that die; and normal jobs that provide little value today still survive in some form - they weren't judged on value delivered and still existed after all.

an hour agothrow234234234

> And realistically in a year, there won't be any people who can keep up.

Bold claim. They said the same thing at the start of this year.

13 hours agoairstrike

You're all arguing over how many single digit years it'll take at this point.

It doesn't matter if it takes another 12 or 36 months to make that claim true. It doesn't matter if it takes five years.

Is AI coming for most of the software jobs? Yes it is. It's moving very quickly, and nothing can stop it. The progress has been particularly exceptionally clear (early GPT to Gemini 3 / Opus 4.5 / Codex).

12 hours agoadventured

I'm hoping this can introduce a framework to help people visualize the problem and figure out a way to close that gap. Image generation is something everyone can verify, but code generation is perhaps not. But if we can make verifying code as effortless as verifying images (not saying it's possible), then our productivity can enter the next level...

13 hours agoyuedongze

I think you're underestimating how good these image generators are at the moment.

13 hours agodrlobster

Oh, I mean the other direction! Checking whether a generated image is "good" - that no one will be able to tell something is off and it looks natural - rather than checking whether it's fake.

13 hours agoyuedongze

I directly asked gemini how to get world peace. It said the world should prioritize addressing climate change, inequality, and discrimination. Yeah - we're not gonna do any of that shit. So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it for the basic big picture stuff. Any sort of "utopia" that people imagine AI bringing is doomed to fail because we already can't cooperate without AI

12 hours agocons0le

> I don't know what the point of "super intelligent" AI is if we aren't going to even listen to it

Because you asked the wrong question. The most likely question would be "How do I make a quadrillion dollars and humiliate my super rich peers?".

But realistically, it gave you an answer according to its capacity. A real super intelligent AI, and I mean oh-god-we-are-but-insects-in-its-shadow super intelligence, would give you a roadmap and blueprint, and it would take into account our deep-rooted human flaws, so no one reading it seriously could dismiss it as superficial. In fact, any world elite reading it would see it as a chance to humiliate their world elite peers and get all the glory for themselves.

You know how adults can fool little children to do what they don't want to? We would be the toddlers in that scenario. I hope this hypothetical AI has humans in high regard, because that would be the only thing saving us from ourselves.

12 hours agoASalazarMX

The blueprint should start with a recipe for building a better computer, and once you do that, well, it's humans starting fires and playing with the flames.

12 hours agovkou

Why would a "real super intelligent AI" be your servant in this scenario?

>I hope this hypothetical AI has humans in high regard

This is invented. This is a human concept, rooted in your evolutionary relationships with other humans.

It's not your fault, it's very difficult or impossible to escape the simulation of human-ly modelling intelligence. You need only understand that all of your models are category errors.

12 hours agocatigula

> Why would a "real super intelligent AI" be your servant in this scenario?

Why is the Bagger 288 a servant to miners, given the unimaginable difference in their strength? Because engineers made it. Give humanity's wellbeing the highest weight in its training, and hope it carries over when they start training on their own.

12 hours agoASalazarMX

Category error. Intelligence is a different type of thing. It is not a boring technology.

>Give humanity's wellbeing the highest weight on its training

We don't even know how to do this relatively trivial thing. We only know how to roughly train for some signals that probably aren't correct.

This may surprise you but alignment is not merely unsolved; there are many people who think it's unsolvable.

Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.

Natural selection would be very angry with us if it knew we didn't care about what it wanted.

11 hours agocatigula

> Why do people eat artificially sweetened things? Why do people use birth control? Why do people watch pornography? Why do people do drugs? Why do people play video games? Why do people watch moving lights and pictures? These are all symptoms of humans being misaligned.

I think these behaviors are fully aligned with natural selection. Why do we overengineer our food? It's not for health, because simpler food would satisfy our nutritional needs as easily, it's because our far ancestors developed a taste for food that kept them alive longer. Our incredibly complex chain of meal preparation is just us looking to satisfy that desire for tasty food by overloading it as much as possible.

People prefer artificial sweeteners because they taste sweeter than regular ones, they use birth control because we inherently enjoy sex and want more of it (but not more raising babies), drugs are an overloading of our need for happiness, etc. Our bodies crave things, and uninformed, we give them what they want but multiplied several fold.

But geez, I agree, alignment of AI is a hard problem, but it would be wrong to say it's impossible, at least until it's understood better.

9 hours agoASalazarMX

It seems like you don’t understand reinforcement learning. The signal is reinforced because it correlates to behavior, hacking the signal itself is misalignment.

8 hours agocatigula

Did you expect some answer that decried world peace as impossible? It's just repeating what people say [0] when asked the same question. That's all that a large language model can do (other than putting it to rhyme or 'in the style of Charles Dickens').

[0] https://newint.org/features/2018/09/18/10-steps-world-peace

If you are looking for a vision of general AI that confirms a Hobbsian worldview, you might enjoy Lars Doucet's short story, _Four Magic Words_.

[1] https://www.fortressofdoors.com/four-magic-words/

12 hours agoNzen

> So I don't know what the point of "superintelligent" AI is if we aren't going to even listen to it

I would kind of feel sorry for a super-intelligent AI having to deal with humans who have their fingers on the on/off switch. It would be a very frustrating existence.

11 hours agochasd00

I dunno, many people have that weird, unfounded trust in what AI says, more than in actual human experts it seems

12 hours agoPunchyHamster

Because AI, or rather, an LLM, is the consensus of many human experts as encoded in its embedding. So it is better, but only for those who are already expert in what they're asking.

The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding. If you don't, you'll just get bunk. (I know it's popular to call AI bunk "hallucinations" these days, but really if it was being spouted by a half wit human we'd just call it "bunk".)

So you really have to be an expert in order to maximize your use of an LLM. And even then, you'll only be able to maximize your use of that LLM in the field in which your expertise lies.

A programmer, for instance, will likely never be able to ask a coherent enough question about economics or oncology for an LLM to give a reliable answer. Similarly, an oncologist will never be able to give a coherent enough software specification for an LLM to write an application for him or her.

That's the achilles heel of AI today as implemented by LLMs.

12 hours agobilbo0s

> The problem is, you have to know enough about the subject on which you're asking a question to land in the right place in the embedding

The other day i was on a call with 3 or 4 other people solving a config problem in a specific system. One of them asked chatgpt for the solution and got back a list of configuration steps to follow. He started the steps but one of them mentioned configuring an option that did not exist in the system at all. Textbook hallucination. It was obvious on the call that he was very surprised that the AI would give him an incorrect result, he was 100% convinced the answer was what the LLM said and never once thought to question what the LLM returned.

I've had a couple of instances with friends being equally shocked when an LLM turned out to be wrong. One of them was fairly disturbing: I was at a horse track describing LLMs, and to demonstrate I took a picture of the racing form and asked the LLM to formulate a medium-risk betting strategy. My friend immediately took it as some kind of supernatural insight and bet $100 on the plan it came up with. It was as if he believed the LLM could tell the future. Thank god it didn't work and he lost about $70. Had he won, I don't know what would have happened; he probably would have asked again and bet everything he had.

11 hours agochasd00

> is the consensus of many human experts as encoded in its embedding

That’s not true.

12 hours agojackblemming

Yup, current LLMs are trained on the best and the worst we can offer. I think there's value in training smaller models with strictly curated datasets, to guarantee they've learned from trustworthy sources.

12 hours agoASalazarMX

> to guarantee they've learned from trustworthy sources.

i don't see how this will every work. Even in hard science there's debate over what content is trustworthy and what is not. Imagine trying to declare your source of training material on religion, philosophy, or politics "trustworthy".

11 hours agochasd00

"Sir, I want an LLM to design architecture, not to debate philosophy."

But really, you leave the curation to real humans, institutions with ethical procedures already in place. I don't want Google or Elon dictating what truth is, but I wouldn't mind if NASA or other aerospace institutions dictated what is true in that space.

Of course, the dataset should have a list of every document/source used, so others can audit it. I know, unthinkable in this corporate world, but one can dream.

9 hours agoASalazarMX

"How to be in good health? Sleep, eat well, exercise." However, knowledge ≠ application.

12 hours agocranium

I don't believe that this is going to happen, but the primary arguments revolving around a "super intelligent" ai involve removing the need for us to listen to it.

A super intelligent ai would have agency, and when incentives are not aligned would be adversarial.

In the caricature scenario, we'd ask, "super ai, how to achieve world peace?" It would answer the same way, but then solve it in a non-human centric approach: reducing humanities autonomy over the world.

Fixed: anthropogenic climate change resolved, inequality and discrimination reduced (by reducing population by 90%, and putting the rest in virtual reality)

12 hours agopotsandpans

If our AIs achieve something like this, and we managed to give them the same values the Minds in Iain Banks's Culture series had, I think humanity would be golden.

12 hours agoASalazarMX

> Any sort of "utopia" that people imagine AI bringing is doomed to fail because we already can't cooperate without AI

It's just fanfiction. They're just making up stories in their heads based on blending sci-fi they've read or watched in the past. There's no theory of power, there's no understanding of history or even the present, it's just a bad Star Trek episode.

"Intelligence" itself isn't even a precise concept. The idea that a "superintelligent" AI is intrinsically going to be obsessed with juvenile power fantasies is just silly. An AI doesn't want to enslave the world, run dictatorial experiments born of childhood frustrations and get all the girls. It doesn't want anything. It's purposeless. Its intelligence won't even be recognized as intelligence if its suggestions aren't pleasing to the powerful. They'll keep tweaking it to keep it precisely as dumb as they themselves are.