
Has the cost of building software dropped 90%?

The cost of writing simple code has dropped 90%.

If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.

Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.

8 hours agosimonw

Most software work is maintaining "legacy" code, that is, older systems that have been around for a long time and get a lot of use. I find Claude Code in particular is great at grokking old code bases and making changes to them. I work on one of those old code bases and my productivity increased 10x, mostly due to Claude Code's ability to research a large code base, make sense of it, answer questions, and make careful surgical changes to it. It also helps with testing and debugging, which is a huge productivity boost. It's not about its ability to churn out lots of code quickly: it's an extra set of eyes/brain that works much faster than a human developer.

8 hours agoloandbehold

I've found this as well. In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" or "where would you look to resolve this bug" or "give me an overview of how this process works" is amazingly helpful.

5 hours agozmmmmm

> In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" [...]

Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

Nitpicks aside, I agree that contemporary AIs can be great for quickly getting up to speed with a code base. Both a new library or language you want to be using, and your own organisation's legacy code.

One of the biggest advantages of using an established ecosystem was that Stack Overflow had a robust repository of already-answered questions (and you could also buy books on it). With AI you can immediately cook up your own Stack Overflow equivalent that provides answers promptly instead of closing your question as off-topic.

And I pick Stack Overflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment. This will change gradually as the models get better, just like Stack Overflow required less expertise to use than attending a university course. (And a university course requires less expertise than coming up with QuickSort in the first place.)

4 hours agoeru

> Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

Not in my case (I never used SO like that, anyway). I use it almost exactly like SO, except much more quickly and interactively (and without the inference that I’m “lazy,” or “stupid,” for not already knowing the answer).

I have found that ChatGPT gives me better code than Claude (I write Swift); even learning my coding and documentation style.

I still need to review all the code it gives me, and I have yet to use it verbatim, but it’s getting close.

The most valuable thing is that I get an error, and I can ask it, "Here's the symptoms and the code. What do you think is going on?" It usually gives me a good starting point.

I could definitely figure it out on my own, but it might take half an hour. ChatGPT will give me a solid lead in about half a minute.

2 hours agoChrisMarshallNY

The problem is most likely not writing the actual code, but rather understanding an old, fairly large codebase and how it’s stitched together.

SO is (was?) great when you were thinking about how nicely a recursive reduce function could replace the mess you'd just cobbled together, but language X just didn't yet flow naturally for you.

37 minutes agojordanbeiber

> not reliable enough to use blindly

I've been building things with Claude while looking at say less than 5% of the code it produces. What I've built are tools I want to use myself and... well they work. So somebody can say that I can't do it, but on the other hand I've wanted to build several kinds of ducks and what I've built look like ducks and quack like ducks so...

I've found it's a lot better at evaluating code than producing it, so what you do is tell it to write some code, then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important. That is a much different flow than going on an expedition to find an SO solution to an obscure problem.

A good quality metric for your code is to ask an LLM to find the ten worst things about it; if all of those are stupid, then your code is pretty good. I did this recently on a codebase, and its number-one complaint was that the name I had chosen was stupid and confusing (which it was; I'm not explaining the joke to a computer), and that was my sign that it was done finding problems and it was time to move on.
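For illustration only (this is not the commenter's actual setup), a minimal sketch of that "write, critique, fix" loop using the Anthropic TypeScript SDK; the model name and prompt wording are assumptions:

```
// Illustrative sketch: generate code, ask for the worst problems, then ask for
// a fix of only the valid ones. Model name and prompts are placeholders.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment
const MODEL = "claude-sonnet-4-20250514"; // assumption: use whatever model you have access to

async function ask(prompt: string): Promise<string> {
  const msg = await client.messages.create({
    model: MODEL,
    max_tokens: 4096,
    messages: [{ role: "user", content: prompt }],
  });
  // Join the text blocks of the response into a single string.
  return msg.content.flatMap((b) => (b.type === "text" ? [b.text] : [])).join("");
}

export async function critiqueAndFix(code: string): Promise<string> {
  const critique = await ask(
    `List the ten worst things about this code, most serious first:\n\n${code}`
  );
  // Ask for a revision that addresses only the valid, important points.
  return ask(
    "Here is some code and a critique of it. Fix only the points that are valid " +
      "and important; leave everything else alone. Return the full revised file.\n\n" +
      `CODE:\n${code}\n\nCRITIQUE:\n${critique}`
  );
}
```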

44 minutes agocolechristensen

This is great. Asking questions of library code is a big pattern of mine too.

Here's an example I saw on twitter. Asking an LLM to document a protocol from the codebase:

https://ampcode.com/threads/T-f02e59f8-e474-493d-9558-11fddf...

Do you think you will be able to capture any of this extra value? I think I'm faster at coding, but the overall corporate project timeline feels about the same. I feel more relaxed and confident that the work can be done. Not sure how to get a raise out of this.

8 hours agonuclearnicer

For me, as a remote developer, it means I'm able to finish my work in 1 hour instead of 8. So I'm able to capture "extra value" in the form of time. On our team everyone uses GitHub Copilot, while I use Claude Code. My teammates' productivity increased slightly, but my productivity increased a lot. This is because (1) Claude Code is just a better coding agent and (2) I invested time to get good at agentic coding. Eventually Copilot will catch up, and management will realize that 1 developer can now do what previously took a whole team.

7 hours agoloandbehold

I'm really curious what your role is and which industry you're in. I'm awed by these productivity gains others report, but I feel like AI helps with such a small part of my job (implementing specific changes as I direct).

Agentic workflows for me result in bloated code, which is fine when I'm willing to hand over a subsystem to the agent, such as the frontend of a side project, and have it vibe code the entire thing. Trying to get clean code erases all or most of my productivity gains, and doesn't spark joy. I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts. An agent can easily switch between using Newton-Raphson and bisection when asked to refactor unrelated arguments, which a human colleague wouldn't do after a code review.

6 hours agooverfeed

I've come to the same conclusion: If you just want a huge volume of code written as fast as possible, and don't care about 1. how big it is, 2. how fast it runs, 3. how buggy it is, 4. how maintainable or understandable it is, or 5. the overall craftsmanship and artistry of it, then you're probably seeing huge productivity gains! And this is fine for a lot of people and for a lot of companies: Quality really doesn't matter. They just care about shitting out mediocre code as fast as possible.

If you do care about these things, it will take you overall longer to write the code with an LLM than it would by hand-crafting it. I started playing around with Claude on my hobby projects, and found it requires an enormous amount of exhausting handholding and post-processing to get the code to the point where I am really happy with it as a consistent, complete, expressive work of art that I would be willing to sign my name to.

6 hours agoryandrake

> shitting out mediocre code as fast as possible.

This really is what businesses want and always have wanted. Before AI, I saw countless broken systems in my career spitting out wrong info that the business actively used. In pretty much all cases, they literally did not want it fixed when I brought it up, because dealing with the errors had become part of the process. I don't even try anymore unless I'm specifically brought on to fix a legacy system.

> that I would be willing to sign my name to.

This right here is what mgmt thinks is the big "problem" that AI solves. They have always wanted us to magically know which parts need to be "good enough" and which parts can slide, but for us to bear the burden of blame. The real problem is the same as always: bad specs. AI won't solve that, but in their eyes it will remove a layer of their poor communication. Obviously no SWE is going to build a system that spits out wrong info and just say "hire people to always double-check the work" or add checking it to so-and-so's job duties, but that really is the solution most places seem to end up with through lack of a decision.

Perhaps there is some sort of failure of SWEs to understand that businesses don't care. Accounting will catch the expensive errors anyway. Then execs will bullwhip middle managers and it will go away.

5 hours agocitizenpaul

The adversarial tension was all that ever made any of it work.

The "Perfectionist Engineer" without a "Pragmatic Executive" to press them into delivering something good enough would of course still been in their workshop, tinkering away, when the market had already closed.

But the "Pragmatic Executive" without the "Perfectionist Engineer" around to temper their naive optimism would just as soon find themselves chased from the market for selling gilded junk.

You're right that there do seem to be some execs, in the naive optimism that defines them, eager to see if this technology finally lets them bring their vision to market without the engineer to balance them.

We'll see how it goes, I guess.

5 hours agoswatcoder

> Perhaps there is some sort of failure of SWEs to understand that businesses don't care

I think it's an engineer's nature to want to improve things and make them better, but then we naively assume that everybody else also wants to improve things.

I know I personally went through a pretty rough disillusionment phase where I realised most of the work I was asked to do wasn't actually to make anything better, but rather to achieve some very specific metrics that actually made everything but that metric worse.

Thanks to the human tendency to fixate on narratives, we can (for a while) trick ourselves into believing a nice story about what we're doing even if it's complete bunk. I think that false narrative is at the core of mission statements and why they intuitively feel fake (mission statement is often more gaslighting than guideline - it's the identity a company wants to present, not the reality it does present).

AI is eager to please and doesn't have to deal with that cognitive dissonance, so it's a metric chaser's dream.

5 hours agojordwest

> This really is what businesses want and always have wanted.

There's a difference between what they really want and executives knowing what they want. You make it sound like every business makes optimal decisions to get optimal earnings.

> They literally did not want it fixed when I brought it up because

Because they thought they knew what earns them profits. The key here is that they thought they knew.

The real problem behind the scenes is a lot of management is short term. Of course they don't care. They roll out their shiny features, get their promotions and leave. The issues after that are not theirs. It is THE business' problem.

2 hours agore-thc

Claims about agentic workflows are the new version of "works on my machine" and should be treated with skepticism if they cannot be committed to a repository and used by other people.

Maybe parent is a galaxy-brained genius, or.. maybe they are just leaving work early and creating a huge mess for coworkers who now must stay late. Hard to say. But someone who isn't interested in automating/encoding processes for their idiosyncratic workflows is a bad engineer, right? And someone who isn't interested in sharing productivity gains with coworkers is basically engaged in sabotage.

6 hours agorobot-wrangler

> And someone who isn't interested in sharing productivity gains with coworkers is basically engaged in sabotage.

Who says they aren't interested in sharing? To give a less emotionally charged example: I think my specific use pattern of Git makes me (a bit) more productive. And I'm happy to chew anyone's ear off about it who's willing to listen.

But the willingness and ability of my coworkers to engage in git-related lectures, while greater than zero, is very definitely finite.

3 hours agoeru

Something that is advertised as 10x improvement in productivity isn't like your personal preferences for git or a few dinky bash aliases or whatever. It's more like a secret personal project test-suite, or a whole data pipeline you're keeping private while everyone else is laboriously doing things manually.

Assuming 10x is real, then again the question: why would anyone do that? The only answers I can come up with are that they cannot share it (incompetence) or that they don't want to (sabotage). You're saying the third option is.. people just like working 8 hours while this guy works 1? Seems unlikely. Even if that's not sabotaging coworkers it's still sabotaging the business

3 hours agorobot-wrangler

The reason is that we are a Microsoft shop and our company doesn't have a Claude account; I'm using my personal Claude Max account. My manager does know that I use Claude Code, and I asked the person responsible for AI tooling in our company about adopting Claude Code, but he just said that management had already decided to go with GitHub Copilot. He thinks that using a Claude model in Copilot is the same as using Claude Code. Another issue is that I have to use Claude Code through WSL, and I'm the only person on our team with Linux skills.

2 hours agoloandbehold

There are ways of connecting the Claude Code CLI to Copilot's API: look at LiteLLM or something along those lines; it's a pip package that translates the calls Claude Code makes.
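For anyone curious, a rough sketch of what that setup might look like (not verified against Copilot specifically, and the names and endpoint below are placeholders): run a LiteLLM proxy that maps the model name Claude Code asks for onto whichever OpenAI-compatible backend you are actually allowed to use, then point Claude Code at the proxy.

```
# litellm_config.yaml (illustrative; model names and endpoint are placeholders)
model_list:
  - model_name: claude-sonnet-4-20250514     # the name Claude Code will request
    litellm_params:
      model: openai/gpt-4o                   # assumption: any OpenAI-compatible backend
      api_base: https://your-allowed-endpoint/v1
      api_key: os.environ/UPSTREAM_API_KEY
```

Then something like `pip install 'litellm[proxy]'`, `litellm --config litellm_config.yaml --port 4000`, and `ANTHROPIC_BASE_URL=http://localhost:4000 claude` should route Claude Code's requests through the proxy. Check the LiteLLM and Claude Code docs for the exact flags and whether Copilot's API can legitimately sit behind this; the details here are from memory.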

an hour agocyberpunk

I'm teaching a course in how to do this to one of my clients this week.

Also, I used this same process to address a bug that is many years old in a very popular library this week. Admittedly, the first solution was a little wordy and required some back and forth, but I was able to get to a clean tested solution with little pain.

2 hours ago__mharrison__

Senior Software Engineer. The system is a niche piece of business software for a specific industry. It doesn't do any fancy math; it's all straightforward business logic.

> Trying to get clean code erases all or most of my productivity gains, and doesn't spark joy. I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts

You probably work on something that requires very unique and creative solutions. I work on dumb business software. Claude Code is generally good at following existing code patterns. As far as the back-and-forth with Claude Code being exhausting, I have a few tips on how to minimize the number of shots required to get a good solution from CC:

1. Start by exploring the relevant code by asking CC questions.

2. Then use Plan Mode for anything more than a trivial change. Using Plan Mode is essential: you need to make sure you and CC are on the same page BEFORE it starts writing code.

3. If you see CC making the same mistake over and over, add instructions to your CLAUDE.md to avoid it in the future (a sketch of what that might look like follows below). This way your CC setup improves over time, like a coworker who learns.
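As a purely hypothetical illustration of point 3, a CLAUDE.md can simply record the corrections you find yourself repeating; every rule and path below is invented for the example:

```
# CLAUDE.md (hypothetical example)

- Run the test suite after every change; never report a task as done with failing tests.
- Follow the existing repository patterns; do not introduce new frameworks or libraries.
- Make small, surgical diffs. Do not reformat or "clean up" code you were not asked to touch.
- Never edit files under src/generated/ by hand; they are produced by the build.
```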

5 hours agoloandbehold

Thank you for the actionable ideas. I'll experiment with closer supervision during the planning stage, hopefully finer-grained implementation details will reduce unnecessarily large refactors during review.

4 hours agooverfeed

Well, for me, all of my actual implementation work since AI coding got decent has been greenfield from "git init": mostly coding around the AWS SDK in the target language, plus infrastructure as code.

I haven’t had to write a line of code in a year. First ChatGPT and more recently Claude Code.

I don’t do “agentic coding”. I keep my hands on the steering wheel and build my abstractions and modules up step by step. I make sure every line of code looks like something I would write.

I’m a staff consultant (cloud + app dev) and always lead projects, discovery and design and depending on the size of the project, do all of the actual hands on work myself.

I would have had to staff at least one, maybe two, less senior consultants to do the actual hands-on work before. It's actually easier for me to do the work than to write really detailed requirements and coordinate the work (the whole "Mythical Man-Month" thing).

FWIW: before the pearl clutching starts, I started coding in assembly in 1986 on an Apple //e and have been delivering production level code since 1996.

2 hours agoraw_anon_1111

Not the OP, but we use LLMs to build a restaurant POS system with reservations, loyalty, a webshop, etc. Almost at feature parity with bigwigs like Lightspeed/Toast.

> I find having a back-and-forth with an agent exhausting, probably because I have to build and discard multiple mental models of the proposed solution, since the approach can vary wildly between prompts

Just now I had it improve QR payments on the POS. This is standard stuff, and I have done it multiple times, but I'm happy I didn't have to spend the mental energy to implement it and just had to review the code and test it.

```

Perfect! I've successfully implemented comprehensive network recovery strategies for the OnlinePaymentModal.tsx file. Here's a summary of what was added:

  Implemented Network Recovery Strategies

  1. Exponential Backoff for Polling (lines 187-191)
  2. Network Status Detection (lines 223-246, 248-251)
  3. Transaction Timeout Handling (lines 110-119)
  4. Retry Logic for Initial Transaction (lines 44-105)
  5. AbortController for Request Cancellation (lines 134-139, 216-220)
  6. Better Error Messaging (lines 85-102, 193-196)
  7. Circuit Breaker Pattern (lines 126-132)
  All strategies work together to provide a robust, user-friendly payment
  experience that gracefully handles network issues and automatically
  recovers when connectivity is restored.
```

> An agent can easily switch between using Newton-Raphson and bisection when asked to refactor unrelated arguments, which a human colleague wouldn't do after a code review.

Can you share what domain your work is in? Is it deep tech? Maybe coding agents right now work better for transactional/e-commerce systems?

6 hours agosaxenaabhi

I don't know if that example is real, but if it is, that's exactly the reason I find AI tools irritating. You do not need six different ways to handle the connection being down, and if you do, you should really factor that out into a connection management layer.

One of my big issues with LLM coding assistants is that they make it easy to write lots & lots of code. Meanwhile, code is a liability, and you should want less of it.
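For what "factor that out into a connection management layer" might look like, here is a minimal illustrative sketch (not code from the thread): a single fetch wrapper that owns the retry and backoff policy, so individual components like a payment modal don't each reimplement it.

```
// Illustrative sketch: one small connection-management helper instead of
// per-component retry logic. Names and policy values are invented.
export interface RetryOptions {
  retries?: number;      // how many times to retry after the first attempt
  baseDelayMs?: number;  // first backoff delay; doubles on each retry
}

export async function fetchWithRetry(
  input: RequestInfo | URL,
  init: RequestInit = {},
  { retries = 3, baseDelayMs = 500 }: RetryOptions = {}
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(input, init);
      // Only retry transient server errors; anything else is returned as-is.
      if (res.status < 500 || attempt >= retries) return res;
    } catch (err) {
      // Network-level failure (offline, DNS, aborted): rethrow once retries run out.
      if (attempt >= retries) throw err;
    }
    // Exponential backoff: 500ms, 1s, 2s, ...
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
}
```

A caller that needs cancellation can still pass its own AbortSignal through `init.signal`, so that concern stays out of the shared layer too.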

2 hours agobccdee

These aren't 6 different ways.

You are talking about something like a network layer in GraphQL. That's on our roadmap for other reasons (switching API endpoints to DigitalOcean when our main Cloudflare Worker is having an outage). However, even with that you'll need some custom logic, since this is doing at least two API calls in succession, and that's not easy to abstract via a transaction abstraction in a network layer (you'd handle it durably in the network layer, like Temporal does).

Despite the obvious downsides, we actually moved it from a durable workflow (Cloudflare's take on Temporal) server-side to the client, since on Workflows it had horrible and variable latencies (sometimes 9s, versus a consistent < 3s with this approach). It's not ideal, but it makes more sense business-wise. I think people often miss that completely.

I think it just boils down to what you are aiming for. AI is great for shipping bugfixes and features fast. At a company level I think it also shows in product velocity. However, I'm sure our competitors will catch up very soon, once AI skepticism falters.

an hour agosaxenaabhi

Although many tools exist, there still seems to be a large context gap here: we need better tools to orient ourselves and to navigate large (legacy) codebases. While not strictly a source graph or the like, I do think an Enso-like interface may prove successful here[0].

8 hours agodleeftink

this was really interesting to read.

an hour agohello12343214

Completely agree. In the past 12 months, I've had five or six use cases that I would not have bothered scripting or automating before, but I've cranked out scripts or even small web services in under an hour that get the job done using AI. It has really revolutionized the super small bite-sized issues

7 hours agofreedomben

Well said. The cost of building a CRUD has dropped 90%.

The open question is why people needed fancy AI tools like Claude to write CRUDs in the first place. These kinds of tasks ought to have been automated a long time ago.

3 hours agotrollbridge

> These kinds of tasks ought to have been automated a long time ago.

It’s much easier to write business logic in code. The entire value of CRUD apps is in their business logic. Therefore, it makes sense to write CRUD apps in code and not some app builder.

And coding assistants can finally help with writing that business logic, in a way that frameworks cannot.

2 hours agosothatsit

Most code is simple; what makes large complex systems complex is that they are layers of simple code on top of simple code, like garbage heaps at the dump. Sticking with the garbage analogy, the LLM is like upgrading from one shovel to a crew of 10 people with excavators to look for a lost Bitcoin hard drive.

Your project is still going to fail, but it will fail faster with the 10 excavators.

8 hours agoalmosthere

Sometimes you just need to move trash around with low accuracy... In that case the excavator swarm is good enough.

an hour agoshigawire

And pollute the world. Really good analogy.

5 hours agotclancy

> The cost of writing simple code has dropped 90%.

Need to add, “…and what ‘simple’ means is getting broader by the day.”

8 hours agogcanyon

and varies person to person

especially based on their prompting skills

8 hours agoqazxcvbnmlp

And not everyone wants to use a cloud AI either. Remember that when tons of cash is on the table, things like license agreements become less enforceable and more of a "don't get caught with your hand in the cookie jar" thing. All it would take is something similar to what's going on with book authors/publishers - a major AI provider exposed as using other firms' proprietary code without even considering getting a license - to totally blow up the "safety" of cloud-based coding agents.

Local models are becoming more and more capable but the tooling still needs to get better for those.

8 hours agodabockster

While I would love for this to be true for financial and egotistical reasons, I have a growing feeling that this might not be true for long unless progress really starts to stall.

6 hours agointerstice

I've actually gone in the other direction. A year ago, I had that feeling, but since then I've gotten more certain that LLMs are never going to be able to handle complexity. And complexity is still the real problem of developing software.

We keep getting more cool features in the tools, but I don't see any indication that the models are getting any better at understanding or managing complexity. They still make dumb mistakes. They still write terrible code if you don't give them lots of guardrails. They still "fix" things by removing functionality or adding a ts-ignore comment. If they were making progress, I might be convinced that eventually they'll get there, but they're not.

5 hours agodimal

Progress has stalled already; I haven't seen much improvement in the past year for my real-world tasks.

4 minutes agorealusername

Yes, well, I think this makes that skill of problem reduction a very valuable skill.

4 hours agocryptonector

Yes, but for experienced engineers that is still a huge, huge change.

Even 12 months ago, simplifying tasks alone was insufficient; you still needed a large group of engineers to actually write, review, and maintain a typical product for a solid startup offering. This came with the associated overhead of hiring and running mid-sized teams.

A lot of skilled people of (y)our age/experience were forced into doing people management roles because there was no other way to deliver a product that scales (in team and complexity, not DAU).

A CTO of a mid-stage startup had to be a good architect, a decent engineering manager, be deeply involved in product, and also communicate effectively with internal and external customers.

Now, for startups setting up fresh, you can defer the engineering-manager and people complexity a lot later than you could before. You can have a very senior but small team that is truly 10x-level and more productive, without the overhead of communication, alignment, and management that comes with large teams.

----

tl;dr: Skilled engineers can generate outsized returns for orgs that set them up to be successful (far more than before). I can't say whether compensation reflects this yet; if not, it soon will.

6 hours agomanquer

I’ve found they are able to compose well, let it build small components and stitch them together

8 hours agomountainriver

That is a good approach: bottom-up, manage complexity. But the general picture is that you set the direction and hold the model responsible, and it does the actual work. Think of your work as the negative of the AI's work: it writes the code, you ensure it tests that code. The better the test harness you create, the better the AI works. The real task is to constrain the AI to a narrow channel of valid work.

2 hours agovisarga

I would add "predictable" to that as well.

I fed Claude Pro a REST API spec and told it to spit out a PowerShell module and, well... So far those 27k lines of code largely check out (minus the undocumented stuff I knew about).

Getting it to write the pester scripts was a very different matter...

8 hours agoRajT88

> The cost of writing simple code has dropped 90%.

Plus there's a lot of simple code you shouldn't be writing either way, because it's in a library by now.

By their nature, LLMs will do their best with things that could be plagiarized.

7 hours agoTerr_

LLMs work great at identifying libraries I'd never have otherwise found and using them, as long as you ask them for solutions instead of micromanaging how they should get things done.

6 hours agovidarh

Aren't we having major issues right now with too many small libraries and dependency chains that grow exponentially? I have thought LLMs could actually benefit us a lot here, by not having to use a lib for every little thing (leftpad etc.?).

6 hours agomewpmewp2

Frankly, if you were relying on libraries like Leftpad, you probably had no business writing code before AI.

And if you rely on AI to write that code now you still have no business writing code

5 hours agobluefirebrand

Bit harsh.

That was probably a Node/npm thing; because they had no stdlib, it was quite common to have many small libraries.

I consider it an absolute golden rule for coding to not write unnecessary code & don't write collections.

I still see a lot of C that ought not to have been written.

I'm a greybeard and don't fear for my job. But not relying on AI when it's faster is as silly as refusing a correct autocomplete and typing it by hand. The bytes don't come out any better.

4 hours agoteknopaul

Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offerings all over the marketplace, possibly undercutting some established players.

From where I sit, right now, this does not seem to be the case.

This is as if writing down the code is not the biggest problem, or the biggest time sink, of building software.

10 hours agonine_k

The keyword is "building". Yes costs may have dropped 90% just to build software. But there are 1000 other things that comes after it to run a successful software for months let alone years.

- Maintenance, Security

- Upgrades and patches

- Hosting and ability to maintain uptime with traffic

- Support and dealing with customer complexities

- New requirements/features

- Most importantly, ability to blame someone else (at least for management). Politics plays a part. If you build a tool in-house and it fails, you are on the chopping block. If you buy, you at least can say "Hey everyone else bought it too and I shouldn't be fired for that".

Customers pay for all of the above when they buy a SAAS subscription. AI may come for most of the above at some point but not yet. I say give it 3-5 years to see how it all pans out.

9 hours agocodegeek

Good points, but this list is missing the most critical problem, which AI does not solve: exposure.

What you've listed are the easy parts that are within people's control. You didn't list the most critical part, the actual bottleneck which is not within people's control.

The market is now essentially controlled by algorithms. I predict there will be amazing software... which will end up completely ignored by the market until its features are copied by big tech, and nobody will know where the idea originated.

Building is absolutely worthless in the context of a monopolized marketplace.

8 hours agosocketcluster

Agreed. What you call exposure others might call distribution or attention.

2 hours agobaxtr

All of this can be written off as "building software", though. What this reveals is that the costs in a given market are likely not software at all

6 hours agoMangoToupe

LLMs do all that too.

8 hours agoalmosthere

To be fair, writing SaaS software is an order, perhaps two orders, of magnitude more effort than writing software that runs on a computer and does the thing you want. There's a ton of stuff that SaaS is used for now that's basically trivial, and literally all the "engineering" effort is spent on ensuring vendor lock-in and retaining control of the software so that you can force people to keep paying you.

10 hours agothot_experiment

We should also get a flurry of low-cost, decent-quality native local-first software, but I’m not seeing any.

8 hours agolayer8

You might not be looking hard enough. There are a few sources you could look at; one is the GitHub Awesome YouTube channel. I am seeing a lot of several-hundred-star open source projects with unreasonably large codebases starting to gain traction. This is the frontier of adoption, and my guess is this will start cascading outward.

5 hours agothreecheese

Why? I don't want to bother making all the software that the AI wrote for me work on someone else's machine. The difference between software that solves my problem and that solves a problem many people have is also often like an order of magnitude of effort.

5 hours agothot_experiment

And why would this happen? Local to what? Every SaaS product I use is available on my Mac, Windows, iPhone, iPad, and the web. Some are web-only and some are web plus apps.

Who is going to maintain the local software? Who is going to maintain the servers for self-hosted software, or the client software?

7 hours agoraw_anon_1111

> Local-first

> Not seeing any

Working exactly as intended?

5 hours agovachina

This. I have a massive amount of custom software running locally to solve all sorts of problems for me now.

But it's for me and tailor made to solve my precise use cases. Publishing it would just add headaches and endless feature requests and bug reports for zero benefit to me.

3 hours agowild_egg

Why should you see a flurry of software?

What LLMs demonstrate is that the problem is dealing with people, not software. Look at the number of open source maintainers who are hanging it up.

Unless you have a path to monetization, writing software for anybody but yourself is a fool's errand.

7 hours agobsder

Also also, we should reach the point where you have decent quality source code for a local application, and you can tell GPT "SaaS this", and it works.

I'm not seeing that either.

7 hours agoAnimalMuppet

This assumes the marketplace works perfectly... which is an incorrect assumption. The reality is that the marketplace is highly controlled by algorithms. New platforms will struggle to get exposure... no exposure, no credibility, no word of mouth, no users: catch-22... Do you think the big players will allow small SaaS projects to gain traction on their platforms? Have you seen how centralized the Internet is these days? Have you seen how afraid people are of betting on no-name platforms? If they choose the wrong no-name platforms and tools, they will lose their (increasingly precious) jobs. As the saying goes, "Nobody lost their job for choosing IBM." As for B2C: it's dead; consumers don't have money and will have less of it in the future; the mass-market game is over.

My bet is that if there were a lot of great apps being built, even of excellent quality, nobody would even hear about them. The big players would copy them before anyone even found out about them.

IMO, the market is not an even playing field anymore, which is why everyone is getting into politics now. Though politics is also somewhat monopolized, there is still more potential for success there, because there is such an abundance of dissatisfied people willing to look outside mainstream channels. It's much easier to sell political ideologies than to sell products.

8 hours agosocketcluster

"Everything is controlled by algorithms" doesnt make any sense - its like saying everything is subject to entropy - well sure but also what?

6 hours agoacessoproibido

It's not the same because who controls the algorithms matters here. The algorithms work for some entities and against other entities. They are not neutral at all. They are aligned through shared monetary incentives, so well aligned that they would probably be less aligned if it was a literal conspiracy.

TBH. I'm kind of shocked I still have to explain this. When you get on the wrong side of the algorithms you will understand, you will understand viscerally. And I do mean 'when' not 'if'.

Maybe the algorithms have been working for you so far and you're not feeling them but just give it a few years. Unfortunately, once you understand, you won't have a voice anymore and those still in the game won't have enough empathy to help you.

6 hours agosocketcluster

> "Everything is controlled by algorithms" doesn't make any sense; it's like saying everything is subject to entropy. Well, sure, but so what?

Of course it makes sense. You just refuse to cope. This is like thinking that marxism is "rich people bad"

6 hours agoMangoToupe

It has dropped by maybe MORE than 90%. My son's school recently asked me to build some tools for them -- I did this over a decade ago for them, for free. I did it again using AI tools (a different problem though) and I had it mostly done in 30 minutes (after I got the credentials set up properly -- that took more time than the main coding part). This was probably several days of work for me in the past.

9 hours agokenjackson

But in the past, you knew the codebase very well, and it was trivial to implement a fix and upgrade the software. Can the same be done with LLMs? Well, from what I see, it depends on your luck. But if the LLMs can't help you, then you've got to read a whole codebase that you've never read before, and you quickly lose the initial benefits. I don't doubt we'll get there someday, though.

8 hours agoTheRoque

I've hit this in little bursts, but one thing I've found is that LLMs are really good at reasoning about their own code and helping me understand how to diagnose and make fixes.

I recently found some assembly source for some old C64 games and used an LLM to walk me through it (purely recreational). It was so good at it. If I were teaching a software engineering class, I'd have students use LLMs to do analysis of large code bases. One of the things we did in grad school was to go through gcc and contribute something to it. Man, that code was so complex, and compilers were one of my specialties (at the time). I think having an LLM with me would have made the task 100x easier.

8 hours agokenjackson

Does that mean you don't think you learned anything valuable through the experience of working through this complexity yourself?

I'm not advocating for everyone to do all of their math on paper or something, but when I look back on the times I learned the most, it involved a level of focus and dedication that LLMs simply do not require. In fact, I think their default settings may unfortunately lead you toward shallow patterns of thought.

8 hours agodevin

I wouldn't say there is no value to it, but I do feel like I learned more using LLMs as a companion than I would have trying to figure everything out myself. And note, using an LLM doesn't mean that I don't think. It helps provide context and information that would often be time-consuming to figure out, and I'm not sure the time spent is proportional to the learning I'd get from it. Seeing that these memory locations map to sprites, which then get mapped to those memory locations, which map to the video display: that's an example of something that might take a minute of exploring to learn, but the LLM can tell me instantly.

So a combination of both is useful.

8 hours agokenjackson

Hard to argue with such a pragmatic conclusion!

I think the difficulty I have is that I don't think it's all that straightforward to assess how it is exactly that I came not just to _learn_, but to _understand_ things. As a result, I have low confidence in knowing which parts of my understanding were the result of different kinds of learning.

7 hours agodevin

I'd say this is similar to working with assembly vs c++ vs python. Programming in python you learn less about low level architecture trivia than in assembly, but you learn way more in terms of high level understanding of issues.

When I had to deal with/patch complex c/c++ code, I rarely ever got a deep understanding of what the code did exactly - just barely enough to patch what was needed and move on. With help of LLMs it's easier to understand what the whole codebase is about.

8 hours agokolinko

If I haven't looked at my own code in 6 months it might as well have been written by someone else.

8 hours agojazzyjackson

The most brilliant programmer I know is me three years ago. I look at code I wrote and I'm literally wondering "how did I figure out how to do that -- that makes no sense, but exactly what is needed!"

8 hours agokenjackson

I had a funny example of that.

I had a question about how to do something in Django, and after googling found a good SO answer.

I read through it thinking about how much I appreciated the author's detailed explanation and answer.

When I looked at the author it was me from two years ago.

an hour agoclaytongulick

How can I learn this skill? Past Me is usually just this idiot who made work for Present Me.

6 hours agopsunavy03

Turns out, that is also past me. In fact, often the incredible code that brilliant me wrote, which I don't understand now, is also the code that reckless me wrote that I now need to fix/add to -- and I have no idea where to start.

6 hours agokenjackson

Wow. Lucky you. When I come across code I wrote months ago, usually I'm like "what kind of crack was I on when I wrote this?"

7 hours agobitwize

They're better than one might expect at diagnosing issues from the error output or even just screenshots.

8 hours agoemodendroket

"Building software" is a bit too general, though. I believe "Building little web apps for my son's school" has gotten at least 10x easier. But the needle has not moved much on building something like Notion, or Superhuman, or Vercel, or <insert name of any non-trivial project with more than 1000 man-hours of dev work>.

Even with perfect prompt engineering, context rot catches up to you eventually. Maybe a fundamental architecture breakthrough will change this, but I'm not holding my breath.

8 hours agobloppe

Yeah, that's no comparison to the kinds of highly complex internal systems I worked on at Fortune 1xx companies, particularly the regulated ones (healthcare). The whole "my son's school" thing is very nice, and it's cool you can knock that out so fast, but it's nothing at all like the environments I worked in, particularly the politics.

3 hours agolisbbb

LLMs, long term, have killed most SaaS.

Most SaaS used to be killed by bespoke software engineers that would build some custom thing, and it was integrated perfectly into the legacy system.

Then all those people decided to be managers and go on "i dont care" autopilot mode and hired a bunch of teens that still do care, to some extent. But those teens suck at it, and the old guys just don't really care anymore.

Now with agentic coding, instead of "buy Splunk" or "buy Jira" or whatever thing they are trying to do, they have one of those "teens now in their mid-twenties" who are SUPER excited about agentic flows, and who either write an agentic tool or simply use one to code up the 300 lines of code that would replace their need for a Jira or a Splunk or whatever. Since most people only use 5% of the total features of any product, there's no reason to buy tools anymore; just build it for a fraction of the cost.

I don't know if the above is where we're at right now, but it's coming.

7 hours agoalmosthere

Yeahh, right, I'm not sure splunk is that simple.

6 hours agobmikaili

Creating code sprawl, weird ball-of-twine systems, etc., until someone says "enough, we'll just buy this SaaS solution which integrates it all." Rinse, repeat.

7 hours agoactionfromafar

Nobody is replacing Jira or Splunk with anything coming out of an LLM.

6 hours agoroncesvalles

> that would replace their need for a Jira or a Splunk

Or a JS runtime like Bun. Oh, wait...

6 hours agomyk9001

Well, because no self interested decision maker in any company of size is going to ever trust their business to an unknown company run by a one person operation.

And why would the benefits of being able to code faster accrue to a small independent developer over a large company that already has an established reputation and a customer base?

“No one ever got fired for buying Salesforce”.

I once had influence over the buying decision to support an implementation I was leading. I found this perfect SaaS product by a one man shop who was local.

Working with my CTO and lawyers, we made a proposal to the founder. We would sign with him and be 70% of his post signing revenue if he agreed to give us our own self hosted instance and put his latest code in escrow with a third party (Green Mountain) and we would have non exclusive rights to use the code (but not distribute it) under certain circumstances.

7 hours agoraw_anon_1111

It is happening though internally in businesses I've worked with. A few of them are starting to replace SaaS tools with custom built internal tooling. I suspect this pattern is happening everywhere to a varying level.

Often these SaaS tools are expensive, aren't actually that complicated (or if they are complicated, the bit they need isn't) and have limitations.

For example, a company I know recently got told that the v1 API they relied on in some back-office SaaS tool was being deprecated. V2 of the API didn't have the same features.

Result: a dev spends a week or two rebuilding that tool. It's shipped and in production now. It would have taken a similar amount of time to work around the API deprecation.

10 hours agomartinald

I don't understand the timelines here at all.

10 hours agonugger

We were paying for Salesforce, then built the features we needed to do the same tracking into our internal tool and got rid of Salesforce to save money and simplify the data internally across departments.

7 hours agofigers

And now you have to spend money on developers for a system that “doesn’t make the beer taste better”. Does it give you a competitive advantage in the market?

7 hours agoraw_anon_1111

We already had developers and the system in place; this was a tiny feature in the scheme of things.

Internally, it gives us the competitive advantage of the data being in our system from the beginning of the pipeline, through the rest of the system where the data would be needed anyway.

6 hours agofigers

If they saved money, as they said they did, then... yes?

2 hours agocrabmusket

Saved money in the short term. But maintenance costs money. Amazon has all of the money in the world and could easily duplicate everything Salesforce does. Yet they use Salesforce internally.

2 hours agoraw_anon_1111

I know of at least two multi-billion corps that are moving to internal ETL tools instead of 5tran now because the cost to maintain internally is much lower and you can customize for cheap. SaaS as a model is at risk without something tying someone down.

8 hours agorenewiltord

The greed/“capture all of the value” mindset of SaaS kills it, because you can infer the cost of delivery in many cases and beat it.

For anything that is billed by the human, O365 is the benchmark. I’m not paying some stupid company $30/mo for some basic process, I use our scale to justify hiring a couple of contractors to build 80% of what they do for $400-600k in a few months. Half the time I can have them build on powerapps and have zero new opex.

7 hours agoSpooky23

> It is happening though internally in businesses I've worked with

How many samples do you have?

Which industries are they from?

Which SaaS products were they using, exactly and which features?

> ...a company I know recently got told that the v1 API they relied on in some back-office SaaS tool was being deprecated. V2 of the API didn't have the same features ... dev spends a week or two rebuilding that tool

Was that SaaS the equivalent of the left-pad Node.js module?

10 hours agolossolo

I'm not the OP, but I do have an anecdote.

We've got a backend pipeline that does image processing. At every step of the pipeline, it would copy small (less than 10MB) files from an S3 storage source, do a task, then copy the results back up to the storage source.

Originally it used AWS, but years ago it was decided that AWS was not cost-effective, so we turned to other partners, OVH and Backblaze.

Unfortunately, the reliability and throughput of both of them aren't as consistent as AWS's, and this has been a constant headache.

We were going to go back to AWS or find a new partner, but I nominated we use NFS. So we build nothing, pay nothing, get POSIX semantics back, and speed has gone up 3x. At peak, we only copy 40GB of files per day, so it was never really necessary to use S3 except that our servers were distributed and that was the only way anyone previously could think to give each server the same storage source.

While this isn't exactly what the OP and you are talking about, I think it illustrates a fact: SaaS software was seen as the hammer to all nails, giving you solutions and externalizing problems and accountability.

Now, whether it's because the industry has matured, building in-house has become easier, or cost centers need to be reduced, SaaS is going to be re-evaluated in the context of 'do we really need it?'

I think the answer to many people is going to be no, you don't need enterprise level solutions at all levels of your company, especially if you're not anywhere near the Fortune 1000.

8 hours agodismantlethesun

I ran a shared services org in a Fortune 50. Enterprise costs don’t scale down well, and things that are absolutely essential to supporting 100k people sound insane for 100 people. Our senior leaders would sometimes demand we try and the CFO and I would just eyeroll.

Nobody would hire the JP Morgan IT team to run a dentist practice IT workload. Likewise, AWS can save you money at scale, but if your business can run on 3 2U servers, it should.

7 hours agoSpooky23

You can use NFS on AWS; they have a hosted version (EFS) that is actually pretty neat.

8 hours agocyberax

I'm a consultant, so I see lots of businesses, and it's happening in all of them. To be clear, I'm not seeing people rip out tools for custom builds; I just see people solving today's problems with custom apps.

8 hours agoneom

Lots of companies make good money selling the equivalent of leftpad for confluence or jira. Anecdotally, that's exactly the kind of stuff that gets replaced with homegrown AI-built solutions at our company

9 hours agowongarsu

I helped a company that is build-averse move off of Fivetran to Debezium and some of their own internal tooling; for the same workload they are paying $40k less a month (yeah, they just raised their prices again).

Now, that's not exactly the same thing, but their paucity of skills made them terrified to do something like this before: they had little confidence they could pull it off, and their exec team would just scoff and tell them to work on other revenue-generating activities.

Now the confidence Claude gives them is hard to shake off, which is not exactly the way I wanted the pendulum to swing, but it's almost $500k yearly back in their pockets.

8 hours agohobs

Something weird happened to software after the 90s or so.

You had all these small-by-modern-standards teams (though sometimes in large companies) putting out desktop applications, sometimes on multiple platforms, with shitloads of features. On fairly tight schedules. To address markets that are itty-bitty by modern standards.

Now people are like “We’ll need (3x the personnel) and (2x the time) and you can forget about native, it’s webshit or else you can double those figures… for one platform. What’s that? Your TAM is only (the size of the entire home PC market circa 1995)? Oh forget about it then, you’ll never get funded”

It seems like we’ve gotten far less efficient.

I’m skeptical this problem has to do with code-writing, and so am skeptical that LLMs are going to even get us back to our former baseline.

8 hours agophantasmish

Some thoughts:

1. Personally I find writing software for the web far more difficult/tedious than desktop. We sure settled on the lowest common denominator

1a. Perhaps part of that is that the web doesn't really afford the same level of WYSIWYG?

2. Is it perhaps more difficult (superlinear) to write one cloud SaaS product that can scale to the whole world, rather than apps for which each installation only needed to scale to one client? Oh and make sure to retain perfect separation between clients

2a. To make everything scale, it's super distributed, but having everything so distributed has a huge cost

3. Some level of DLL hell, but something different (update hell?). I barely do any programming in my free time anymore because I would end up spending almost the whole time dealing with some barrage of updates: to the IDE, to the framework, to the libraries, to the build infrastructure.

3a. There's always a cost to shipping, to the development team and/or the users. With releases so frequent, that cost is paid constantly and/or unpredictably (from the development or user perspective)

3b. Is there any mental sense of completion/accomplishment anymore or just a never-ending always-accelerating treadmill?

3c. I wish I could find the source but there was some joke that said "software developers are arrogant or naïve enough to think that if you take a marathon and just break it up into smaller parts you can 'sprint' the whole way"

7 hours agoanonymars

Circa 2005, my boss and I would pitch that the relational database + HTML forms paradigm was a breakthrough that put custom software within reach of more customers. For one thing, you could just delete all the InstallShield engineers. Also, memory safety was a big problem in the Win95 era, not so much about being hacked and more that application state would get corrupted over time, so you just expected Word to crash once an hour or so.

7 hours agoPaulHoule

Yep. Software construction was branded a team sport. Hence, social coding, tool quality being considered more important (good thing for sure), and, arguably, less emphasis on individual skill and agency.

This was in service of a time when tech was the great equalizer, powered by ZIRP. It also dovetailed perfectly with middle managers needing more reports in fast growing tech companies. Perhaps the pendulum is swinging back from the overly collective focus we had during the 2010s.

8 hours agomattgreenrocks

I would also make the case that software underwent a demographic shift as demand skyrocketed and the language and tooling barriers to entering the profession dropped.

80's/90's dev teams were more weird nerds with very high dedication to their craft. Today devs are much more regular people, but there are a lot more of them.

8 hours agoRajT88

Definitely. There’s pluses and minuses to that shift.

5 hours agomattgreenrocks

> Something weird happened to software after the 90s or so.

Counterpoint: What might have happened is that we expect software to do a lot more than we did in the 90s, and we really don't expect our software features to be static after purchase.

I agree that we sometimes make things incredibly complex for no purpose in SE, but also think that we do a rose-colored thing where we forget how shitty things were in the 1990s.

8 hours agodghlsakjg

> Counterpoint: What might have happened is that we expect software to do a lot more than we did in the 90s, and we really don't expect our software features to be static after purchase.

Outside the specific case of Apple's "magical" cross-device interoperability, I can't think of many areas where this is true. When I step outside the Apple ecosystem, stuff feels pretty much the same as it did in 2005 or so, except it's all using 5-20x the resources (and is a fully enshittified ad-filled disjointed mess of an OS in Windows' case)...

> I agree that we sometimes make things incredibly complex for no purpose in SE, but also think that we do a rose-colored thing where we forget how shitty things were in the 1990s.

... aside from that everything crashes way, way less now than in the '90s, but a ton of that's down to OS and driver improvements. Our tools are supposed to be handling most of the rest. If that improved stability is imposing high costs on development of user-facing software, something's gone very wrong.

You're right that all the instability used to be truly awful, but I'm not sure it's better now because software delivery slowed way down (in general—maybe for operating systems and drivers)

7 hours agophantasmish

Part of this is the huge ZIRP-driven salary bubble in the US. If good software engineers were as cheap as good structural engineers, you'd be able to stick three of them in a room for $500k a year and add a part-time manager and they'd churn out line-of-business software for some weird little niche that saves 50 skilled-employee-years per year and costs $20k a seat.

The bubble means that a) the salaries are higher, b) the total addressable market has to justify those salaries, c) everyone cargo cults the success stories, and so d) the best practices are all based on the idea that you're going to hyperscale and therefore need a bazillion microservices hooked up to multiple distributed databases that either use atomic clocks or are only eventually consistent, distributed queues and logs for everything, four separate UIs that work on web/iOS/android/desktop, an entire hadoop cluster, some kind of k8s/mesos/ECS abomination, etc.

The rest of the world, and apparently even the rest of the US, has engineering that looks a little more like this, but it's still influenced by hyperscaler best practices.

3 hours agostrken

> Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offerings all over the marketplace, possibly undercutting some established players.

Don't forget the second-order effect of clients deciding they could do it in-house.

8 hours agozahlman

In fact that is where AI could win. An in-house system only needs to serve the needs of one customer, whereas the SaaS has to be built for the imagined needs of many customers. When you're lucky you can "build one to throw away" and not throw it away.

8 hours agoPaulHoule

Yes, that was my point. It gets hard for the new SaaS competitors to "undercut" the established players when the underlying market disappears.

7 hours agozahlman

> Had the cost of building custom software dropped 90%

It definitely has for me. I'm easily creating tools and utilities every week that I never would've attempted in the past.

> This is as if writing down the code is not the biggest problem, or the biggest time sink, of building software.

Lots of people can think logically and organize a process flow, but don't know all the ridiculous code incantations (and, worse, the development and hosting environment details) needed to turn their plans into tools.

It's trivial to one-shot all kinds of impressive toys in Gemini now, but it's going to be an even bigger deal when Google adds some type of persistent data storage. It will be like the rebirth of a fully modern Microsoft Access.

9 hours agoxnx

I've found that 1-4 hours is enough to build any SaaS for myself.

2 hours agopizzafeelsright

I mean, we have had the tech to crank out some little app for a long time. The point of the SaaS used to be that you had a neck to strangle when things went south. I guess these days that's just impossible anyhow, and the prices aren't worth it, so we're rediscovering that software can be made instead of bought?

There have been a lot of little blogs about "home cooking" style apps that you make for yourself. Maybe AI is the microwave meal version.

8 hours agojayd16

There is also the missing explosion of App Store app submissions and other such metrics.

6 hours agoroncesvalles

For a lot of SaaS projects building the software is the simplest part.

6 hours agovidarh

I'd also expect to see a lot more AI-generated PRs on open source projects.

(Or at least AI-assisted to the point where the author feels like they should mention it.)

7 hours agoeirikbakke

Astute observation. From where I sit, the market (at least for business software; I am not very familiar with the consumer market) seems to be wide open, and businesses in the 5 - 200 employee range seem to be particularly underserved.

The marketplace for software for single-owner shops or 1-5 employee size places does seem to be quite strong, and then there's enterprise software, but small business seems to have a software marketplace that is atrociously bad. Here is the typical thing a prospective customer asks me to fix for them:

- They are using some piece of software that is essential to their business.
- There really isn't much good competition for that software, and it would be a large cost to convert to another platform that also has all the same downsides below.
- The software vendor used to be great, but seems to have been sold several times.
- The vendor has recently switched to a subscription-only model and keeps on raising subscription prices in the 12% or so range every year, and the cost of this has started to become noticeable in their budget.
- They were accustomed to software being a capital investment with a modest ongoing cost for support, but now it's becoming just an expense.
- Quality has taken a nosedive and in particular new features are buggy. Promised integrations seem quite lacking and new features/integrations feel bolted on.
- Support is difficult to get ahold of, and the formerly good telephone support then got replaced by being asked to open tickets/emails and now has been replaced by an AI chatbot frontend before they can even open a ticket. Most issues go unresolved.

There are literally millions of software packages in existence, and the bulk of them by numbers are niche products used by small businesses. (Think of a software package which solely exists to help you write custom enhancements for another software package used by a specific sector of the furniture-manufacturing business, to give an example.) The quality of this sector is not improving.

This is a field that is absolutely ripe for improvement. If the cost of building software really were dropping 90%, this would be a very easy field to move into: simply start offering, for $6,000 a year, a better product than the inferior one your competition charges $12,000 a year for. Before you bring up things like vendor lock-in or the pain of migration... why can't you write software to solve those problems, too? After all, the cost of writing a migration tool should be 90% cheaper now, too, right?

3 hours agotrollbridge

People vibe one-off solutions for themselves all the time. They just don't have the desire to productionize them. Frankly, product knowledge is something LLMs are not that good at.

9 hours agoklntsky

Same. I hate doing mobile coding, but just in the last few months I AI-coded 3 apps specifically for my needs. They'll never get released publicly, because they'd need polish and features that I don't care about personally. They potentially replace some SaaS too.

7 hours agoviraptor

In context, it sounds like you are saying you used AI to make these three applications?

Why can't AI add the polish and features also?

6 hours agotjr

It can. It's just not needed for my own apps. It would be needed for a public release and I'm just... not interested in that enough. It would cost me time and likely never get enough return.

6 hours agoviraptor

What did the 3 apps do?

I did the same for one app to give me my speed in MPH as huge text as a HUD in a car

7 hours agofigers

Very specific training app for guitar with spaced repetition, automated message forwarder (all good ones demand subscription), and something very specific to me.

7 hours agoviraptor

With low-code solutions like PowerApps I bang out stuff like this all the time. If your use case is limited enough, it makes lazy developers very productive.

8 hours agoRajT88

> Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offering all over the marketplace, possibly undercutting some established players.

NODS HEAD VIGOROUSLY

Last 12 months: Docusign down 37%, Adobe down 38%, Atlassian down 41%, Asana down 41%, Monday.com down 44%, Hubspot down 49%. Eventbrite being bought for pennies.

They are being replaced by newer, smaller, cheaper, sometimes internal solutions.

10 hours agopaulddraper

Stock prices down or revenue down? The former would do very little to support your point.

9 hours agoBobbyJo

Revenue is all up. And as far as I can see beating expectations.

9 hours agohagbarth

Stock prices.

The latter would do very little to support my point, as it doesn't consider future growth trends.

7 hours agopaulddraper

The former does far less to support your point, because it's only indicative of what people expect to happen. It is not actually evidence that their predictions will come true.

7 hours agobigstrat2003

> I'm sure every organisation has hundreds if not thousands of Excel sheets tracking important business processes that would be far better off as a SaaS app.

Far better off for who? People constantly dismiss spreadsheets, but in many cases, they are more powerful, more easily used by the people who have the domain knowledge required to properly implement calculations or workflow, and are more or less universally accessible.

11 hours agodebo_

Spreadsheets are an incredible tool. They were a key innovation in the history of applications. I love them and use them.

But it's very hard to have a large conventional cell-formula spreadsheet that is correct. The programming model / UI are closely coupled, so it's hard to see what's going on once your sheet is above some fairly low complexity. And many workplaces have monstrous sheets that run important things, curated lovingly (?) for many years. I bet many or most of them have significant errors.

10 hours agorobotresearcher

It's astounding how useful and intuitive they are, but my biggest gripe is how easy it is for anyone to mess up calculations, say SUM(<RANGE>), by simply adding one row/column/cell.

I use Google Sheets frequently to track new things that fit into lists/tables, and giving someone else editor access without them knowing a few spreadsheet nuances means I have to recheck and correct them every month or two.
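
Even a crude script can flag the most common breakage, at least on the Excel side. A minimal sketch for .xlsx exports, assuming openpyxl and plain =SUM(B2:B50)-style column totals (the file name is hypothetical):

    # Flag SUM() formulas whose range stops short of the sheet's last data
    # row. Real sheets need more care; this only handles the simple case.
    import re
    from openpyxl import load_workbook

    SUM_RE = re.compile(r"=SUM\(([A-Z]+)(\d+):([A-Z]+)(\d+)\)", re.IGNORECASE)

    def check_sum_ranges(path):
        wb = load_workbook(path)  # formulas are kept as text by default
        for ws in wb.worksheets:
            for row in ws.iter_rows():
                for cell in row:
                    if not isinstance(cell.value, str):
                        continue
                    m = SUM_RE.fullmatch(cell.value.strip())
                    if m and int(m.group(4)) < ws.max_row - 1:
                        print(f"{ws.title}!{cell.coordinate}: {cell.value} "
                              f"stops before row {ws.max_row}")

    check_sum_ranges("tracking.xlsx")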

10 hours agoASalazarMX

Does anyone make a sanity checker for Excel or Sheets that notices things like that? Would be incredibly helpful!

8 hours agorobotresearcher

Author here. Of course not everything needs to be a web app. But what I mean is that a lot of the core sheets I see in businesses need more structure around them.

Especially for collaboration, access controls, etc. Not to mention they could do with unit testing.

10 hours agomartinald

Counterpoint: if a small part of the process is getting tweaked, how responsive can the team responsible for these apps be? That’s the killer feature of spreadsheets for business processes: the accountants can change the accounting spreadsheets, the shipping and receiving people can change theirs, and there’s no team in the way to act as a bottleneck.

That’s also the reason that so-called “Shadow IT” exists. Teams will do whatever they need to do to get their jobs done, whether or not IT is going to be helpful in that effort.

10 hours agotonyarkles

I've seen many attempts to turn a widely used spreadsheet into a webapp. Eventually, it becomes an attempt to re-implement spreadsheets. The first time something changes and the user says "well, in Excel I would just do this..." the dev team is off chasing existing features of Excel for eternity, and the users are pissed because it takes so long and is buggy. Meanwhile, Excel is right there, ready and waiting.

10 hours agochasd00

I always see this point mentioned in "app vs. spreadsheet" debates, but no one gives a concrete example. The whole point of using a purpose-built app is to give some structure and consistency to the problem. If people are replicating spreadsheet features then they needed Excel to begin with, since that is a purpose-built tool for generalizing a lot of problems. It's like saying my notebook and pen are already in front of me, I can use those, so why would I ever bother opening an app? Well, because the app provides some additional value.

18 minutes agothingortwo

I have never heard of shadow IT. What is that?

10 hours agoLPisGood

It's when the users start taking care of IT issues themselves. Maybe the name comes from the Shadow Cabinet in England?

Where it might not be obvious is that IT in this context is not just pulling wires and approving tickets, but is "information technology" in the broader sense of using computers to solve problems. This could mean creating custom apps, databases, etc. A huge amount of this goes on in most businesses. Solutions can range from trivial to massive and mission-critical.

9 hours agoanalog31

I think the term is mainly just because it tends not to be very visible/legible to the organization as a whole, and that's probably the main risk of it: either someone leaves and a whole section of the IT infrastructure collapses, or someone sets up something horrifically insecure and the company gets pwned. It stays quiet especially because most IT departments hate it, so there's a strong incentive not to advertise it. (I personally think IT organizations should consider shadow IT a failing of their own, and should seek out ways to collaborate with those setting it up, or figure out what is lacking in the service they provide to the rest of the company that means they get passed over.)

5 hours agorcxdude

That's quite possible. I've done a certain amount of it myself. A couple of programs that I wrote for the factory 15+ years ago are being used continually for critical adjustment and testing of subassemblies. All told it's a few thousand lines of Visual Basic. Not "clean code" but carefully documented with a complete theory of operation that could be used as a spec for a professionally written version.

My view is that it's not a failing, any more than "software development can't be estimated" is, but a fact of life. Every complex organization faces the dilemma of big versus little projects, and ends up having to draw the line somewhere. It makes the most sense for the business, and for developer careers, to focus on the biggest, most visible projects.

The little projects get conducted in shadow mode. Perhaps a benefit of Excel is a kind of social compromise, where it signals that you're not trying to do IT work, and IT accepts that it's not threatening.

There's a risk, but I think it's minimal. Risk is probability times impact, measured in dollars. The biggest risks come from the biggest projects, just because the potential impact is big. Virtually all of the project failures that threaten businesses come from big projects that are carried out by good engineers using all of the proper methods.

2 hours agoanalog31

It's where you have processes etc set up to manage your IT infra, but these very processes often make it impossible / too time consuming to use anything.

The team that needs it ends up managing things itself without central IT support (or visibility, or security etc..)

Think being given a locked-down laptop and no admin access. Either get IT to give you admin access, or buy another laptop that isn't visible to IT and lets you install whatever you need to get your job done.

The latter is often quicker and easier.

8 hours ago_puk

It's rare that a third-party SaaS can approximate one of these "core sheets", and most of the exceptions have already been explored over the last several decades.

You have to remember that a SaaS, just like shrink-wrap software, reflects someone else's model of a process or workflow, and the model and implementation evolve per the timeline/agenda of its publisher.

For certain parts of certain workflows, where there's a highly normative and robust industry standard, like invoicing or accounting or inventory tracking, that compromise is worthwhile, and we've had both shrink-wrap and SaaS products servicing those needs for a very, very long time. We see churn in which application is most popular and what its interface and pricing look like, but the domains being served have mostly been constant (mostly only growing as new business lines/fashions emerge and mature).

Most of the stuff that remains in a "core sheet" could benefit from the attention of a practiced engineer who could make it more reliable and robust, but almost always reflects that the represented business process is somehow peculiar to the organization. As Access and FoxPro and VBA and Zapier and so many tools have done before, LLM coding assistants and software building tools offer some promise in shaking some of these up by letting orgs convert their "core sheets" to "internal applications".

But that's not an opportunity for SaaS entrepreneurs. It's an opportunity for LLM experts to try to come in and pitch private, bespoke software solutions for a better deal than whatever the Access guy had promised 20 years ago. Because of the long-term maintenance challenges that still plague code that's too LLM-colored, I wouldn't want to be that expert pitching that work, but it's an opportunity for some ambitious folks for sure.

10 hours agoswatcoder

> a lot of core sheets I see in businesses need more structure round them

We had this decades ago; it was called dBase, but FoxPro (pre-Microsoft) was great too. Visual FoxPro and MS Access were a brutal downgrade of every good aspect of it.

Imagine if today some startup offered a full-stack(TM) platform that included an IDE, a language with SQL-like features, a visual UI designer, and a database; generated small standalone binaries; was performant; and was smaller than most web homepages.

There are modern options, like Servoy or Lianja, but they're too "cloudy" to be considered equivalents.

Edit: seems like there's OpenXava too, but that is Java-based, too hardcore for non-professional programmers IMO. The beauty of xBase was that even a highschooler could whip out a decent business application if the requirements were modest.

9 hours agoASalazarMX

I've yet to see a spreadsheet workflow successfully replaced by something else.

10 hours agonesarkvechnep

Programming in a spreadsheet is an anti-pattern. Does anyone review your workflow? Write tests for it? Use a real programming language; a notebook at least.

8 hours agoesafak

Streamlit apps or similar are doing a great job at this where I'm at.

As simple to build and deploy as Excel, but with the right data types, the right UI, the right access and version control, the right programming language that LLMs understand, the right SW ecosystem and packages, etc.

10 hours agocrubier

Are they actually as simple to deploy as Excel? My guess would be that most Streamlit apps never make it further than the computer they're written on.

9 hours agoSauntSolaire

If you have the right tooling (e.g. Streamlit.io) then yes, literally.

To 'deploy' an Excel file I go to Office365 and create my excel file and hit share

To 'deploy' a Streamlit file I go to Streamlit, write my single-file Python code (can be a one-liner), and hit share.
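
For illustration, a minimal sketch of the kind of thing that stands in for a shared tracking sheet (the 'amount' column name is hypothetical; the rest are standard streamlit/pandas calls):

    import pandas as pd
    import streamlit as st

    st.title("Team expense tracker")
    uploaded = st.file_uploader("Upload the CSV export", type="csv")
    if uploaded:
        df = pd.read_csv(uploaded)
        st.dataframe(df)  # sortable table in the browser
        st.metric("Total spend", f"{df['amount'].sum():,.2f}")

Run it with "streamlit run app.py" locally, or push it to Streamlit Community Cloud and share the URL.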

7 hours agocrubier

Maybe the strategy in those cases is to augment the core spreadsheet with tools that can unit test it, broadcast changes, etc.

8 hours agotehjoker

Better security. Better availability. Less chance of losing data.

Assuming the SaaS is implemented competently, of course. Otherwise there's not much advantage.

10 hours agojimbokun

To nitpick, it's more like an 800% gain in productivity. Cost actually increased.

I refer here to my experience as a solo developer.

With AI assistance I don't spend less hours coding, but more.

There is the thrill of shipping, much more quickly, relevant features that had been sleeping in my drawers for ages. Each hour of coding just delivers 8x more features and bug fixes.

Also, whereas I spent a few dozen dollars per month on server costs, I now also spend an equivalent amount on subscriptions and API calls to LLM services for this AI assisted coding. Worth every penny.

So while productivity increased manifold, absolute cost actually increased as well for me.

21 minutes agoseinecle

As I said in a previous post:

I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we've never seen it attributed):

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!” And it ‘is’ a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.

So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.

If your job is pumping out low-effort websites that are essentially marketing tools for small businesses, it must feel like magic. I think the more magical it feels for your use case, the less likely your use case will be earning you a living 2 years from now.

6 hours agoDrewADesign

Yeah, I think the more your job demands correctness in novel scenarios the less impressed you are with these shiny demos. I encourage anyone to pause the demo once the thing is generated and stare at what it did. Is it genuinely correct and impressive? Are you impressed because it made a thing generally shaped like what you expected, or because it would be genuinely impressive (or even adequate) if a person did it?

5 hours agojanalsncm

These kind of future prediction posts keep coming, and I'm tired of them. Reality is always more boring, less extreme, and slower at changing, because there are too many factors involved, and the authors never account for everything.

Maybe we should collect all of these predictions, then go back in 5-10 years and see if anyone was actually right.

8 hours agoblauditore

Despite a couple forward-looking statements, I didn’t read this as a prediction. It seems more of a subjective/anecdotal assessment of where things are in December 2025. (Yes, with some conjecture about the implications for next year.)

Overall, it echoes my experience with Claude Opus 4.5 in particular. We've passed a threshold (one of several, no doubt).

8 hours agotobyjsullivan

Just to test out the OP article's theory, I was about to write some unit tests, so I decided to let Opus 4.5 have a go. It did a pretty good job, but I spent probably as much time parsing what it had done as I would have spent writing the code from scratch. I still needed to clean it up, and of course, unsurprisingly, it had made a few tests that only really exercised the mocking it had set up. The kind of mistake I wouldn't be caught dead sending in for peer review.
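
To give a made-up but representative flavour of that failure mode (pytest-style, all names invented): the assertion only proves the mock was wired up.

    from unittest.mock import patch

    # Toy stand-ins for the real code under test
    def fetch_line_items(customer_id):
        raise RuntimeError("hits the database in real life")

    def calculate_invoice(customer_id):
        items = fetch_line_items(customer_id)
        return sum(i["price"] * i["qty"] for i in items)

    @patch(f"{__name__}.fetch_line_items")
    def test_invoice_fetches_items(mock_fetch):
        mock_fetch.return_value = []
        calculate_invoice(customer_id=42)
        # Passes forever, but only proves the stub was called;
        # no actual pricing behaviour is exercised.
        mock_fetch.assert_called_once_with(42)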

I'm glad the OP feels fine just letting Opus do whatever it wants without a pause to look under the covers, and perhaps we all have to learn to stop worrying and love the LLM? But I think really, here and now, we're witness to just another hype article written by a professional blogger and speaker, who's highly motivated to write engagement bait like this.

an hour agoaeonfox

In your heart you either believe something or you don’t. I am happy to live in a world where so many people follow the courage of their convictions, even if they sound insane or uncomfortable.

7 hours agorprend

Yeah, yeah, there is this guy with a weird moustache with some crazy ideas that we are being held down by this other group of people. We should definitely follow him. He sounds crazy, but he seems so convincing. And look at the cool insignia and symbols! Did you know this salute dates back to the Romans? - You, circa 1920.

5 hours agothe_overseer

I love the hand drawn chart. Apparently "Open Source" was invented around 2005, which significantly reduced development cost, then AWS was invented in 2011 or so and made development even cheaper, but then, oh no, in 2018 "complexity" happened and development became harder!

8 hours agoCobrastanJorji

I don't read this as when open source was invented, but when it happened for the corporate world. In 2002 it was a very reasonable choice for $BIG_COMPANY to use a proprietary web server, e.g. IIS. In 2008 that would have been really weird.

8 hours agojcelerier

But why did that make development cheaper? An enterprise copy of Windows with IIS cost maybe a thousand bucks, right? Maybe there were more costs, my knowledge is, y'know, 23 years out of date.

7 hours agoCobrastanJorji

AI-drawn chart and AI-written article

4 hours agokhannn

In 2018 we had Kubernetes, which improved development speed another 300%!

8 hours agozqna

Kubernetes with production workloads allowed companies to finally realize the "fail fast" ethos.

8 hours agoRajT88

With Kubernetes being this hated, it must be boring enough to safely start using!

7 hours agonrhrjrjrjtntbt

It's boring, alright. You're gonna curse this moment when you realize you've become a yaml engineer.

Actually, I bet AI is pretty good at creating those damn manifests. It's mostly regurgitation after all.

7 hours agotormeh

Good write-up. I don't disagree with any of his points, but does anybody here have practical suggestions on how to move forward and think about one's career? I've been a frontend dev (with a little full stack) for a few years now, and much of the modern landscape concerns me, specifically with how I should be positioning myself.

I hear vague suggestions like "get better at the business domain" and other things like that. I'm not discounting any of that, but what does this actually mean or look like in your day-to-day life? I'm working at a mid-sized company right now. I use Cursor and some other tools, but I can't help but wonder if I'm still falling behind or doing something wrong.

Does anybody have any thoughts or suggestions on this? The landscape and horizon just seems so foggy to me right now.

11 hours agoBigHatLogan

Author here, thanks for your kind words!

I think it's about looking at what you're building and proactively suggesting/prototyping what else could be useful for the business. This does get tricky in large corps where things are often quite siloed, but can you think "one step ahead" of the product requirements and build that as well?

I think regardless of whether you build it, it's a good exercise to run on any project - what would you build next, and what does the business actually want? If those two are getting closer in your head, then I think it's a positive sign you are understanding the domain.

11 hours agomartinald

I think you're right about trying to stay one step ahead of product requirements. Maybe my issue here is that I'm looking for another "path" where one might not exist, at least not a concretely defined one. From childhood to now, things were set in front of me and I just sort of did them, but now it feels like we're entering a real fog of war.

It would be helpful, as you suggest, to start shifting away from "I code based on concrete specs" to "I discover solutions for the business."

Thanks for the reply (and for the original essay). It has given me a lot to chew on.

10 hours agoBigHatLogan

ARE you the author? Or did you prompt AI to get this?

3 hours agosatisfice

Blind leading the blind, but my thinking is this:

1. Use the tools to their fullest extent, push boundaries, and figure out what works and what doesn't

2. Be more than your tools

As long as you + LLM is significantly more valuable than just an LLM, you'll be employed. I don't know how "practical" this advice is, because it's basically what you're already doing, but it's how I'm thinking about it.

10 hours agocolonCapitalDee

Realistically, someone else + LLM at -10% compensation will be employed

9 hours agoares623

Then why wasn't someone else employed at -10% compensation instead of you before LLMs?

8 hours agoubercow13

Let's say LLMs add 50 "skill points" to your output. Developer A is at 60 skill points in terms of coding ability, developer B is at 40. The differential between them looks large. Now add LLMs. Developer A is at 110 skill points, developer B is at 90. Same difference, but now it doesn't look as large.

The (perceived, alleged) augmentation by LLMs makes individual differences in developer skill seem less important. From the business's perspective, you are not getting much less by hiring a less skilled developer vs. hiring a more skilled one, even if both of them would be using LLMs on the job.

Obviously, real life is more complicated than this, but that's a rough idea of what the CEO and the shareholders are grappling with from a talent acquisition standpoint.

8 hours agobitwize

Don't chase specific technologies, especially not ones driven by for-profit companies. Chase ideas, become great in one slice of the industry, and at the very least you can always fall back on that. Once established within a domain, you can always try to branch out, and feel a lot more comfortable doing so.

Ultimately, software is for doing something, and that something can be a whole range of things. If you become really good at just a slice of that, things get a lot easier regardless of the general state of the industry.

11 hours agoembedding-shape

Thanks for the response. When you say "one slice of the industry", is the suggestion to understand the core business of whatever I'm building instead of being the "specs to code" person? I guess this is where the advice starts to become fuzzy and vague for me.

10 hours agoBigHatLogan

It's always been foggy. Even without AI, you were always at risk of having your field disrupted by some tech you didn't see coming.

AI will probably replace the bottom ~30-70% (depends who you ask) of dev jobs. Don't get caught in the dead zone when the bottom falls out.

Exactly how we'll train good devs in the future, if we don't give them a financially stable environment to learn in while they're bad, is an open question.

10 hours agonick486

My suggestion would be to move to a higher level of abstraction, to change the way you view the system.

Maybe becoming full stack? Maybe understanding the industry a little deeper? Maybe analyzing your company's competitors better? That would increase your value for the business (a bit of overlap with product management though). Assuming you can now deliver the expected tech part more easily, that's what I'd do.

As for me, I've moved to a permanent product management position.

10 hours agoMrPapz

Great question, hard to quickly answer.

My $0.02: show you can tackle harder problems. That includes knowing which problems matter. That happens by learning a "domain", versus just learning a tool (e.g. web development) in a domain.

Change is scary, but that's because most aren't willing to change. Part of the "scare" is the fear of lost investment (e.g. picking the wrong major or career). I can appreciate that, but with a little flexibility, that investment can be repurposed quicker today than pre-2022, thanks to AI.

AI is just another tool, treat it like a partner not a replacement. That can also include learning a domain. Ask AI how a given process works, its history, regulations, etc. Go confirm what it says. Have it break it down. We now can learn faster than ever before. Trust but verify.

You are using Cursor, that shows a willingness to try new things. Now try to move faster than before, go deeper into the challenges. That is always going to be valued.

10 hours agoronald_petty

Also blind leading the blind here but I see two paths.

1) Specialize in product engineering, which means taking on more business responsibility. Maybe it means building your own products, or maybe it means trying to get yourself into a more customer-facing or managerial role? I'm not very sure. Probably do this if you think AI will be replacing most programmers.

2) Specialize in hard programming problems that AI can't do. Frontend is probably most at risk, low-level systems programming least at risk. Learn Rust or C/C++, or maybe backend (C#/Java/Go) if you don't want to transition all the way to low-level systems stuff.

That being said I don't think AI is really going to replace us anytime soon.

9 hours agosamdoesnothing

Sheep farming sounds nice. Or making wooden furniture. Something physical.

11 hours agoisoprophlex

Nobody knows the answer.

Answers I see are typically "be a product manager" or "start your own business" which obviously 95% of developers can't/don't want to do.

11 hours agocatigula

It's not just about "building" ... who is going to maintain all this new sub-par code pushed to production every day?

Who is going to patch all bugs, edge cases and security vulnerabilities?

9 hours agovb-8448

Nobody.

In fact, looking at the vibecoders' enthusiasm for serverless, I'm expecting a sharp spike in surprise cloud bills, never mind anyone thinking about edge cases.

5 hours agoHavoc

I hear this argument all the time, but it seems to leave out code reviews.

8 hours agomountainriver

In teams of high performers who have built a lot of mutual trust, code reviews are mostly a formality and a stop gap against the big, obvious accidental blunders. "LGTM!"

I do not know or trust the agents that are putting out all this code, and the code review process is very different.

Watching the Copilot code review plugin complain about Agent code on top of it all has been quite an experience.

7 hours agonsxwolf

I happily got rid of a legacy application (lost the pitch, another agency now must deal with the shit) I inherited as a somewhat technically savvy person about a year ago.

It was built by real people. Not a single line of AI slop in it. It was the most fragile crap I ever had the misfortune to witness. Even in my wildest vibe-coding-a-prototype moments, I was not able to get the AI to produce that amount of anti-patterns, bad shit and code that would have had Hitchcock running.

I think we would be shocked to see what kind of human slop out there is running in production. The scale might change, but at least in this example, if I had rebuilt the app purely by vibe coding the code quality and the security of the code would actually have improved. Even with the lowest vibe coding effort thinkable.

I am not in any way condoning (is this the right word) bad practices, or shipping vibe code into prod without very, very thorough review. Far from it. I am just trying to provide a counter point to the narrative, that at least in the medium sized business I got to know in my time consulting/working in agencies, I have seen quite a metric ton of slop, that would make coding agents shiver.

8 hours agosdoering

DigitalOcean version 1 was a duct-taped-together mash of bash, cron jobs and Perl; 2 people out of 12 understood it, 1 knew how to operate it. It worked, but it was insane, like really, really insane. 0% chance the original ChatGPT would have written something as bad as DO v1.

8 hours agoneom

Are you suggesting the original ChatGPT could build DigitalOcean?

6 hours agoan0malous

To me, built and written are not the same. Built: OK, maybe that's an exaggeration. But could an early "this is pretty good at code" LLM have written DigitalOcean v1? I think it could, yes (no offense Jeff). In terms of volume of code and size of architecture, yeah, it was big and complex, but it was literally a bunch of relatively simple cron, bash and Perl, and the whole thing was very... sloppy (because we were moving very quickly). DigitalOcean as I last knew it (a very long time ago) had transformed into a very well-written modern Go shop. (Source: I am part of the "founding team" or whatever.)

5 hours agoneom

AI doesn't overcome the limits of whoever is giving the input; just like in pre-AI-era software, if the input sucks, the output sucks.

What changed is the speed: AI and vibe coding just gave a turbo boost to everything you described. The amount of code will go parabolic (maybe it already is) and, in the mid-term, we will need even more SWEs/SREs/DevOps/security/etc. to keep up.

8 hours agovb-8448

The argument isn’t that all slop is AI, but that all AI is slop.

8 hours agogeon

Turns out building enterprise software has more in common with generating slop than not.

8 hours agobaq

The theory is very simple: you tell the agent to patch the bug. Now the practice, though...

9 hours agosoco

Yeah, in practice: would you like to board a Boeing 747 where some of the bugs were patched by agents?

What percentage risk of malfunction are you going to accept as a passenger?

8 hours agofullstackwife

No. But most software products are nowhere near that sensitive and very few of them are developed with the level of caution and rigor appropriate for a safety-critical component.

8 hours agoemodendroket

>> yeah, in practice: would you like to onboard a Boeing 747 where some of the bugs were patched by some agents,

In this case, the traditional human process hasn't gone well either.

8 hours agoTuringNYC

It is working great as long as it is adhered to and budgeted.

8 hours agogeon

The human process includes the understanding that mistakes will get people killed.

8 hours agofullstackwife

The bugs were mostly caused by MBAs, who one assumes will remain.

8 hours agodboreham

> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.

I'm not sure about this. The tests I've gotten out in a few hours are the kind I'd approve if another dev sent them, but they haven't really ended up finding meaningful issues.

8 hours agoNormal_gaussian

Just to be clear, they weren't stupid 'is 1+1=2' type tests.

I had the agent scan the UX of the app being built, find all the common flows and save them to a markdown file.

I then asked the agent to find edge cases for them and come up with tests for those scenarios. I then set off parallel subagents to develop the test suite.

It found some really interesting edge cases running them - so even if they never failed again there is value there.

I do realise in hindsight it makes it sound like the tests were just a load of nonsense. I was blown away with how well Claude Code + Opus 4.5 + 6 parallel subagents handled this.

8 hours agomartinald

Have you noticed how it's never "I got this awesome code!"? It's always "I got good code, trust me".

People say their prompts are good, awesome code is being generated, it solved a month's worth of work in a minute. Nobody comes with receipts.

8 hours agokace91

I keep seeing posts like this so I decided to video record all my LLM coding sessions and post them on YouTube. Early days, I only had the idea on Saturday.

8 hours agodboreham

I find I get better tests if I use agents to generate tests.

8 hours agoAeolun

This article mentions cost to ship, but ignores that the largest cost of any software project isn't consumed by how long it takes to get to market, but by maintenance and addition of new features. How is agentic coding doing there? I've only seen huge, unmaintainable messes so far.

11 hours agoJohnMakin

While this is true, I think some fields like game development may not always have this problem. If your goal is to release a non-upgradable game (FPS, arcade, single-player titles), maintenance may be much less important than shipping.

edit: typos

10 hours agop2detar

I'm trying to understand where this kind of thinking comes from. I'm not trying to belittle you, I sincerely want to know: Are you aware that everyone writing software has the goal of releasing software so perfect it never needs an upgrade? Are you aware that we've all learned that that's impossible?

8 hours agokrupan

> I'm trying to understand where this kind of thinking comes from.

I used to be a game developer.

8 hours agop2detar

This was basically true until consoles started getting an online element. The up-front testing was more serious compared to the complexity of the games. There were still bugs, but there was no way to upgrade short of a recall.

8 hours agotehjoker

And why did we abandon this model?

Also, computer games existed at the same time as consoles. People were playing games loaded from floppy disks on computers back in the early 1980's

8 hours agokrupan

I'm not saying that this model is profitable in the current environment, but it did exist in a real world environment at one point, making the point that certain processes are compatible with useful products, but maybe not leading edge competitive products that need to make a profit currently.

7 hours agotehjoker

Nobody wants to ship that! They want perpetually upgraded live service games instead, because that's recurring revenue.

8 hours agopjc50

I think that is an applicable domain, but the problem is that every gamer I know who is not in the tech industry is vehemently opposed to AI.

9 hours agoliampulles

Well, they just love complaining. You won't find many who profess to like DLC, yet that sells.

8 hours agoemodendroket

one year in, AI slop > Human-written slop

10 hours agobdangubic

I am highly skeptical of this claim.

10 hours agoJohnMakin

Personal experience, not a general claim. I have been 30 years in the industry and have seen a lot of human-written code…

10 hours agobdangubic

Agreed. I think a core problem is many developers (on HN) don't realise how "bad" so much human written code is.

I've seen unbelievably complex logistics logic coded in... WordPress templates and plugins to take a random example. Actually virtually impossible to figure out - but AI can actually extract all the logic pretty well now.

10 hours agomartinald

> Agreed. I think a core problem is many developers (on HN) don't realise how "bad" so much human written code is.

what do you think "AI" is trained on exactly?

7 hours agoblibble

finally the right question! I would upvote you 1,000 times if I could!

This is why they need a senior/seasoned developer behind them. For things that can simply be learned directly (e.g. from man pages/docs) it rocks without guidance. For other things it needs guidance.

5 hours agobdangubic

There are many millions of people writing code… that's way too many to get any good quality. You might get lucky and get involved with a codebase which does not make you dizzy (or outright sick), but most of us are not that lucky.

10 hours agobdangubic

Does this mean the AI slop is higher quality or that there's more of it?

10 hours agojimbokun

certainly not more of it now, we have decades and decades of human-written code if I am understanding the question correctly.

All I am saying is that the "anti-AI" HN crowd literally glorifies human-written code every second of every day here: "AI slop this, AI code unmaintainable that..." I have been a contractor for many years now, usually brought on to fix shit, and human-written code is in the vast majority of cases much worse than AI-generated code. The sample size of the latter is smaller, but my general argument remains. I think people who write these "AI slop" comments should pick their favorite language/framework/..., go to GitHub, browse through codebases written by humans (ignore commits before xxxxxx), and then see if they like what they see :)

8 hours agobdangubic

I contracted briefly on a post-LLM-boom Excel modernization project (which ended up being consulting mainly, because I had to spend all my time explaining key considerations for a long-running software project that would fit their domain).

The company had already tried to push 2 poor data analysts who kind of knew Python into the role of vibe coding a Python desktop application that they would then distribute to users. In the best case scenario, these people would have vibe coded an application where the state was held in the UI, with no concept of architectural separation and no prospect of understanding what the code was doing a couple of months from inception (except through the lens of AI sycophancy), all packaged as a desktop application which would generate Excel spreadsheets that they would then send to each other via email (for some reason, this is what they wanted - probably because it is what they know).

You can't blame the business for this, because there are no technical people in these orgs. They were very smart people in this case, doing high-end consultancy work themselves, but they are not technical. If I tried to do vibe chemistry, I'm sure it would be equally disastrous.

The only thing vibe coding unlocks for these orgs by themselves is to run headfirst into an application which does horrendous things with customer data. It doesn't free up time for me as the experienced dev to bring the cost down, because again, there is so much work needed to bring these orgs to the point where they can actually run and own an internal piece of software that I'm not doing much coding anyway.

8 hours agoliampulles

> post-LLM-boom Excel modernization project

Well there is a terrifying thought

5 hours agoHavoc

From the article:

"I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool"

Did you catch what the author didn't mention? Are the tests any good? Are they even tests? I'm playing with Opus now (best entertainment for a coder), it is excellent at writing fake code and fabricating results. It wrote me a test that validates an extremely complex utility, and the test passed!

What was the test? Call utility with invalid parameters and check that there is an error.
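
Roughly this kind of thing, with the names invented:

    import pytest

    # Stand-in for the "extremely complex utility"
    def run_utility(config):
        if not isinstance(config, dict):
            raise ValueError("invalid config")
        ...  # the hundreds of lines that actually matter
        return {"status": "ok"}

    def test_run_utility():
        # Green checkmark, but only the guard clause at the top is ever
        # reached; none of the real behaviour is checked.
        with pytest.raises(ValueError):
            run_utility(config=None)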

17 minutes ago112233

I think the whole software industry has tried to obscure the fact that most companies who hire software engineers are writing exactly the same code as every other company. How many people here have written the same goddamn webapp at the last 3 companies they've been to? Anyone ever wonder why nobody just publishes blueprints to software and licenses that blueprint to a single engineer to customize? Because there's a lot less money in doing that, versus selling a lot more software add-ons/SaaS/etc.

There is no value-add to hiring software engineers to build basic apps. That's what AI will be good for: repeating what has already been written and published to the web somewhere. The commoditized software that we shouldn't have been paying to write to begin with.

But AI won't help you with all the rest of the cost. The maintenance, which is 80% of your cost anyway. The infrastructure, monitoring, logging, metrics, VCS hosting, security scanning, QA, product manager, designer, data scientist, sales/marketing, customer support. All of that is part of the cost of building and running the software. The software engineers that churn out the initial app is a smaller cost than it seems. And we're still gonna need skilled engineers to use the AI, because AI is an idiot savant.

Personally I think 50% cost reduction in human engineers is the best you can expect. That's not nothing, but that's probably like a 10% savings on total revenue expenditure.

6 hours ago0xbadcafebee

Man, that's a big title. I can't wait to see the data on how the cost has dropped so far.

> AI Agents however in my mind massively reduce...

Nevermind. It's a vibe 90%.

8 hours agoCobrastanJorji

That. I was expecting some overview of the last couple of decades in a "There's no Silver Bullet" fashion.

Instead it's some guy that claims it takes a team to make CI/CD for something he can vibe-code in a day, and that agentic code totally solves the complexity problems caused by too much React.

8 hours agomarcosdumay

Even if that were true, so we've made development needlessly more complicated, only to gain back the time lost by running numberwang across enough datacenters to fill a small country? You haven't abstracted anything away, Morty, just created another layer of shit.

https://news.ycombinator.com/item?id=22601623

7 hours agobitwize

Oh, if LLMs did actually solve the problem of too much React, that would by itself drop the cost of doing the software the author cares about by close to 90%.

And yes, it's a completely self-imposed problem that many people don't have at all. But LLMs make it worse, and the author is celebrating that it's worse now.

7 hours agomarcosdumay

> written an entire unit/integration test suite in a few hours

It's often hard to ground how "good" blog writers are, but tidbits like this make it easy to disregard the author's opinions. I've worked in many codebases where the test writers share the author's sentiment. They are awful, and the tests are at best useless and often harmful.

Getting to this point in your career without understanding how to write effective tests is a major red flag.

11 hours agopaoaoaks

I've used LLMs to help me write large sets of test cases, but it requires a lot of iteration, and the mistakes they make are both very common and insidious.

Stuff like reimplementing large amounts of the code inside the tests because testing the actual code is "too hard", spending inordinate amounts of time covering every single edge case on some tiny bit of input processing unrelated to the main business logic, mocking out the code under test, changing failing tests to match obviously incorrect behavior... basically all the mistakes you expect to see totally green devs who don't understand the purpose of tests making.
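
The "reimplementing the code inside the test" one is especially sneaky because it looks thorough. A made-up but representative example:

    def slugify(title):
        return title.lower().replace(" ", "-")

    def test_slugify():
        title = "Hello World"
        # Re-derives the expected value with the exact logic under test,
        # so the assertion can never catch a real bug in slugify().
        expected = title.lower().replace(" ", "-")
        assert slugify(title) == expected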

It saves a shitload of time setting up all the scaffolding and whatnot, but unless they very carefully reviewed and either manually edited or iterated a lot with the LLM I would be almost certain the tests were garbage given my experiences.

(This is with fairly current models too btw - mostly sonnet 4 and 4.5, also in fairness to the LLM a shocking proportion of tests written by real people that I've read are also unhelpful garbage, I can't imagine the training data is of great quality)

10 hours agop1necone

But we have 500% code coverage?!?!

9 hours agoIceDane

I strongly agree. It may be even more than 90%. For example, yesterday I was able to use Lovable (and Claude Code web) on my phone to build out an almost 1:1 replacement (for my purposes) for an expensive subscription-based workout app: https://strengthquest.lovable.app/

This is simply an unimaginable level of productivity: in one day, on my phone, I can essentially build and replace expensive software. Unreal days we are living in.

4 hours agomaddmann

Did I miss something or is there actually no evidence provided that costs have dropped?

11 hours agorecursive

Well... no evidence, but there's obviously a graph with a line going places!

11 hours agoisoprophlex

Especially if you factor in the fact that the AI companies are losing money for now, and that it's not sustainable.

8 hours agoTheRoque

Yeah, when does the other shoe drop? After getting addicted to AI coding, we could suddenly have the rug pulled on price.

8 hours agodjmips

I think the cost of prototyping has definitely gone down.

The cost of developing production-grade software, which you want people to rely on and pay for, has not gone down so much. The "weak" link is still the human.

Debugging complex production issues needs intimate knowledge of the code. Not gonna happen in the next 3-4 years at least.

9 hours agodevnull3

These tools do two things:

1. Rob one’s understanding of the design and implementation

2. Give the unqualified enough confidence to contribute

4 hours agoglouwbug

> Software engineering has got - in my opinion, often needlessly - complicated, with people rushing to very labour intensive patterns such as TDD, microservices, super complex React frontends and Kubernetes.

TDD as defined by Kent Beck (https://tidyfirst.substack.com/p/canon-tdd ) doesn't belong in that list. Beck's TDD is a way to order work you'd do anyway: slice the requirement, automate checks to confirm behavior and catch regressions, and refactor to keep the code healthy. It's not a bloated workflow, and it generalizes well to practices like property-based testing and design-by-contract.
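
For a sense of how small the loop is, a hypothetical red-green-refactor slice in Python:

    # 1. Red: write a failing test for the next thin slice of behaviour.
    def test_refunds_cancel_out():
        assert order_total([("book", 20), ("refund", -20)]) == 0

    # 2. Green: the simplest implementation that passes.
    def order_total(items):
        return sum(price for _, price in items)

    # 3. Refactor: tidy names and structure while the test stays green.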

7 hours agovinipolicena

How is it that some people feel comfortable just making up numerical statistics out of thin air?

3 hours agofoobarbecue

Where are the billions of dollars spent on GPUs and new data centers accounted for in this estimation?

9 hours agoSoftTalker

Yeah, completely agree. These companies will eventually push these costs to the consumer (might be in 1-2 years, but it will eventually happen), and through regulatory capture they'll make it harder and harder to run local AI models for "security" reasons.

9 hours agobdavid21wnec

From the creators of the 10x engineer:

0.1x engineering cost

19 minutes agogloosx

The author teaches AI workshops. Nothing wrong with that, but I think it should be disclosed here. A lot of money is riding on LLMs being financially successful which explains a lot of the hype.

8 hours agorudedogg

I am a believer in the new agentic coding tools (I wasn't 6 months ago) but the delays and time it takes to build something haven't really changed even though everyone I know is using them. What I see is what has always been there:

Product doesn't understand the product because if it was easy to understand then someone else would have solved the problem already and we wouldn't have jobs. This means you need to iterate and discuss and figure out just like always. The iterations can be bolder, bigger, etc and maybe a bit faster but ultimately software scales np so a 10x improvement in -individual- capability doesn't scale to 10x improvement in -organizational- capability.

Let me put it another way. If your problem was so simple you could write a 200 word prompt to fully articulate it then you probably don't have much of a moat and aren't providing enough value to be competitive.

6 hours agojmward01

I must be holding it wrong then, because I do use Claude Code all the time and I do think it's quite impressive… still, I can't see where the productivity gains go, nor am I even sure they exist (they might, I just can't tell for sure!)

10 hours agojdmoreira

If you go back and forth with the model and discuss/approve every change it makes, that's the problem.

You need to give it a biggish thing so it can work on it for 15 minutes, and in those 15 minutes you prepare the next one(s).

8 hours agohurturue

Sure. But am I supposed to still understand that code at some point? Am I supposed to ask other team members to review and approve that code as if I had written it?

I'm still trying to ship quality work by the same standards I had 3 or 5 years ago.

8 hours agojdmoreira

When compilers appeared, assembly programmers would complain all day about how ugly and inefficient the generated code was.

If you want to get the productivity gain, you need to figure out how to solve the code review problem.

7 hours agohurturue

Your solution is, "just ship worse code, it's probably fine"?

I think it's your standards that have fallen 90%…

2 hours agobccdee

I keep seeing articles like these pop up. I am in the industry but not in the "AI" industry. What I have no concept of is whether the current subsidized, VC-funded pricing is anywhere close to what the final product will cost. I always fall back to the Uber paradox. Yes, it was great at first; now it's 3x what it cost and has only given cabs pricing power. This was good for consumers to start, but now it's just another part of the K-shaped economy. So is that ultimately where AI goes? The top percent can afford a high monthly subscription and the not-so-fortunate get their free 5 minutes per month.

10 hours agobdavid21wnec

But even if that did happen, the open source models are excellent and cost virtually nothing?

Like I prefer Opus 4.5 and Gemini 3 to the open weights models, but if Anthropic or Google upped the pricing 10x then everyone would switch to the open weights models.

Arguably you could say that the Chinese labs may stop releasing them, true, but even if all model development stopped today then they'd still be extremely useful and a decent competitor.

10 hours agomartinald

Again I’m not in the “AI” industry so I don’t fully understand the economics and don’t run open models locally.

What's the cost to run this stuff locally, and what type of hardware is required? When you say virtually nothing, do you mean that's because you already have a $2k laptop or GPU?

Again, I am only asking because I don't know. Would these local models run OK on my 2016 Intel Mac Pro, or do I need to upgrade to the latest M4 chip with 32GB memory for it to work correctly?

10 hours agobdavid21wnec

Basically yes, the useful models need a modernish GPU to get inference running at a usable speed. You can get smaller-parameter models (3B/7B) running on older laptops; they just won't produce output at a useful speed.

7 hours agorimeice

The large open-weights models aren't really usable for local running (even with current hardware), but multiple providers compete on running inference for you, so it's reasonable to assume that there is and will be a functioning marketplace.

8 hours agocriemen

This wouldn't be the first time that the cost of software radically dropped. It happened back during the early 1960s for the first time when IBM introduced the System 360, which included backward compatibility for the 1401. Prior to this point, the maximum lifespan of software was tied to that of the computer in question. The software would be re-written for the next architecture, every time a new computer was purchased.

The advent of the PC, and the appearance of Turbo Pascal, Visual Basic, and spreadsheets that could be automated made it possible for almost anyone to write useful applications.

If it gets cheaper to write code, we'll just find more uses for it.

7 hours agomikewarot

Copying GPL code, with global search&replace of the brand names, has always lowered the cost of software 'development' dramatically.

11 hours agoneilv

I would love to see where I can find full test coverage to paste in for an internal tool that I can search&replace on to get it working…

10 hours agobdangubic

If the cost of building software dropped so much - where is that software?..

Was there an explosion of useful features in any software product you use? A jump in quality? Anything tangible an end user can see?..

10 hours agoazov
[deleted]
7 hours ago

I'm really liking it for writing boring code.

As an example, I wanted a plugin for Visual Studio. In the past I would have spent hours on it or just not bothered, but I used Claude Code to write it. It isn't beautiful or interesting code, it lacks tests, but it works and saves me time. It isn't worth anything, won't ever be deployed into production; I'll likely share it but won't try to monetise it. It is boring, ugly code but more than good enough for its purpose.

Writing little utility apps has never been simpler and these are probably 90% cheaper

8 hours agoed_elliott_asc

A plugin that does what exactly? A lot of comments here and under other posts are just declaring things with the following template: "I wanted to do X; before it would have taken me N hours, but now with LLM tool L it has taken me way less time. I can't share anything about X, but LLM tool L is very useful. Just trust me, bro"

7 hours agoskydhash

I think AI can be a really powerful tool. I am more productive with it than without it, but a lot of my time interacting with AI is spent reviewing its code, finding problems with it (I always find some), telling it what to do differently multiple times, and eventually giving up and fixing the code by hand. But it has definitely reduced the average time it takes me to implement features. I also worry that not everyone will be responsible and check/fix AI-generated code.

8 hours agosharpy

pretty decent article - but what it misses is that most of these agents are trained on bad code - which is open source.

so what does this mean in practice? for people working on proprietary systems (cost will never go down) - the code is not on github, maybe hosted on an internal VCS - bitbucket etc. the agents were never trained on that code - yeah they might help with docs (but are they using the latest docs?)

for others - the agents spit out bad code, make assumptions that don't hold, and call APIs that don't exist or have been deprecated.

for each of those you need an experienced builder who has 1. technical know-how 2. domain expertise. so has the cost of experienced builders gone down? I don't think so - I think it has gone up.

what people are vibecoding out there is mostly tools / apps that deal in closed systems (never really interact with the outside world), scripts where AI can infer based on what was done before, etc. but are these people building anything new?

I have also noticed there's a huge conflation with regard to cost & complexity. ZIRP drove people to build software on very complex abstractions e.g. kubernetes, nextjs, microservices etc - hence people thought they needed huge armies of people. however, we also know the inverse is true: most software can be built by teams of 1-3 people. we have countless proof of this.

so people think the way to reduce cost is to use AI agents instead of addressing the problem head-on - building software in a simpler manner. will AI help? yeah, but not to the extent of what is being sold or written daily.

8 hours agodzonga

> these agents are trained on bad code - which is open source.

This is doubtful and not what I've seen in over 30 years in the industry. People who are ashamed of their source code don't make it Open Source. In general, Open Source will be higher quality than closed source.

Sure, these days you will need to avoid github repositories made by students for their homework assignments. I don't think that's a problem.

8 hours agoTepix

The idea that LLMs were trained on miscellaneous scraped low quality code may have been true a year ago, but I suspect it is no longer true today

All of the major model vendors are competing on how well their models can code. The key to getting better code out of the model is improving the quality of the code that it is trained on.

Filtering training data for high quality code is easier than filtering for high quality data of other types.

My strong hunch is that the quality of code being used to train current frontier models is way higher than it was a year ago.

8 hours agosimonw

Then why is all my software slower, buggier, and with a worse UX?

11 hours agoan0malous
[deleted]
11 hours ago

Right? Past couple years software quality has taken a shit.

10 hours agoHumblyTossed

It's fascinating to read these comments - I believe everyone. Some are getting huge productivity gains and others very little - so perhaps we are not in the same business. I know that I've ranged over various work - all called software development and the variety of work was quite different - some I wouldn't call challenging but still needed a lot of manual labor - perhaps this is the type of work that finds easy wins from AI automation. Still other work was much more challenging but I've never really attempted to use AI in my work because it was forbidden by policy. I've used AI at home for fun projects and it has helped me with languages I've never used before but I've never come close to 90% productivity boost. Anyway, fascinating!

6 hours agodjmips

I agree with your observations, in my own job I cover a great deal of the aspects of all software development practices for a few clients. Probably something you'd normally have a bunch of different roles do. Not because of AI, I have been in this role since before the AI boom, this is just how agency work is sometimes.

My observation is that there is perhaps 15% of my job that has been boosted by AI by quite a lot, and the rest it hasn't touched much at all. Most of the job just isn't coding, basically. The code generation aspect is a bit flawed too, because to get good results I often spend more time collating requirements and engineering the prompt than I would have spent just doing it myself.

There is a sweet spot in there where the requirements were easy to write out, and the code was simple enough but there is a lot to write, that it's nice to not have to write it myself. But even then I am finding that AI is often not successful, and if it takes three tries to get it to do the work properly then there is no productivity gain. Often enough time is lost to the failed attempts.

Usually there isn't that much code to write, but it's fairly complex and needs to be correct, which is where I find LLMs have too many failed attempts and waste time.

(I am an 18+ year "everything" developer, my experiences are from using Claude Code)

5 hours agoehnto

Can someone help me out how to get started in this kind of coding setup?

I haven't written production code for the last 8 years, but I have prior development experience of about 17 years (ranging from C++, full stack, .NET, PHP and a bunch of other stuff).

I used AI at a personal level and know the basics. I used Claude/GitHub to help me fix and write some pieces of code in languages I wasn't familiar with. But it seems like people are talking about and deploying large real-world projects in short-"er" amounts of time. An old colleague of mine whom I trust mentioned his startup is developing code 3x faster than we used to develop software.

Is there a resource that explains the current best practices (presumably it's all new)? Where do I even start?

6 hours agoBloondAndDoom

Betteridge's law proven correct once again. The answer to the headline is: no. Perhaps it will be true in the future, nobody knows.

I'm skeptical about the extent to which people publishing articles like this use AI to build non-trivial software, and by non-trivial I mean _imperfect_ codebases that have existed for a few years, battle tested, with scars from hotfixes to deal with fires and compromises to handle weird edge cases/workarounds, and especially a codebase where many developers have contributed over time.

Just this morning I was using Gemini 3 Pro working on some trivial feature. I asked it about how to go about solving an issue and it completely hallucinated a solution suggesting to use a non-existing function that was supposedly exposed by a library. This situation has been the norm in my experience for years now and, while it has improved over time, it's still a very, very common occurrence. If it can't handle these use cases to an acceptable degree of success, I just don't see how much I can trust it to take the reins and do it all with an agentic approach.

And this is just a pure usability perspective. If we consider the economics aspect, none of the AI services are profitable; they are all heavily subsidized by investor cash. Is it sustainable long term? Today it seems as if there is an infinite amount of cash, but my bet is that this will give out before the cost of building software drops by 90%.

8 hours agoxdc0

>I asked it about how to go about solving an issue and it completely hallucinated a solution suggesting to use a non-existing function that was supposedly exposed by a library.

Yeah, that's a huge pain point in LLMs. Personally, I'm way less impacted by them because my codebase is only minimally dependent on library stuff (by surface area) so if something doesn't exist or whatever, I can just tell the LLM to also implement the thing it hallucinated :P

These hallucinations are usually a good sign of "this logically should exist but it doesn't exist yet" as opposed to pure bs.

5 hours agoPannoniae
[deleted]
10 hours ago

> This takes a fairly large mindset shift, but the hard work is the conceptual thinking, not the typing.

But the hard work always was the conceptual thinking? At least at and beyond the Senior level, for me it was always the thinking that's the hard work, not converting the thoughts into code.

8 hours agocriemen
[deleted]
6 hours ago

Software Development is much more than writing code. Writing code may have become 90% easier, but a lot of the other development tasks haven't appreciably changed due to AI, although that might come. So, for now at least the answer to the question posed in the headline is no.

An exception might be building something that is well specified in advance, maybe because it's a direct copy of existing software.

7 hours agojdswain

It depends. For AI to work on large projects (I did a post on this forever ago in AI terms: https://getstream.io/blog/cursor-ai-large-projects/), the environment has to be right.

You need a staff-level engineer to guide it, plus great standardization and testing best practices. And yes, in that situation you can go 10-50x faster. Many teams/products are not in that environment though.

9 hours agotschellenbach

I work on a big ball of open source spaghetti and AI has become invaluable in helping me navigate my way through it. Even when it's wrong - it gives me valuable clues.

9 hours agoandybak

it has for me. I'm probably paying less than 10%, saving on SaaS, occasional contract fees for custom integrations, and Zapier fees linking them together.

I've no idea what's going on in the enterprise space, but in the small 1-10 employee space, absolutely

7 hours agodubeye

> One objection I hear a lot is that LLMs are only good at greenfield projects. I'd push back hard on this. I've spent plenty of time trying to understand 3-year-old+ codebases where everyone who wrote it has left.

Where I am, 3 years old is greenfield, and old and large is 20 years old with 8 million lines of nasty C++. I'll have to wait a bit more, I think…

9 hours agoAgingcoder

The argument is that building anew is 90% cheaper.

I can’t see LLMs contributing to anything this size when they can’t remember what I said 4 messages ago

4 hours agoglouwbug

How would we design a rigorous study that measures total cost of ownership when teams integrate AI assistance, including later maintenance and defect rates, rather than just initial output speed?

6 hours agonelox

The cost of writing software dropped, but the complexity ballooned, so now we're at the point of needing AI assistants to write it all for us.

5 hours agoreactordev

I'd like to believe that this is the case, and it may well be the case that we do end up there.

But currently reproducibility, reliability, correctness, and consistency are lacking.

There's also meaningful domain variance.

7 hours agoGlyptodon

The only cost that's dropped by 90% is writing unoriginal blog posts

4 hours agoculi

Let’s say you’re right. Do we still want to, though? I mean. At some point we will no longer have the skill to babysit the AI agent.

5 hours agokalx

When the LLM code bases are too complex for the humans on deck to understand and debug… that sounds like the turning point when companies go back to real developers, IMO. Any serious mission-critical code needs knowledgeable humans on deck who can leap into action when s** hits the fan, put out fires, and patch critical bugs.

5 hours agoturtletontine

The cost of building the first version of FB has dropped 90%. The cost of building the next FB stays the same.

More sophisticated tools mean more refined products.

If an easier and cheaper method for working carbon fiber becomes broadly available, it won't mean you get less money; it means you'll now be cramming carbon fiber in the silverware, in the shoes, in baby strollers, EVERYWHERE. The cost of a carbon fiber bike will drop 90%, but teams will be doing a LOT more.

You could say the cost per line of code has dropped 90%, but the number of lines of code written will 100x.

6 hours agolrsaturnino

Building yes. Maintaining no.

Cars won't be cheap because bumper prices fell 90%.

But....

Obviously AI is coming for the whole car, so I will operate on the assumption that 90% is coming. We will eventually be agentic orchestra conductors.

7 hours agonrhrjrjrjtntbt

I don’t know if it’s 90%, but I’m shipping in 2 days things that took 2-4 weeks before.

Opus 4.5 in particular has been a profound shift. I’m not sure how software dev as a career survives this. I have nearly 0 reason to hire a developer for my company because I just write a spec and Claude does it in one shot.

It’s honestly scary, and I hope my company doesn’t fail because as a developer I’m fucked. But… statistically my business will fail.

I think in a few years there will only be a handful of software companies—the ones who already have control of distribution. Products can be cloned in a few weeks now; not long until it’s a few minutes. I used to see a new competitor once every six months. Now I see a new competitor every few hours.

11 hours agoqwertyastronaut

Just out of curiosity, what software product were you making in two weeks before using AI? Or maybe I’m misunderstanding your use of shipping.

10 hours agothrowaway31131

Shipping features, not entire products.

5 hours agoqwertyastronaut
[deleted]
6 hours ago

I feel like I have heard this exact statement about model FooBar X.Y about a half dozen times over the last couple of years.

10 hours agoLPisGood

Agreed. Opus 4.5 does feel like a real shift and I have had exactly your experience. I've shipped stuff in days that would have taken me weeks. And really to a much higher quality standard - test suites would have been far smaller if I'd built manually. And probably everything would have been in MVP Bootstrap CSS.

11 hours agomartinald

Yeah, I really think software engineering is over. Not right now, but Opus 4.5 is incredible, and it won't be long before 5 and 5.5 are released.

They won't automate everything, but the bar for being able to produce working software will plummet.

10 hours agollmslave

I have no idea how you could debug something in two days that is sufficient to ship. I certainly think that an LLM could write a few thousand lines, but who could read them?

Are you shipping things you haven't reviewed at all, and pronouncing them high quality?

I find these threads baffling. I haven't seen a glut of new software anywhere. I certainly haven't seen a bunch of companies fixing the same bugs that have been sitting in their trackers for years. People keep telling me there's this deluge of LLM code happening, but it (the actual code) is all a secret and behind closed doors. Why in the world would you keep it a secret? Why would any multibillion dollar company that ships AI features have any known bugs left in their flagship products?

I haven't seen a difference anywhere when looking outwards. I personally find it useful, but I have to constantly force refactors and rearchitecting to make the code intelligible. When I add features, bugs get reinserted, refactors get reverted, and instrumentation silently disappears. If I don't do the constant refactors, I wouldn't even notice this was happening half the time.

7 hours agopessimizer

I'd love to hear what sort of software you are building.

9 hours agoIceDane

This is my anon account so I prefer not to leave any identifying details. Niche solo-dev SaaS that makes $50k a month.

4 hours agoqwertyastronaut

this is roughly the same for me

10 hours agobdangubic

I totally agree with you. I am working on a new platform right now for a niche industry. Maybe there's $10m ARR to make total in the industry. Last year, it wouldn't have been worth the effort to raise, hire a PM, a few devs, QA, etc. But for a solo dev like myself with AI, it definitely is worth it now.

10 hours agoe10jc

this probably levels the playing field for a while, and then dramatically raises the bar over a longer period of time

as better engineers and better designers get more leverage with lower nuisance in the form of meetings and other people, they will be able to build better software with a level of taste and sophistication that wouldn't make sense if you had to hand type everything

6 hours agoarach

No, it did not. Thanks for asking.

5 hours agohexo

Writing a giant unit test suite being the primary example that stuck out to me from that article really doesn't lend a lot of credence to the headline question, does it?

And yet, the conclusion seems to be that the answer is yes?

Until AI can work organizationally as opposed to individually, it'll necessarily be restricted in its ability to produce gains beyond relatively marginal improvements (saved 20 hours of developer time on unit tests) for a project that took X weeks/months/years to work its way through Y number of people.

So sure, simple projects, simple asks, unit tests, projects handled by small teams of close knit coworkers who know the system in and out and already have the experience to differentiate between good code and bad? I could see that being reduced by 90%.

But, it doesn't seem to have done much for organizational efficiency here at BigCo and unit tests are pretty much the very tip of a project's iceberg here. I know a lot of people are using the AI agents, and I know a lot of people who aren't, and I worry for the younger engineers who I'm not sure have the chops to distinguish between good, bad, and irrelevant and thus leave in clearly extraneous code, and paragraphs in their documents. And as for the senior engineers with the chops, they seem to do okay with it although I can certainly tell you they're not doing ten times more than they were four years ago.

I kinda rambled at the end there, all that to say... organizational efficiency is the bug to solve.

(It's very difficult, I believe the 2D interfaces we've had for the last 40 years or whatever are not truly meeting the needs of the vast cathedrals of code we're working in, same thing for our organizations, our code reviews, everything man)

8 hours agocodyb

The cost of building "something" has dropped 90%. The cost of building "something good" has maybe dropped 30%.

The gap between a demo and a product is still enormous.

8 hours agoalex-ross

If software actually is 90% cheaper to build in 2026 there will be 10x the simple apps and abandonware to follow. Throwaway software like throwaway phones. It’ll be weird.

8 hours agomlhpdx

The cost of building dropped by 90%, but the cost of maintaining increased by 9000%.

5 hours agoest

Next quarter's problem

5 hours agoHavoc

The cost of writing software had dropped by 90% since outsourcing was invented and all the software jobs had moved to India, or so I was told 15 years ago.

8 hours agosamlinnfer

The cost of training agents is in the billions, so I'm not sure the cost dropped; it just shifted, and now the cost distribution is different.

8 hours agohoppp

Feels almost too on-the-nose to write "Betteridge's Law of Headlines" but the answer is obviously no. Look no further than the farce of their made-up "graph" of cost over time with no units or evidence.

11 hours agomwkaufma

Can we also take into account the mental cost associated with building software? Because the way I see it, managing output from agents is way more exhausting than doing it ourselves.

And obviously the cost of not upskilling in intricate technical details as much as before (aka staying at the high level perspective) will have to be paid at some point

8 hours agormnclmnt

It is pretty hard work, huh! I was surprised. In my case, I was doing a personal project, but in the end I felt a little crispy, although the result was successful.

6 hours agodjmips

For me, the cost of motivating myself dropped significantly. I now feel like working on little things that have been pending tasks for ages. A DB sync script here, an unearthed project from 12 years ago there, migrating project package versions, finding and fixing incomplete/missing data, refactoring legacy code to be suitable for unit testing, installing a bunch of cron jobs: all in a day's work.

an hour agonurettin

The cost of generating lines of code which look good has dropped 90%.... Take that as you will.

8 hours agoshusson

> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.

I should have stopped reading here. People who think that the time it takes to write some code is the only metric that matters are only marginally better than people who rank employees by lines of code.

3 hours agophendrenad2

I love how LLMs have made everyone forget how hard it is to verify software correctness and how hard it is to maintain existing software. There is endless gushing about how quickly LLMs can write code. Whenever I point out that LLMs make a lot of mistakes, people just wave their hands and say software is easy to validate. The huge QA departments at all software shops would beg to differ, along with the CVE database, the zero-day brokers, etc. But you know, whatever, they're just boomers, right?

8 hours agokrupan

In context of B2B SaaS products that require a high degree of customization per client, I think there could be an argument for this figure.

The biggest bottleneck I have seen is converting the requirements into code fast enough to prove to the customer that they didn't give us the right/sufficient requirements. Up until recently, you had to avoid spending time on code if you thought the requirements were bad. Throwing away 2+ weeks of work on ambiguity is a terrible time.

Today, you could hypothetically get lucky on a single prompt and be ~99% of the way there in one shot. Even if that other 1% sucks to clean up, imagine if it was enough to get the final polished requirements out of the customer. You could crap out an 80% prototype in the time it takes you to complete one daily standup call. Is the fact that it's only 80% there bad? I don't think so in this context. Handing a customer something that almost works is much more productive than fucking around with design documents and ensuring requirements are perfectly polished to developer preferences. A slightly wrong thing gets you the exact answer a lot faster than nothing at all.

8 hours agobob1029

Yes.

2 hours agogijoeyguerra

the cost of creating a great product and the amount of time it takes to get it in front of customers still haven't been reduced

8 hours agohjaveed

> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests)

I'd love to see someone do this, or a similar task, live on stream. I always feel like an idiot when I read things like this because despite using Claude Code a lot I've never been able to get anything of that magnitude out of it that wasn't slop/completely unusable, to the point where I started to question if I hadn't been faster writing everything by hand.

Claiming that software is now 90% cheaper feels absurd to me and I'd love to understand better where this completely different worldview comes from. Am I using the tools incorrectly? Different domains/languages/ecosystems?

9 hours agocloogshicer

100% agreed. I use Claude Code to write 90% of my code at this point, but have found that it is genuinely worse than a junior at writing meaningful test cases. Most of the time it will make up interfaces or mock things incorrectly, etc., to the point where I just give up and write them myself. The bulk of the "tests" it writes check things which are meaningless (does the interface exist, etc.). This is with TypeScript + vitest with Opus 4.5.
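
To give a flavour of what I mean, here's a made-up sketch of the pattern (written in Python/pytest rather than vitest since the pattern is language-agnostic; the class is a hypothetical stand-in):

  # The kind of "test" I mean: it asserts the interface exists but exercises
  # no behaviour, so it passes no matter what the implementation does.
  class UserService:                           # hypothetical stand-in for the real class under test
      def create_user(self, name: str) -> dict:
          return {"name": name}

  def test_user_service_has_create_user():
      service = UserService()
      assert hasattr(service, "create_user")   # "coverage" without meaning
      assert callable(service.create_user)     # still nothing about actual behaviour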

5 hours agofraud

> Jevons Paradox says that when something becomes cheaper to produce, we don't just do the same amount for less money. Take electric lighting for example; while sales of candles and gas lamps fell, overall far more artificial light was generated.

Can’t wait to debug all that stuff.

3 hours agod--b

Alternate take: The cost of building software will remain the same but software will need to be 10x as feature-rich to remain competitive.

If you can build it in a weekend so can I. So you're going to have to figure out bigger things to build.

3 hours agochasing

We might actually get all the software we actually need. We won’t have to listen to antiquated DMV/IRS/health systems not being updated because the projects designed to replace them failed.

3 hours agoseanmcdirmid

This is the kind of piece that becomes popular around the top of the hype cycle, when people are trying to keep it going but can sense that perhaps Wile E. Coyote has run off the cliff and is suspended in the air. Obviously, by any possible objective indicator, the cost of software development has barely budged and "AGI" is nowhere in the offing, while luminary scientists appear to be drawn to whatever the next big thing is, having seen the limits of their (admittedly impressive) creation.

I'm sure that AI tools will be here to stay and will become more integrated and better. I wonder what the final result will be, -20% productivity as in the METR study? +20%? Anything like 90% is the kind of sensationalism reserved for r/WallStreetBets

4 hours agotehjoker

By making up numbers and not supplying any evidence, you can come to any conclusion you like! Then you get to draw a graph with no units on it. Finally, you can say things that are objectively false like "These assertions are rapidly becoming completely false".

8 hours agohenning

No. Not unless your business wasn't competitive to begin with

8 hours agoMangoToupe

Maybe I'm holding it wrong, but I don't actually see the huge productivity gains from LLM-assisted software development. Work is leaning on us to use AI—not requiring it yet, but we're at DEFCON 3, borderline 2 (DEFCON 1 being a Shopify situation). My team's experience is that it needs LOTS of handholding and manual fixing to produce even something basic that's remotely fit for production use.

I closed a comment from ~2.5y ago (https://news.ycombinator.com/item?id=36594800) with this sentence: "I'm not sure that incorporating LLMs into programming is (yet) not just an infinite generator of messes for humans to clean up." My experience with it is convincing me that that's just what it is. When the bills come due, the VC money dries up, and the AI providers start jacking up their prices... there's probably going to be a boom market for humans to clean up AI messes.

8 hours agobitwize

I don't really build software any more and have moved into other parts of the business. But I'm still a huge user of software and I'd just echo all the other comments asking if it's so easy to get all these great tools built and shipped, where are they? I can see that YouTube is flooded with auto-generated content. I can see that blogspam has skyrocketed beyond belief. I can see that the number of phishing texts and voicemails I get every day has gone through the roof. I don't see any flood of new CNCF incubating projects. I don't see that holy grail entire OS comparable to Linux but written in Rust. I don't see the other holy grail new web browser that can compete with Firefox, Chrome, and Safari. It's possible people are shipping more of the stripped down Jira clones designed for a team of ten that gets 60 customers and stops receiving updates after 2 years but that's not the kind of software that would be visible to me.

If you're replacing spreadsheets with a single-purpose web UI with proper access control and concurrent editing that doesn't need Sharepoint or Google Workspaces, fine, but if you're telling me that's going to revolutionize the entire industry and economy and justify trillions of dollars in new data centers, I don't think so. I think you need to actually compete with Sharepoint and Google Workspaces. Supposedly, Google and Microsoft claim to be using LLMs internally more than ever, but they're publicly traded companies. If it's having some huge impact, surely we'll see their margins skyrocket when they have no more labor costs, right?

8 hours agononameiguess

It will be interesting how this goes moving forward. Agents learn from massive scraping. With the newest tools and frameworks there is nothing but documentation and initial examples to scrape. And now that agent output is flooding everything it can be expected there will be a lot of feedback with automated learning early in development cycles.

Lots of applications have a simple structure of collecting and operating data with fairly well documented business logic tying everything together. Coding outside of that is going to be more tricky.

And if agentic coding is so great, then why are there still so many awful spreadsheets that can't compete with Excel? Something isn't adding up quite as well as some seem to expect.

5 hours agom0llusk

I think the author underestimates the forces that introduce coordination overhead.

"Good AI developers" are a mysterious being (not really, but to corporate they are). Right now, companies are trying to measure them to understand what makes them tick.

Once that is measured, I can assure you that the next step is trying to control their output, which will inevitably kill their productivity.

> This then allows developers who really master this technology to be hugely effective at solving business problems.

See what I mean?

"If only we could make them work to solve our problems..."

You can! But that implies additional coordination overhead, which means they'll not be as productive as they were.

> Your job is going to change

My job changes all the time. Developers are ready for this. They were born of change, molded by it. You know what hasn't caught up with the changes?

5 hours agogaigalas

AI saves me like an hour per month, tops. I still don't understand the hype. It's a solution in search of a problem. It can't solve the hard coding problems. And it doesn't say when it can't solve the easy ones either. It's less valuable than ReSharper. So the business value is maybe $10 a month. That can't finance this industry.

11 hours agoon_the_train

I read these sort of comments every so often and I do not understand them. You are in a sea of people telling you that they are developing software much quicker which ticks the required boxes. I understand that for some reason this isn't the case for your work flow, but obviously it has a lot more value for others.

If you are a chairmaker and everyone gains access to a machine that can spit out all the chair components but sometimes only spits out 3 legs or makes a mistake on the backs, you might find it pointless. Maybe it can't do all the nice artisan styles you can do. But you can be confident others will take advantage of this chair machine, work around the issues and drive the price down from $20 per chair to $2 per chair. In 24 months, you won't be able to sell enough of your chairs any more.

10 hours agoaverageRoyalty

Maybe, or maybe the size of the chair market grows because with $2 chairs more buyers enter. The high end is roughly unaffected because they were never going to buy a low end chair.

10 hours agothrowaway31131

> You are in a sea of people telling you that they are developing software much quicker which ticks the required boxes

But that's exactly not the case. Everyone is wondering what tf this is supposed to be for. People are vehemently against this tech, and yet it gets shoved down our throats although it's prohibitively expensive.

Coding should be among the easiest problems to tackle, yet none of the big models can write basic "real" code. They break when things get more complex than pong. And they can't even write a single proper function with modern c++ templating stuff for example.

10 hours agoon_the_train

They can actually - I thought they couldn't, but the latest ones can, much to my surprise.

I changed my mind after playing with Cursor 2 (Cursor 1 had lasted all of 10 mins), which actually wrote a full-blown app with documentation, tests, coverage, CI/CD, etc. I was able to have it find a bug I encountered when using the app - it literally ran the code, inserted extra logs, grepped the logs, found the bug and fixed it.

8 hours agoAgingcoder

> And they can't even write a single proper function with modern c++ templating stuff for example.

That's just not true. ChatGPT 4 could explain template concepts lucidly but would always bungle the implementation. Recent models are generally very strong at generating templated code, even if it's fairly complex.

If you really get out into the weeds with things like ADL edge cases or static initialization issues they'll still go off the rails and start suggesting nonsense though.

8 hours agopton_xd

> Coding should be among the easiest problems to tackle, yet none of the big models can write basic "real" code. They break when things get more complex than pong. And they can't even write a single proper function with modern c++ templating stuff for example.

This is simply false and ignorant

8 hours agomargorczynski

If you're quicker, then competition heats up, management wants more done, efficiencies are soon forgotten, and new expectations and baselines are set.

11 hours agoandrewstuart

Sure but that's the good of it. Lower labor cost = more productivity. The customer wins in the end because the equivalent product is cheaper or a better product costs the same. Businesses and employees still have to compete against each other so things won't get easier for them in the long term.

10 hours agofoxglacier

The customer only wins if the customer is the one using the tools directly; otherwise it leaves all the power in the hands of businesses whose only real goal is maximum profits. And without already possessing the domain knowledge to guide, judge, and correct AI along the way, its existence will be of limited use to consumers, and business will not feel much pressure to make anything cheaper; it just leaves more margin to funnel to the top.

8 hours agoAngryData

Except this is capitalism, so any improvements will go disproportionately to the owners. This narrative of improvements for customers has been repeated for decades and it keeps being wrong.

More stock buybacks and subscriptions!

8 hours agoDraiken

Ask someone who builds software for a fee. Are you able to do 90% more? Fire 9/10 engineers? Produce 90% faster?

No, no, and no.

11 hours agomore_corn

It's even worse. If the cost drops by 90%, the corresponding productivity increase should be 900%, not 90%.
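
To spell out the arithmetic with made-up numbers (holding budget and quality constant):

  # A 90% drop in cost per unit of work means 10x the output for the same
  # spend, i.e. a 900% productivity increase, not 90%.
  budget = 1000                # arbitrary units
  old_cost_per_feature = 100
  new_cost_per_feature = 10    # 90% cheaper

  print(budget // old_cost_per_feature)  # 10 features before
  print(budget // new_cost_per_feature)  # 100 features after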

11 hours agorecursive

90% more - nope. 35-55% more, just about on average. I am 30 years in though, not sure what these numbers are for junior devs.

10 hours agobdangubic

https://arstechnica.com/ai/2025/07/study-finds-ai-tools-made...

They thought they were faster too.

Yes, I know, AGI is just around the corner, we just need to wait more because "agents" are improving every day.

In the meantime, LLMs are kinda useful as a replacement for web searches, mostly because search is both full of spam and the search providers are toxic.

10 hours agonottorp

additional reading list

  - IntuitionLabs, “AI’s Impact on Graduate Jobs: A 2025 Data Analysis” (2025) - https://intuitionlabs.ai/pdfs/ai-s-impact-on-graduate-jobs-a...

  - Demirci, O., Hannane, J., & Zhu, X., “Who Is AI Replacing? The Impact of Generative AI on Online Freelancing Platforms,” Management Science (2024) - https://pubsonline.informs.org/doi/10.1287/mnsc.2024.05420

  - Indeed Hiring Lab, “AI at Work Report 2025: How GenAI is Rewiring the DNA of Jobs” (September 2025) - https://www.hiringlab.org/wp-content/uploads/2025/09/Indeed-Hiring-Lab-AI-at-Work-Report-2025.pdf
4 hours agobdangubic

I am just talking from a personal point of view; I wasn't interviewed by Ars Technica or others that live off clickbait.

9 hours agobdangubic

[dead]

10 hours agojackie293746

My tell-tale sign that AI is moving the needle would be the disappearance of the concept of leetcode. If you've done an interview lately, you'd know AI hasn't moved any needles yet.

7 hours agols-a

Perhaps the cost will drop over time, but it will be because writing code is becoming more accessible. It's not just because of AI, but the natural progress of education and literacy on the topic that would have happened anyway.

What I see are salaries stagnating and opportunity for new niche roles or roles being redefined to have more technical responsibility. Is this not the future we all expected before AI hype anyway? People need to relax and refocus on what matters.

9 hours agosublinear

I'll believe that when I see it.

3 hours agolaughing_man

This article was more of an advertisement for...something than any meaningful commentary.

How good are tests written by AI, really? The junk "coverage" unit tests, sure, but well-thought-out integration tests? No way. Testing code is difficult, and some AI slop isn't going to make that easier, because someone has to know the code and the infrastructure it is going into and reason about all of it.

8 hours agolisbbb
[deleted]
9 hours ago
[deleted]
10 hours ago

[dead]

10 hours agojackie293746

[dead]

8 hours agogtsop

*90% so far..

I've only been working with AI for a couple of months, but IMHO it's over. The Internet Age which ran 30 years from roughly 1995-2025 has ended and we've entered the AI Age (maybe the last age).

I know people with little programming experience who have already passed me in productivity, and I've been doing this since the 80s. And that trend is only going to accelerate and intensify.

The main point that people are having a hard time seeing, probably due to denial, is that once problem solving is solved at any level with AI, then it's solved at all levels. We're lost in the details of LLMs, NNs, etc, but not seeing the big picture. That if AI can work through a todo list, then it can write a todo list. It can check if a todo list is done. It can work recursively at any level of the problem solving hierarchy and in parallel. It can come up with new ideas creatively with stable diffusion. It can learn and it can teach. And most importantly, it can evolve.

Based on the context I have before me, I predict that at the end of 2026 (coinciding with the election) America and probably the world will enter a massive recession, likely bigger than the Housing Bubble popping. Definitely bigger than the Dot Bomb. Where too many bad decisions compounded for too many decades converge to throw away most of the quality of life gains that humanity has made since WWII, forcing us to start over. I'll just call it the Great Dumbpression.

If something like UBI is the eventual goal for humankind, or soft versions of that such as democratic socialism, it's on the other side of a bottleneck. One where 1000 billionaires and a few trillionaires effectively own the world, while everyone else scratches out a subsistence income under neofeudalism. One where as much food gets thrown away as what the world consumes, and a billion people go hungry. One where some people have more than they could use in countless lifetimes, including the option to cheat death, while everyone else faces their own mortality.

"AI was the answer to Earth's problems" could be the opening line of a novel. But I've heard this story too many times. In those stories, the next 10 years don't go as planned. Once we enter the Singularity and the rate of technological progress goes exponential, it becomes impossible to predict the future. Meaning that a lot of fringe and unthinkable timelines become highly likely. It's basically the Great Filter in the Drake equation and Fermi paradox.

This is a little hard for me to come to terms with after a lifetime of little or no progress in the areas of tech that I care about. I remember in the late 90s when people were talking about AI and couldn't find a use for it, so it had no funding. The best they could come up with was predicting the stock market, auditing, genetics, stuff like that. Who knew that AI would take off because of self-help, adult material and parody? But I guess we should have known. Every other form of information technology followed those trends.

Because of that lack of real tech as labor-saving devices to help us get real work done, there's been an explosion of phantom tech that increases our burden through distraction and makes our work/life balance even less healthy as underemployment. This is why AI will inevitably be recruited to demand an increase in productivity from us for the same income, not decrease our share of the workload.

What keeps me going is that I've always been wrong about the future. Maybe one of those timelines sees a great democratization of tech, where even the poorest people have access to free problem solving tech that allows them to build assistants that increase their leverage enough to escape poverty without money. In effect making (late-stage) capitalism irrelevent.

If the rate of increasing equity is faster than the rate of increasing excess, then we have a small window of time to catch up before we enter a Long Now of suffering, where wealth inequality approaches an asymptote making life performative, pageantry for the masses who must please an emperor with no clothes.

In a recent interview with Mel Robbins in episode 715 of Real Time, Bill Maher said "my book would be called: It's Not Gonna Be That" about the future not being what we think it is. I can't find a video, but he describes it starting around the 19:00 mark:

https://podcasts.musixmatch.com/podcast/real-time-with-bill-...

Our best hope for the future is that we're wrong about it.

9 hours agozackmorris

It’s over. I’ve been writing FUD manually since the 60’s. But people fresh out of FB troll boot camp are already rolling it out 99% faster than me.