Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena; besides which, more compute and bigger context windows aren't the right kind of progress anyway. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:
1. Even if LLMs made everyone 10x as productive, most companies would still have more work to do than resources to assign to it. The only reason to reduce headcount is to remove people who already weren't providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
Funny that the original post doesn't mention AI replacing the coding part of the author's job.
There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
I want to be optimistic. But it's hard to ignore what I'm doing and seeing. As far as I can tell, we haven't hit serious unemployment yet because of momentum and slow adoption.
I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.
This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.
Overseeing robots is a time-limited activity. Even building robots has a finite horizon.
Current tech can't yet replace everything, but many jobs can already see the horizon, or are at sunset.
The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.
This time around, some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale. But others aren't, since the tech is run by small teams at large scale and has virtually no related industries it depends on, the way cars did. It's just energy and GPUs.
Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale business. Maybe a few tens of millions can be employed there?
Meanwhile, I just don't see the designer + AI job role materializing. I see corporations using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.
I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.
Humans still play chess and horses are still around as a species.
(Disclaimer: this is me trying to be optimistic in a very grim and depressing situation)
Perhaps you have missed the essential point. Who drives the cars? It's not the horses, is it? And a chess computer is just as unlikely to start a game of chess on its own as a horse is to put on its harness and pull a plow across a field. I'm not entirely sure what impact all this will have on the job market, but your comparisons are flawed.
[deleted]
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
There were many secretaries up until the late 20th century that took dictation, either writing notes of what they were told or from a recording, then they typed it out and distributed memos. At first, there were many people typing, then later mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keep messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While these people may not have been held in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of the old Hemingway line: "How did you go bankrupt?" "Gradually, then suddenly."
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean progress will stop even when/if those barriers are broadly hit.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
For what it's worth, the decline in use of horses was much slower than you might expect. The Ford Model T reached peak production in 1925 [0], and for an inexact comparison (I couldn't find numbers for the US) the horse population of France started to decline in 1935, but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].
The French eat horse meat. Cattle are still present in the US...
If there’s more work than resources, then is that low value work or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive but not sure it will be societally good.
Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.
Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.
Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) than to reduce work hours.
At least since the Industrial Revolution, and probably before, the only advances that have led to shorter work weeks are unions and worker protections. Not technology.
Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.
Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down.
> Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) than to reduce work hours
I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service to that goal.
A failure of politics and the media, then. A majority of voters have been fooled into voting against their economic interests.
In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.
"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."
A study [1] I was looking at recently was extremely informative. It's a poll from UCLA given to incoming classes that they've been carrying out since the 60s. In 1967, 86% of students felt it was "essential" or "very important" to "[develop] a meaningful philosophy of life", while only 42% felt the same of "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important.
It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, and back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation with some degree of disdain yet also seek to do the exact same themselves.
In any case, in a society where wealth is seen as literally the most important aspect in life, it's not difficult to predict what follows.
Well, keep in mind students at UCLA in 1967 were probably among the most wealthy in the country. A lot more average people attend UCLA nowadays. Of course being financially well off wouldn't be the most important thing if you were already financially well off.
> We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years
Still haven't gotten rid of work for work's sake being a virtue, which explains everything else. Welfare? You don't "deserve" it. Until we solve this problem, we're more or less heading straight for feudalism.
> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.
People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.
The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
I'm optimistic.
Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".
You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.
An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and have no liability for it. AI has no skin and depending on application, much higher upper bound for damage. A digit read wrong in a medical transcript, patient dies.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
Managing risks, can't automate it. Every project and task needs a responsibility sink.
You can bound risk on AI agents just like an ATM. You just can't rely upon the AI itself to enforce those limits, of course. You need to place limits outside the AI's reach. But this is already documented best practice (a sketch below).
The point about AI not having "skin" (I assume "skin in the game") is well taken. I say often that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". Very important lesson that some company will learn publicly soon enough.
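To make that concrete, here is a minimal sketch of what "limits outside the AI's reach" could look like; everything here (ToolCall, execute, the tool table) is hypothetical, not any real agent SDK:

```python
from dataclasses import dataclass

# Hard limits live in plain code the model can neither edit nor negotiate with.
MAX_SPEND_USD = 100.00                      # cap set by a human, not the agent
ALLOWED_TOOLS = {"search", "draft_email"}   # deliberately no "wire_transfer"

@dataclass
class ToolCall:
    name: str
    args: dict
    estimated_cost_usd: float

def execute(call: ToolCall, tools: dict):
    """The only path from the agent's output to the outside world."""
    if call.name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {call.name!r} is not allowed")
    if call.estimated_cost_usd > MAX_SPEND_USD:
        raise PermissionError("per-call spend limit exceeded")
    return tools[call.name](**call.args)

# e.g. execute(ToolCall("search", {"q": "horses"}, 0.01), {"search": lambda q: q})
```

The agent can propose whatever it likes; the checks run in ordinary code it cannot modify, which is the whole point.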
> Every project and task needs a responsibility sink.
I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions".
But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.
Workshopping this tortured metaphor:
AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.
The owners of the tech need to reinvest in the hosts.
Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.
Yeah, I feel like the real aha moment is still coming, once there is a GPT-like thing that has been trained on reality, not its shadow.
Reality cannot be perceived. A crisp shadow is all you can hope for.
The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around we might end up worse than horses.
Yes and reality is the hard part. Moravec’s Paradox [1] continues to ring true. A billion years of evolution went into our training to be able to cope with the complexity of reality. Our language is a blink of an eye compared to that.
Look I think that is the whole difficulty. In reality, doing the wrong thing results in pain, and the right thing in relief/pleasure. A living thing will learn from that.
But machines can experience neither pain nor pleasure.
> What happens when there are no more hosts to donate more training-blood?
LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks, provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.
There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good new human Go players. It just played against itself, even discarding human source knowledge, and achieved those levels.
People are animals.
[deleted]
When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor.
I'd be more worried about the implicit power imbalance. It's not what humans can provide for each other, it's what humans can provide for a handful of ultra-wealthy oligarchs.
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
Charlie Chaplin's speech is more relevant now than ever before:
The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of that in those bunkers.
They sure as shit won't be content to leave the rest of us alone.
Yeah I know it's an unrealistic ideal but it's fun to think about.
That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is because there's no amount of money/power/status that can satiate that fear, but somehow, naively, there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.
[deleted]
> It's not what humans can provide for each other, it's what humans can provide for a handful of ultra-wealthy oligarchs.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you be independent.
I was trying to phrase something like this, but you said it a lot better than I ever could.
I can’t help but smile at the possibility that you could be a bot.
An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
Remember, these companies (the author's included) have an incentive to keep selling fear of job displacement, not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to "survive".
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
[deleted]
Person whose job it is to sell AI selling AI is what I got from this post.
I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never-tired mentor available. It gives them more confidence in their first drafted code changes / design docs. But I don't think the horse analogy works.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks full of junior folks' questions. Those are now your Gemini / ChatGPT sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
That sounds like a horrible onboarding experience. Human mentors provide a lot more than just answering questions, like providing context, camaraderie, social skills, or even coping mechanisms. Starting a new job can be terrifying for juniors, and if their only friend is a faceless chatbot...
You're right. We need to keep tabs on the culture for new hires for the reasons you mentioned. LLMs are really good at many onboarding tasks, just not the social ones.
I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering for some of them. The lack of building human context / connections etc. is real, and I don't think LLMs can meaningfully help there. Hence my skepticism about the horse analogy.
You still lose a bit from not having those juniors' questions around - where does your documentation suck, or where is your code confusing?
Both questions are getting scary good answers from the latest models. Yes, I tried, on a large proprietary code base which shouldn’t be included in any training set.
Software engineers used to know that measuring lines of code written was a poor metric for productivity...
Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
Maybe it was edited. I count at least 6 instances of the word “code”
So, a free idea from me: train the next coding LLM to produce not regular text, but patches which shorten code while still keeping it working the same.
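As a toy illustration (entirely hypothetical, not a real training setup), the reward signal for such a model might look something like this:

```python
# Toy reward for "shorten the code but keep it working": preserved behavior
# is a hard gate, then reward scales with the fraction of code removed.
def patch_reward(old_code: str, new_code: str, tests_pass: bool) -> float:
    if not tests_pass:
        return -1.0  # breaking behavior is always penalized
    saved = len(old_code) - len(new_code)
    return max(saved, 0) / max(len(old_code), 1)
```

Of course, a raw length-based reward like this is trivially gameable (minify everything onto one line), so a real setup would need a smarter measure than character count.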
gonna tell claude to write all my code in one line
Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?
It also puts a thumb on the scale for AI, which tends to emit pages of text to answer simple questions.
Sounds like any post-secondary student, graduate student, or management consultant out there, given there are very often page/word-count or hours requirements. Considering the model corpora, wordiness wins out.
The chart is actually words "thought or written", so I guess they are running up the numbers even more by counting Claude's entire inner monologue on top of what it ultimately outputs.
These are not just "words" but answers to questions that people who got a job at Anthropic had…
How about we stop trying analogies on for size and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.
Too much is on the line here regardless of what ultimately ends up being true or just hype.
It’s hard to filter the hot air from the realistic predictions. I’ve been hearing for over 10 years now that truck drivers are obsolete and that trucks will drive themselves. Yet today truck drivers are still very much in demand.
While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.
This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.
Engine efficiency, chess rating, AI cap ex. One example is not like the other. Is there steady progress in AI? To me it feels like it’s little progress followed by the occasional breakthrough but I might be totally off here.
The only "line go up" graph they have left is money invested. I'm even dubious of the questions-answered graph. It looks more like a feature added to an internal wiki that went up in usage. Instead it's portrayed as a measure of quality or usefulness.
I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.
What example do you need? In every single benchmark AI is getting better and better.
Before someone says "but benchmark doesn't reflect real world..." please name what metric you think is meaningful if not benchmark. Token consumption? OpenAI/Anthropic revenue?
Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.
This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.
Third party benchmarks like terminalbench exist.
W.r.t. code changes, especially small ones (say 50 lines spread across 5 files): if you can't get an agent to make nearly exactly the code changes you want, just faster than you, that's a you problem at this point. If it would maybe take you 15 minutes, grok-code-fast-1 can do it in 2.
Right. With careful use of AIs, I can use it to gather information to help me make better designs (like giving me summaries of the current best available frameworks or libraries to choose for a given project), but as far as just generating an architecture and then generating the code and devops and so on for that? It's just not there, unless you're creating an app that effectively already exists, like some basic CRUD app.
If you're creating basic CRUDs, what on earth are you doing? That kind of thing should have been automated a long time ago.
What do you mean when you say building crud apps should be automated?
CRUD apps are ridiculously simple and have been in existence my entire life. Yet it is surprisingly difficult to make a basic CRUD and host it somewhere. The bulk of useful but simple business apps are just a CRUD with a tiny bit of customisation and integration around them.
It is true that LLMs make it easier to build these kind of things without having to become a competent programmer first.
E.g. using Rails and generating scaffolding makes it real fast and easy to make a CRUD app.
conventionally, it should have been abstracted by a higher-level language.
> please name what metric you think is meaningful
Job satisfaction and human flourishing
By those metrics, AI is getting worse and worse
AI is very satisfied in doing the job, just ask it.
AI is able to speed up the progress, to give more resources, to give the most important thing people have - time. The fact that these incredible gifts are misused (or used inefficiently) is not the problem of AI. This would be like complaining that the objective positive of increased food production is actually a negative, because people are getting fatter.
OpenAI net profit.
The figures for cost are wildly off to start with.
ChatGPT was released 3 years ago and that was complete ass compared to what we have today.
I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative
This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?
> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.
I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".
I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from even over the longer term.
I just have no idea how rigorously the data was reviewed.
The 95% decline simply does not compute against 4,500,000 horses in 1959, and even an increase to 7,000,000 in 1968, largely due to growth in the recreational horse population.
So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.
Turns out the chart is about farm horses only, as counted by the USDA, not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.
---
City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.
City horses were formerly almost exclusively bred on farms, but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.
My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.
To be clear though, the horses didn't adapt. Their population was reduced by orders of magnitude.
True, but the horses' population started (slightly) rising again when they went from economic tools to recreational tools for humans. What will happen to humans?
The horse population was being boosted beyond normal numbers by human intervention. When humans stopped breeding them the numbers dropped.
At least currently humans do not need AI to reproduce.
There were approximately zero horses in the wild, so it was all about what humans found useful.
Pray it’s still humans who ask these kinds of questions about AI, not the other way around.
Did the population of work/service dogs decline? Horses were already a form of automation over human labor.
Bullocks.
That's what Sandy over the road (born 1932, died last year), used to hitch up every morning at 4am, when he was ten, to sled a tank of water back to the farm from the local spring.
"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.
No government's stability faced risk over a 20% increase in horse unemployment.
Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.
If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical as it was in the case with nuclear weapons. Otherwise, what does it really mean for AI to "replace people" outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
I would add Gemini Nano Banana Pro to that list - its ability to render words in images is amazing.
Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
Funny how we have all of this progress, yet the things that actually matter (sorry, chess fans) in the real world are more expensive: health care, housing, cars. And what meager gains there are seem to be more and more concentrated in a smaller group of people.
Plenty of charts you can look at: net productivity by virtually any metric vs. real adjusted income. The example I like is kiosks and self-checkout. Who has encountered one at a place where it is cheaper than its main rival, with the savings directly attributable (by the company or otherwise) to lower prices? In my view all it did was remove some jobs. That's the preview. That's it. You will lose jobs and you will pay more. Congrats.
Even with year-2020 tech you could automate most work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.
So once AI destroys desk jobs and the creative jobs, then what? Chill out? Too bad anyone who has a house won't let more be built.
To give backing: I'm from Australia, which has ~2.5x the median wealth per capita of the US but lower average wealth. This shows through in the wealth of a typical citizen: less homelessness, better living standards (HDI in Australia is higher), etc.
This is a recent development, where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of US citizens.
All it takes is taxing the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.
> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?
You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the covid 19 ACA subsidies permanent and a few other pet projects.
Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.
It should be pointed out that Australia has higher taxes on their middle class than the US does. It tops out at 45% (plus 2% for medicare) for anyone at $190k or above.
If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.
Not quite so obvious when you look closer at it.
The point isn't to just cover the tax bill, it's that by shifting the burden up the class ladder, there is more capital available to the classes that spend and circulate their money in the economy rather than merely accumulate it
[dead]
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
I suspect the wealthy think they can shield themselves by exerting control over mass media, news outlets, the press, and domestic surveillance, all amplified by AI.
If all that fails, they have their underground bunkers on faraway islands and/or backup citizenships.
> I suspect the wealthy think they can shield themselves by exerting control over
Agreed and I think this is a result of a naive belief that we humans tend to have that controlling thoughts can control reality. Politicians still live by this belief but eventually reality does catch up. By that time all trust is long gone.
They already know, and do not care. Their plan is quite literally to retreat into bunkers with shock collars enforcing the loyalty of their guards.
The richest of the rich have purchased islands where they can hole up.
Stripped of their infinite freedom out here to hide in a bunker? No chance
The bunkers are in case of nuclear war or serious pandemics. Absolutely worst case last resort scenario, not just "oh I don't care if I end up there"
Moreover when you act absolutely relentlessly, like a certain car maker.
People usually change their behavior after some pretty horrific events, so I would predict something like that in the future. For both Europe and the US.
Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.
If we had less regulation of insurance companies, do you think they’d be cheaper?
(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)
Well, they'd be more functional as insurance, at least! The way insurance is supposed to work is that your insurance premium is proportional to the risk. You can't go uninsured and then after discovering that your house is on fire and about to burn down, sign up for an insurance plan and expect it to be covered.
We've blundered into a system that has the worst parts of socialized health care and private health insurance without any of the benefits.
Health care is the more complicated one of the examples cited, but housing definitely is an 'own goal' in how we made it too difficult to build in too many places - especially "up and in" rather than outward expansion.
Health care is complicated, but I don't think it would be hard to understand how fewer regulations could lower prices. More insurers could enter markets, compete across state lines, and compliance costs could be lowered.
However regulation is helpful for those already sick or with pre-existing conditions. Developed countries with well-regulated systems also have better health outcomes than the US does.
[deleted]
> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.
You mean the same Asia that has the same problem? The USA enjoying arbitrage is not actually a solution, nor is it sustainable. Not to mention that if you control for certain things, like house size relative to inflation-adjusted income, it isn't actually much different, despite popular belief.
It would be kinda funny, if not so tragic, how economists will argue both "[productivity improvement] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs".
But is it really, though? Dollars aren't meant to be held.
I think the idea of dollars as purely a trading medium where absolute prices don't matter wouldn't be such an issue if wages weren't always the last thing to rise with inflation.
As it is now anyone with assets is only barely affected by inflation while those who earn a living from wages have their livelihood eroded over time covertly.
Exactly as the current owners… ahem, leaders of this country want it.
Barely affected? They benefit massively from it. That is why the rich get richer.
True, in terms of share of the pie for sure
Housing is a funny old one and speaks to it being a human problem. One thing a lot of people don't truly engage with in the housing issue is that it's massively an issue of distribution. Too many people want to live in too few places. Yes, central banks and interest rates (being too low, and now being relatively too high), NIMBYism, and rent-seeking play an important role too, but solving the "too many people live in too few places" issue actually fixes the problem (slowly, and possibly unpalatably slowly for some, but a fix nonetheless).
The key issue upstream is that too many good jobs are concentrated in too few places, which leads to consumerism stimulating those places and making them still more attractive. Technology, through Covid, actually gave governments a get-out-of-jail-free card by allowing remote work to become more mainstream, only for them to fail to grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.
Existing homeowners can still wrap themselves in the warm glow of their high house prices, which only lose "real" value through inflation, which people tend not to notice as much.
But we decided to try to go back to the status quo so oh well
Food and clothes are much cheaper. People used to have to walk or hitchhike a lot more. People died younger, or were trapped with abusive spouses and/or parents. Crime was high. There was little economic mobility. It really sucked if you weren’t a straight white man. Houses had one bathroom. Power went out regularly. Travel was rare and expensive; people rarely flew anywhere. There was limited entertainment or opportunities to learn about the world.
Yeah, that's my question to the author too - if AI is really to earn its keep, it should help get more physical products into people's hands and help produce more energy.
Physical products and energy are the two things that are relevant to people's wellbeing.
Right now AI is sucking up the energy and the RAM - so is it going to translate into a net positive?
That's the question though isn't it. If everyone got a subscription to claude-$Latest would they be able to pay their rent with it?
No, because they’d be waiting in the lengthy queues that would be necessary for anyone to use it. There are hard constraints to this tech that make what you’re talking about infeasible.
No because nurses, mechanics, and janitors are still needed.
It's interesting to see Cyberpunk 2077 become more and more relatable.
>in the real world are more expensive: health care, housing, cars.
Think of it another way. It's not that these things are more expensive; it's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really wages fell to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.
If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.
Yep. My grandma bought her house in ~1962 for $20k, working at a factory making $2/hr. Her mortgage was $100/month: about one week's worth of pay. $2/hr then is the equivalent of ~$21/hr today.
If you were to buy that same house today, your mortgage would be about $5,100/month - about six weeks of pay (rough arithmetic below).
And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.
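The rough arithmetic behind those numbers, assuming a 40-hour work week:

```python
# Rough check of the mortgage-to-pay comparison above (40-hour weeks assumed).
weekly_1962 = 2.00 * 40            # $80/week at the factory
print(100 / weekly_1962)           # 1.25 -> mortgage was ~1 week of pay

weekly_today = 21.00 * 40          # $840/week at the inflation-adjusted wage
print(5100 / weekly_today)         # ~6.07 -> mortgage is ~6 weeks of pay
```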
But the US is China's market, so the CCP goes along even though they are the producer, because a domestic consumer economy would mean sharing the profits of that manufacturing with the workers. But that would create a middle class not dependent on the party, leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two, and neither can afford to let go. And neither can keep dancing much longer. I think it will be very bad everywhere.
It's inflation, simple as that. The US left the gold standard at the exact same time that productivity diverged from wages. Coincidence? No.
Pretty much everything gets more expensive, with the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food, housing, etc., keeps getting more expensive. And the asset-owning class gets richer as a result.
Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...
That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.
This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people become much higher income: you can't hire many of them to do low-productivity jobs like bus a McD's table.
That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.
Thank you, I've replied too many times that if people want low priced housing, it's easily found in Texas. The replies are empty or stating that they don't want to live there because... it's Texas.
So the young want cheap affordable housing, right in the middle of Manhattan, never going to happen.
[dead]
4,000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki?
I am not an AI sceptic... I use it for coding. But this article is not compelling.
I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.
I think the turning point will be when AI assisted individuals or tiny companies are able to deliver comparable products/value as the goliaths.
That would be the ideal scenario; when you can build a small business more easily.
[deleted]
>>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
Glad I noticed that footnote.
Article reeks of false equivalences and incorrect transitive dependencies.
my favorite part was where the graphs are all unrelated to each other
Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
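A quick sketch of that shape: the logistic curve barely moves for a long stretch, then flips through its transition fast.

```python
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# Long, flat approach... then a rapid flip around the midpoint.
for x in range(-6, 7, 2):
    print(f"{x:+d}: {sigmoid(x):.3f}")
# -6: 0.002, -4: 0.018, -2: 0.119, +0: 0.500, +2: 0.881, +4: 0.982, +6: 0.998
```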
We still have chess grandmasters, if you have noticed...
Yes, and we'll continue to have human coding competitions for entertainment purpose. Good luck trying to live off the prize money though.
Hikaru makes good money streaming on Twitch tho
[deleted]
I mean, it's hard to argue that if we invented a human in a box (AGI), human work wouldn't become irrelevant. But I don't know how anyone can watch current AI and say we have that.
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being among the first generation of humans able to see that is a super lucky experience.
Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years, which really just means we don't know.
Why a human in a box and not an android? A lot of jobs will require advanced robotics to fully automate. And then there are jobs where customer preference is for human interaction or human entertainment. It's like how superior chess engines have not reduced the profession of chess grandmasters, because people remain more interested in human chess competition.
I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is complete.)
Terrible comparison.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.
> 90% of the horses in the US disappeared
Where did they go?
they grew old and died?
sometimes not nearly so pleasant for them.
Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?
Conclusion: Soylent..?
damn
Humans design the world to our benefit, horses do not.
Most humans don't. Only the wealthy and powerful are able to do this
And they often do it at the expense of the rest of us
Cool, now let's make a big list of technologies that didn't take off like they were expected to.
Wait till the robots arrive. That they will know how to do a vast range of human skills, some that people train their whole lives for, will surprise people the most. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially niche, difficult-to-research topics like the alternate applicable designs of deep learning models for a modeling task, is a thing of wonder. Imagine now that a master marble carver shows up at an exhibition and some sci-fi author just had robots make a perfect, beautiful equivalent of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed
This post is kind of sad. It feels like he's advocating for human depopulation since the trajectory aligns with horse populations declining by 93% also.
Indeed. I do wonder if the inventors of the "transformer architecture" knew all the potential Pandora's boxes they were opening when they invented it. Probably not.
No one wants to say the scary potential logical conclusion of replacing the last value humans have a competitive advantage in, that being intelligence and cognition. For example, there is one future scenario of humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. It's already happening slowly via inflation and higher asset prices, after all; it is a very real possibility. I don't think a revolution will be possible in this scenario; with AI and robotics the rich could effectively outnumber pretty much everyone.
Not advocating, just predicting. And not necessarily actual population, just population in paid employment.
yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.
Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases busywork workloads. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding AI-generated or we will lose sales. The AI tools for writing descriptions / generating listings are not very helpful either. (An inaccurate listing/description is a nightmare.)
I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.
Well, tbf the author was hired to answer newbie questions. Perhaps the position is that of an evangelist, not a scientist.
It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.
I don't think that's the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and are aware of their limitations.
I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.
For those of us who are busy trying to deliver what we've promised customers we can do, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're really positioned to be taking advantage of all these promises I keep hearing about AI, but in practical terms, it saves me, at an architect or staff level, maybe 10% of my time, and one of our seniors maybe 5%.
So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old of trying to get some vendor framework to do what I want so I can pull sensor data out of their equipment and deliver apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)
I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.
I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.
except, you know, you had a job, and coming out of college could get one… if you were graduating right now in compsci you'd find a wasteland with no end in sight…
> The hollowing out of Silicon Valley is imminent
I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.
That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.
But they do have their hands on your budget, and they are responsible for creating and filling positions.
>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.
This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion. Cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.
It's not subtle.
But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
There were many secretaries up until the late 20th century who took dictation, either writing notes of what they were told or from a recording; then they typed it out and distributed memos. At first, there were many people typing, then later mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keeps messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While they may not have held these people in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written, IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) arrives very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old adage: "How did you go bankrupt?" - "Slowly at first, then all at once."
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
For what it's worth, the decline in the use of horses was much slower than you might expect. The Ford Model T reached peak production in 1925 [0], and for an inexact comparison (I couldn't find numbers for the US), the horse population of France started to decline in 1935, but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].
[0] https://en.wikipedia.org/wiki/Ford_Model_T#Mass_production
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/
The French eat horse meat. Cattle are still present in the US...
If there’s more work than resources, then is that low value work or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive but not sure it will be societally good.
Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.
Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.
[1] - https://en.wikipedia.org/wiki/Keynesian_economics
Productivity gains are more likely to be used to increase margins (profits, and therefore value to shareholders) than to reduce work hours.
At least since the Industrial Revolution, and probably before, the only advance that has led to shorter work weeks is unions and worker protections. Not technology.
Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.
Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down.
> Productivity gains are more likely to be used to increase margins (profits, and therefore value to shareholders) than to reduce work hours
I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service of that goal.
Failure of politics and the media then. Majority of voters have been fooled into voting against their economic interests.
In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.
"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."
A study [1] I was looking at recently was extremely informative. It's a poll from UCLA given to incoming classes that they've been carrying out since the 60s. In 1967, 86% of students felt it was "essential" or "very important" to "[develop] a meaningful philosophy of life", while only 42% felt the same about "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important.
It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, and back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation with some degree of disdain yet also seek to do the exact same themselves.
In any case, in a society where wealth is seen as literally the most important aspect in life, it's not difficult to predict what follows.
[1] - https://www.heri.ucla.edu/monographs/50YearTrendsMonograph20...
Well, keep in mind students at UCLA at 1967 were probably among the most wealthy in the country. A lot more average people at UCLA nowadays. Of course being financially well off wouldn't be the most important thing if you were already financially well off.
> We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years
Still haven't gotten rid of work for work's sake being a virtue, which explains everything else. Welfare? You don't "deserve" it. Until we solve this problem, we're more or less heading straight for feudalism.
> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.
We just instead started doing Bullshit Jobs. https://en.wikipedia.org/wiki/Bullshit_Jobs
People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.
The question is how do our individuals, and more importantly our various social and economic systems handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
I'm optimistic.
Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".
You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.
An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and have no liability for it. AI has no skin and depending on application, much higher upper bound for damage. A digit read wrong in a medical transcript, patient dies.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
Managing risks, can't automate it. Every project and task needs a responsibility sink.
You can bound risk on AI agents just like an ATM. You just can't rely upon the AI itself to enforce those limits, of course. You need to place limits outside the AI's reach. But this is already documented best practice.
The point about AI not having "skin" (I assume "skin in the game") is well taken. I say often that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". Very important lesson that some company will learn publicly soon enough.
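A minimal sketch of what "limits outside the AI's reach" can look like, in the spirit of an ATM's cash cap; the action shape and the caps here are hypothetical, not any particular framework's API:

    from dataclasses import dataclass

    @dataclass
    class AgentAction:
        kind: str      # e.g. "refund", "send_email"
        amount: float  # dollars at risk; 0.0 if not applicable

    # Hard caps enforced in plain code the model cannot rewrite;
    # action kinds not listed here are denied outright.
    LIMITS = {"refund": 100.0, "send_email": 0.0}

    def execute(action):
        if action.kind not in LIMITS:
            return f"denied: '{action.kind}' needs a human (the 'A' in RACI)"
        if action.amount > LIMITS[action.kind]:
            return f"denied: {action.amount:.2f} exceeds the {LIMITS[action.kind]:.2f} cap"
        return f"executed: {action.kind}"

    print(execute(AgentAction("refund", 500.0)))     # denied by the outer limit
    print(execute(AgentAction("deploy_prod", 0.0)))  # denied: kind not whitelisted

The point is that the check sits between the agent and the side effect, so a confidently wrong model hits a wall instead of a customer.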
> Every project and task needs a responsibility sink.
I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions".
But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.
Workshopping this tortured metaphor:
AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.
The owners of the tech need to reinvest in the hosts.
Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.
Yeah, I feel like the real aha moment is still coming, once there is a GPT-like thing that has been trained on reality, not its shadow.
Reality cannot be perceived. A crisp shadow is all you can hope for.
The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around we might end up worse than horses.
Yes and reality is the hard part. Moravec’s Paradox [1] continues to ring true. A billion years of evolution went into our training to be able to cope with the complexity of reality. Our language is a blink of an eye compared to that.
[1] https://en.wikipedia.org/wiki/Moravec's_paradox
Look I think that is the whole difficulty. In reality, doing the wrong thing results in pain, and the right thing in relief/pleasure. A living thing will learn from that.
But machines can experience neither pain nor pleasure.
> What happens when there are no more hosts to donate more training-blood?
LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.
There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good human Go players. It just played against itself, even discarding human source knowledge, and achieved those levels.
People are animals.
When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor.
I'd be more worried about the implicit power imbalance. It's not what humans can provide for each other, it's what humans can provide for a handful of ultra-wealthy oligarchs.
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
Charlie Chaplin's speech is more relevant now than ever before:
https://www.youtube.com/watch?v=J7GY1Xg6X20
The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of those in those bunkers.
They sure as shit won't be content to leave the rest of us alone.
Yeah I know it's an unrealistic ideal but it's fun to think about.
That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is that there's no amount of money/power/status that can satiate that fear, but somehow there's a naive belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.
> It's not what humans can provide for each other, it's what humans can provide for a handful of ultra-wealthy oligarchs.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you become independent.
I was trying to phrase something like this, but you said it a lot better than I ever could.
I can’t help but smile at the possibility that you could be a bot.
An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
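For what it's worth, the "RAG over a knowledge base" mentioned above is roughly the pattern sketched below, with toy keyword matching standing in for a real embedding store, and with made-up knowledge-base entries:

    # Toy knowledge base; a real one would be your internal docs, embedded.
    KB = {
        "vpn": "Install the corporate VPN client, then sign in with SSO.",
        "expenses": "File expenses in the finance portal within 30 days.",
    }

    def retrieve(question):
        q = question.lower()
        return [text for key, text in KB.items() if key in q]

    def answer(question):
        context = retrieve(question)
        if not context:
            return "No match - escalate to a human old-timer."
        # A real system would hand `context` plus the question to an LLM here.
        return "Based on the docs: " + " ".join(context)

    print(answer("How do I set up the VPN?"))

Plenty useful for the documented questions; none of the generality the "AGI" label used to promise.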
Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
Person whose job it is to sell AI selling AI is what I got from this post.
I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never tired mentor available. It gives them more confidence in the first drafted code changes / design docs. But I don't think the horse analogy works.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks full of junior folks' questions. Those are now your Gemini / ChatGPT sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
That sounds like a horrible onboarding experience. Human mentors provide a lot more than just answering questions, like providing context, camaraderie, social skills, or even coping mechanisms. Starting a new job can be terrifying for juniors, and if their only friend is a faceless chat bot...
You're right. We need to keep tabs on the culture for new hires for the reasons you mentioned. LLMs are really good at many onboarding tasks, just not the social ones.
I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering some of them. The lack of building human context / connections etc is real, and I don't think LLMs can meaningfully help there. Hence my skepticism for the horse analogy.
you still lose a bit from not having those juniors' questions around - where is your documentation lacking, and where is your code confusing?
Both questions are getting scary good answers from the latest models. Yes, I tried, on a large proprietary code base which shouldn’t be included in any training set.
Software engineers used to know that measuring lines of code written was a poor metric for productivity...
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Ctrl-F 'lines', 0 results
Ctrl-F 'code', 0 results
What is this comment about?
Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
Maybe it was edited. I count at least 6 instances of the word “code”
So, a free idea from me: train the next coding LLM to produce not regular text, but patches which shorten code while still keeping the code working the same.
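As a toy sketch of that training objective: score a candidate patch only if the tests still pass, and pay out in proportion to how much code it removed. (The test outcome is assumed to come from some external harness, not shown.)

    def patch_reward(code_before, code_after, tests_pass):
        # Breaking the build is worse than doing nothing at all.
        if not tests_pass:
            return -1.0
        # Reward only net shrinkage, scaled by the original size.
        saved = len(code_before) - len(code_after)
        return max(saved, 0) / max(len(code_before), 1)

    before = "total = 0\nfor n in [1, 2, 3]:\n    total += n\n"
    after = "total = sum([1, 2, 3])\n"
    print(patch_reward(before, after, tests_pass=True))  # ~0.49

Whether a model trained against that signal learns elegance or just code golf is left as an exercise.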
gonna tell claude to write all my code in one line
Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?
It also puts a thumb on the scale for AI, which tends to emit pages of text to answer simple questions.
Sounds like any post-secondary student, graduate student, or management consultant out there, given there are, very often, page/word-count or hours requirements. Considering the model corpora, wordiness wins out.
The chart is actually words "thought or written", so I guess they are running up the numbers even more by counting Claude's entire inner monologue on top of what it ultimately outputs.
these are not just "words" but answers to questions that people who got a job at Anthropic had…
How about we stop trying analogies on like clothing and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.
Too much is on the line here regardless of what ultimately ends up being true or just hype.
It’s hard to filter the hot air from the realistic predictions. I’ve been hearing for over 10 years now that truck drivers are obsolete and that trucks will drive themselves. Yet today truck drivers are still very much in demand.
While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.
This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.
https://en.wikipedia.org/wiki/Internal_combustion_engine
Quite. For reference, the horse population of France didn't decline significantly until the late 1940s [0].
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/
Engine efficiency, chess rating, AI cap ex. One example is not like the other. Is there steady progress in AI? To me it feels like it’s little progress followed by the occasional breakthrough but I might be totally off here.
The only 'line go up' graph they have left is money invested. I'm even dubious of the questions-answered graph. It looks more like a feature added to an internal wiki that went up in usage. Instead it's portrayed as a measure of quality or usefulness.
I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.
What example do you need? In every single benchmark AI is getting better and better.
Before someone says "but benchmarks don't reflect the real world..." please name what metric you think is meaningful if not benchmarks. Token consumption? OpenAI/Anthropic revenue?
Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.
This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.
Third party benchmarks like terminalbench exist.
W.r.t. code changes, especially small ones (say 50 lines spread across 5 files): if you can't get an agent to make nearly exactly the code changes you want, just faster than you, that's a you problem at this point. If it would maybe take you 15 minutes, grok-code-fast-1 can do it in 2.
Right. With careful use of AIs, I can use it to gather information to help me make better designs (like giving me summaries of the current best available frameworks or libraries to choose for a given project), but as far as just generating an architecture and then generating the code and devops and so on for that? It's just not there, unless you're creating an app that effectively already exists, like some basic CRUD app.
If you're creating basic CRUDs, what on earth are you doing? That kind of thing should have been automated a long time ago.
What do you mean when you say building crud apps should be automated?
CRUD apps are ridiculously simple and have been in existence my entire life. Yet it is surprisingly difficult to make a basic CRUD and host it somewhere. The bulk of useful but simple business apps are just a CRUD with a tiny bit of customisation and integration around them.
It is true that LLMs make it easier to build these kind of things without having to become a competent programmer first.
E.g. using Rails and generating scaffolding makes it real fast and easy to make a CRUD app.
conventionally, it should have been abstracted by a higher-level language.
> please name what metric you think is meaningful
Job satisfaction and human flourishing
By those metrics, AI is getting worse and worse
AI is very satisfied in doing the job, just ask it.
AI is able to speed up progress, to free up resources, and to give the most important thing people have - time. The fact that these incredible gifts are misused (or used inefficiently) is not AI's problem. This would be like complaining that the objective positive of increased food production is actually a negative because people are getting fatter.
OpenAI net profit.
The figures for cost are wildly off to start with.
ChatGPT was released 3 years ago and that was complete ass compared to what we have today.
I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative
This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?
> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.
I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".
I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from, even over the longer term.
I just have no idea how rigorously the data was reviewed. The 95% decline simply does not compute with
4,500,000 in 1959
and even an increase to
7,000,000 in 1968
largely due to an increase in the recreational horse population.
https://time.com/archive/6632231/recreation-return-of-the-ho...
So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.
Turns out the chart is about farm horses only, as counted by the USDA, not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.
---
City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.
City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.
https://www2.census.gov/library/publications/decennial/1930/...
My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.
To be clear though, the horses didn't adapt. Their population was reduced by orders of magnitude.
True, but the horses' population started (slightly) rising again when they went from economic tools to recreational tools for humans. What will happen to humans?
The horse population was being boosted beyond normal numbers by human intervention. When humans stopped breeding them the numbers dropped.
At least currently humans do not need AI to reproduce.
There were approximately zero horses in the wild, so it was all about what humans found useful.
Pray it’s still humans who ask these kinds of questions about AI, not the other way around.
Did the population of work/service dogs decline? Horses were already a form of automation over human labor.
Bullocks.
That's what Sandy over the road (born 1932, died last year), used to hitch up every morning at 4am, when he was ten, to sled a tank of water back to the farm from the local spring.
"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.
no government's stability ever faced risk from a 20% increase in horse unemployment
Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.
https://en.wikipedia.org/wiki/Jevons_paradox
If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical as it was in the case with nuclear weapons. Otherwise, what does it really mean for AI to "replace people" outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
I would add Gemini Nano Banana Pro to that list - its ability to render words in images is amazing.
Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.
plenty of charts you can look at - net productivity by virtually any metric vs. real adjusted income. the example I like is kiosks and self-checkout. who has encountered one at a place where it is cheaper than its main rival, with the savings directly attributed (by the company or otherwise) to lower prices? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.
even with year-2020 tech you could automate most work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.
so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.
To give backing: I'm from Australia, which has ~2.5x the median wealth per capita of US citizens but a lower average wealth. This shows through in the wealth of a typical citizen: less homelessness, better living standards (HDI in Australia is higher), etc.
Compare sorting by median vs average to get a sense of the issue; https://en.wikipedia.org/wiki/List_of_countries_by_wealth_pe...
This is a recent development, where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of US citizens.
All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.
> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?
You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the covid 19 ACA subsidies permanent and a few other pet projects.
Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.
It should be pointed out that Australia has higher taxes on its middle class than the US does. It tops out at 45% (plus 2% for Medicare) for anyone at $190k or above.
If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.
Not quite so obvious when you look closer at it.
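For anyone checking the back-of-envelope sum quoted above (naively adding the brackets, ignoring deductions and the federal/state interaction):

    federal, ny_state, nyc = 0.37, 0.109, 0.03876
    print(f"combined top marginal rate: {(federal + ny_state + nyc) * 100:.3f}%")
    # combined top marginal rate: 51.776%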
The point isn't to just cover the tax bill, it's that by shifting the burden up the class ladder, there is more capital available to the classes that spend and circulate their money in the economy rather than merely accumulate it
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
I suspect the wealthy think they can shield themselves by exerting control over mass media, news outlets, the press, and domestic surveillance, all amplified by AI.
If all that fails, they have their underground bunkers on faraway islands and/or backup citizenships.
> I suspect the wealthy think they can shield themselves by exerting control over
Agreed, and I think this is a result of a naive belief we humans tend to have: that controlling thoughts can control reality. Politicians still live by this belief, but eventually reality catches up. By that time all trust is long gone.
They already know, and do not care. Their plan is quite literally to retreat into bunkers with shock collars enforcing the loyalty of their guards.
The richest of the rich have purchased islands where they can hole up.
Stripped of their infinite freedom out here to hide in a bunker? No chance
The bunkers are in case of nuclear war or serious pandemics. Absolutely worst case last resort scenario, not just "oh I don't care if I end up there"
Moreover when you act absolutely relentlessly like a certain car maker.
People usually change their behavior after some pretty horrific events. So I would predict something like that in the future. For both Europe and the US too.
Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.
If we had less regulation of insurance companies, do you think they’d be cheaper?
(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)
Well, they'd be more functional as insurance, at least! The way insurance is supposed to work is that your insurance premium is proportional to the risk. You can't go uninsured and then after discovering that your house is on fire and about to burn down, sign up for an insurance plan and expect it to be covered.
We've blundered into a system that has the worst parts of socialized health care and private health insurance without any of the benefits.
Health care is the more complicated one of the examples cited, but housing definitely is an 'own goal' in how we made it too difficult to build in too many places - especially "up and in" rather than outward expansion.
Stuff like this isn't Wall Street or Billionaires or whatever bogeyman - it's our neighbors: https://bendyimby.com/2024/04/16/the-hearing-and-the-housing...
Health care is complicated, but I don't think it would be hard to understand how less regulation could lower prices: more insurers could enter markets, compete across state lines, and face lower compliance costs.
However regulation is helpful for those already sick or with pre-existing conditions. Developed countries with well-regulated systems also have better health outcomes than the US does.
> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.
You mean the same Asia that has the same problem? The USA enjoying arbitrage is not actually a solution, nor is it sustainable. Not to mention that if you control for certain things, like house size relative to inflation-adjusted income, it isn't actually much different, despite popular belief.
It would be kinda funny, if it weren't so tragic, how economists will argue both "[productivity improvement] will make things cheaper" and then, in the next breath, "deflation is bad and must be avoided at all costs".
But is it really, though? Dollars aren't meant to be held.
I think the idea of dollars as purely a trading medium where absolute prices don't matter wouldn't be such an issue if wages weren't always the last thing to rise with inflation.
As it is now, anyone with assets is only barely affected by inflation, while those who earn a living from wages have their livelihoods covertly eroded over time.
Exactly as the current owners… ahem, leaders of this country want it.
Barely affected? They benefit massively from it. That is why the rich get richer.
True, in terms of share of the pie for sure
Housing is a funny old one, and it speaks to this being a human problem. One thing a lot of people don't truly engage with on the housing issue is that it's massively a problem of distribution: too many people want to live in too few places. Yes, central banks and interest rates (having been too low, and now being relatively too high), nimbyism, and rent-seeking play important roles too, but solving the "too many people in too few places" problem actually fixes it (slowly, possibly unpalatably slowly for some, but a fix nonetheless).
The key issue upstream is that too many good jobs are concentrated in too few places, and the consumer spending that follows stimulates those places and makes them even more attractive. Technology, through Covid, actually handed governments a get-out-of-jail-free card by letting remote work become mainstream. Only for them to not grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.
Existing homeowners can still wrap themselves in the warm glow of their high house prices, which only lose "real" value through inflation, which people tend not to notice as much.
But we decided to try to go back to the status quo so oh well
Food and clothes are much cheaper. People used to have to walk or hitchhike a lot more. People died younger, or were trapped with abusive spouses and/or parents. Crime was high. There was little economic mobility. It really sucked if you weren’t a straight white man. Houses had one bathroom. Power went out regularly. Travel was rare and expensive; people rarely flew anywhere. There was limited entertainment or opportunities to learn about the world.
Yeah, that's my question for the author too: if A.I. is really to earn its keep, it should help get more physical products into people's hands and help produce more energy.
Physical products and energy are the two things most relevant to people's wellbeing.
Right now A.I. is sucking up the energy and the RAM, so is it going to translate into a net positive?
That's the question though, isn't it? If everyone got a subscription to claude-$Latest, would they be able to pay their rent with it?
No, because they’d be waiting in the lengthy queues that would be necessary for anyone to use it. There are hard constraints to this tech that make what you’re talking about infeasible.
No because nurses, mechanics, and janitors are still needed.
It's interesting to see Cyberpunk 2077 somehow becoming more and more relatable.
>in the real world are more expensive: health care, housing, cars.
Think of it another way. It's not that these things are more expensive. It's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really it's that wages reduced to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.
If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.
Yep. My grandma bought her house in ~1962 for $20k while working a factory job making $2/hr. Her mortgage was $100/month, about a week's worth of pay. $2/hr then is the equivalent of ~$21/hr today.
If you were to buy that same house today, your mortgage would be about $5100/month - about 6 weeks of pay.
And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.
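(For what it's worth, the weeks-of-pay arithmetic above roughly checks out, assuming a 40-hour week; a quick sketch:)

    # Mortgage payment expressed in weeks of gross pay (40-hour weeks assumed).
    def weeks_of_pay(monthly_payment, hourly_wage, hours_per_week=40):
        return monthly_payment / (hourly_wage * hours_per_week)

    print(weeks_of_pay(100, 2))    # 1962: 1.25 weeks of pay per month
    print(weeks_of_pay(5100, 21))  # today: ~6.07 weeks of pay per month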
But the US is China's market, so the CCP goes along even though they're the producer, because a domestic consumer economy would mean sharing the profits of that manufacturing with the workers. And that would create a middle class not dependent on the party, leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two, and neither can afford to let go. And neither can keep dancing much longer. I think it will be very bad everywhere.
It's inflation, simple as that. The US left the gold standard at the exact same time that productivity diverged from wages. Coincidence? No.
Pretty much everything gets more expensive, the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food and housing, keeps getting more expensive. And the asset-owning class gets richer as a result.
Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...
That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.
This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it's the natural thing that happens when people start earning much higher incomes: you can't hire many of them to do low-productivity jobs like bussing a McD's table.
That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.
Thank you. I've replied too many times that if people want low-priced housing, it's easily found in Texas. The replies are either empty or state that they don't want to live there because... it's Texas.
So the young want cheap, affordable housing, right in the middle of Manhattan. Never going to happen.
4000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki instead?
I am not an AI sceptic; I use it for coding. But this article is not compelling.
I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.
I think the turning point will be when AI assisted individuals or tiny companies are able to deliver comparable products/value as the goliaths.
That would be the ideal scenario; when you can build a small business more easily.
> This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
Glad I noticed that footnote.
Article reeks of false equivalences and incorrect transitive dependencies.
my favorite part was where the graphs are all unrelated to each other
Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
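(For anyone who wants the curve itself, the logistic sigmoid is exactly that flat-then-sudden-flip shape; a minimal sketch:)

    import math

    def sigmoid(x):
        # Logistic function: near 0 for a long stretch, then a rapid flip to 1.
        return 1 / (1 + math.exp(-x))

    for x in range(-6, 7, 3):
        print(x, round(sigmoid(x), 3))  # 0.002, 0.047, 0.5, 0.953, 0.998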
We still have chess grandmasters, if you have noticed.
Yes, and we'll continue to have human coding competitions for entertainment purposes. Good luck trying to live off the prize money, though.
Hikaru makes good money streaming on Twitch tho
I mean, it's hard to argue against the idea that if we invented a human in a box (AGI), human work would be irrelevant. But I don't know how anyone can watch current AI and say we have that.
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being among the first generation of humans able to see that is a super lucky experience.
Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.
Why a human in a box and not an android? A lot of jobs will require advanced robotics to fully automate. And then there are jobs where customer preference is for human interaction or human entertainment. It's like how superior chess engines have not reduced the profession of chess grandmasters, because people remain more interested in human chess competition.
I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
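(To make that ballparking explicit, here is one hypothetical way to run the napkin math; every number below is my own assumption, not something from Jones's post:)

    # Hypothetical $/word comparison between a human and an LLM.
    # All inputs are assumptions chosen only to show the shape of the estimate.
    human_salary = 100_000     # $/yr, assumed
    human_words = 1_000_000    # assumed output: ~500 words/hr over ~2000 hrs
    llm_per_mtok = 15          # assumed $ per million output tokens
    words_per_token = 0.75     # rough rule of thumb

    human_cost = human_salary / human_words                  # $0.10/word
    llm_cost = llm_per_mtok / (1_000_000 * words_per_token)  # ~$0.00002/word
    print(round(human_cost / llm_cost))  # ~5000x, same ballpark as the claim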
Terrible comparison.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with Chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how Chess AI replacing humans in general was never considered as a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.
> 90% of the horses in the US disappeared
Where did they go?
they grew old and died?
sometimes not nearly so pleasant for them.
Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?
Conclusion: Soylent..?
damn
Humans design the world to our benefit, horses do not.
Most humans don't. Only the wealthy and powerful are able to do this
And they often do it at the expense of the rest of us
Cool, now let's make a big list of technologies that didn't take off like they were expected to.
Wait till the robots arrive. What will surprise people the most is that they'll know how to do a vast range of human skills, some of which people train their whole lives for. The future shock I get from Claude Code, knowing how long this stuff takes the hard way, especially niche, hard-to-research topics like the alternative applicable designs of deep learning models for a modeling task, is a thing of wonder. Imagine a master marble carver showing up at an exhibition where some sci-fi author just had robots make a perfect, beautiful rendering of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed
This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory he draws mirrors the horse population's 93% decline.
Indeed. I do wonder if the inventors of the "transformer architecture" knew all the potential Pandora's boxes they were opening when they invented it. Probably not.
No one wants to say the scary potential logical conclusion of replacing the last thing in which humans hold a competitive advantage: intelligence and cognition. For example, there is one future scenario for humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. It's already happening slowly, via inflation and higher asset prices, after all; it is a very real possibility. I don't think a revolution would be possible in this scenario; with AI and robotics, the rich could effectively outnumber pretty much everyone.
Not advocating, just predicting. And not necessarily actual population, just population in paid employment.
yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.
Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases the busywork. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding AI-generated or we will lose sales. The AI tools for writing descriptions and generating listings are not very helpful either. (An inaccurate listing or description is a nightmare.)
I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.
Well, tbf the author was hired to answer newbie questions. Perhaps the position is that of an evangelist, not a scientist.
It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.
I don't think that's the case; I think what's actually going on is that the HN crowd are the people stuck actually trying to use AI tools, and aware of their limitations.
I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.
For those of us who are busy trying to deliver what we've promised customers we can do, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're really positioned to take advantage of all these promises I keep hearing about AI, but in practical terms it saves me, at an architect or staff level, maybe 10% of my time, and for one of our seniors maybe 5%.
So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and completely automate absolutely everything, when what I actually spend my day on is the same-old same-old: trying to get some vendor framework to do what I want, getting sensor data out of their equipment, and delivering apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example because at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)
I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.
I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.
Except, you know, you had a job, and coming out of college could get one… if you were graduating in compsci right now, you'd find a wasteland with no end in sight…
> The hollowing out of Silicon Valley is imminent
I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.
That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.
But they do have their hands on your budget, and they are responsible for creating and filling positions.
>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.
This might hold true for the discourse in the wider community. You see a lot of coverage of artists outraged by AI, but when I speak to artists they have a much more moderate opinion: cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate anything within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.
It's not subtle.
But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.
hello faster horses