
AI eats the world

The question is how fast.

I remember in the late 90s, some friends and I discovered the internet and went "Woah! Letters, books, shops, work, study, gaming ... pretty much everything will take place in this virtual realm soon!".

We were right. Except for the "soon".

When the first internet café opened, we started spending a lot of our time there, meeting lots of interesting people. Everything outside the internet already seemed anachronistic to us.

What I didn't expect back then was that even replacing letters with email (which already worked!) would take another painstaking ... 20 years!

Today, my work is already massively transformed via neural networks. Pretty much everything I do starts with an interaction with a neural network. Usually a question to an LLM. And then LLMs or other networks are involved all the way up to finishing the task. Thinking about how I worked before these large neural networks came up makes me shiver. How cumbersome. How anachronistic.

But when I talk to people outside the tech sphere, most are not using LLMs or other neural networks at all.

Everything humans do will be deeply transformed. Even more so than via the internet. Will it take 20 years again? Or is technological progress accelerating and this time it will take only 10 years? So far, my feeling is that it will be a bit faster, but not twice as fast.

But maybe it will be even more surprising. One future I can imagine is that people don't even need to adapt. Maybe we will see software simply do all aspects of a job. One can already imagine it for driving cars or making movies. It is harder to imagine it for managing a company, for example. But who knows. LLMs are already surprisingly good at creating chains of thoughts. Maybe we'll find ourselves in an unexpected future in a few years already, where human involvement in anything is just a burden to the process.

3 days ago | mg

> Pretty much every thing I do starts with an interaction with a neural network.

Pretty much nothing I do starts this way.

Look, LLMs are interesting. I sure spend a lot less time writing basic one-off scripts because of them. The "extra step" of tossing emails to an LLM is just proofreading with less tedium.

LLMs gave everyone an intern that does middling work quickly, never complains, and doesn't get coffee.

We need them to be cheap (to run) and local/on owned hardware (for security and copyright reasons).

3 days ago | zer00eyz

If you go spend $5k on a MacBook Pro M4 Max with 128 GB of RAM and toss on Ollama with Qwen2.5-72B, you have your local LLM, free to run as much as you like.

At first glance that might seem expensive, but then consider how insane it is that you can ask your laptop arbitrary questions and have it respond with really cogent answers, on almost any topic you can think of, without relying on a massive rack of gpu machines behind an api. It uses barely more power than an old incandescent bulb while doing it!
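A back-of-the-envelope check on why 128 GB of RAM is enough for a 72B model. This is a rough sketch, not a precise formula: weights take roughly parameter count times quantization width, and the 20% overhead factor for KV cache and activations is an assumption.

```python
def model_memory_gb(params_billion, bits_per_param, overhead=1.2):
    # Weights = parameter count x quantization width, in bytes;
    # add ~20% for KV cache and activations (rough assumption).
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

print(round(model_memory_gb(72, 4)))   # ~43 GB: a 4-bit 72B model fits easily in 128 GB
print(round(model_memory_gb(72, 16)))  # ~173 GB: the same model at fp16 would not
```

This is why quantized local models became practical on unified-memory laptops at all: 4-bit quantization cuts the footprint to a quarter of fp16.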

3 days ago | ericd

This. Some of the biggest arguments against AI/LLMs being ready for prime time are a result of ignorance around the current SoTA.

3 days ago | rohansood15

This. I’m so glad I got a lower-spec CPU but maxed out my RAM. I use LLMs instead of search engines.

Even just getting Cursor or BoltAI gives you unlimited LLM cloud access for the price of a few coffees.

3 days ago | JoeMattiello

Hi! Just curious: do you use BoltAI as a client for your on-device LLM?

3 days ago | rubymamis

I don't think it will be as massive as you and a lot of people think.

The problem is reliability; you can't build on unstable foundations.

We would need some kind of library of "verified sentences" in LLMs, like facts verified by experts, for them to be useful for a ton of stuff.

Right now, if you are using LLMs for anything that is not objectively verifiable (like programming, where a program either works or doesn't [let's not even talk efficiency or security]) or fluff (generating fancy text for non-crucial reasons), you are using the tool wrong.

3 days ago | Iulioh

If you want to work with verified facts, feed those facts in as part of your prompt. Most good LLMs accept 100,000 tokens now, and some accept 1,000,000+.
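The approach amounts to plain prompt assembly: put the verified facts in the context window and tell the model to answer only from them. A minimal sketch; the helper name and wording are illustrative, not any particular library's API:

```python
def build_grounded_prompt(question, facts):
    # Put the verified facts directly in the context and instruct
    # the model to answer only from them, not from its training data.
    fact_list = "\n".join(f"- {f}" for f in facts)
    return (
        "Answer using ONLY the facts below. "
        "If they are insufficient, say you don't know.\n\n"
        f"Facts:\n{fact_list}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "When was the station opened?",
    ["The station opened in 1892.", "It was rebuilt after a fire in 1951."],
)
```

With 100k+ token windows, "facts" here can be whole documents rather than hand-picked sentences.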

3 days ago | simonw

There is still no guarantee that what comes out will be fact-based.

a day ago | Sateeshm

Right, if you want guarantees of anything you don't want statistical machine learning models.

In practice, I've found that the risk of LLMs hallucinating against well-chosen context is low enough that I rarely worry about it.

17 hours ago | simonw

That's not true. The Internet was inherently unreliable for a long time—connection drops, packet losses, hardware failures—but that didn't stop it from being a platform for incredible value.

You can build valuable, reliable systems on top of unreliable foundations. That's how humanity has progressed over the centuries.

3 days ago | rohansood15

It's still unreliable because humans are, but a human lying or saying incorrect things is different from an AI confidently doing it.

First of all, humans and websites have reputations; with GPT you just hit refresh and you are talking to an entirely different entity, and everything it said is gone.

I feel like there's a difference.

3 days ago | Iulioh

Yeah, we are seeing a massive hype wave, with tons of folks trying new ideas and startups, and obviously a place like HN can make you feel a massive revolution across all of mankind is happening, at least on the level of the internet's arrival. Novelty for most folks will eventually wear off, though; this ain't cute-cat-videos level of continuous interest for the masses. I'd say it's more like Wikipedia: now it's just the place to get some unbiased info.

When the reality is you get a somewhat smarter assistant than ever before, still not very trustworthy, but one that can already save some time if used right. Good luck hearing this from folks riding that hype wave.

3 days ago | jajko

Definitely a hype wave, and we've seen these things come and go before. Crypto, NFTs, Blockchain, Virtual Assistants, Autonomous Vehicles, Wearables, Big Data, VR/AR, 3D TV. In five years, we'll be able to add LLMs/AI to the list.

I'm sure if we look, we can find a five year old HN post that starts out: "My work is already massively transformed via crypto. Pretty much everything I do starts with an interaction with a blockchain."

3 days ago | ryandrake

In the mid-90s, I decided to _not_ pick AI as a major because we had just gone through the original AI winter and most academic research on the topic was getting defunded in favor of high-speed networking and video conferencing.

To this day, and even though I am deeply involved with AI, I do not regret that decision—but I am constantly reminded that the expectations practitioners have are _way_ off what most people will actually want.

3 days ago | rcarmo

See slides 58 and 59 - this can take a while.

ChatGPT got to 100m users much faster than anything else because it's riding on all the infrastructure we already built in the last 20 years. To a consumer, it's 'just' a website, and you don't have to wait for telcos to build broadband networks or get everyone to buy a $600 smartphone.

But, most people go to the website and say 'well, that's very cool, but I don't know what I'd use it for'. It's very useful for coding and marketing, and a few general purposes, but it isn't - YET - very helpful for most of the things that most people do all day. A lot of the presentation is wondering about this.

3 days ago | benedictevans

Only OpenAI knows for sure, but so many non-tech people I know use ChatGPT as a sounding board for whatever: "My boyfriend sent me this text, how should I respond?" or "Teach me about investing." There are a bunch of people I know that don't use ChatGPT; I'm just surprised that people I didn't think would have a use for it have found it very useful.

3 days ago | fragmede

How long is a while, and what is it that most people do all day?

A quick Google search for "most common job" came back with

    Cashier

    A cashier works in a retail environment and
    processes transactions for a customer's purchase.
I wouldn't be surprised if robots can do that on their own in 10 years.
3 days ago | mg

Robots can already do that; they are used at large chains (McDonald's) all the time.

What they can't do is call the police when the hobo gets too wild, fix the inevitable bug in the process (by doing some 4th-level menu bypass), or handle other random stuff that might pop up.

And when the robot can do all that, humans are no longer viable as economic entities and will be outcompeted.

3 days ago | Gud

The problem is, the robot has to know what I want it to do without me having to dictate it.

That's the beauty of human interaction: it can't be massively truncated down to even finger-pointing.

2 days ago | delfinom

I think they are, though, just not consciously. I'm finding more and more that it's rare to be typing on a device without some AI lurking in it. Email, docs, mobile keyboards, and soon Apple Intelligence searching your photos, etc.

I can’t even type a note to myself in Gmail without “Polish!” popping in to tell me that I’m shit at writing a good note-to-self and it needs polishing. (Wherein it typically reforms it to, “Dear customer service, …”)

Every Google search uses an LLM now, right? At least I can’t figure out how to stop it from happening.

3 days ago | cryptoz

I made a note tool last week that was designed with this problem in mind: push the polishing step to later.

It's an append-only system, and the happy path for note-taking is a simple keybind popup that only displays 30 characters of input on screen. I cannot go back and edit notes later, but I can open a larger app to write new ones that reference (or supersede) the originals, which lets me refine a note.
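The append-only scheme described here can be sketched in a few lines; the class and field names are made up for illustration, not the commenter's actual tool:

```python
import time

class AppendOnlyNotes:
    """Notes are never edited in place; a refinement is a new note
    that references (and supersedes) the original."""

    def __init__(self):
        self.notes = []  # each entry is immutable once appended

    def add(self, text, supersedes=None):
        note_id = len(self.notes)
        self.notes.append({"id": note_id, "text": text,
                           "supersedes": supersedes, "ts": time.time()})
        return note_id

    def current(self):
        # Latest version of each note chain: drop anything superseded.
        superseded = {n["supersedes"] for n in self.notes
                      if n["supersedes"] is not None}
        return [n for n in self.notes if n["id"] not in superseded]

nb = AppendOnlyNotes()
first = nb.add("buy milk??")
nb.add("buy milk and eggs for Saturday", supersedes=first)
```

The full history is always preserved, so "polishing" becomes a deliberate later step rather than something an assistant forces on you mid-thought.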

3 days ago | conradludgate

The Internet was fun and interesting because there were people on it creating fun and interesting projects and there were very few restrictions. Now it's just a shopping mall where people shout at each other.

AI is a trash compactor run through an HR department. It's already neutered and censored to hell. If you want to get the official, approved version of something, ask an AI. If you think that's fun, you've probably forgotten what it was like to be young.

3 days ago | sincerecook

I think this is a good observation of the past, but applying these patterns to AI is like when people applied them liberally to cryptocurrency and claimed it would replace the internet protocol, which was a certain kind of exuberance of the ill-informed.

3 days ago | th0ma5

The difference, as always, is that AI is useful - in many different ways.

Crypto was useful for speculative investment and occasionally for working around money transfer restrictions.

3 days ago | simonw

IMO they are the same: they make certain things easier than they used to be. Not everyone needs to make transactions without a bank looking over their shoulder, and I don't personally know anyone whose life is changed by being able to type 100x faster (or, conversely, produce summaries 1/100th the length of the input).

I continue to try applying small models to tasks like producing structured data from unstructured (using LM Studio's schema parameter), and it's really neat that I can extract data from work emails, but every once in a while it will hallucinate a new phone number when the one I needed was in the prompt, so it still hasn't met the kind of reliability I want in order to actually automate stuff where dollars are on the line.
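One cheap mitigation for exactly this hallucinated-phone-number failure mode, sketched below (this is not LM Studio's API, just a post-processing idea): refuse to trust any extracted value that doesn't literally appear in the source text.

```python
def verify_extraction(source_text, extracted):
    # Cheap guard against hallucination: only keep a field whose value
    # appears verbatim in the text it was supposedly extracted from.
    checked = {}
    for field, value in extracted.items():
        if value is not None and str(value) in source_text:
            checked[field] = value
        else:
            checked[field] = None  # flag for human review
    return checked

email = "Hi, call me at 555-0142 to confirm the order."
ok = verify_extraction(email, {"phone": "555-0142"})
bad = verify_extraction(email, {"phone": "555-9999"})  # model invented this one
```

Substring matching is crude (it misses reformatted values), but it turns silent corruption into a visible gap, which matters when dollars are on the line.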

a day ago | jazzyjackson

What AI is useful for makes cryptocurrencies look marginally more useful by comparison.

3 days ago | xk_id

I agree with everything except the use of the word “anachronistic”. It’s using LLMs for everything today that is anachronistic, because almost everyone today does not use them ;)

3 days ago | illiac786

> ... replacing letters with email ...

Somewhat true, but not totally. The role of email differs in many ways from old-fashioned letters. And letters were not only replaced by email, but also by more modern forms of communication. And those more modern forms replace not just letters but the telephone, etc.

And the letter case is a specific case. There are many old things which have not been replaced totally.

The same goes for AI. It will replace some things, but not all.

3 days ago | tapirl

Even when the potential is obvious, it takes time for people to adapt to it, trust it, and integrate it into daily life.

3 days ago | interludead

If everything you do starts by asking an LLM, then you start with superficial research, because frankly it was never anything better than a fancy Google search.

3 days ago | epigramx

This is probably related to cynicism (but also maybe wisdom from a life that at times was not easy) I've developed over the years, but I can't help but get seriously dystopian vibes from your post. More than from the usual boosters, because it seems a little more honest and a little less cognizant of what I think are the real moral hangups people tend to have with AI. So you seem genuine in a way many others aren't.

I don't for one second really suspect it will be the case (not for the usual technical criticisms, although I'm skeptical there as well, but more because I don't think it would be socially sustainable for an extended period of time)... but let's for a moment take your last paragraph at face value and in good faith.

I mean...what exactly is it that you are advocating for or accepting? Even if we get some kind of very generous UBI, there's something about human nature that makes me suspect the consequences of this would be an almost guaranteed miserable existence for pretty much everyone.

Even in the best case scenario, where the results of this transformation are kept under control and distributed in a reasonable manner and the whole thing doesn't cause a social and political meltdown...what is everybody going to do? There's some amount of wisdom in the old saying that "idle hands are the devil's plaything".

That's the real issue I am most concerned about, and it seems to be the least often addressed by big AI boosters and detractors (I realize both camps often have ulterior motives). I suspect many are feeling some amount of concern like that... so why is this (I would argue most fundamental) question about the impact of AI never talked about?

I don't want to hear anything about some big Terminator-style fight against AI, or about how wonderful and unpredictable the inevitable future of WALL-E-style luxury gay space communism is going to be... none of those discussion points get to the heart of what makes many people so uncomfortable with the concept. And I think people believing that second scenario is at all socially plausible is what gives me the most pause. It makes the Terminator scenario almost seem like a preferred outcome if we were given a binary choice... in reality, I think most would prefer neither, and would agree with me that we aren't even discussing the right issues w.r.t. an "AI gets much better" potential future.

3 days ago | zifpanachr23

I'm not advocating, just trying to look into the future.

Superhuman AI seems to be a building block of it. And deep transformation of how we work will come with it.

So I raised the question of the timeline.

That's all.

3 days ago | mg

Fair enough. I guess the "deep transformation of how we work" part is what I don't really understand. Like I said, there's certainly some cynicism I've got, and so I have a tendency to extrapolate potential negatives from everything left unsaid about that "transition".

I think it's partly these hangups about the situation that give me the tendency to assume a post like yours is necessarily "advocating" as well, so my apologies if that was not your intent.

3 days ago | zifpanachr23

    I guess the "deep transformation of how we work"
    part is what I don't really understand.
At the moment, it looks like we'll either become completely obsolete in the context of work, or we'll morph into cyborgs with vastly greater knowledge and processing capabilities than we have today.
3 days ago | mg

Right. Those both sound horrifying to me.

I suppose I'll leave it at that.

If those don't terrify you, then I guess we just don't really have similar temperaments as people, and that's totally okay with me. I did enjoy discussing it with you.

3 days ago | zifpanachr23

I’m the same. I haven’t read the responses to your comment yet, but I guarantee there are some folks pulling out their hair in disbelief about what you could possibly be using LLMs for, how the code can never stand up to what they’re writing, etc. I don’t understand how you can’t find ChatGPT useful. I use it at least 30 times in any given day.

3 days ago | dyauspitr

It's hilariously wrong at times, but the problem is when people take what an LLM spits out as fact.

Just one example: in a mild debate I was having about cars, the other person posed a question about how fast a Golf Type R could get in a specific distance, and ChatGPT spat out a number that the other person accepted as fact, but I already knew was too high.

What ChatGPT had done was take the published 0-to-60 time and extrapolate a linear distance-vs-velocity formula. Which was impressive, granted, but wrong; speed gained over distance is sublinear at best.

It's a great tool, but I think a lot of people are just taking what it spits out without slowing down to question whether the output makes sense.
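The shape of the error is easy to see even with idealized numbers. Under constant acceleration, v = √(2ad), so speed grows with the square root of distance, while a linear extrapolation doubles the speed every time the distance doubles. The 0-60 time below is illustrative, not the actual car's figure:

```python
import math

V60 = 26.8              # 60 mph in m/s
T60 = 5.0               # assumed 0-60 time in seconds (illustrative)
A = V60 / T60           # implied constant acceleration
D60 = V60**2 / (2 * A)  # distance covered while reaching 60 mph

def speed_linear_in_distance(d):
    # The flawed extrapolation: speed proportional to distance travelled.
    return V60 * d / D60

def speed_constant_accel(d):
    # Idealized physics: v = sqrt(2 a d). Real cars do even worse at
    # higher speeds as power limits and drag bite, so this is an upper bound.
    return math.sqrt(2 * A * d)
```

At four times the 0-60 distance, the linear model claims four times the speed, while even the frictionless constant-acceleration model only reaches double it.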

3 days ago | somerandomqaguy

It's highly dependent on what you are using it for, so I think the variability in usefulness is totally predictable.

It's not that I'm some fancy scientist-level programmer (I'm definitely not), but I often find my attempts at using AI falling into that less-useful category... a lot of the time it's just due to niche platforms and libraries and things that are specific to our shop or the regulatory environment, or a thousand other issues of that nature. I imagine similar issues are incredibly widespread for basically anybody who is not doing somewhat isolated greenfield work at young companies and isn't spending tens of millions on custom training for their specific environments.

The whole "everything web, mostly open source, ship ship ship new code" style of work environment you tend to find among young startups is not as common as it seems if you gauge your view of technology jobs from Hacker News. Given that most of the training of the most powerful models is basically scraping the web, it's not at all surprising that they are seriously lacking in other areas. And I'm not sure to what extent they can seriously be expected to improve there... beyond the obvious issue of uploading internal documentation to give an external LLM better prompting, the thing still has to use public training data to make predictions about internal libraries that may well be old or anachronistic or batshit crazy, because the difference in volume between your internal software and everything posted publicly on the internet is massive.

3 days ago | zifpanachr23

Lower on the page, there are links to his "Mobile is eating the world" presentation, 2015-ish.

How does this work? Is AI eating Mobile now? Is there a new world to be eaten every decade or so?

Ah, wait. It's just meaningless hyperbole.

3 days ago | lucianbr

It’s just a play on the title of the “Why Software is Eating the World” article [1] that went viral in the tech industry 13 years ago. It’s no different from all the “[…] is all you need” ML papers. It’s a metaphor, don’t read too much into it.

[1] https://a16z.com/why-software-is-eating-the-world/

3 days ago | throwup238

"X is eating the world" articles considered harmful?

3 days ago | LambdaComplex

"X is eating the world" articles are eating the world

3 days ago | notfed

If you look past the hyperbole, I think there are some interesting data points in there. For example, fewer enterprises claim to have AI systems in production this year vs last year.

3 days ago | rohansood15

Mobile ate the world, though; most people's daily interactions with computer-related stuff are through a mobile device.

Back in the 2000s-2010s, the hyperbole was that software was eating the world. And it did; almost everything in the modern world depends on software to function.

Will AI eat the world? We will only know in hindsight; right now it's too much in flux, but it will be clear in another 20 years whether the hyperbole was true or not.

3 days ago | piva00

Incidentally, are those three separate talks in 2013, 2015 and 2016 with identical titles, or the same talk revised and given three times?

3 days ago | zahlman

AI sits between the mobile layer [hardware devices through which we consume content] and the internet layer [information/content]. In that sense, AI will eat stuff at that layer: tasks which were generally done differently.

3 days ago | jumping_frog

The world is like a fruit that can be eaten over and over ad infinitum.

Though some people want to save it.

I prefer eating.

3 days ago | indigoabstract

Are you a bat, by chance? Maybe a tortoise?

3 days ago | whtsthmttrmn

No. Just a programmer. Sorry to disappoint!

3 days ago | indigoabstract

Most of the presentation is saying that it isn't clear how this will work, it will take a long time, and it probably won't do everything.

Indeed, you would see that if you'd read even the first half-dozen slides ;)

3 days ago | benedictevans

I've decided not to read the article/slides because the title, in conjunction with the other titles on the page, sounded stupid to me.

My time is not free, sorry.

3 days ago | lucianbr

Thank you - I will add this to my file of people expressing strong opinions about things they haven’t read and know nothing about.

3 days ago | benedictevans

I don’t like the guy. But don’t you think he was right about mobile?

3 days ago | baxtr

A lot of mobile has happened. I don't think it "ate the world" - that would mean everything would be mobile now, and it is not. What's all the discussion I read on HN about rewriting everything in Rust, LLMs, Postgres replacing everything, and many others? Not mobile, is what it is.

It's a meaningless soundbite.

3 days ago | lucianbr

1: It's the TITLE of a 100-slide presentation. It's not the only thing it said, and it's a way to think about what was happening.

2: Mobile replaced the PC as the main way people use the internet and do their day-to-day computing. The consumer Internet runs on smartphone apps, not PCs. In 2013 a lot of people didn't understand that that was happening, so it was worth saying.

3 days ago | benedictevans

> Is AI eating Mobile now?

Yes

3 days ago | bhaney

The only thing AI is eating is power.

3 days ago | phito

I like your words, which have metaphors. Does "power" here mean electrical power, political power, or financial power?

3 days ago | eichi

All of the above?

3 days ago | pabs3

Soul power.

3 days ago | whtsthmttrmn

"AI uses the same amount of power as Bolivia per day to craft 80% accurate powerpoint slides for consultants to present to executives who are outsourcing their primary job function for 900/hr"

3 days ago | red-iron-pine

I used to really love his stuff, but this is all pretty well-trod ground and he doesn't have much new to say.

3 days ago | why_only_15

Well, I'm not trying to explain the state of the science and the engineering, but to work out what this means for everyone else. There are no products to analyse yet - which is part of the problem.

3 days ago | benedictevans

I think the talk is meant for executives who are more cautious and want the fluff and dust to settle before they deploy resources at their companies on the next shiny thing.

3 days ago | jumping_frog

> What's our AI strategy?

In most cases, probably giving OpenAI a bunch of money.

For whatever reason, the full stack hasn't been commoditized yet to the degree where you could self-host easily. For example, I can put the paid or free version of GitLab on my servers and get repo management, issue tracking, CI/CD, a wiki, and a bunch of other stuff. It covers most use cases and works out of the box, even if not always in the ways I want.

As for AI... there's OpenAI and GitHub Copilot, and even JetBrains has its AI solutions. You pay for access to the back-end component, and there are IDE plugins that integrate with it, even custom IDEs or editors like Cursor. But what if you want an editor/plugin that talks to models running on your own servers? Sure, you can get models off of Hugging Face and run them locally on a machine that has the hardware to take advantage of them... but then what? What about integrating with merge requests in the aforementioned GitLab instance? Obviously it's all possible, but somehow I haven't seen many solutions that offer something like GitLab but for AI.

Even GitLab's own solution talks to their servers: https://about.gitlab.com/solutions/code-suggestions/

> Code Suggestions is available to self-managed GitLab instances via a secure connection to GitLab.com.

I'm guessing CodeGPT is probably a piece of that puzzle, or maybe the Tabnine enterprise setup.
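One reason the editor/plugin side is at least tractable: self-hosted stacks such as vLLM, Ollama, and llama.cpp's server all expose an OpenAI-compatible chat-completions endpoint, so a tool only needs a configurable base URL to point at your own servers. A sketch of building such a request; the host name and model name here are made up:

```python
import json

def chat_completion_request(prompt, base_url="http://llm.internal:8000/v1"):
    # OpenAI-compatible shape: POST {base_url}/chat/completions with a
    # model name and a messages array. Both names below are hypothetical.
    url = f"{base_url}/chat/completions"
    payload = {
        "model": "local-code-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

url, body = chat_completion_request("Review this merge request for bugs.")
```

What's missing, as the comment says, is the integrated layer on top: wiring such an endpoint into merge requests, issues, and CI the way GitLab bundles everything else.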

3 days ago | KronisLV

The core of the issue is that you need beefy GPUs to really run these models at production workloads.

So I think what you're currently imagining won't happen until GPU prices go down massively.

3 days ago | ffsm8

"[hot new thing] eats the world" (˚0˚)!!

3 days ago | tropicalfruit

Typical a16z. Also brought us The Techno-Optimist Manifesto and please vote Trump because of muh crypto.

Edit: Benedict Evans no longer works for a16z. However I think it must be a reference to Why Software Is Eating the World.

3 days ago | benatkin

That slide deck is more questions than answers.

Here's a useful question: Suppose the LLM hallucination problem is not solved in the next 10 years. What happens to the AI boom?

3 days ago | Animats

I tried to capture this on the last slide before the conclusion - maybe all AI questions have one of two answers - "no-one knows" or "it will be the same as the last time"

this is one of the "no-one knows" questions

3 days ago | benedictevans

The question I'm asking isn't whether hallucinations can be fixed. It's what, if they are not fixed, are the economic consequences for the industry? How necessary is it that LLMs become trustworthy? How much valuation assumes that they will?

3 days ago | Animats

And is it even fixable?

a day ago | Sateeshm

Therefore, after being digested the world will have turned to ____?

3 days ago | seydor

Therefore, after being digested, the world will have turned to data.

3 days ago | interludead

Indeed, and after that shit turns cold, it'll be the "heat death" of the Information Universe that we called the Internet.

3 days ago | disqard

...the ending from Marshall Brain's short story, Manna

3 days ago | red-iron-pine

But I do think our current path with these huge networks is far from leading us to the long-desired AGI; this is just doubling the compute, as well as the cost, as time goes on.

3 days ago | smgpie

If only AI were used for good and not for evil. Think of the jobs replaced by AI. Think of the students cheating by letting AI write their papers. Think of the high school student who creates porn of the girls in his class using deepfake AI: instant kiddie porn.

We need AI to solve climate change, but the energy costs will contribute to climate change. We need a cure for cancer, an end to COVID and other dangerous viruses, and a personal AI that does tasks and earns the user money for a basic income, or something.

3 days ago | orionblastar

> We need AI to solve climate change

It's a chat bot. Solving climate change is way above its paygrade.

3 days ago | otabdeveloper4

LLMs aren’t the only game in town

3 days ago | coffeebeqn

But LLMs seem to be the only ones delivering.

Something something mRNA Moderna something -- yeah, okay, in niche fields.

What, specifically, is AI doing to fix climate change? Outside of creating a shit-ton of carbon burning through electricity, that is.

3 days ago | red-iron-pine

I feel it would only repeat the same things the boomers are too hard-headed to understand.

And probably also note that the proposals of the climate-justified vandals are too unrealistic.

There's nothing more nonsensical than a solar-nuclear antagonism. Build more solar. Build more nuclear, preferably SMRs, and do more research on ways of making it safer and more cost-effective.

Fill the dry sunny areas with solar and batteries. Fill the windy areas with wind turbines and treat anyone opposing this with the same contempt as people vandalizing paintings "for the climate"

Every car should be at least a hybrid, because there's nothing clunkier than plugging an engine that deals badly with varying RPM and torque directly into a variable load.

3 days ago | raverbashing

> Fill the dry sunny areas with solar and batteries. Fill the windy areas with wind turbines and treat anyone opposing this with the same contempt as people vandalizing paintings "for the climate"

The environmental impact of solar and wind turbines is huge, much greater than coal or nuclear.

(By "environmental impact" I mean "natural habitat destruction", not "my property values went down because the air is icky".)

2 days ago | otabdeveloper4

> The environmental impact of solar and wind turbines is huge, much greater than coal or nuclear.

See, this is the kind of BS boomers come up with to justify the current idiocy, and that other boomers will eat hook, line and sinker without any critical thought.

Pray, what kind of "environmental destruction" does putting solar panels in the desert cause that overshadows actually digging up the soil for coal?

The fact that a coal plant emits more radiation than a nuclear plant should be sufficient to establish how much BS this "fact" is.

But I know, some people think training AI in space is cheaper; some will believe anything they're paid enough to believe.

(I am obviously in favour of nuclear plants, and your statement might be partly correct there.)

2 days ago | raverbashing

> Pray what kind of "environmental destruction" putting solar panels in the desert will cause that shadows actually digging the soil for coal?

Habitats of desert animals will be destroyed and they will go extinct. (But humans don't live in the desert and desert animals hold no cultural significance, so who cares, amirite?)

> The fact that a coal plant emits more radiation than a nuclear plant

Irrelevant. Chernobyl was the best thing to happen to Europe's biodiversity in centuries.

2 days ago | otabdeveloper4

Do you know why we do not put many solar panels in the desert?

Because we do not live in the desert, and moving electricity is terrible (the loss is insane).

2 days ago | JackSlateur

It already does. Ask it whether trickle-down economics works, or what the benefits of economic redistribution are and how it differs from socialism.

The problem is that people don’t use it for what it’s good for; instead they try to make it code and play chess, because they can’t accept that it can’t think (yet).

3 days ago | kubb

I hope this is irony. If not, you could write the same, with little change, about the steam engine. It is a technological advance, the same as every one prior to it. Those who embrace it and use it will come out on top (not without the personal risk of having your own skills atrophy). Others will lose. Just like every person doing horse-stable work when automobiles became widely used.

3 days ago | Roark66

Except this isn't the steam engine or the internal combustion engine/the car; it's tech, and it replaces most jobs rather than just a few.

If you take its future promise as true (I do not, but for the sake of argument let's pretend it is as powerful as you're saying), then its impact is on a completely different scale than even industrialisation. It's silly to retreat to "no one shoes horses anymore" platitudes, as if this tool isn't wildly more capable (again, if we take its future promise as written) and disruptive.

3 days ago | girvo

So we have in the big shifts:

Agricultural workers became industry workers (mechanized agriculture).

Industry workers became service and white-collar workers (automation).

I don't see a sector for white collar workers to shift to now. Even more service workers?

There is room in the serf and capitalist classes, though. Hopefully we end up in the latter rather than the former.

But there is a need for political reform to open up the latter, I guess.

A 3-day work week and so on, to lower the pressure on wages. A progressive corporate tax? But I don't see that happening in the US.

3 days ago | rightbyte

It might be useful to check how linear regression, the first form of "AI", has eaten the world. Let's call its modern incarnation LLR (Large Linear Regression) for kicks.

In LLR you fit a massive chunk of data; then you do an interpolation and use the result with care, or you do an extrapolation and live dangerously. So it's all good fun, but how do you eat the universe with it?
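The interpolate-with-care / extrapolate-dangerously distinction in one toy fit (a hand-rolled least squares over a gently curving function, purely illustrative):

```python
import math

# 50 samples of sin(x) on [0, 2]: locally almost linear, globally curved.
xs = [i * 2 / 49 for i in range(50)]
ys = [math.sin(v) for v in xs]

# Ordinary least-squares line through the data, computed by hand.
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
         / sum((a - xbar) ** 2 for a in xs))
intercept = ybar - slope * xbar

interp_err = abs((slope * 1.0 + intercept) - math.sin(1.0))  # inside the data: small
extrap_err = abs((slope * 6.0 + intercept) - math.sin(6.0))  # far outside: large
```

The fitted line is a fine summary where the data lives and nonsense three diameters away from it, which is the whole joke about living dangerously.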

You might try to become the Matlab or SAS or Excel that brings Large Linear Regression to the world. Godspeed; a giant Python has already swallowed C++ and they will be coming after you.

You may try to eat all the world's data so that nobody but you can do Large Linear Regressions. Good luck with that pharaonic quest as well.

Or you might try to print silicon that does LLR quickly, hoping that nobody else can master that dark art. But silicon is as plentiful as sand and keeping dark secrets dark is not easy when you want to eat the world.

That's about it. No melodrama, no medieval moats, no megazillion dollars. LLR will be everywhere and yet nobody will care.

The bandwagon will move to the next big tech thing. Quantum Coin maybe?

Assuming the deteriorating reality of our actual condition does not catch up with this clown circus. Because the end condition of the world being serially eaten by the tech bros is... the world being dead.

3 days ago | openrisk

'You are an expert in slide summarizing...'