FWIW, these studies are too early. Large orgs have very sensitive data privacy considerations and they're only right now going through the evaluation cycles.
Case in point: this past week I learned that Deloitte only recently approved Gemini as their AI platform. Rollout hasn't even begun yet, which you can imagine is going to take a while.
To say "AI is failing to deliver" because of only a 4% efficiency increase is a premature conclusion.
Yeah. We are only just beginning to get the most out of the internet, and the WWW was invented over 35 years ago - other parts of it even earlier. Adoption takes time, not to mention that the technology itself is still developing quickly and might see more and more use cases as it gets better.
I'm not sure this is even measuring LLMs in the first place! They say the definition is "big data analytics and AI".
Is putting Google Analytics onto your website and pulling a report 'big data analytics'...?
Exactly, my company started carefully dipping their toes into org-wide AI mid last year (IT has been experimenting earlier than that, but under pretty strict guidelines from infosec). There are so many compliance and data privacy considerations involved.
And for the record I think they are absolutely right to be cautious: a mistake in my industry can be disastrous, so a considered approach to integrating this stuff is warranted. Most established companies outside of tech really can't have the “move fast, break things” mindset.
Looking at the study, +4% is what firms got when they chose to adopt AI, not the overall figure.
Meanwhile, "shadow" AI use is around 90%. And if you guessed IT would lead the pack on that, you'd be wrong. It's actually sales and HR that are the most avid unsanctioned AI tool users.
Yes, I was recently talking to a person working as a BA who specializes in corporate AI adoption - they didn't realize you could post screenshots to ChatGPT.
These are not the openclaw folks
What does it even mean to specialise in something and know so little about it? What exactly is this BA person doing?
Genuinely confused, I don't get it
The “corporate” in “corporate AI” can mean tons of work building metrics decks, collecting pain points from users, negotiating with vendors…none of which requires you to understand the actual tool capabilities. For a big company with enough of a push behind it, that’s probably a whole team, none of whom know what they are actually promoting very well.
It’s good money if you can live with yourself, and a mortgage and tuitions make it easy to ignore what you are becoming. I lived that for a few years and then jumped off that train.
Sounds like a perfect job for AI!
I wonder if web searches used to be pretty productive, then declined as sponsored results and SEO degraded things.
Nowadays an AI-assisted web search usually eliminates the search altogether and gives you a clear answer right away.
For example, "how much does a Ford F-150 cost" will give you something ballpark in a second, compared to annoying "research" to find an answer shrouded in corporate obfuscation.
The turning point was around when Google stopped honoring Boolean operators and quotation marks.
The killer app for AI might just be unenshittifying search for a couple of years.
Then SEO will catch up and we'll have spam again, but now we'll be paying by the token for it. Probably right around the time hallucination drops off enough to make this viable.
I kind of want to become Amish sometimes.
> then declined as sponsored results and SEO degraded things
It didn't decline because of this. It declined because of a general decade long trend of websites becoming paywalled and hidden behind a login. The best and most useful data is often inaccessible to crawlers.
In the 2000s, everything was open because of the ad-driven model. Then ad blockers, the mobile subscription model, and the dominance of a few apps such as Instagram and YouTube sucking up all the ad revenue made having an open web unsustainable.
How many Hacker News style open forums are left? Most open forums are dead because discussions happen on login platforms like Reddit, Facebook, Instagram, X, Discord, etc. The only reason HN is alive is because HN doesn't need to make money. It's an ad for Y Combinator.
SEO only became an issue when all that was left for crawlers was SEO content instead of genuine content.
I was just thinking exactly the same. Basic web search has become so horrible that AI is being used as its replacement.
I found it a sad condemnation of how far the tech industry has fallen into enshittification and is failing to provide tools that are actually useful.
We always had the technology to do things better, it's the money making part that has made things worse technologically speaking. In this same way, I don't see how AI will resolve the problem - our productivity was never the goal, and that won't change any time soon.
And it'll happen again when AI models start resorting to ads.
Yup. Any LLM recommendation for a product or service should be viewed with suspicion (no different than web search results or asking a commission-based human their opinion). Sponsored placements. Affiliate links. Etc.
Or when asking an LLM for a comparison matrix or pros and cons between choices ... beware paid placements or sponsors. Bias could be a result of available training data (forgivable?) or due to paid prioritization (or de-prioritizing of competitors!)
Their tools are very useful. To their customers. Not to their users.
It's depressing to hear that managers are openly asking all employees to pitch in ideas for AI in order to reduce employee headcount.
For those hearing this at work, better prepare an exit plan.
I've always asked the managers: can you kindly disclose all confidential business information? To which they obviously respond with condescending remarks. Then I respond: how am I going to give you an answer without knowing how the business runs and operates? You can go away and figure out what is going to work for the business, then you can delegate what you want me to do - that is why you pay me money.
"Ideas for AI to help reduce headcount" sounds like the title everyone should start using on resignation letters.
If anyone still resigns that is. They seem to have automated that too.
> It's depressing to hear that managers are openly asking all employees to pitch in ideas for AI in order to reduce employee headcount.
If the manager doesn’t have ideas, it is they who deserve the boot.
You know it's an EU study because they bring up "AI patents" in the first 2 minutes of it, as if those mean anything.
AI is affecting everything the same way Covid did; we've been in one single-topic hysteria since 2020, with one short break for attaching bottle caps to their bottles.
Not even the Russian invasion or the collapse of their automotive industry rattled them.
That's not what the article said, not even close. Not sure why you need to push this emotional and wrong framing.
You trust these stochastic text/slot machines for scheduling and follow-ups? Human intention is important for both of these. Triage and reminders I can see, but if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
> if you send me an LLM-generated follow-up, I'm just going to assume you don't care.
Ironically you just replied to an automated message on a forum and didn't realise :) (hint: click on the user, go to their comment history, you'll see the pattern)
Yes. Other humans are generally accepting of mistakes below some frequency threshold, and frontier models are very robust in my experience
One process redesign that may be considered a moat for AI: an employee who only intends to communicate a sentence or two first passes the text into their AI of choice and asks it to elaborate. On the other end, the colleague uses their AI to summarize the email back into a bullet point or two. It's challenging for those who don't use AI to keep up.
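To make that concrete, here's a rough sketch of the roundtrip (assuming the OpenAI Python client; the model name and prompts are just placeholders for "your AI of choice"):

    # Expand-then-summarize roundtrip sketched above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Sender: one sentence goes in, a "professional" email comes out.
    gist = "Release slips a week, blocked on the vendor API."
    email = ask(f"Expand this into a polite, detailed status email:\n{gist}")

    # Recipient: the long email goes in, bullet points come out.
    bullets = ask(f"Summarize this email in at most two bullet points:\n{email}")
    print(bullets)

Usage here is hypothetical; swap in whichever client and model your org actually sanctions.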
Imagine explaining AI to 1997 you.
"It's like PKZIP, but backwards"
Easy - "It's like in the movie, but the voice is actually human-like rather than robotic."