
The slow collapse of critical thinking in OSINT due to AI

> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

15 hours ago · Aurornis

> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

13 hours ago · karaterobot

Having a surface level understanding of what you're looking at is a huge part of OSINT.

These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.

2 hours ago · potato3732842

At least on Reddit you've got a good chance of Cunningham's Law[1] giving you a shot at realizing it's not cut and dried. In this case, I refer to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post *what someone somewhere thinks is* the wrong answer" (my strength reduction marked with asterisks). At least if you stumble into a conversation where people are arguing, it's hard to avoid applying some critical thought to parse out who is correct.

The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons, and there's no cognitive prompt to make you consider whether it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest it to other people: even if that distrust is too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right. If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem. Right now they sit in the maximum danger zone of correctness: right enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.

[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law

2 hours ago · jerf

What you're describing for Reddit is farcically charitable except in cases where you could just google it yourself. What you're describing for the LLM is what Reddit does when any judgement is involved.

I've encountered enough instances, in subjects I am familiar with, where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before and this is kind of nuanced with a lot of moving pieces, you'll need to understand the following X, the general gist of Y is..." take that's more correct is highly downvoted, that I feel justified in making the "safe" assumption that this is how all subjects work.

On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.

But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.

an hour ago · potato3732842

[dead]

an hour ago · throwaway29812

The pull is too strong, especially when you factor in the fact that (a) the competition is doing it and (b) the recipients of such outcomes (reports, etc) are not strict enough to care whether AI was used or not. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.

9 hours ago · low_tech_love

And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways, so they might as well.

5 hours ago · torginus

Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

It's always more comfortable for people to blame the thing rather than the person.

13 hours ago · jart

More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.

13 hours ago · InitialLastName

Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.

13 hours ago · jart

Ah yeah, fentanyl adulterators, what great benefactors of society.

Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.

https://en.wikipedia.org/wiki/Common_good

11 hours ago · ZYbCRq22HbJ2y7

No one desires adulterated fentanyl.

10 hours ago · jart

No one has desire for adulteration, but they have a desire for an opiate high, and are willing to accept adulteration as a side effect.

You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.

10 hours ago · ZYbCRq22HbJ2y7
[deleted]
10 hours ago

Fentanyl adulterators, market creators and resellers certainly do, for higher margin selling and/or increased volume.

4 hours ago · harperlee

The traffickers looking to pack more punch into each shipment that the government fails to intercept do.

Basically it's a response to regulatory reality, little different from soy wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wire rodents don't like to eat but that's just not possible while remaining in the black.

2 hours ago · potato3732842

I like the facebook comparison, but the difference is you don't have to use facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.

9 hours ago · PeeMcGee

If you are in the news business you basically have to.

7 hours ago · friendzis

I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.

2 hours ago · itishappy
[deleted]
13 hours ago

The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

[2] https://apnews.com/article/us-intelligence-services-ai-model...

8 hours ago · Animats

The really big problem in open source intelligence has been for some time that data to support just about anything can be found. OSINT investigations start with a premise, look for data that supports the premise and rarely look for data that contradicts it.

Sometimes this is just sloppy methodology. Other times it is intentional.

7 hours ago · D_Alex

I think OSINT makes it sound like a serious military operation, but I think political opposition research is a much more accurate term for this sort of thing.

2 hours ago · dughnut

> listen to Radio Albania just in case somebody said something important

... or just to know what they seem to be thinking, which is also important.

4 hours ago · B1FF_PSUVM

I got Radio Tirana once (1990-ish) on my shortwave. The program informed me something to the effect that Albania is often known as the Switzerland of the Balkans because of its crystal-clear mountain lakes.

an hour ago · euroderf

"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654

21 hours ago · jruohonen

> This isn’t hypothetical. This is happening now, in real-world workflows.

Yes, that's a part of why AI has its bad rep. It has its uses for streamlining workflows, but people are treating it like an oracle. When it very, very, very clearly is not.

Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.

6 hours ago · johnnyanmac

Anyone using these tools would do well to take this article to heart.

17 hours ago · cmiles74

I think there’s a lot of people who use these tools because they don’t like to read.

6 hours ago · mr_toad

Reads like it was written by AI.

15 hours ago · gneuron

One way to achieve superhuman intelligence in AI is to make humans dumber.

18 hours ago · palmotea

This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal.

12 hours ago · ryao

The TVs prior to the 1970s/solid state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" so he could watch the evening news. We're still at that stage of AI.

12 hours ago · SoftTalker

The guy started saying it in the 80s or 90s, when that issue had been fixed. He is the Minix guy, if I recall correctly.

10 hours ago · ryao

If you came up with that on your own then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on.

13 hours ago · xrd

What if ChatGPT came up with it?

12 hours ago · BrenBarn

I don't use LLMs, because I don't want to let my biggest advantages atrophy.

12 hours ago · palmotea

while gleefully watching the bandwagon fools repeatedly ice-pick themselves in the brain.

an hour ago · MrMcCall

The cultural revolution approach to AI.

15 hours ago · boringg

That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.

17 hours ago · imoverclocked

I thought: a group working together poorly isn't smarter than the smartest person in that group.

But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.

15 hours ago · 6510

That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some.

13 hours ago · trentlott

anybody who's ever tried to play bar trivia with a team should recognize this

14 hours ago · jimmygrapes

Being timid in bar trivia is the same as being wrong.

an hour ago · tengbretson

What do you mean? You can protest against bad but fast answers and check another box with the pen.

6 hours ago · rightbyte

Right, superhuman would be relative to humans

but intelligence as a whole is based on a human ego of being intellectually superior

15 hours ago · yieldcrv

That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism.

Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors.

If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.

15 hours ago · caseyy

Some things I think about are how different the goals could be

For example, human and biological based goals are around self-preservation and propagation. And this in turn is about resource appropriation to facilitate that, and systems of doing that become wealth accumulation. Species that don't do this don't continue existing.

A different branch of evolution of intelligence may take a different approach, one that allows its effects to persist anyway.

10 hours ago · yieldcrv

This reminds me of the "universal building blocks of life" or the "standard model of biochemistry" I learned at school in the 90s. It held that all life requires water, carbon-based molecules, sunlight, and CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur).

Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight.

Sometimes anthropocentrism blinds us. It's a phenomenon that's quite interesting.

6 hours ago · caseyy

It sure seems like the use of GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it to a fine enough level of detail that she is satisfied. In the author's Scenario 1:

> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”

> It spits out a convincing response: “Paris, near Place de la République.” ...

> But a trained eye would notice the signage is Belgian. The license plates are off.

> The architecture doesn’t match. You trusted the AI and missed the location by a country.

Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."

How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere else in Paris. Let's look into the license plate patterns...

At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.

EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...

13 hours ago · 0hijinks

The advantage of the AI in this scenario is the starting point: you can now start cross-referencing signage, language, license plates, and landmarks to verify or disprove the conclusion.

A further extension to the AI "conversation" might be: "What other locations are similar to this?" and "Why isn't it those locations?", which you can then cross-reference again.

Using AI as an entry point into massive datasets (like millions of photos from around the world) is actually useful. Correlation is what AI is good at, though it is not infallible.

Of course false correlations exist and correlation is not causation but if you can narrow your search space from the entire world to the Eiffel tower in Paris or in Vegas you're ahead of the game.

11 hours ago · MadnessASAP

This resonates with me. I feel like AI is making me learn slower.

For example, I have been learning Rust for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve a working competence with it, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle until I don't, just like the old days.

13 hours ago · pcj-github

I've found the same effect when I ask the LLM to do the thinking for me. If I say "rewrite this function to use a list comprehension", I don't retain anything. It's akin to looking at Stack Overflow and copying the first result, or going through a tutorial that tells you what to write without ever explaining it.

The real power I've found is using it as a tutor for my specific situation. "How do list comprehensions work in Python?" "When would I use a list comprehension?" "What are the performance implications?" Being able to see the answers to these with reference to the code on my screen and in my brain is incredibly useful. It's far easier to relate to the business logic I care about than class Foo and method Bar.
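
For instance, here is the kind of rewrite and tutor-style follow-up being described (a toy sketch; the function names are invented for illustration):

```python
# Loop version: build the list imperatively.
def even_squares_loop(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# Comprehension version: the same filter-then-transform logic in one expression.
def even_squares_comp(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(even_squares_loop([1, 2, 3, 4]))  # [4, 16]
print(even_squares_comp([1, 2, 3, 4]))  # [4, 16]
```

Asking "when would I use this?" against code you already understand (rather than class Foo and method Bar) is exactly the kind of question where the explanation sticks.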

Regarding retention, LLMs still don't hold a candle to properly studying the problem with (well-written) documentation or educational materials. The responsiveness, however, makes them a close second for overall utility.

ETA: This is regarding coding problems specifically. I've found LLMs fall apart pretty fast in other fields. I was poking at some astrophysics stuff and the answers were nonsensical from the jump.

13 hours ago · imadethis

Try using the LLM as a learning tool, rather than asking it to do your job.

I don't really like the way LLMs code. I like coding. So I mostly do that myself.

However I find it enormously useful to be able to ask an LLM questions. You know the sort of question you need to ask to build an intuition for something? Not a clear problem/answer type question you could just Google, but the sort of thing where you'd traditionally have to go hunt down a human being and ask them. LLMs are great at that. If I want to ask what the point of something is, an LLM can give me a much better idea than reading its Wikipedia page.

This sort of personalized learning experience that LLMs offer, your own private tutor (rather than some junior developer you're managing) is why all the schools that sit kids down with an LLM for two hours a day are crushing it on test scores.

It makes sense if you think about it. LLMs are superhuman geniuses in the sense of knowing everything. So use them for their knowledge. But knowing everything is distracting for them and, for performance reasons, LLMs tend to do much less thinking than you do. So any work where effort and focus is what counts the most, you're better off doing that yourself, for now.

13 hours ago · jart

Why are you using an LLM at all when it’ll both hamper your learning and be wrong?

13 hours ago · eschaton

> While AI has been very helpful in lowering the bar to /begin/ learning Rust

13 hours ago · dwaltrip

Nah, you're getting it wrong. The issue here is YOU NO LONGER NEED TO LEARN RUST; that's why you're learning it slowly.

9 hours ago · neevans

Yeah. AI will write Rust and then you only have to review .. oh.

But AI will review it and then you only have to .. oh

But AI will review AI and then you .. oh ..

10 minutes ago · whatnow37373

The world will slowly, slowly converge on this but not before many years of hyping and preaching about how this shit is the best thing since sliced bread and shoving it into our faces all day long, but in the meantime I suggest we be mindful of our AI usage and keep our minds sharp. We might be the only ones left after a decade or two of this.

10 hours ago · whatnow37373

1. I've worked with analysts and done analysis for 20+ years. I have used Machine Learning with OSINT as far back as 2008 and use AI with OSINT today. I also work with many related analysts.

2. Most analysts in a formal institution are professionally trained. In Europe, Canada and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills, for sure the good ones.

3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process so there are a lot of people who CAN be OSINT analysts or call themselves that and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.

4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are using expensive tools, and there will be pressure to give in to the AI's suggestions.

5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before, or that took way too much time to be worth it. It's more like: here are some possible answers where there were none before; your job is to validate them and see what assumptions have been made.

6. These assumptions existed long before AI and OSINT. I've seen many cases where we had multiple people look at the evidence to make sure no one was jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people, or fewer passes, to validate the data.

7. Feel Free to ask me more.

an hour ago · LurkandComment

1. I think you are onto something here.

a few seconds ago · whatnow37373

Well, if I want to first understand the basics, such as “what do the letters OSINT mean,” I’d think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple chatgpt query would have told me the answer without the wasted effort.

17 hours ago · treyfitty

Similar criticisms, that outsiders need to do their own research to acquire a foundational understanding before starting on the topic, can be made about other popular topics on HN that frequently use abbreviations, such as TLS, BSDs, URL and MCP, but somehow those get a pass.

Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners and which one does the opposite? I'm confident that you can work it out.

17 hours ago · OgsyedIE

I think if I can expect my mom to know what it is, I shouldn’t have to define it in articles any more.

So TLS and URL get a pass, BSD’s and MCP need to be defined at least once.

15 hours ago · Aeolun

Your mom knows what TLS is? I'm not even sure that more than 75% of programmers do.

14 hours ago · ChadNauseam

Does your mom really know what TLS means? I would guess that even "tech savvy" members of the general public don't.

14 hours ago · jonjojojon

OSINT = open source intelligence. It's the whole of openly accessible data fragments about a person or item of interest, and the use of those fragments for intelligence-gathering objectives.

For example, suppose a person shares a photo online, and your intelligence objective is to find where they are. In that case, you might use GPS coordinates in the photo metadata or a famous landmark visible in the image to achieve your goal.

This is just for others who are curious.
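
As a concrete illustration of the photo-metadata case (a minimal sketch: EXIF stores GPS as degree/minute/second triples plus a hemisphere reference; the sample coordinates below are invented):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to decimal degrees.

    ref is 'N'/'S' for latitude or 'E'/'W' for longitude;
    southern and western hemispheres come out negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Invented sample values, roughly central Paris:
lat = dms_to_decimal(48, 51, 24.0, "N")
lon = dms_to_decimal(2, 21, 8.0, "E")
print(round(lat, 4), round(lon, 4))  # 48.8567 2.3522
```

Of course, metadata like this is often stripped by the platform, which is when the landmark-matching approach takes over.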

15 hours ago · caseyy

GPU-free URL: https://en.wikipedia.org/wiki/OSINT

Offline version: https://www.kiwix.org

17 hours ago · walterbell

> Offline version: https://www.kiwix.org

That doesn't actually work though. Try to set it up and it just fails to download.

14 hours ago · lmm

On which platform? It's a mature project that has been working for years on desktops and phones, with content coverage that has expanded beyond wikipedia, e.g. stackoverflow archives. Downloadable from the nearest app store.

14 hours ago · walterbell

Ironically, my local barber shop also wouldn't explain to me what OSINT stands for.

16 hours ago · dullcrisp

There is a lot to be said for the academic tradition of only using an acronym/abbreviation after you have first used the complete term.

6 hours ago · Daub

The OSINT framework isn’t meant to be an intro to OSINT. This is like getting mad that https://planningpokeronline.com/ doesn’t explain what Kanban is.

If anything, you've just pointed out how over-reliance on AI is weakening your ability to search for relevant information.

16 hours ago · hmcq6

Volunteering “I give up if the information I want isn’t on the first page of the first website that I think of” in a thread about AI tools eroding critical thinking isn’t the indictment of the site that you linked to that you think it is.

There is a whole training section right there, like you just didn't feel like clicking on it.

12 hours ago · jrflowers
[deleted]
12 hours ago

I think this post isn't limited to OSINT. It's widely applicable, probably where AI is being adopted as a new set of tools.

16 hours ago · ridgeguy

The final essay for my OSINT cert was to pick a side: critical thinking can/cannot be taught.

12 hours ago · ttyprintk

> Participants weren’t lazy. They were experienced professionals. But when the tool responded quickly, confidently, and clearly they stopped doing the hard part.

This seems contradictory to me. I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature. If they didn't research the tool and its limitations, that's lazy. At some point, they stopped believing in this limitation and offloaded more of their thinking to it. Why did they stop? I can't think of a single reason other than being lazy. I don't accept the premise that it's because the tool responded quickly, confidently, and clearly. It did that the first 100 times they used it when they were probably still skeptical.

Am I missing something?

14 hours ago · sepositus

The idea that everyone is either fully lazy or not lazy is a bit reductionist. People change their behavior with the right (or wrong) stimulus.

Also, I won't remotely claim that it's the case here, but external pressures regularly push people into doing the wrong thing. It doesn't mean anyone is blameless, but ignoring those pressures, or the right (or wrong) stimuli, makes it a lot harder to actually deal with situations like this.

14 hours ago · NegativeK

> The idea that everyone is either full lazy or not lazy is a bit reductionist.

Fair point. My intention isn't to be absolute, though. Even in a relative sense, I can't imagine a scenario where some level of laziness didn't contribute to the problem, even in the presence of external factors.

It seems like the author was eliminating laziness with their statement and instead putting the primary force on the LLM being "confident." This is what I'm pushing back against.

14 hours ago · sepositus

> I suspect most experienced professionals start with the premise that the LLM is untrustworthy due to its nature.

Most people don't actually critically evaluate LLMs for what they are, and actually buy into the hype that it's a super-intelligence.

14 hours ago · lambda

Yeah, which I consider a form of intellectual laziness. Another reason to doubt that these professionals "were not being lazy."

14 hours ago · sepositus

No true Scotsman.

13 hours ago · dwaltrip

Could have performed accurately in their past usage, building trust. Sometimes it will also get something right that is downright shocking, far beyond what you hoped.

11 hours ago · ip26

It's deceptively easy to trust the AI when it gives you mostly plausible answers.

13 hours ago · esafak

OSINT is a symptom of it. When GPT-2 came along, I was worried that at some point the internet would get spammed with AI crap. Boy, was I naive... I see this incredibly frequently, and I get a ton of hate for saying it (including here on HN): LLMs, and AI in general, are a perfect demonstration of a shiny new toy. What people fail to acknowledge is that the so-called "reasoning" is nothing more than predicting the most likely next token, which works reasonably well for basic one-off tasks. And I have used LLMs that way: "give me the ISO 3166-1 codes of the following 20 countries". That works. But as soon as you throw something more complex at them and start analyzing the results (which look reasonable at first glance), the picture becomes very different. "Oh just use RAG, are you dumb?", I hear you say. Yeah?

    class ParsedAddress(BaseModel):
        street: str | None
        postcode: str | None
        city: str | None
        province: str | None
        country_iso2: str | None

Response:

    {
      "street": "Boulevard",
      "postcode": 12345,
      "city": "Cannot be accurately determined from the input",
      "province": "MY and NY are both possible in the provided address",
      "country_iso2": "US"
    }

Sure, I can spend 2 days trying out different models and tweaking the prompts and see which one gets it, but I have 33 billion other addresses and a finite amount of time.
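
A minimal sketch of the kind of post-hoc sanity check this forces you into (plain stdlib, no pydantic; the hedge phrases and the expectation that every field is a string or None are assumptions based on the response above):

```python
# Reject LLM "structured" output that is mistyped or is hedged prose
# smuggled into a field instead of a value.
HEDGES = ("cannot be", "both possible", "not determinable")

def validate_address(record):
    problems = []
    for field in ("street", "postcode", "city", "province", "country_iso2"):
        value = record.get(field)
        if value is None:
            continue  # None is allowed, per the schema
        if not isinstance(value, str):
            problems.append(f"{field}: expected str, got {type(value).__name__}")
        elif any(h in value.lower() for h in HEDGES):
            problems.append(f"{field}: hedged prose instead of a value")
    return problems

response = {
    "street": "Boulevard",
    "postcode": 12345,
    "city": "Cannot be accurately determined from the input",
    "province": "MY and NY are both possible in the provided address",
    "country_iso2": "US",
}
for p in validate_address(response):
    print(p)  # flags postcode (int), city and province (hedged prose)
```

Checks like this catch the obvious failures, but they don't tell you whether "Boulevard" or "US" are actually correct, which is the harder half of the problem.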

The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and we're doing so yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.

43 minutes ago · axegon_

It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."

It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.

To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the way people use and are drawn into AI has all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them.

12 hours ago · BrenBarn

It’s already becoming politicized, in the lowercase-p sense of the word. One is assumed to be either pro- or anti-AI, and so you gotta do your best to signal to the reader where you lie.

12 hours ago · ketzo

> so you gotta do your best to signal to the reader where you lie

Or what?

11 hours ago · ZYbCRq22HbJ2y7

Or the reader will put you into a category yourself and won't be willing to look at the essence of the argument.

I'd say the better word for that is polarising rather than political, but they're synonyms these days.

11 hours ago · brain5ide

Well I mean, nitpick, but fentanyl is a useful medication in the right context. It's not inherently evil.

I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with essentially zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or for worse outcomes. (There's some evidence the new tariff policies were generated with LLMs, so it's probably already making policy. But it could be worse: what happens when bad actors start using these things to intentionally gaslight the population?)

But I actually think AI (not AGI) as an assistant can be helpful.

12 hours agoovergard

> I think my biggest concern with AI is its biggest proponents have the least wisdom imaginable. [...] (not AGI)

Speaking of Wisdom and a different "AGI", I think there's an old Dungeons and Dragons joke that can be reworked here:

Intelligence is knowing that an LLM uses vector embeddings of tokens.

Wisdom is knowing LLMs shouldn't be used for business rules.

11 hours agoTerr_

Are we talking about structural things or about individual perspective things?

At individual perspective - AI is useful as a helper to achieve your generative tasks. I'd argue against analytic tasks, but YMMV.

At the societal perspective, you as an individual cannot trust anything the society has produced, because it's likely some AI-generated bullshit.

Some time ago, if you were not trusting a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension and your ability to build a conclusion has been ripped away.

11 hours agobrain5ide

> build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner

A few thousand years of pre-LLM primary sources remain available for evaluation by humans and LLMs.

11 hours agowalterbell

You and I remember pre-AI famous works. "Hey, I'm pretty sure Odysseus took a long time to get home". Somebody goes and prints 50 different AI-generated versions of the _Odyssey_, how are future generations supposed to know which is real and which is fake?

10 hours agocoryrc

> how are future generations supposed to know which is real

Reality/truth/history has always been an expensive pursuit in the face of evolving pollutants.

10 hours agowalterbell

This is literally how the Odyssey was passed down for the 2000 years before the printing press was invented.

Every work had multiple versions. All versions were different. Some versions were diametrically opposed to others.

Have a look at Bible scholarship to see just _how_ divergent texts can become by nothing more than scribe errors.

10 hours agonoosphr

99.9999999% sure that was their point? Why else would they bring up that particular work?

10 hours agosamtheprogram

What evidence is there that tariff policy was LLM generated?

8 hours agospooky_action

There are uninhabited islands on the list.

7 hours agocalcifer

Despite people's ridicule, this is normal practice; it prevents loopholes from being exploited.

6 hours agoKoolKat23

It seems more likely that bad data was involved.

There are actually export statistics (obviously errors, possibly fraud) for these islands. Someone probably stuck the numbers in a formula without digging a little deeper.

5 hours agomr_toad

Well in this case it's also beneficial given how automated supply chains are.

It's probably the most sane aspect of the whole thing.

3 hours agoKoolKat23

There are people who asked several AI engines (ChatGPT, Grok, etc.) “what should the tariff policy be to bring the trade balance to zero?” (quoting from memory) and the answer was the formula used by the Trump administration. If I find the references I will post them as a follow-up.

Russia, North Korea and handful of other countries were spared, likely because they sided with the US and Russia at the UN General Assembly on Feb 24 of this year, in voting against “Advancing a comprehensive, just and lasting peace in Ukraine.” https://digitallibrary.un.org/record/4076672

EDIT: Found it: https://nitter.net/krishnanrohit/status/1907587352157106292

Also discussed here: https://www.latintimes.com/trump-accused-using-chatgpt-creat...

The theory was first floated by Destiny, a popular political commentator. He accused the administration of using ChatGPT to calculate the tariffs the U.S. is charged by other countries, "which is why the tariffs make absolutely no fucking sense."

"They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater," Destiny, who goes by @TheOmniLiberal on X, shared in a post on Wednesday.

> I think they asked ChatGPT to calculate the tariffs from other countries, which is why the tariffs make absolutely no fucking sense.

> They're simply dividing the trade deficit we have with a country with our imports from that country, or using 10%, whichever is greater. https://t.co/Rc45V7qxHl pic.twitter.com/SUu2syKbHS

> — Destiny | Steven Bonnell II (@TheOmniLiberal) April 2, 2025

He attached a screenshot of his exchange with the AI bot. He started by asking ChatGPT, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on even-playing fields when it comes to trade deficit? Set minimum at 10%."

"To calculate tariffs that help level the playing field in terms of trade deficits (with a minimum tariff of 10%), you can use a proportional tariff formula based on the trade deficit with each country. The idea is to impose higher tariffs on countries with which the U.S. has larger trade deficits, thus incentivizing more balanced trade," the bot responded, along with a formula to use.

John Aravosis, an influencer with a background in law and journalism, shared a TikTok video that then outlined how each tariff was calculated; by essentially taking the U.S. trade deficit with the country divided by the total imports from that country to the U.S.

"Guys, they're setting U.S. trade policy based on a bad ChatGPT question that got it totally wrong. That's how we're doing trade war with the world," Aravosis proclaimed before adding the stock market is "totally crashing."
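The formula described in these posts reduces to a one-liner: divide the trade deficit with a country by total imports from it, floored at 10%. A minimal sketch (the function name and dollar figures are mine, purely illustrative):

```python
def reciprocal_tariff(deficit: float, imports: float) -> float:
    """Trade deficit divided by imports, floored at 10% (per the posts above)."""
    return max(0.10, deficit / imports)

# Made-up numbers: a $50B deficit on $100B of imports yields a 50% tariff
print(reciprocal_tariff(50e9, 100e9))  # 0.5

# A trade surplus (or any small deficit) still gets the 10% floor
print(reciprocal_tariff(-5e9, 100e9))  # 0.1
```

The floor is why even countries the U.S. runs a surplus with appear on the list at 10%.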

7 hours agoaf78

Honestly this post seems like misplaced wisdom to me: your concern is the development of AGI displacing jobs, and not the numerous reliability problems with the analytic use of AI tools, in particular the overestimation of LLM capabilities because they're good at writing pretty prose?

If we were headed straight to the AGI era then hey, problem solved: intelligent general machines which can advance towards solutions in a coherent if not human-like fashion are one thing, but that's not what AI is today.

AI today is enormously unreliable and very limited in a dangerous way, namely that it looks more capable than it is.

9 hours agoXorNot

It’s a rant against the wrong usage of a tool, not against the tool as such.

11 hours agocroes

It's a tool that promotes incorrect usage though, and that is an inherent problem. All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ no matter what it spits out.

11 hours agoTurskarama

My personal pet-peeve is how a great majority of people--and too many developers--are being misled into believing a fictional character coincidentally named "Assistant" inside a story-document half-created by an LLM is the author-LLM.

If a human generates a story containing Count Dracula, that doesn't mean vampires are real, or that capabilities like "turning into a cloud of bats" are real, or that the algorithm "thirsts for the blood of the innocent."

The same holds when the story comes from an algorithm, and it continues to hold when story is about a differently-named character named "AI Assistant" who is "helpful".

Getting people to fall for this illusion is great news for the companies though, because they can get investor-dollars and make sales with the promise of "our system is intelligent", which is true in the same sense as "our system converts blood into immortality."

9 hours agoTerr_

That's the real danger of AI.

The false promises of the AI companies and the false expectations of the management and users.

I had it just recently with a data migration, where the users asked whether they still needed to enter metadata for documents; they could just use AI to query data that was usually based on that metadata.

They trust AI before it's even there and don't even consider a transition period where they check if the results are correct.

Like with security, convenience prevails.

9 hours agocroes

But isn’t this just par for the course with every new technological revolution?

“It’ll change everything!” they said, as they continued to put money in their pockets as people were distracted by the shiny object.

9 hours agoblackqueeriroh

With every revolution and with every fake revolution.

NFTs didn't change much, money changed its owner

2 hours agocroes

> All of these companies are selling AI as a tool to do work for you, and the AI _sounds confident_ not matter what it spits out.

If your LLM + pre-prompt setup sounds confident with every response, something is probably wrong; it doesn't have to be that way. It isn't for me. I haven't collected statistics, but I often get decent nuance back from Claude.

Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

This is not dismissing the tendency for overconfidence, sycophancy, and more. I'm just sharing some mitigations.

7 hours agoxpe

Here's the root of the problem though, how do you know that the AI is actually "thinking" more carefully, as opposed to just pretending to?

The short answer is: you can know for a fact that it _isn't_ thinking more carefully because LLMs don't actually think at all, they just parrot language. LLMs are performing well when they are putting out what you want to hear, which is not necessarily a well thought out answer but rather an answer that LOOKS well thought out.

3 hours agoTurskarama

> Think more about what you're doing and experiment. Try different pre-prompts. Try different conversation styles.

Ask on a Wednesday. During a full moon. While in a shipping container. Standing up. Keep a black box on your desk as the sacred GenAI avatar and pray to it. Ask while hopping on one leg.

7 hours agoGeoAtreides

Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly it reads like the author is attempting to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. And the fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously if people come to rely on AI more for this kind of thing he will be out of work.

It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan too.

9 hours agomike_hearn

Because "rant" is irrational, and the author wants to be seen as staking out a rational opposition.

Of course, every ranter wants to be seen that way, and so a protest that something isn't a rant against X is generally a sign that it absolutely is a rant against X that the author is pre-emptively defending.

12 hours agodragonwriter

I've rarely read a rant that didn't consist of some good logical points

12 hours agovoxl
[deleted]
11 hours ago

Doesn't mean listing logical points makes it a rant

11 hours agocroes

If the logical points are all against something that is debatable, then it's a rant. They can be good points tho.

9 hours agothrowaway290

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

These are points against certain actions with a tool, not against the tool.

AI is for the starting point not the final result.

AI must never be the last step but it often is because people trust computers especially if they answer in a confident language.

It's the ELIZA effect all over again.

9 hours agocroes

The classic hallmark of a rant is picking some study, not reading the methodology, etc., and drawing wild conclusions from it. For example, of one study it says:

> The study revealed a clear pattern: the more confidence users had in the AI, the less they thought critically

And the study didn't even check that. They just plotted the correlation between how much users think they rely on AI and how much effort they think they saved. Isn't that expected to be positive even if they think just as critically?

[1]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
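The objection can be illustrated with a toy simulation: even when every user applies identical critical thinking, self-reported reliance and self-reported effort saved still correlate, because one mechanically tracks the other. All numbers below are made up; this is a sketch of the confound, not the study's data:

```python
import random

random.seed(0)

# Null model: everyone applies the SAME fixed amount of critical thinking.
n = 1000
reliance = [random.random() for _ in range(n)]  # self-reported reliance on AI
# Perceived effort saved tracks reliance mechanically, plus some noise:
effort_saved = [r * 0.8 + random.gauss(0, 0.1) for r in reliance]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive, despite critical thinking being constant by construction
print(round(pearson(reliance, effort_saved), 2))
```

A positive correlation here says nothing about whether confidence reduced critical thinking; it falls out of the setup.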

9 hours agoYetAnotherNick

The other thing is that the second anyone even perceives an opinion to be "anti-AI" they bombard you with "people thought the printing press lowered intellect too!" Or radio or TV or video games, etc.

No one ever considers that maybe they all did lower our attention spans, prevented us from learning as well as we used to, etc., and now we are at a point where we can't afford to keep losing intelligence and attention span.

12 hours agoaprilthird2021

I think people don't consider that because the usual criticism of television and video games is that people spend too long paying attention to them.

One of the famous Greek philosophers complained that books were hurting people's minds because they no longer memorized information, so this kind of complaint is as old as civilization itself. There is no evidence that we would be on Mars by now already if we had never invented books or television.

9 hours agomike_hearn

Pluto? Plotto? Platti?

Seriously though, that's a horrible bowdlerization of the argument in the Phaedrus. It's actually very subtle and interesting, not just reactionary griping.

8 hours agopasabagi

That’s a much harder claim to prove. The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate?

If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?

I suspect that as the problem spaces diverge enough you’ll have two skill sets. Who can solve n problems the fastest and who can determine which k problems require deep thought and narrow direction. Right now we have the same group of people solving both.

10 hours agonostrebored

> The value of an attention span is non zero, but if the speed of access to information is close to zero, how do these relate?

Gell-Mann Amnesia. Attention span limits the amount of information we can process, and with attention spans decreasing, increases in information flow stop having a positive effect. People simply forget what they started with, even if that contradicts previous information.

> If I can solve two problems in a near constant time that is a few hours, what is the value of solving the problem which takes days to reason through?

You don't end up solving the problem in near constant time, you end up applying the last suggested solution. There's a difference.

7 hours agofriendzis

The difference is that between a considered critique and unhinged venting.

12 hours agoSoftTalker

It’s not a rant against fentanyl, it’s a rant against irresponsible use of fentanyl.

Just like this is a rant against irresponsible use of AI.

Hope this helps

9 hours agoyapyap

Yes, that makes much more sense.

5 hours agojohnisgood

TFA makes the point pretty clear IMHO: they aren’t opposed to AI, they’re opposed to over-reliance on AI.

12 hours agothrowaway894345

Reminds me of people who say “there is nothing wrong with capitalism but…”

You shall not criticize the profit!

9 hours agoEGreg

[dead]

12 hours agoTacticalCoder

They have to preface their articles with "This isn’t a rant against AI." because there are a lot of rants against AI out there, such as your comment.

12 hours agowoah

Both substances and AI can be used responsibly. It is not the fault of substances nor AI.

People is why we can't have anything nice. It sucks.

I have medical reasons to take opioids, but in the eyes of people, I am a junkie. I would not be considered a junkie if I kept popping ibuprofen. It is silly. Opioids do not even make me high to begin with (it is complicated).

9 hours agojohnisgood

I bet the downvotes are done by people who have absolutely no need to take any medications, or have no clue what it is like to be called a junkie for the rest of your life for taking medications that were prescribed to begin with.

Or if not, then what, is it not true that both substances and AI can be used responsibly, and irresponsibly?

"People is why we can't have anything nice. It sucks." is also true and applies to many things; just consider vending machines, or bags in public (for dog poop), and anything of the sort. We no longer have bags, because people stole them. A great instance of "this is why we can't have nice things". Pretty sure you can think of more.

Make the down-votes make sense, please.

(I do not care about the down-votes per se, I care about why I am being disagreed with without any responses.)

5 hours agojohnisgood

I'd argue that for a profession that has existed for quite some time, "since chatGPT appeared" isn't in any way "slow"

2 hours agoblack_puppydog

You have to use machine filtering of some kind, because there's too much information.

A director of NSA, pre 9/11, once remarked that the entire organization produced about two pieces of actionable intelligence a day, and about one item a week that reached the President. An internal study from that era began "The U.S. Government collects too much information".

But that was from the Cold War era, when the intelligence community was struggling to find out basic things such as how many tank brigades the USSR had. After 9/11, the intel community had to try to figure out what little terrorist units with tens of people were up to. That required trolling through far too much irrelevant information.

12 hours agoAnimats

A picked nit, perchance, but you may have meant "trawling". Not sure how much trolling the NSA does :)

11 hours agohalper

Trolling is a fishing technique [0] of slowly dragging a lure or baited hook from a moving boat, and is likely where online trolling takes its name from, rather than from a creature from Scandinavian folklore [1].

There's definitely a metaphor to be made for trolling for data, which GP could have been intentionally making. I've certainly seen that idiom used before, although it could have been an eggcorn [2] for trawling.

[0] https://en.wikipedia.org/wiki/Trolling_(fishing)

[1] https://en.wikipedia.org/wiki/Troll_(slang)#Origin_and_etymo...

[2] https://en.wikipedia.org/wiki/Eggcorn

7 hours agoagurk

I wrote about some similar observations in the clinical domain -- I call it the "human -> AI reasoning shunt" [0]. Explicitly requesting an AI tool to perform reasoning is one thing, but a concern I have is that, with the increasing prevalence of these AI tools, even tasks that theoretically are not reasoning-based (ie helping write clinical notes or answer simple questions) can surreptitiously offload some degree of reasoning away from humans by allowing these systems to determine what bits of information are important or not.

[0] https://samrawal.substack.com/p/the-human-ai-reasoning-shunt

15 hours agozora_goron

It's been less than 3 years, yet this guy is already able to confidently predict a "collapse of critical thinking." I'm sure that is the product of rational analysis and not confirmation bias...

14 hours agotqi

There is published research demonstrating the effect, but sure, snark your way into your own confirmation bias. Why not?

The idea that humans in general actually do any thinking is demonstrably false.

12 hours agoergonaught

Give me a break. I read the article, and I'm not convinced it does anything to further his specific claims about the community. Frankly the whole thing is just a rant about how things were better back when.

"But the tradecraft is slipping. Analysts are skipping the hard parts. They’re trusting GenAI to do the heavy cognitive lifting, and it’s changing how we operate at a foundational level."

Next we're going to be hearing about how participation trophies and DEI are also contributing to this imagined "problem."

10 hours agotqi

Really more of a moral panic than a coherent analysis, but that's not unusual.

13 hours agofarts_mckensy

I like how all these articles miss the elephant in the room: using a chatbot as an assistant is offering your data, thoughts, insights, and focus of interests to a corporation that's at best neutral and at worst hostile. Moreover, that corporation may also share anything with business partners, governments, and law enforcement institutions with unknown objectives.

10 hours agoghssds

> is offering your data, thoughts, insights, and focus of interests to a corporation that's at best neutral and at worst hostile

For a second I thought you were talking about the fact we all have jobs doing exactly that!

10 hours agovasco

> jobs doing exactly that

Hopefully narrowed by team, role and task..

9 hours agowalterbell

Good point in the post about confidence - most people equate confidence with accuracy - and since AIs always sound confident, they always sound correct

17 hours agoBariumBlue

There's a reason the "con" in "con man" stands for "confidence". Turns out the illusion of confidence is the easiest way to hack the human brain.

15 hours agokibwen

Except that's not what "confidence man" means. It means that you gain their confidence/trust, then betray it.

A con man often uses the illusion of confidence to gain trust, though that's not the only way. The reverse also works: gain their trust by seeming unconfident and incapable, and thus easily taken advantage of.

14 hours agojfengel

Yep. Last night I was asking ChatGPT (4o) to help me generate a simple HTML canvas that users could draw on. Multiple times, it spoke confidently of its not-even-kind-of-working solution (copying the text from the chat below):

- "Final FIXED & WORKING drawing.html" (it wasn't working at all)

- "Full, Clean, Working Version (save as drawing.html)" (not working at all)

- "Tested and works perfectly with: Chrome / Safari / Firefox" (not working at all)

- "Working Drawing Canvas (Vanilla HTML/JS — Save this as index.html)" (not working at all)

- "It Just Works™" (not working at all)

The last one was so obnoxious I moved over to Claude (3.5 Sonnet) and it knocked it out in 3-5 prompts.

16 hours agorglover

IME, it's better to just delete erroneous responses and fix prompts until it works.

They are much better at fractally subdividing and interpreting inputs, like a believer of a religion, than at deconstructing and iteratively improving things like an engineer. It's a waste of token count trying to have such discussions with an LLM.

7 hours agonumpad0

4o is almost laughably bad at code compared to Claude.

15 hours agoAeolun

To be fair, I wouldn't really expect working software if someone described it that way either.

16 hours agodullcrisp

Those are not my prompts. Those were the headings it put above the code it generated in its responses.

Even if my prompt was low-quality, it doesn't matter. It's confidently stating that what it produced was both tested and working. I personally understand that's not true, but of all the safety guards they should be putting in place, not lying should be near the top of the list.

16 hours agorglover

Intellectual humility is just as rare with AI as it is with humans.

15 hours agomattgreenrocks

The number of times I've caught ChatGPT passing off something borked with perfect confidence is growing, but what's truly annoying is when you point it out and you get that ever so cheerful "oh I'm so sorry teehee" response from it. It's dumb stuff too, like a formula it's simplified based on an assumption that was never prompted.

16 hours agomorkalork

Why OSINT? That goes for any domain.

Besides, "OSINT" has been busy posting scareware for years, even before "AI".

There's so much spam that you can't figure out what the real security issues are. Every other "security article" is about "an attacker" that "could" obtain access if you were sitting at your keyboard and they were holding a gun to your head.

7 hours agonottorp

I question the notion that such tools are necessary or admissible in my daily life.

Mere observation of others has shown me the decadence that results from even allowing such "tools" into my life at all.

(who or what is the tool being used?)

I have seen zero positive effects from the cynical application of such tools in any aspect of life. The narrative that we "all use them" is false.

7 hours agoringeryless

> Not because analysts are getting lazy, but because AI is making the job feel easier than it actually is.

But all the examples feel like people are being really lazy, e.g.

> Paste the image into the AI tool, read the suggested location, and move on.

> Ask Gemini, “Who runs this domain?” and accept the top-line answer.

6 hours agoramonverse

Most cybersecurity is just a smoke show anyways; presentation matters more than content. AI is just as good at security theater as humans are.

5 hours agotorginus

AKA, I have no problem being explicitly anti-AI, as it's a bad idea to begin with. This is what I think: it is a foolish project from the get-go.

Techne is the Greek word for HAND.

7 hours agoringeryless

> What Dies When Tradecraft Goes Passive?

Eventually, Brazil (1985) happens, to the detriment of Archibald [B]uttle, where everyone gives unquestionable trust to a flawed system.

8 hours agoTerr_

Tried one exercise from the article: asked Gemini to identify the owner of a domain (my domain). Gemini was very confident and very wrong.

I bet any OSINT person would have had my name and contact in half an hour.

10 hours agovincnetas

Am I the only one to have to search for what OSINT was an acronym for?

6 hours agoDaub

It's true, so often chatgpt has to apologize because it was wrong. lol

9 hours agoingohelpinger

Do you think humans are less likely to be wrong or just less likely to apologize when they are?

8 hours agodambi0

I think being wrong is fine, but being wrong intentionally is not very human. This is due to emotions, consciousness, pride, etc., which AI does not have as of now, and this leads me to believe it's just another religion which will be used to "make the world a better place" :D

6 hours agoingohelpinger

> “Paris, near Place de la République.” It sounds right. You move on. But a trained eye would notice the signage is Belgian. The license plates are off. The architecture doesn’t match. You trusted the AI and missed the location by a country.

I genuinely hope if you're a professional intelligence analyst it doesn't take a trained eye to distinguish Paris from Belgium. Genuinely every day there's articles like this. The post about college students at elite universities who can't read, tariff policy by random number generator, programmers who struggle to solve first semester CS problems, intelligence analysts who can't do something you can do if you play Geoguessr as a hobby. Are we just getting dumber every year? It feels like we're falling off a cliff over the last decade or so.

Like, the entire article boils down to "verify information and use critical thinking", you'd think someone working in intelligence and law enforcement which this author trains knows this when they get hired?

15 hours agoBarrin92

I think most people overestimate the average person's intelligence. Complaints like this are as old as time if you read primary sources from past generations.

15 hours agodexwiz

Relevant today, as I unpick some unit tests I let AI write that looked very plausible at first and second glance but turned out to test nothing of value when properly examined.

9 hours agopetesergeant

This article seems a bit weird because it doesn't talk about whether the quality of the analysis went up or down afterwards.

To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code and leave it to the type checker to catch them. If the type checker turns out to be buggy then their code may fail in production due to their sloppiness. However, we expect the code to eventually be free of type errors to a superhuman extent because they are using a tool that is strong to cover their personal weaknesses.
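The type-checker analogy can be made concrete with a toy runtime checker. This is a sketch only (real type checkers like mypy work statically, before the code runs; the decorator here is purely illustrative): the programmer stops checking argument types by hand and the tool mechanically covers that weakness.

```python
import inspect

def check_types(fn):
    """Toy decorator: verify call arguments against annotations at call time."""
    sig = inspect.signature(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = fn.__annotations__.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return fn(*args, **kwargs)
    return wrapper

@check_types
def add(a: int, b: int) -> int:
    return a + b

print(add(1, 2))       # the programmer no longer checks types by hand...
try:
    add(1, "2")        # ...so a sloppy call slips through their review...
except TypeError as e:
    print("caught:", e)  # ...but the tool catches it mechanically
```

If the checker itself is buggy, sloppy calls reach production, which is the failure mode the comparison to AI hinges on.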

AI isn't as provably correct as type checkers, but they're pretty good at critical thinking (superhuman compared to the average HN argument) and human analysts must also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on the quality and I don't see why the assumption is that it is negative. It might well be; but the article doesn't seem to go into that in any depth.

16 hours agoroenxi

"OSINT" has had a rather quick collapse in that area for quite some time, many participants under that label are basically propaganda outlets for whatever state or other.

Maybe the article addresses that, I'm not permitted to read it, likely because I'm using IPv6.

Forensic Architecture is a decent counterexample, however. They've been using machine learning and computer synthesis techniques for years without dropping in quality.

7 hours agocess11

The trouble with OSINT is that they often take the opinions of "good" government officials and journalists at face value.

This sort of lazy thinking doesn't miss a beat when it comes to take the opinions of an LLM at face value.

Why not? It sounds mostly the same. The motivation to believe AI is exactly the same as the motivation to believe government officials and journalists.

7 hours agoImHereToVote

The main takeaway of this whole LLM chatbot nonsense to me is how gullible people are and how low the bar is.

These tools are brand new and have proven kinks (hallucinations, for example). But instead of being, rightly, in my view, skeptical, the majority of people completely buy into the hype and already have full automation bias when it comes to these tools. They blindly trust the output, and merrily push forth AI generated, incorrect garbage that they themselves have no expertise or ability to evaluate. It's like everyone is itching to buy a bridge.

In some sense, I suppose it's only natural. Much of the modern economy sustains itself on little more than hype and snake oil anyway, so I guess it's par for the course. Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical seemingly adopt this nonsense without batting an eye. Worse, they all hype it up even further. Makes me feel like the whole LLM business is some kind of Ponzi scheme given how willingly users will shill for these products for nothing.

15 hours agovoidhorse

For sure. I look at the shilling people do for The Next Big Thing (such as AI) and think, “if you put that much time and care into acquiring competence in something useful you wouldn’t need to be chasing internet clout along with all the other too-online people.”

15 hours agomattgreenrocks

> Still, it's left me a bit incredulous, particularly when people I thought were smart and capable of being critical seemingly adopt this nonsense without batting an eye.

That's the main problem, it's becoming the standard in everything.

14 hours agoantegamisou

A crutch is one thing. A crutch made of rotten wood is another.

20 hours agoFrankWilhoit

Also, a crutch for doing long division is not the same as a crutch for general thinking and creativity.

19 hours agoadd-sub-mul-div

It isn't something completely new, there are many cases of unwarranted trust in machines even before computers existed. AI just adds persuasion.

The "Pray Mr. Babbage..." anecdote comes to mind: https://www.azquotes.com/quote/14183

16 hours agorini17

This comment indirectly represents my current biggest fear with respect to AI; I have encountered a disturbing lack of comprehension for figurative language. Abstractions, analogies, and figurative language are, I believe, critical tools for thinking. "Rotten wood, what are you even saying?"

People also seem to be losing their ability to detect satire.

I'm concerned GenAI will lower creative standards too, that people will be fine with the sound of Suno or the look of DALL-E. How then would the arts evolve?

14 hours ago zarmin

How will the arts evolve? By recombining these things, as they always have. I swear, this thread is a collection of the most curmudgeonly people. "People no longer use their memory now that all those pesky books are around."

The kids will be alright.

8 hours ago danielbln

At the end of the day, it is people who are doing OSINT, and their self/AI confidence is a reflection of their fallibility, as is being manipulated by intelligence operatives in their Discord servers and peer-pressured into pushing a certain narrative. OSINT should be about uncovering objective truth in a sea full of lies, in a storm of obfuscation, through a tsunami of misinformation caused by an earthquake of disinformation. Now these OSINT people also need to battle the siren song of clout (and being first).

I doubt anyone can do it perfectly every time; it requires a posthuman level of objectivity and a level of information quality that hardly ever exists.

15 hours ago smashah

> This isn’t a rant against AI. I use it daily

It is, but it adds a disingenuous apologetic.

Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern that you can find everywhere in tech journalism:

  Some really bad thing X is happening. Everyone knows X is happening.
  There is evidence X is happening. But I am *not* arguing against X
  because that would brand me a Luddite/outsider/naysayer.... and we
  all know a LOT of money and influence (including my own salary)
  rests on nobody talking about X.
Practically every article on the negative effects of smartphones or social media printed in the past 20 years starts with the same chirpy disavowal of the author's actual message. Something like:

"Smartphones and social media are an essential part of modern life today... but"

That always sounds like those people who say "I'm not a racist, but..."

Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?

18 hours ago nonrandomstring

There were too many cheap accusations of hypocrisy ("you say X is bad, so why do you use it yourself?"), so everyone now preempts it.

16 hours ago rini17

I think this is a good point regardless of how much you have been downvoted. I hope you're not using this context to sub-communicate that this issue isn't important. If not, it might have been better to put your last line at the top.

16 hours ago trinsic2

The subject is deadly serious, and I only wish I could amplify it more. The abdication of reason and responsibility to machines is desperately dumb and no good will come of it.

Maybe what I'm getting at is this poem [0] by Taylor Mali. Somehow we all lost our nerve to challenge really, really bad things, wrapping up messages in tentative language. Sometimes that's a genuine attempt at balance, or honesty. But often these days I feel an author is trying too hard to distance themself from ... from themself.

It's a silly bugbear, I know.

[0] https://taylormali.com/poems/totally-like-whatever-you-know/

8 hours ago nonrandomstring

> It is, but it adds disingenuous apologetic.

It’s not. It’s a rant against people and their laziness and gullibility.

15 hours ago Aeolun


This is another silly rant against AI tools, one that doesn't offer useful or insightful suggestions on how to adapt, or provide an informed study of areas of concern, and one that capitalizes on the natural worries we have on HN because of our generic fears around critical thinking being lost when AI takes over our jobs. In general it's rather like concerns about the web in the pre-internet age, or SEO in the digital-marketing age.

OSINT only exists because of internet capabilities and Google search, i.e. someone had to learn how to use those new tools just a few years ago and apply critical thinking.

AI tools and models are rapidly evolving, with more in-depth capabilities appearing in the models. All this means the tools are hardly set in stone, and the workflows will evolve with them. It's still up to human oversight to evolve with the tools; the skill of humans overseeing AI is something that will develop too.

17 hours ago AIorNot

The article is all about that oversight. It ends with a ten point checklist with items such as "Did I treat GenAI as a thought partner—not a source of truth?".

17 hours ago card_zero

So weak! No matter how good a model gets, it will always present information with confidence regardless of whether or not it's correct. Anyone who has spent five minutes with the tools knows this.

17 hours ago cmiles74

I’ve read enough pseudo-intellectual Internet comments that I tend to subconsciously apply a slight negative bias to posts that appear to try too hard to project an air of authority via confidence. It isn’t always the best heuristic, as it leaves out the small set of competent and well-marketed people. But it certainly deflates my expectations around LLM output.

14 hours ago mattgreenrocks

OSINT (not a term I was particularly familiar with, personally) actually goes back quite a ways[1]. Software certainly makes the information easier to aggregate and the signal easier to find in the noise, but bad security practices do far more to make that information accessible.

[1] https://www.tandfonline.com/doi/full/10.1080/16161262.2023.2...

16 hours ago salgernon

Back in the 1990s my boss went to a conference where there was a talk on OSINT.

She was interested in the then-new concept of "open source" so went to the talk, only to find it had nothing to do with software development.