I think we’re just getting started, with fake images and videos.
I suspect that people will be killed because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas staged a fake bombing, complete with corpses from a morgue (with autopsy scars)[0]. That didn’t require any AI at all.
We can expect videos of unpopular minorities doing horrible things, politicians saying things they never said, and evidence submitted at trial that was made up entirely from whole cloth.
It’s gonna suck.
[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
So far, I see the most concern about this sort of thing from people who came of age around or after Web 2.0 hit, at a time when even a good photoshop wasn’t too hard to identify as fake.
Those I know who lived through this issue when digital editing really became cheap seem to be more sanguine about it, while the younger generation on the opposite side is some combination of “whatever” and frustrated-but-accepting that yet another of countless weird things has invaded a reality that was never quite right to begin with.
The folks in between, roughly ages 20 to 40, are the most annoyed, though. They’re the eye of the storm on the way to proving that cyberpunk lacked the imagination required to properly calibrate our sense of when things were going to really get insane.
From 1950 to 2005(ish) there were a small number of sources, due to the enormous moat required to become a broadcaster. From 2005 to 2021, you could mostly trust video, as the costs of casual fakery were prohibitive. Now that the cost to produce fake videos is near zero, I suspect we will return to a much smaller number of sources (though not as small as in the pre-YouTube era).
There will be people who care about trusted and reliably accurate news sources, and at least some of them are willing to pay for it. Think 404 Media.
But there are people who don't want their news to be "reliably accurate", but who watch/read news to have their own opinions and prejudices validated no matter how misinformed they are. Think Fox News.
But there are way, way more people who only consume "news" on algorithmically tweaked social media platforms, where driving "engagement" is the only metric that matters, and "truth" or "accuracy" are not just lower priorities but completely irrelevant to the platform owners and hence their algorithms. Fake ragebait drives engagement, which drives advertising profits.
What this incident really shows is the growing gap between how easy it is to create a convincing warning and how costly it is to verify what's actually happening. Hoaxes aren't new, but generative tools make fabrication almost free and massively increase the volume.
The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.
This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.
You don't need AI for this kind of disruption. People have been making fake bomb threats for years. You just have to say it, either directly to the railway/etc. or publicly enough that somebody else will believe it and forward it to them. The difference might be one of intent - if you say you planted a bomb on the bridge, you're probably committing a crime, but if you just post a piece of art without context, it's more plausibly deniable.
It's also pretty common in the UK for trains to be delayed just because some passenger accidentally left their bag on the platform. Not even any malicious intent. I was on a train that stopped in a tunnel for that reason once. They're just very vulnerable to any hint of danger.
AI definitely makes it easier and it will happen more often.
You don't need anything for anything. You can wage war with long sticks. Turns out guns, planes, and firebombs work better.
Sir, this is AI prose. Wendy's doesn't allow AI prose.
Thanks for the heads-up! I actually wrote this based on my own thoughts about the incident, but I understand the concern. I'll make sure to keep my posts in line with the community guidelines.
To be clear, you don't need AI for this.
You can also just call the railroad and report the bridge as damaged.
Hoaxes and pranks and fake threats have been around forever.
Again, I see this argument.
“Bad X has happened before and remains unsolved. Why worry about bad X^2?”
Personally I’d prefer if it remained at X so solutions can catch up. But that’s just me.
I think the implication is that we already handled these events well enough pre-AI, and that the events are not necessarily more disruptive just because AI was used to trigger them.
Implicit in this, though, is the assumption that the increased awareness of these events has more to do with AI being involved than with the events actually being exceptional.
Yep, why give people computers? It just increases the number of bad X; before writing, these types of hoaxes were much less common.
That leaves much more of a paper trail. People are routinely fined and jailed for pulling off such "pranks", partly because "fake threats"/"abuse of emergency response resources" are an exception to many freedom-of-speech laws.
A fake photo of a collapsed bridge however won't cross that criminal threshold.
You also don't need gunpowder to kill someone with projectiles, but gunpowder changed things in important ways. All I ever see are the most specious knee-jerk defenses of AI that immediately fall apart.
It's not clear to me that it "changed things in important ways" in this case if a call alleging serious damage to the rail would've similarly triggered a pause for inspection.
A phone call to railway management claiming a stone fall on a track, a dead cow, a stalled car, etc. will trigger a slowdown on that line, a call to the driver, and an inspection.
If that's not happening then management is playing fast and loose with legal responsibility and the risks of mass and inertia.
Reminds me of https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell with the Moab plot point.
I really liked the first half to three-quarters of that book. The last part was less interesting to me, but I enjoyed the Moab plot line and all the parts around anonymity/online presence.
> A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.
Someone I know is a high school English teacher (I’m being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not. I pointed out that LLMs such as ChatGPT have poor reliability at this; classifier models trained specifically for the task perform somewhat better, yet also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI generation (or even comparable models), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated.
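To make that adversarial loop concrete - a toy sketch only, where `detector_score` and `rewrite` are hypothetical stand-ins for whatever detector and editing tool the student has access to, not real APIs:

```python
def evade_detector(text, detector_score, rewrite, threshold=0.5, max_rounds=10):
    """Keep tweaking the text until the detector no longer flags it as AI-generated."""
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            return text            # detector now reads it as "human"
        text = rewrite(text)       # tweak the wording and try again
    return text

# Toy stand-ins: the score drops with each rewrite, so the loop converges.
score = lambda t: 1.0 / (1 + t.count("~"))
tweak = lambda t: t + "~"

result = evade_detector("an AI-written story", score, tweak)
print(score(result) < 0.5)  # True
```

The point is that any fixed classifier the teacher can query becomes an oracle the student can optimize against.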
A New York lawyer used ChatGPT to write a filing with references to fake cases. After a human told him they were hallucinated, he asked ChatGPT if that was true (which said they were real cases). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43)
I hope he was disbarred.
He was probably offered a role at some ai obsessed firm because of his “ai-native workflow”.
Reminds me of a Reddit story that made the rounds about a professor asking ChatGPT if it wrote papers, to which it frequently responded affirmatively. He sent an angry email about it, and a student responded by showing a response from ChatGPT claiming it wrote his email.
> student responded by showing a response from ChatGPT claiming it wrote his email
Which is actually fine. Students need to do their own homework. A teacher can delegate writing emails.
But if he didn't delegate, and it said he did, that would suggest that the methodology doesn't really work.
I believe you just got whooshed.
A person arguing in favor of LLM use failed to comprehend the context or argument? Unpossible!
You missed the entire point lol
Under the other photos it says "A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged" and "A BBC North West reporter visited the bridge today and confirmed it was undamaged".
They may have first run the photo through an AI, but they also went out to verify. Or maybe they ran it after verification, to understand it better.
So... is this where the AI hype train starts to lose steam? One AI hallucinated and caused the incident, and another AI program just wasted everyone's time when it was unable to verify the issue. Sounds like AI was utterly useless to everyone involved.
> One AI hallucinated and caused the incident
I suspect that AI was prompted to create the image, not that this was an incidental "hallucination".
Cynical-me suspects this may have been a trial run by malicious actors experimenting with disrupting critical infrastructure.
There is precedent for state actors putting a lot of effort into a hoax like this: https://en.wikipedia.org/wiki/Columbian_Chemicals_Plant_expl...
> Sounds like AI was utterly useless to everyone involved
Maybe.
Imo, the advances in AI and the hype toward generated-everything will actually be our digitally obsessed society's course-correction back to a greater emphasis on things like theater, live music, and conversing in person with people, even strangers (the horror, I know), simply to connect and consume more meaningfully. It'll level out, integrating both, instead of being so digitally lopsided, as humans adapt to enjoy both.*
To me, this shows a need for more of the local journalism that has been decimated by the digital world. By journalism, I mean it in the more traditional sense, not bloggers and podcasts (no shade; some follow principled journalistic integrity, just as some national "traditional" ones don't). Local journalism is usually held to account by the community, and even though the worldwide BBC site has this story, it was the local reporters they had who were able to verify it. If these AI stories/events accelerate a return to local reporting with a worldwide audience, then all the better.
* I try to be a realist, but when I err, it tends to be on the optimist side
The tech giants sucking up all the ad revenue is what killed local journalism. Unless you can find a solution to that problem (or an alternative funding model), it's not coming back.
But just think of all the people that didn’t have to receive a paycheck because of all this efficiency!
It’s really incredible how the supposedly unassailable judgement of mass consumer preference consistently leads our society to produce worse shit so we can have more of it, and rewards the chief enshittifiers with mega yachts.
They have powerful untaxed monopolies in excess of the economic value tech companies themselves generate.
At some point, the value of their services come from the people who use their sites.
> Sounds like AI was utterly useless to everyone involved.
Not the hoaxer!
ChatGPT: This looks like AI. I can tell from some of the pixels and from seeing quite a bit of training data in my time.
[deleted]
It's the fastest check they can try, but it shouldn't be treated as the most trustworthy one, and it shouldn't be in the report.
If it's Nano Banana, you can give it to Gemini, because it has artifacts.
All these tool integrations are making it increasingly difficult to explain to non-tech people what these chatbots are capable of. Even more so as multi-modality improves (at some point image generation went from a distinct tool to arguably an inherent part of the models).
Yeah, talk about begging the question. Yikes.
Yeah, that hardly speaks to the "journalist" being good at their job. They likely asked a biased question like "has this photo been AI generated, and if so, how?" - or worse.
People tend to think that AI is like a specific kind of human that knows other AIs' output better. But we should expect better from people who do writing as their job.
It's not, but when you have 30 minutes to ship a story...
[deleted]
> Network Rail said the railway line was fully reopened at around 02:00 GMT and it has urged people to "think about the serious impact it could have" before creating or sharing hoax images.
> "The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.
I don't think this will work the way they think it will. In fact, I think they've just proved they're vulnerable to a type of attack that causes disruption and completely unnecessary delays to passengers at a cost to the taxpayer.
Anyone to whom that information is relevant already knew that this vulnerability has existed for a long time.
Cannot imagine how often this will happen once we are buried under fake content from AI.
Just realize that people and institutions are adaptable and their processes are not set in stone. We'll find a way through even if you or I can't imagine exactly how right now.
Much of the world relies on general well-behavedness. The whole Andon principle doesn’t work if you’ve got asshole employees. With the public, you don’t have a choice. You have to stop the trains, because everyone will murder you otherwise if it turns out to be true. So better to be defensive.
When the west devolves into a low trust society because of things like this and the relentless importing of people from such, it will lose the advantage of being a high trust society. Equality for all!
I'd consider places with no school shootings higher trust than those with 300 school shootings every year :)
Dangerous subject. Pointing out the Somali Autism Scam in Minnesota gets me reliably flag-killed on the site.
Don’t trust, only verify.
[deleted]
Remember Moab
You don't need AI to make these hoaxes; pranks have been around forever, etc., etc. But as with a lot of the areas AI touches, the problem isn't exactly the tools or the use of them - it's the scale. In this case, the low barrier to creating the fake media, coupled with the pervasiveness and reach of social media networks (also not new), affords rapid deployment and significant impact by bad actors.
The problem is the scale. The scale of impact is immense and we're not ready to handle it.
The BBC says the hoaxer should consider the effect on other people. Should Sir Keir, who wants to "turbocharge" "AI", perhaps consider the effect on other people?
So far we have almost no positive applications for the IP laundering machines.
It's a bit of a non-story, even with the fake image.
From the article:
> Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.
> ...
> Railway expert Tony Miles said due to the timing of the incident, very few passengers will have been impacted by the hoax as the services passing through at that time were primarily freight and sleeper trains.
> "They generally go slow so as not to disturb the passengers trying to sleep - this means they have a bit of leeway to go faster and make up time if they encounter a delay," he said.
> "It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."
Standard responsible rail maintenance is to investigate rail integrity following heavy rains, earthquakes, etc.
A fake image of a stone bridge with fallen parapets prompts the same response as a phone call about a fallen stone from a bridge or (ideally !!) just the earthquake itself - send out a hi-railer for a track inspection.
The larger story here (be it the UK, the US, or AU) is track inspections - manned or unmanned? - and the decision to veer toward unmanned inspections that rely upon lidar, gauge measures, crack vibration sensing, etc.
Personally I veer toward manned patrols with state of the art instrumentation - for the rail I'm familiar with there are things that can happen with ballast that are best picked up by a human, for now.
They should already be able to detect line breaks using old technology. They send current pulses down the line to detect stuck switches, since stuck switches can cause collisions. Also, the pulses are conducted through the wheels and axles of any trains, so they can use resistance and/or timing to figure out where the trains are.
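To make the "timing" part concrete - a toy sketch of the idea, not any real railway spec: treat the rail pair as a transmission line and estimate the distance to whatever reflects a pulse (an axle shorting the rails, or a break) from the round-trip time, TDR-style. The propagation factor here is an assumption for illustration.

```python
C = 299_792_458.0          # speed of light, m/s
VELOCITY_FACTOR = 0.7      # assumed fraction of c for a pulse along the rails

def distance_to_reflection(round_trip_seconds: float) -> float:
    """Distance (m) to the reflection point: the pulse goes out and back, so halve it."""
    return C * VELOCITY_FACTOR * round_trip_seconds / 2.0

# A reflection arriving 10 microseconds after the pulse left:
print(round(distance_to_reflection(10e-6)))  # 1049 (metres, under the assumed factor)
```

Real track circuits are block-based rather than continuous ranging, but the resistance-and-timing intuition is the same.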
Having said that, if it was 2020 and you told me that making photorealistic pictures of broken bridges was harder than spoofing the signals I just described, I’d say you were crazy.
The idea that a kid could do this would have seemed even less plausible (that’s not to say a kid did it, just that they could have).
Anyway, since recently-intractable things are now trivial, runbooks for hoax responses need to be updated, apparently.
> They should already be able to detect line breaks using old technology.
Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though.
Hence the need for inspection.
> runbooks for hoax responses need to be updated, apparently.
I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.
If anything, urban rail is in a better position today, as camera networks should ideally be able to rapidly resolve whether a bridge is really damaged, as per a fake image, or not.
> I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.
Ideally? Sure.
But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes? And as soon as your inspection finishes, they do it again?
Yeah; it’s completely a matter of frequencies and probabilities. Also, technology keeps improving.
If I were working for the train line, and bridges kept “blowing up” like this, I’d probably install a bunch of cameras and try to arrange the shots to be aesthetically pleasing, then open the network to the public.
The runbook would involve checking continuity sensors in the rail, and issuing random pan/tilt commands to the camera.
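A toy sketch of how such a runbook might compose those checks - every name here (`Report`, `triage`, the action strings) is hypothetical, not any real Network Rail system: cross-check the cheap automated signals first, and only clear a report remotely when both the continuity sensors and the cameras agree.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    location: str
    source: str   # e.g. "social_media", "phone_call", "seismic_sensor"

def triage(report: Report, rail_continuity_ok: bool,
           camera_shows_damage: Optional[bool]) -> str:
    """Pick the cheapest responsible action; None means no camera coverage."""
    if not rail_continuity_ok:
        return "halt_traffic_and_dispatch_crew"      # hard sensor evidence of a break
    if camera_shows_damage is None:
        return "slow_traffic_and_dispatch_crew"      # can't clear the report remotely
    if camera_shows_damage:
        return "halt_traffic_and_dispatch_crew"
    return "log_probable_hoax_and_keep_monitoring"   # sensors and camera both clear

print(triage(Report("bridge_47", "social_media"), True, False))
# log_probable_hoax_and_keep_monitoring
```

Note the asymmetry: sensors can escalate on their own, but a crew visit is still the only thing that fully clears an ambiguous report - which is exactly the cost the hoaxer exploits.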
plausibly correlated with what?
This correlated with an earthquake - this is the event that should have triggered an inspection regardless.
> But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes?
In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
> And as soon as your inspection finishes, they do it again?
Sounds like a case for cyber crimes and public nuisance.
It's also no different to endless prank calls via phone, not a new thing.
> It's also no different to endless prank calls via phone, not a new thing.
Of course it's different. If I do 5 prank calls, that takes, say, 15 minutes.
In 15 minutes how many hoaxes can I generate with AI? Hundreds, maybe thousands?
This is like saying nukes are basically swords because they both kill people. We've always been able to kill people, who cares about nuclear weapons?
> This correlated with an earthquake…
Plenty of disasters don't. "No earthquake, no incident" obviously can't be the logic tree.
> In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
"Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though. Hence the need for inspection."
Sounds like you now agree it's less of a need?
> Sounds like a case for cyber crimes and public nuisance.
"Sorry, not much we can do." As is the case when elderly folks get their accounts drained over the phone today.
Any idea how the road barriers in the USA detect a train to lower themselves? I assume it's something to do with current passed from one rail to the other through the axle?
When I stuck train wheels on my DeLorean and rode it down the tracks it lowered the barriers automatically which caused a bit of a traffic incident in Oxnard.
There are sensor sections on both sides. If you short the tracks together with a large enough wire, it triggers the signal box. I actually learned this at the MIT Swapfest when manning the back gate a decade ago. Got some cheap alligator clips and strung them together - no luck... Larger-gauge copper did trigger it, and confused a ton of people when no train came by lol
If whatever technology they installed said everything was fine, I would still want them to do what they did because the costs of being wrong are so much higher than the costs of what they did.
The point of that technology needs to be to alert you when something is wrong not to assure you that everything is fine whenever some other telemetry indicates otherwise.
90% manned. A lot of money and time goes into getting track access.
And collecting unmanned data is still such a pain. At the moment, you stick calibration gear to a train and hope it gets as much noise-free data as it can, all whilst going at least 40mph over the area you want - you’re fighting vibrations, vehicle grease, too much sunlight, not enough sunlight, rain, ballast covering things, equipment not calibrated before going out, etc.
Regardless of how many people it disrupted or not, it’s not a non-story.
It’s highlighted a weakness: it’s easy to disrupt national infrastructure by generating realistic hoax photos/videos, with very little effort, from anywhere in the world.
It's not a new story, nor has it highlighted a new weakness - people have had the ability to claim tracks are covered in stone or by a dead cow for a good many years now.
Tracks have cameras to rapidly discount big claims, and in this specific case there was an actual earthquake, which should (and likely did - the story doesn't drill down very deep) have triggered a manual track inspection for blockages and ballast shifts in and of itself.
> Almost 700 schools throughout Poland were in May last year targeted by hoax bomb threats during key exams, private Polish radio broadcaster RMF FM reported.
> It cited Polish investigators it did not name as saying that a detailed analysis of internet connections and a thorough examination of the content of emails with false bomb threats turned up ties to servers in the Russian city of St. Petersburg.
The UK is really good at self-sabotaging and giving itself away to corporate interests (wealth is gonna trickle down any minute now, I'm sure of it!). The Russians can happily just grab popcorn and enjoy the comedy show; no active participation necessary.
> Yet another attack vector
AI-Generated disinfo has been a known attack vector for the Russian regime (and their allied regimes) for years now [0][1].
I think we’re just getting started, with fake images and videos.
I suspect that people will be killed, because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas made a fake bomb, complete with corpses from a morgue (with autopsy scars)[0]. That didn’t require any AI at all.
We can expect videos of unpopular minorities, doing horrible things, politicians saying stuff they never said, and evidence submitted to trial, that was completely made from whole cloth.
It’s gonna suck.
[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
So far, I see the most concern about this sort of thing from people who came of age around or after Web 2.0 hit, at a time when even a good photoshop wasn’t too hard to place as fake.
Those I know who lived through this issue when digital editing really became cheap seem to be more sanguine about it, while the younger generation on the opposite side side is some combination “whatever” or frustrated but accept that yet another of countless weird things has invade a reality that was never quite right to begin with.
The folks in between, I’d say about the 20 years from age 20 to 40, are the most annoyed though. The eye of the storm on the way to proving that cyberpunk lacked the required imagination to properly calibrate our sense of when things were going to really get insane.
From 1950 - 2005(ish) there were a small number of sources due to the enormous moat required to become a broadcaster. From 2005 to 2021, you could mostly trust video as the costs of casual fakery were prohibitive. Now that the cost to produce fake videos are near zero, I suspect we will return to a much smaller number of sources (though not as small as in the pre YouTube era).
There will be people who care about trusted and reliably accurate news sources, and at least some of them are willing to pay for it. Think 404 Media.
But there are people who don't want their news to be "reliably accurate", but who watch/read news to have their own opinions and prejudices validated no matter how misinformed they are. Think Fox News.
But there are way way more people who only consume "news" on algorithmically tweaked social media platforms, where driving "engagement" is the only metric that matters, and "truth" or "accuracy" is not just lower priorities but are completely irrelevant to the platform owners and hence their algorithms. Fake ragebait drives engagement which drives advertising profits.
What this incident really shows is the growing gap between how easy it is to create a convincing warning and how costly it is to verify what's actually happening. Hoaxes aren't new, but generative tools make fabrication almost free and massively increase the volume.
The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.
This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.
You don't need AI for this kind of disruption. People have been making fake bomb threats for years. You just have to say it, either directly to the railway/etc. or publicly enough that somebody else will believe it and forward it to them. The difference might be of intent - if you say you planted a bomb on the bridge, you're probably committing a crime, but if you just post a piece of art without context, it's more plausibly deniable.
It's also pretty common in the UK for trains to be delayed just because some passenger accidentally left their bag on the platform. Not even any malicious intent. I was on a train that stopped in a tunnel for that reason once. They're just very vulnerable to any hint of danger.
AI definitely makes it easier and it will happen more often.
You don't need anything for anything. You can do war with long sticks. Turns out guns, planes, and firebombs work better.
Sir, this is AI prose. Wendy's doesn't allow AI prose.
Thanks for the heads-up! I actually wrote this based on my own thoughts about the incident, but I understand the concern. I'll make sure to keep my posts in line with the community guidelines.
To be clear, you don't need AI for this.
You can also just call the railroad and report the bridge as damaged.
Hoaxes and pranks and fake threats have been around forever.
Again, I see this argument.
“Bad X has happened before and unsolved. Why worry about bad X^2?”
Personally I’d prefer if it remained at X so solutions can catch up. But that’s just me.
I think the implication is we already handle these events well enough pre-ai, and that the events are not necessarily more disruptive just because an ai was used to trigger them.
Implicit in this though is the assumption that the increase in awareness of these events has more to do with an ai being involved rather than the event actually being exceptional.
Yep, why give people computers? It just increases the number of bad X, before writing these type of hoaxes were much less common.
That leaves much more of a paper trail. People routinely are fined and jailed for pulling off such "pranks", partially because "fake threats"/"abuse of emergency response resources" are an exception to many freedom-of-speech laws.
A fake photo of a collapsed bridge however won't cross that criminal threshold.
You also don't need gunpowder to kill someone with projectiles, but gunpowder changed things in important ways. All I ever see are the most specious knee-jerk defenses of AI that immediately fall apart.
It's not clear to me that it "changed things in important ways" in this case if a call alleging serious damage to the rail would've similarly triggered a pause for inspection.
A phone call to railway management claiming stone fall on a track, a dead cow, a stalled car, etc will trigger a slowdown on that line, a call to the driver, and an inspection.
If that's not happening then management is playing fast and loose with legal responsibility and the risks of mass and inertia.
Reminds me of https://en.wikipedia.org/wiki/Fall;_or,_Dodge_in_Hell with the Moab plot point.
I really liked the first half to 3/4ths of the that book. The last part was less interesting to me but the Moab plot line and all the parts around anonymity/online presence I enjoyed.
> A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.
Someone I know is a high school English teacher (being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not-I pointed out that LLMs such as ChatGPT have poor reliability at this; classifier models trained specifically for this task perform somewhat better, yet also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI-generation (or even comparable models), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated
A New York lawyer used ChatGPT to write a filing with references to fake cases. After a human told him they were hallucinated, he asked ChatGPT if that was true (which said they were real cases). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43)
I hope he was disbarred.
He was probably offered a role at some ai obsessed firm because of his “ai-native workflow”.
Reminds me of a Reddit story that made the rounds about a professor asking ChatGPT if it wrote papers, to which it frequently responded afirmatively. He sent an angry email about it, and a student responded by showing a response from ChatGPT claiming it wrote his email.
> student responded by showing a response from ChatGPT claiming it wrote his email
Which is actually fine. Students need to do their own homework. A teacher can delegate writing emails.
But if he didn't delegate, and it said he did, that would suggest that the methodology doesn't really work.
I believe you just got whooshed.
A person arguing in favor of LLM use failed to comprehend the context or argument? Unpossible!
You missed the entire point lol
Under the other photos it says A photo taken by a BBC North West Tonight reporter showed the bridge is undamaged and A BBC North West reporter visited the bridge today and confirmed it was undamaged
They may have first run the photo through an AI, but they also went out to verify. Or ran it after verification, to understand it better, maybe.
So.. is this where the AI hype train starts to lose steam? One AI hallucinated and caused the incident, and another AI program just wasted everyone's time after it was unable to verify the issue. Sounds like AI was utterly useless to everyone involved.
> One AI hallucinated and caused the incident
I suspect that AI was prompted to create the image, not that this was an incidental "hallucination".
Cynical-me suspects this may have been a trial run by malicious actors experimenting with disrupting critical infrastructure.
There is precedent for state actors putting a lot of effort into a hoax like this: https://en.wikipedia.org/wiki/Columbian_Chemicals_Plant_expl...
> Sounds like AI was utterly useless to everyone involved
Maybe.
Imo, the advances in AI and the hype toward generating everything will actually be the course-correction for our current digitally-obsessed society, back toward a greater emphasis on things like theater, live music, and conversing in person with people or even strangers (the horror, I know), simply to connect and consume more meaningfully. It'll level out, integrating both instead of being so digitally lopsided, as humans adapt to enjoy both.*
To me, this shows a need for more of the local journalism that has been decimated by the digital world. By journalism I mean it in the more traditional sense, not bloggers and podcasts (no shade; some follow principled journalistic integrity, just as some national "traditional" ones don't). Local journalism is usually held to account by the community, and even though the worldwide BBC site has this story, it was the local reporters they had who were able to verify. If these AI stories/events accelerate a return to local reporting with a worldwide audience, then all the better.
* I try to be a realist, but when I err, it tends to be on the optimist side
The tech giants sucking up all the ad revenue is what killed local journalism. Unless you can find a solution to that problem (or an alternative funding model), it's not coming back.
But just think of all the people that didn’t have to receive a paycheck because of all this efficiency!
It’s really incredible how the supposedly unassailable judgement of mass consumer preference consistently leads our society to produce worse shit so we can have more of it, and rewards the chief enshittifiers with mega yachts.
They have powerful untaxed monopolies in excess of the economic value tech companies themselves generate.
At some point, the value of their services comes from the people who use their sites.
> Sounds like AI was utterly useless to everyone involved.
Not the hoaxer!
ChatGPT: This looks like AI. I can tell from some of the pixels and from seeing quite a bit of training data in my time.
This is a fast first check they can try, but it shouldn't be treated as the most trustworthy way, and it shouldn't end up in the report.
If it's Nano Banana, you can give it to Gemini because it leaves artifacts.
All these tool integrations are making it increasingly difficult to explain to non-tech people what these chatbots are capable of. Even more so as multi-modality improves (at some point image generation went from a distinct tool to arguably an inherent part of the models).
Yeah, talk about begging the question. Yikes.
Yeah, that hardly speaks to the "journalist" being good at their job. They may well have asked a leading question like "has this photo been AI-generated, and if so, how?", or worse.
People tend to think of AI as a specific kind of human who knows AI-related things better. But we should expect better from people who do writing as their job.
It's not, but when you have 30 minutes to ship a story...
> Network Rail said the railway line was fully reopened at around 02:00 GMT and it has urged people to "think about the serious impact it could have" before creating or sharing hoax images.
> "The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.
I don't think this will work the way they think it will work. In fact, I think they just proved they're vulnerable to a type of attack that causes disruption and completely unnecessary delays to passengers at a cost to the taxpayer.
Anyone to whom that information is relevant already knew that this vulnerability has existed for a long time.
Cannot imagine how often this will happen once we are buried under fake AI-generated content.
Just realize that people and institutions are adaptable and their processes are not set in stone. We'll find a way through even if you or I can't imagine exactly how right now.
Much of the world relies on general well-behavedness. The whole Andon principle doesn’t work if you’ve got asshole employees. With the public you don’t have a choice. You have to stop the trains, because otherwise everyone will murder you if it turns out to be true. So better to be defensive.
When the west devolves into a low trust society because of things like this and the relentless importing of people from such, it will lose the advantage of being a high trust society. Equality for all!
I'd consider places with no school shootings higher trust than those with 300 school shootings every year :)
Dangerous subject. Pointing out the Somali Autism Scam in Minnesota gets me reliably flag-killed on the site.
Don’t trust, only verify.
Remember Moab
You don't need AI to make these hoaxes, pranks have been around forever etc etc... but as with a lot of the areas AI touches, the problem isn't exactly the tools or the use of them, it's the scale. In this case, the low barrier to creating the fake media, coupled with the pervasiveness and reach of social media networks (also not new), affords rapid deployment and significant impact by bad actors.
The problem is the scale. The scale of impact is immense and we're not ready to handle it.
The BBC says the hoaxer should consider the effect on other people. Should Sir Keir, who wants to "turbocharge" "AI", perhaps consider the effect on other people?
So far we have almost no positive applications for the IP laundering machines.
It's a bit of a non story, even with the fake image.
From the article:
... Standard responsible rail maintenance is to investigate rail integrity following heavy rains, earthquakes, etc. A fake image of a stone bridge with fallen parapets prompts the same response as a phone call about a fallen stone from a bridge or (ideally!!) just the earthquake itself: send out a hi-railer for a track inspection.
The larger story here (be it the UK, the US, or AU) is track inspections .. manned or unmanned?
Currently on HN: Railroads will be allowed to reduce inspections and rely more on technology (US) https://news.ycombinator.com/item?id=46177550
https://apnews.com/article/automated-railroad-track-inspecti...
on the decision to veer toward unmanned inspections that rely upon lidar, gauge measures, crack vibration sensing etc.
Personally I veer toward manned patrols with state of the art instrumentation - for the rail I'm familiar with there are things that can happen with ballast that are best picked up by a human, for now.
They should already be able to detect line breaks using old technology. They send current pulses down the line to detect stuck switches, since stuck switches can cause collisions. Also, the pulses are conducted through the wheels and axles of any trains, so they can use resistance and/or timing to figure out where the trains are.
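That fail-safe logic can be sketched in a few lines. This is an illustrative model of a classic DC track circuit, not any specific railway's implementation: a relay stays energized only while current flows end-to-end, so a broken rail, a train axle shunting the rails, or a power failure all drop the relay and mark the block occupied (the safe default).

```python
# Illustrative model of a fail-safe DC track circuit (details vary by
# railway; this is only a sketch of the principle).
def track_circuit_state(rail_intact: bool, axle_shunt_present: bool,
                        power_on: bool = True) -> str:
    # A broken rail opens the circuit; a train's conductive wheels and
    # axles short it. Either way the relay de-energizes, which reads as
    # "occupied" — failure modes collapse into the safe state.
    relay_energized = power_on and rail_intact and not axle_shunt_present
    return "clear" if relay_energized else "occupied"

print(track_circuit_state(rail_intact=True, axle_shunt_present=False))   # clear
print(track_circuit_state(rail_intact=True, axle_shunt_present=True))    # occupied: train on block
print(track_circuit_state(rail_intact=False, axle_shunt_present=False))  # occupied: broken rail fails safe
```

The key design point is that the system never has to distinguish "broken rail" from "train present" to be safe; anything abnormal stops traffic.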
Having said that, if it was 2020 and you told me that making photorealistic pictures of broken bridges was harder than spoofing the signals I just described, I’d say you were crazy.
The idea that a kid could do this would have seemed even less plausible (that’s not to say a kid did it, just that they could have).
Anyway, since recently-intractable things are now trivial, runbooks for hoax responses need to be updated, apparently.
> They should already be able to detect line breaks using old technology.
Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though.
Hence the need for inspection.
> runbooks for hoax responses need to be updated, apparently.
I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.
If anything, urban rail is in a better position today, as camera networks should be able to rapidly resolve whether a bridge is really damaged as a fake image claims.
> I'd argue not - whether it's an image of a damaged bridge, a phone call from a concerned person about an obstruction on the line, or just heavy rains or an earthquake .. the line should be inspected.
Ideally? Sure.
But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes? And as soon as your inspection finishes, they do it again?
Yeah; it’s completely a matter of frequencies and probabilities. Also, technology keeps improving.
If I were working for the train line, and bridges kept “blowing up” like this, I’d probably install a bunch of cameras and try to arrange the shots to be aesthetically pleasing, then open the network to the public.
The runbook would involve checking continuity sensors in the rail, and issuing random pan/tilt commands to the camera.
plausibly correlated with what?
This correlated with an earthquake - this is the event that should have triggered an inspection regardless.
> But when someone can generate plausible disaster photos of every inch of every line of a country's rail network in mere minutes?
In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
> And as soon as your inspection finishes, they do it again?
Sounds like a case for cyber crimes and public nuisance.
It's also no different to endless prank calls via phone, not a new thing.
> It's also no different to endless prank calls via phone, not a new thing.
Of course it's different. If I do 5 prank calls, that takes, say, 15 minutes.
In 15 minutes how many hoaxes can I generate with AI? Hundreds, maybe thousands?
This is like saying nukes are basically swords because they both kill people. We've always been able to kill people, who cares about nuclear weapons?
> This correlated with an earthquake…
Plenty of disasters don't. "No earthquake, no incident" obviously can't be the logic tree.
> In the UK (and elsewhere) a large percentage of track is covered by cameras - inspection of over the top claims can be rapidly dismissed.
"Yes. That doesn't do much to detect a stone from a parapet rolling onto the line though. Hence the need for inspection."
Sounds like you now agree there's less of a need?
> Sounds like a case for cyber crimes and public nuisance.
"Sorry, not much we can do." As is the case when elderly folks get their accounts drained over the phone today.
Any idea how the road barriers in the USA detect a train to lower themselves? I assume it's something to do with current passed from one rail to the other through the axle?
When I stuck train wheels on my DeLorean and rode it down the tracks it lowered the barriers automatically which caused a bit of a traffic incident in Oxnard.
There are sensor sections on both sides. If you short the tracks together with a large enough wire, it triggers the signal box. Actually learned this at the MIT Swapfest when manning the back gate a decade ago. Got some cheap alligator clips and strung them together; no luck. Larger-gauge copper did trigger it, and confused a ton of people when no train came by lol
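The wire-gauge effect comes down to shunt resistance: thin alligator-clip leads leave too much resistance across the rails, so enough current still completes the circuit to hold the relay up. A rough sketch, where the 0.06 Ω threshold is an illustrative assumption (it's a common test-shunt value, not a spec for any particular crossing):

```python
# Rough sketch of why a thin-wire shunt may fail to trigger a track circuit
# while heavy-gauge copper succeeds. The threshold is an assumed, illustrative
# calibration value, not taken from any real signal box.
def shunt_detected(shunt_resistance_ohms: float,
                   threshold_ohms: float = 0.06) -> bool:
    # The circuit is calibrated so that only a sufficiently low-resistance
    # shunt (like a steel axle across both rails) drops the relay; a
    # high-resistance shunt still passes enough current to keep it up.
    return shunt_resistance_ohms <= threshold_ohms

print(shunt_detected(0.5))    # False: thin clip leads, too resistive
print(shunt_detected(0.01))   # True: heavy copper looks like an axle
```

A real axle is effectively a near-zero-ohm short, which is why the wheels of a passing train always register.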
If whatever technology they installed said everything was fine, I would still want them to do what they did because the costs of being wrong are so much higher than the costs of what they did.
The point of that technology needs to be to alert you when something is wrong not to assure you that everything is fine whenever some other telemetry indicates otherwise.
90% manned. A lot of money and time goes into getting track access.
And collecting unmanned data is still such a pain. At the moment, you stick calibration gear to a train and hope it gets as much noise-free data as it can. All whilst going at least 40mph over the area you want to measure: you’re fighting vibrations, vehicle grease, too much sunlight, not enough sunlight, rain, ballast covering things, equipment not calibrated before going out, etc.
Regardless of how many people it disrupted or not, it’s not a non story.
It’s highlighted a weakness. It’s easy to disrupt national infrastructure by generating realistic hoax photos/videos with very little effort from anywhere in the world.
It's not a new story, nor has it highlighted a new weakness - people have had the ability to claim tracks are covered in stone or by a dead cow for a good many years now.
Tracks have cameras to rapidly discount big claims, and in this specific case there was an actual earthquake, which should (and likely did; the story doesn't drill down very deep) have triggered a manual track inspection for blockages and ballast shifts in and of itself.
"think about the serious impact it could have"
They do ... that's why sociopaths do such things.
Yet another attack vector for the Russians.
https://en.wikipedia.org/wiki/Russian_sabotage_operations_in...
See e.g. https://www.polskieradio.pl/395/7785/artykul/2508878,russian... (2020)
> Almost 700 schools throughout Poland were in May last year targeted by hoax bomb threats during key exams, private Polish radio broadcaster RMF FM reported.
> It cited Polish investigators it did not name as saying that a detailed analysis of internet connections and a thorough examination of the content of emails with false bomb threats turned up ties to servers in the Russian city of St. Petersburg.
UK is really good at self-sabotaging and giving itself away to corporate interests (wealth is gonna trickle down any minute now, I'm sure of it!), Russians can happily just grab popcorn and enjoy the comedy show, no active participation necessary.
> Yet another attack vector
AI-Generated disinfo has been a known attack vector for the Russian regime (and their allied regimes) for years now [0][1].
[0] - https://cyberscoop.com/russia-ukraine-china-iran-information...
[1] - https://cloud.google.com/blog/topics/threat-intelligence/esp...
[flagged]
> If your cat has worms, do you blame it on Russia or Best Korea?
Best Korea of course. The Worst Korea could never do this kind of thing.
Am I bovvered?