What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that when the disclaimer is added that the report is generated by AI, enough jurists begin to associate that with "likely more correct than not".
> a lot of people seem to see LLMs as smarter than themselves
Well, in many cases they might be right...
As far as I can tell from poking people on HN about what "AGI" means, there might be a general belief that the median human is not intelligent. Given that the current batch of models apparently isn't AGI, I'm struggling to see a clean test of what AGI might be that a human can pass.
Being an intelligent being is not the same as being considered intelligent relative to the rest of your species. I think we’re just looking to create an intelligence, meaning something that has the attributes that make a being intelligent, which mostly are the ability to reason and learn. I think the being might take over from there, no?
With humans, the speed and ease with which we learn and reason is capped. I think a very dumb intelligence will stay dumb for not very long, because every resource will be spent on making it smarter.
> there might be a general belief that the median human is not intelligent
This is to deconstruct the question.
I don't think it's even wrong - a lot of people are doing things, making decisions, living life perfectly normally, successfully even, without applying intelligence in a personal way. Those with socially accredited 'intelligence' would be the worst offenders imo - they do not apply their intelligence personally but simply massage themselves and others towards consensus. Which is ultimately materially beneficial to them - so why not?
For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else. Computers can only do an imitation of this sort of answer. People stand a chance of answering it.
Reading how AI is being approached in China, the focus is more on achieving day-to-day utility without eviscerating youth employment.
In contrast, the SV focus on AI has been about Skynet/singularity, with a hype cycle to match.
This is supported by the lack of clarity on actual benefits, or clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying output.
Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you have no reliable way of finding out whether it's just BS-ing you.
Do you have any links you could share to content you found especially insightful about AI use in China?
I don't know if it supports their particular point, but Machine Decision is Not Final seems like a very cool and interesting look at China's culture around AI:
https://www.urbanomic.com/book/machine-decision-is-not-final...
I’ve been hunting for a link I found here on HN, which discussed how policy/government elites in China looked at AI.
Sadly, the search for that link continues.
I did find these from SCMP and Foreign Policy, but there are better articles out there.
- https://foreignpolicy.com/2025/11/20/china-ai-race-jobs-yout...
- https://www.scmp.com/specialist-publications/special-reports...
AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.
Just this weekend it (Gemini) produced two detailed sets of instructions on how to connect different devices over Bluetooth, including a video (that I didn’t watch), while the devices did not support making the connection in that direction. No reasonable human reading the involved manuals would think those solutions feasible. Not impressed, again.
It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in said chat again vs reading 6 tabs and redoing your search seem to be the big killer feature. The main "new" info is often incorrect info.
If you could get the full page text of every URL on the first page of DDG results and dump it into vim/emacs where you can move/search around quickly, that would probably be about as good, and without the hallucinations; something like the sketch below. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
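For what it's worth, that workflow is only a short script away. A minimal sketch, assuming DDG's unofficial html.duckduckgo.com endpoint and its result__a link class (both undocumented and subject to change; the query string and output filename are just examples):

    # Hypothetical sketch: grab the first-page DDG hits for a query, pull the
    # visible text of each page, and dump everything into one file for vim/emacs.
    import requests
    from bs4 import BeautifulSoup

    HEADERS = {"User-Agent": "Mozilla/5.0"}  # DDG tends to reject bare clients

    def first_page_urls(query: str) -> list[str]:
        resp = requests.get("https://html.duckduckgo.com/html/",
                            params={"q": query}, headers=HEADERS, timeout=10)
        soup = BeautifulSoup(resp.text, "html.parser")
        # result__a is the link class on the HTML-only results page; the hrefs
        # may be DDG redirect links rather than the final URLs
        return [a["href"] for a in soup.select("a.result__a")][:10]

    def page_text(url: str) -> str:
        if url.startswith("//"):  # DDG sometimes emits scheme-relative links
            url = "https:" + url
        try:
            resp = requests.get(url, headers=HEADERS, timeout=10)
            return BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
        except requests.RequestException:
            return ""  # skip unreachable pages instead of failing the whole dump

    def dump(query: str, path: str = "results.txt") -> None:
        with open(path, "w", encoding="utf-8") as f:
            for url in first_page_urls(query):
                f.write(f"==== {url} ====\n{page_text(url)}\n\n")

    if __name__ == "__main__":
        dump("bluetooth pairing direction")  # then open results.txt in vim/emacs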
It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.
> I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.
If they do, you’ll be in good company. That post is about the exact opposite of what people usually link it for. I’ll let Dan explain:
https://news.ycombinator.com/item?id=27067281
> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.
Curiously, literally nobody on earth uses this workflow.
People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.
The only way LLM search engines save time is if you take what they say at face value as truth. Otherwise you still have to fact-check whatever they spew out, which is the actual time-consuming part of doing proper research.
Frankly, I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything they say.
Of course you have to fact check - but verification is much faster and easier than searching from scratch.
How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.
For most things, no, it isn’t. The reason it can work well at all for software is that it’s often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it’s actually very hard to verify the results without doing all the work over again.
AI has more knowledge than everyone already; I wouldn't say it's smarter, though. It's like wisdom vs intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn / create new things.
Knowledge I see as equivalent to a big library. It contains mostly correct information in the context of each book (which might be incorrect in general), and "AI" is very good at taking everything out of context, smashing a probability distribution over it, and picking an answer which humans will accept. I.e., it does not contain knowledge, at best the vague pretense of it.
Having knowledge is not exactly the same as being smart though, is it?
Although it helps immensely.
Man, what are we supposed to do with people who think the above?
I'd do the same thing I'd do with anyone that has a different opinion than me: try my best to have an honest and open discussion with them to understand their point of view and get to the heart of why they believe said thing, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could cause them to feel shame for believing something that I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional thing afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood. Basically, explain it in a language they understand, and in a way that we can think about and discuss together, not taking offense to any attempts at questioning or poking holes in my beliefs, because that, imo, is the discovery process for trying something new.
Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.
Well said, and thank you for the final paragraph. Made me chuckle.
I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".
Just brace for the societal correction.
There are a lot of things going on in the Western world, both financial and social in nature. It's not good in the sense of being pleasant or contributing to growth and betterment, but it's a correction nonetheless.
That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.
I think whether any text is written with the help of AI is not the main issue. The real issue is that for texts like police reports, a human still has to take full responsibility for the contents. If we preserve this understanding, then the question of which texts are generated by AI becomes moot.
I don’t understand the urgency to replace human work with AI. Why is every organization so eager to skip the AI-as-an-assistant step? Here there are already massive productivity gains in using the AI to create the draft of the report; it makes little economic sense to have it produce the final version, given the risk. Maybe it’s just plain laziness? Same with developers: why is every organization trying to leapfrog from humans writing all the code to not even reading the generated code?
Not everyone is in an urgent hurry to replace people with bots; that's a hyperbolic construct.
But to try to answer some of what I think you're trying to ask about: The bot can be useful. It can be better at writing a coherent collection of paragraphs or subroutines than Alice or Bill might be, and it costs a lot less to employ than either of them do.
Meanwhile: The bot never complains to HR because someone looked at them sideways. The bot [almost!] never calls in sick; the bot can work nearly 24/7. The bot never slips and falls in the parking lot. The bot never promises to be on-duty while they vacation out-of-state with a VPN or uses a mouse-jiggler to screw up the metrics while they sleep off last night's bender.
The bot mostly just follows instructions.
There are lots of things the bot doesn't get right. Like, the stuff it produces may be full of hallucinations and false conclusions that need to be reviewed, corrected, and outright excised.
But there's lots of Bills and Alices in the world who are even worse, and the bot is a lot easier and cheaper to deal with than they are.
That said: When it comes to legal matters that put a real person's life and freedom in jeopardy, then there should be no bot involved.
If a person in a position of power (such as a police officer) can't write a meaningful and coherent report on their own, then I might suggest that this person shouldn't have a job where producing written reports is part of the work. There's probably something else they're good at that they can do instead (the world needs ditchdiggers, too).
Neither the presence nor absence of a bot can save the rest of us from the impact of their illiteracy.
Yes. Allowing officers to blame AI creates a major accountability gap. Per e.g. the EU AI Act’s logic, if a human "edits" a draft, they must be held responsible and do not need to disclose the use of AI.
To ensure safety, those offerings must use premarket red-teaming to eliminate biases in summarization. However, ethical safety also requires post-market monitoring, which is impossible if logs aren't preserved. Rather than focusing on individual cases, I think we must demand systemic oversight in general and access for independent research (not only focusing on a specific technology).
It should be treated kind of the same as writing a report after a glass of wine. Probably no one really cares but "sorry that doesn't count because I was intoxicated when I wrote that bit" isn't going to fly.
Sadly, the justice system is a place where responsibility does not happen. It is not a system where you make one mistake and you go to prison. Instead, everyone but the victims of the system is protected and colluded with. The more you punish the victims, the better you make out.
I agree. A programmer has to take responsibility for the generated code they push, and so do police officers for the reports they file. Using a keyboard does not absolve you of typos; it's your responsibility to proofread and correct. This is no different, just a lot more advanced.
Of course the problem is also that police often operate without any real oversight and cover up more misconduct than workers in an under-rug-sweeping factory. But that's another issue.
> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.”
Normally, if a witness (e.g. a police officer) were found to be recounting something written by a third party, it would be considered hearsay and struck from the record (on objection).
It would be an interesting legal experiment to have an officer using this system swear to which portions they wrote themselves, and attempt to have all the rest of the testimony disallowed as hearsay.
I’d suspect the other direction:
Police unions get LLMs classified as some kind of cognitive aid, so it becomes discrimination to ban them in school or the workplace.
That is an aspect I had not considered in my assumption that AI/robots will eventually go through the same or similar social-justice process as all the other causes (women’s suffrage, racial equality, gay rights, etc.), because it will ultimately serve the ruling class that has risen to dominate through social-justice causes far more than anything prior, arguably more than all the earlier causes célèbres did.
It’s going to be interesting to see the state propaganda against the bigots and evil bioists (or whatever the wordsmithing apparatchiks will devise) who want to bar the full equality in society of AI/robots, who look just like you and me after all and also just want equal rights to love each other, and who are you to oppose others, since we are all just individuals?
Shoot the messenger all you want, but it’s coming.
Cynical and fun to read, but no. Too many parasites have already chewed their way to the empty heart of power of the post-war liberal system, and I think the next time it gets power at the highest levels in the US will be the end of it there. Maybe it will last another generation in Europe, but not long enough to see the scenario you describe play out.
I find this article strange in its logic. If the use of AI-generated content is problematic as a principle, I can understand the conflict. Then no AI should be used to "transcribe and interpret a video" at all - period. But if the concern is accuracy in the AI "transcript" and not the support from AI as such, isn't it a good thing that the AI-generated text is deleted after the officer has processed the text and finalized their report?
That said, I believe it is important to acknowledge the fact that human memory, experience, and interpretation of "what really happened" are flawed; isn't that why the body cameras are in use in the first place? If everyone believed police officers were already able to recall the absolute truth of everything that happens in situations, why bother with the cameras?
Personally, I do not think it is a good idea to use AI to write full police reports based on body-camera recordings. However, as a support, in the same way the video recordings are available, why not? If, in the future, AI writes accurate "body cam"-based reports, I would not have any problem with it as long as the video is still available to be checked. A full report should, in my opinion, always contain additional contextual info from the police involved and witnesses, to add what the camera recordings do not necessarily reflect or contain.
My worry is that, at scale, AI from one vendor can introduce biases. We won't know what those biases are. But whatever they are, the same bias affects all reports.
That is something to worry about, agreed. So the quality and the reliability of AI is what we should focus on. In addition, we should be able to keep track (and records) of how the AI has been used and how it builds its narrative and conclusions.
> important first step in reigning in AI police reports.
That should be 'reining in'. "Reign" is -- ironically -- what monarchs do.
Such innocent mistakes make me smile these days because it gives assurance a real human wrote them.
Don't worry, sufficiently advanced LLMs will learn how to put in the right amount of typoes to be convincing.
Swearing is still a good heuristic, I think. The American corporate world remains rather prissy about swearing, so if the post sounds like a hairy docker after 12 pints, then it's probably not an LLM.
It's not certain that LLMs don't do this already—it's likely their doing this even now.
That’s —— not just —— possible— it’s —— ——— probable!!!
Are you an LLM that misspelled “they’re” intentionally?
That was the joke. Also the use of the "It's not; it's" structure and the em-dash.
Unless it's an LLM instructed to make occasional mistakes.
I recommend taking a look at this video to get an idea of the thought process (or lack thereof) law enforcement might display when provided with a number of "AI" tools. Even if this one example is closer to traditional face recognition than to LLMs, the behavior seems the same. Spoiler: complete submission and deference, and in this specific case to a system that was not even their own.
https://www.youtube.com/watch?v=B9M4F_U1eEw
This does sound problematic, but if a police officer's report contradicts the body-worn camera or other evidence, it already undermines their credibility, whether they blame AI or not. My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.
> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report
The bigger issue, which the article doesn't cover, is that police officers may not carefully review the AI-generated report, and then, when appearing in court months or years later, will testify to whatever is in the report, accurate or not. So the issue is that the officer doesn't contradict inaccuracies in the report.
> My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.
That's because it's a very difficult thing to prove. Bad memories and even completely false memories are real things.
Sure, but other court participants are given somewhat less grace for lying under oath.
Are they?
Perjury isn't a commonly prosecuted crime.
If an officer misremembers something about you, you go to jail. If you misremember something about the event, you also go to jail. Yeah, I guess it tracks.
That's why I qualified it with "somewhat."
Neither is grace a common defense.
That's why we need a greatly reduced standard of proof for officer misconduct, especially when it comes to consequences like just losing your job (as opposed to, e.g., jail time).
While I agree that officers should be accountable, more enforcement will not suddenly make them good officers. Other nations train their police for years prior to putting them into the thick of it. US police spend far less time studying, and it shows, in everything from de-escalation tactics to general legal understanding. If you create a pipeline to weed out bad officers, then there needs to be a pipeline producing better officers.
AIUI US policing is descended from slave catching and strike breaking. Two activities which I think we'd say today are obviously bad.
In many European states their policing starts as town guards tasked with ensuring order. Order is, at least, not obviously bad.
So that's a philosophical difference in what these forces even think their purpose is.
Certainly agreed on that. I think part of it is training but also part of it is just vetting. There are pretty clearly too many people who get into policing out of a desire to wield authority rather than a desire to help people. In many cases I think there is not much use in trying to "train" such people; they just need to be doggedly weeded out. But yes, we need action on both ends, ensuring the pipeline produces good officers going in, and then also regular monitoring to ensure they stay good.
This is an outrageous lie, there were SEVEN Police Academy movies!!!
To me it’s a question of whether they are on average better. It’s not like human-based input is perfect either.
> In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that.
> Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.
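The irony is that preserving that evidence would be trivial. If the AI draft were stored alongside the final report, a plain diff would show exactly which passages were machine-written and which the officer changed. A minimal sketch (the strings are illustrative, not Axon's actual data model):

    # Hypothetical: diff a stored AI draft against the officer's final report
    # to recover exactly what the model wrote vs. what the human edited.
    import difflib

    ai_draft = "Subject appeared agitated and refused verbal commands."
    final_report = "Subject appeared calm but refused verbal commands."

    for line in difflib.unified_diff(
            ai_draft.splitlines(), final_report.splitlines(),
            fromfile="ai_draft", tofile="final_report", lineterm=""):
        print(line)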
Policing and Hallucinations. Can’t wait to see this replicated globally.
Does the officer not take full ownership of the report once they edit it? If they got an intern to write a report and then they signed off on it, they’d be responsible, right?
The experiments with AI agents sending emails to grown-ups are good, I think – AIs are doing much more dangerous stuff, like these AI police reports. I don't think making a fuss over every agent-sent email is going to cause other AI incursions into our society to slow down. The police-report writer is a non-human, partially autonomous participant, like a K9 officer. It's wishful thinking that AIs aren't going to be set loose doing jobs. The cat is out of the bag.
I'm curious about this claim. What about agents sending emails to each other is good?
“Fighting back” = adding a disclaimer.
You guys are so fucked.
> You guys are so fucked.
"You guys"? Everyone is fucked. This is going to be everywhere. Coming to your neighborhood, eventually.
Not everyone lives in a 3rd-world authoritarian backwater; it's time to stop that ridiculous US-centrism.
I don't live in a police state.
I guess that means you don't live in the US, or in the UK, or in Australia.
Correct
You either don't have police reports, or some amount of your country's police reports are written by AI.
I'd be more worried that you aren't reading articles about it than if you were.
Considering that AI can barely write in my native language I am not worried.
There are countries on this planet that are not actively digging their own graves.
C'mon, tell us, Mr. Rammstein’s throwaway, which much-superior country is it?!
He won't tell you. If he did, he would have to admit he lives in a police state or under martial law.
Misery loves company
a/s/l?