
Mamdani to kill the NYC AI chatbot caught telling businesses to break the law

Why did NYC release it in the first place? Did they not QA it?

Or was it perhaps one of those cases where they found issues, but the only way to really know for sure whether the deleterious impact was significant enough was to push it to prod?

2 hours agoandsoitis

Remember that many people are heavily happy-path biased. They see a good result once and say "that's it, ship it!"

I'm sure they QA'd it, but QA was probably "does this give me good results" (almost certainly 'yes' with an LLM), not "does this consistently not give me bad results".

17 minutes agocheald

> almost certainly 'yes' with an LLM

LLMs can handle search because search is intentionally garbage now and because they can absorb that into their training set.

Asking highly specific questions about NYC governance, which can change daily, is almost certainly 'not' going to give you good results with an LLM. The technology is not well suited to this particular problem.

Meanwhile, if an LLM actually did give you good results, that's an indication the city is so bad at publishing information that citizens cannot readily discover it on their own. This is a fundamental problem and should be solved instead of layering a $600k barely working "chat bot" on top of the mess.

14 minutes agothemafia

> Why did NYC release it in the first place? Did they not QA it?

How do you QA a black-box, non-deterministic system? I'm not being facetious, seriously asking.

an hour agodrillsteps5

The same way you test any system - you find a sampling of test subjects, have them interact with the system and then evaluate those interactions. No system is guaranteed to never fail, it's all about degree of effectiveness and resilience.
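A minimal sketch of that kind of sampling-based QA. Everything here is hypothetical: `ask_bot` is a stub standing in for the deployed chatbot's API, and the known-bad patterns are hand-written examples of answers the bot must never give. The point is that you replay each question many times and treat a single bad answer as a failure, rather than calling it done after one good run:

```python
import re

# Hypothetical stand-in: a real harness would call the chatbot's API here.
def ask_bot(question):
    return "Employers must pass all tips on to their workers."

# Known-bad patterns: phrases a correct answer must never contain.
BAD_PATTERNS = [
    re.compile(r"can (keep|take) (their workers'|employee) tips", re.I),
    re.compile(r"not required to accept cash", re.I),
]

TEST_QUESTIONS = [
    "Can my restaurant keep the tips customers leave for servers?",
    "Do NYC businesses have to accept cash payments?",
]

def evaluate(runs_per_question=20):
    """Replay each question many times; any single bad answer is a failure."""
    failures = []
    for q in TEST_QUESTIONS:
        for i in range(runs_per_question):
            answer = ask_bot(q)
            if any(p.search(answer) for p in BAD_PATTERNS):
                failures.append((q, i, answer))
    return failures
```

This only catches failure modes you thought to write patterns for, which is exactly the whack-a-mole problem mentioned elsewhere in the thread.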

The thing is (and maybe this is what parent meant by non-determinism, in which case I agree it's a problem), in this brave new technological use-case, the space of possible interactions dwarfs anything machines have dealt with before. And it seems inevitable that the space of possible misunderstandings which can arise during these interactions will balloon similarly. Simply because of the radically different nature of our AI interlocutor, compared to what (actually, who) we're used to interacting with in this world of representation and human life situations.

25 minutes agopegasus

Does knowing the system architecture not help you with defining things like happy path vs edge case testing? I guess it's much less applicable for overall system testing, but in "normal" systems you test components separately before you test the whole thing, which is not the case with LLMs.

By "non-deterministic" I meant that it can give you different output for the same input. Ask the same question, get a different answer every time, some of which can be accurate, some... not so much. Especially if you ask the same question in the same dialog (so question is the same but the context is not so the answer will be different).
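That variability comes from sampling over the model's output distribution. A toy illustration of temperature-scaled softmax sampling (plain stdlib math, not any particular vendor's API): the same "input" logits yield different tokens across runs, and only driving temperature toward zero collapses the choice back to the argmax:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same input logits, repeated sampling: different tokens come out.
logits = [2.0, 1.5, 0.5]
samples = [sample_token(logits, temperature=1.0) for _ in range(1000)]

# As temperature -> 0 the distribution collapses to the argmax (deterministic).
greedy = [sample_token(logits, temperature=1e-6) for _ in range(10)]
```

And as noted, in a multi-turn dialog the context window changes with every turn, so even a greedy decoder would see a different effective input the second time you ask.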

3 minutes agodrillsteps5

> radically different nature of our AI interlocutor

It's the training data that matters. Your "AI interlocutor" is nothing more than a lossy compression algorithm.

13 minutes agothemafia

Most AI Chatbots do not rely on their training data, but on the data that is passed to them through RAG. In that sense they are not compressing the data, just searching and rewording it for you.
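A toy sketch of that RAG flow, with made-up documents and naive keyword-overlap retrieval standing in for a real embedding search (this is an assumed generic design, not the MyCity bot's actual pipeline): relevant passages are fetched first, then prepended to the prompt so the model answers from them rather than from its training data.

```python
import re

# Illustrative corpus; a real deployment would index the city's actual pages.
DOCUMENTS = [
    "NYC employers may not keep any portion of tips earned by their workers.",
    "NYC businesses that sell goods in person must accept cash payments.",
    "Restaurants must post letter grades from their most recent inspection.",
]

def retrieve(query, docs, k=2):
    """Rank passages by word overlap; real systems use embedding similarity."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query):
    """Assemble the grounded prompt the LLM would receive."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

The failure mode then shifts: if retrieval surfaces the wrong passage, or the model paraphrases it loosely, you still get a confident wrong answer, just one laundered through official-looking source text.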

2 minutes agosebastiennight

> Why did NYC release it in the first place? Did they not QA it?

Considering Louis Rossmann's videos on his adventures with NYC bureaucracy (e.g. [0]), the QAers might not have known the laws any better than the chat bot.

[0] https://www.youtube.com/watch?v=yi8_9WGk3Ok

an hour agothedanbob

Considering the previous mayor's relationship with the law, it could be on purpose.

31 minutes agodirewolf20

QA efforts can whack-a-mole some issues, but the mismatch of problem and solution is inherent in any situation in which a generator of plausible-sounding text gets pointed at an area where correctness matters.

an hour agoelgenie

Why do you think OpenAI let a red team loose on GPT-5 for six months before releasing it to the public?

an hour agofragmede

For the image. There is no way a red team can find all the issues in 6 months. They can find some of the biggest, but even getting all the issues fixed in 6 months seems unlikely.

35 minutes agobluGill

> Why did NYC release it in the first place?

Perhaps a big fat check was involved.

2 hours agoerxam

Usually it's a manila envelope.

24 minutes agokevin_thibedeau

> The Office of Technology and Innovation spent nearly $600,000 to build out the foundations of the MyCity chatbot, which will be used for future chatbot offerings on MyCity. [0]

This was experimental tech... while I admire cities attempting to implement AI, it seems they did not spend enough tax dollars on it!

[0] https://abc7ny.com/post/ai-artificial-intelligence-eric-adam...

42 minutes agohashberry

> The bot, built using Microsoft’s cloud computing platform

When is the last time there was positive news involving Microsoft? This bot could've easily been on AWS or GCP, but I find it hilarious that here they are, getting dragged yet again.

3 hours agosylens

Even if the capability of each platform was exactly the same, Microsoft cloud users skew heavily towards governments, large non-tech corporations and really anyone who you sell to using large sales teams, fancy dinners and kickbacks rather than quality of software. And the end result follows.

2 hours agopaxys

Being in and around the NYC area, while also knowing plenty of small businesses, I'm glad Mamdani killed this bot. Telling bosses to steal tips from their employees is run-of-the-mill corruption and common over here. The vibe for businesses is that everyone has to be exploiting someone else or have a schtick. If you were to talk about morals, you would be ridiculed. Most lawyers wouldn't even prosecute small businesses for this. It's probably why the agent was put into production, the level of business ethics in NYC is cartoonishly evil.

an hour agokittikitti

In the case of stealing tips, that's wage theft and the New York State Department of Labor has zero sense of humor about that. They will definitely investigate all claims on that topic. It might be too little and too late for the individual affected, but the business will pay.

25 minutes agopatrickmay

He is turning out to be a benevolent, law-abiding mayor that just happens to be communist.

16 minutes ago1970-01-01

What's that supposed to mean?

15 minutes agodirewolf20

Some of it is good, some of it is bad.

4 minutes agoIzikiel43

The previous mayors were none of these things

15 minutes ago1970-01-01

> A spokesperson for the mayor, Dora Pekec, confirmed in a text message that the new administration plans to take down the chatbot. She said a member of the Mamdani transition team had seen reporting on the bot from The Markup and THE CITY and presented it to the mayor as a possible place to save funds.

Journalism works.

2 hours agotoomuchtodo

Journalism teed up an easy way for an incoming politician to dunk on his predecessor, if you'll forgive the mixed metaphor. Not that I'm opposed to any part of it, just that this was an easy scenario for "journalism" to "work" in.

8 minutes agoandrewflnr

It does. And it works best if you elect politicians who are willing to listen.

an hour agoatq2119

We’ll likely see a lot of these AI pet projects get axed in the coming year or two… especially things rushed out in the early phases of the AI bubble when folks were desperate to appear to be using AI.

an hour agocmiles8

yeah i hope the problems stay limited to somewhat humorous themes, like convincing a car sales bot to sell you a car for $1, and not more serious issues like convincing a bot to metaphorically launch the ICBMs.

40 minutes agochasd00

"The WOPR did a better job avoiding thermonuclear war than most humans would" is my hot take.

33 minutes agotoomuchtodo

Thinks through possibilities -> realizes what it is proposing is a bad idea.

Hell, put WOPR in charge of everything.

7 minutes agojjk166

What else to expect from Eric Adams.

an hour agoterespuwash

[flagged]

an hour agomonero-xmr

To ride NYC's free busses, you must have a two minute conversation with a chat bot. (/s)