
Miasma: A tool to trap AI web scrapers in an endless poison pit

I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], who have this to say: "We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all."

So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.

[0]https://developers.google.com/search/docs/essentials/spam-po...

2 hours agobobosola

>I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced

If you are automating it, I don't see why not. Kitboga, a YouTuber, kept scam callers in AI call-center loops, tying up their resources so they can't use them on unsuspecting victims. [0]

That's a guerrilla tactic. Similarly in warfare, when you steal resources from an enemy, you get stronger and they get weaker; it's pretty effective.

[0]: https://www.youtube.com/watch?v=ZDpo_o7dR8c

an hour agotrinsic2

More and more scammers are automating their side as well, so soon the loop will be just bots talking to bots.

21 minutes agobdangubic

> Also, inserting hidden or misleading links is specifically a no-no for Google Search [0]

Depending on your goals, this may be a pro or a con. I, personally, would like to see a return of "small web" human-centric communities. If there were tools that include anti-scraping, anti-Google (and other large search crawlers) as well as a small web search index for humans to find these sites, this idea becomes a real possibility.

an hour agochongli

One would assume legit spiders obey robots.txt.

an hour agoxyzal

This, to me, is the strongest argument for deploying these slop generators: they give crawlers an incentive to follow robots.txt.

23 minutes agololc

Yes, it works.

Phone scammers have a very high personnel cost, which is why some resort to human trafficking.

If everyone picked up the phone and wasted a few seconds, it would be enough to make their whole enterprise worthless. But since most people who wouldn't fall for it hang up right away, scammers have the best ROI of any industry. They don't even pay for the first few seconds of the call.

26 minutes agoiririririr

> If you have a public website, they are already stealing your work.

I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!

3 hours agotasuki

The problem I have is that they hammer my site so hard they take it down.

The content is for everyone. They can have it. Just don't also take it away from everybody else.

an hour agomargalabargala

Unintentional denial-of-service attacks from AI scrapers are definitely a problem, I just don't know if "theft" is the right way to classify them. They shouldn't get lumped in with intellectual property concerns, which are a different matter. AI scrapers are a tragedy of the commons problem kind of like Kessler syndrome: a few bad actors can ruin low Earth orbit for everyone via space pollution, which is definitely a problem, but saying that they "stole" LEO from humanity doesn't feel like the right terminology. Maybe the problem with AI scrapers could be better described as "bandwidth pollution" or "network overfishing" or something.

29 minutes agoethmarks

Yes I completely agree.

17 minutes agomargalabargala

You're totally right about it not being theft, but we have a term. You used it yourself: "distributed denial of service". That's all it is. These crawlers should be kicked off the internet for abuse. People should contact the ISP of origin.

25 minutes agoFeepingCreature

Firstly, since this argument is about semantic pedantry anyways, it's just denial-of-service, not distributed denial-of-service. AI scraper requests come from centralized servers, not a botnet.

Secondly, denial-of-service implies intentionality and malice that I don't think is present from AI scrapers. They cause huge problems, but only as a negligent byproduct of other goals. I think that the tragedy of the commons framing is more accurate.

14 minutes agoethmarks

I agree theft isn't a good analogy, but there is something similar going on. I put my words out into the world as a form of sharing. I enjoy reading things others write and share freely, so I write so others might enjoy the things I write. But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet. They are using my work in a way I don't want it to be used. It makes me not want to share anymore.

2 hours agocoldpie

>but there is something similar going on [...]

No, what you're basically describing is "I shared something but then I didn't like how it ended up being used". If you put stuff out in public for anyone to use, then find out it's used in a way you don't like, it's your right to stop sharing, but it's not "similar" to stealing beyond "I hate stealing"

an hour agogruez

> If you put stuff out in public for anyone to use, then find out it's used in a way you don't like

Nope. Copyright is a thing, licenses are a thing. Both are completely ignored by LLM companies, which was already proven in court, and for which they already had to pay billions in fines.

Just because something is publicly accessible, that does not mean everybody is entitled to abuse it for everything they see fit.

an hour agoHendrikto

>Nope. Copyright is a thing, licenses are a thing. Both are completely ignored by LLM companies, which was already proven in court,

...the same courts that ruled that AI training is probably fair use? Fair use trumps whatever restrictions the author puts on their "licenses". If you're an author and it turned out that your book was pirated by AI companies then fair enough, but "I put my words out into the world as a form of sharing" strongly implied that's not what was happening, e.g. it was a blog on the open internet or something.

an hour agogruez

I never understand why anyone wants authors to not be able to enforce copyright and licensing laws for AI training. Unless you are Anthropic or OAI it seems like a wild stance to have. It’s good when people are rewarded for works that other people value. If trainers don’t value the work, they shouldn’t train on it. If they do, they should pay for it.

31 minutes agoFromTheFirstIn

My own view is, I thought we were all agreed that the idea that Microsoft can restrict Wine from even using ideas from Windows, such that people who have read the leaked Windows source cannot contribute to Wine, was a horrible abuse of the legal system that we only went along with under duress? Now when it's our data being used, or more cynically when there's money to be made, suddenly everyone is a copyright maximalist.

No. Reading something, learning from it, then writing something similar, is legal; and more importantly, it is moral. There is no violation here. Copyright holders already have plenty of power; they must not be given the power to restrict the output of your brain forever more for merely having read and learnt. Reading and learning is sacred. Just as importantly, it's the entire damn basis of our profession!

If you do not want people to read and learn from your content, do not put it on the web.

20 minutes agoFeepingCreature

>I never understand why anyone wants authors to not be able to enforce copyright and licensing laws for AI training.

Fair use is part of "copyright and licensing laws".

25 minutes agogruez

> But now the things I write and share freely are being used to put money in the bank accounts of the worst people on the planet.

I don't think that's the case. I'm not even arguing they aren't the worst people on the planet - might as well be. But all I see them doing is burning money all over the place.

2 hours agotasuki

They’re getting the money to burn, though

an hour agoFromTheFirstIn

If you want a good analogy, try the enclosure of the commons in the British countryside. Communally managed grasslands were destroyed by noblemen with massive herds of cattle overgrazing the land, kickstarting a land grab that effectively forced people to enclose or be left behind themselves. Property is a virus that destroys all other forms of allocation.

an hour agokmeisthax

If someone hands out cookies in the supermarket, are you allowed to grab everything and leave?

3 hours agospiderfarmer

Odd thing about cookies… they disappear after one serving.

Websites are an endless stream of cookies.

The analogy doesn’t hold.

2 hours agodrfloyd51

If copying content from one hard drive to another is theft, then so is DNA copying itself.

Everything is a Remix culture. We should promote remix culture rather than hamper it.

Everything is a Remix (Original Series) https://youtu.be/nJPERZDfyWc

2 hours agoghywertelling

How about this analogy: I created a most tasty cookie recipe. I give it out for free, and all copies have my name on them, because I am a vain person who likes to be known far and wide as the best baking chef ever. Is it OK to take the recipe, remove my name, and write in LLM-Codex as the creator? Again, I'm OK with giving the recipe away for free; I just want my name out there.

an hour agoGeoAtreides

> Is it OK to take the recipe, remove my name, and write in LLM-Codex as the creator? Again, I'm OK with giving the recipe away for free; I just want my name out there.

From a legal perspective, it's a pretty clear "no". The instructions in recipes aren't copyrightable. The moral question is more ambiguous, but it's still pretty weak. Most recipes are uncredited, and it's unclear why someone can force everyone to attribute the recipe to them when all they realistically did was tweak the dish a bit. In the example above, I doubt you invented cookies.

17 minutes agogruez

Bandwidth and compute constraints make websites anything but an endless stream, though.

an hour agolou1306

Digital information may be our first post-scarce resource. It's interesting, and sad, to see so many attempt to fit it within scarcity-based economic models.

2 hours agoz3c0

> digital information may be our first post-scarce resource

… browses memory and storage prices on NewEgg …

Hmm.

But the word digital is distracting us.

The word information is the important one. The question isn't where information goes. It's where information comes from.

Is new information post scarcity?

Can it ever be?

2 hours agoTerretta

[dead]

2 hours agothrowaway613746

It’s interesting to see twists on the old anti-piracy arguments recycled for anti-ai.

2 hours agobengale

Turns out many (most?) people on the internet were never anti-copyright in the first place. They only went along with it (or at least refused to challenge the anti-copyright people) because they wanted free movies and/or hated corporations.

an hour agogruez

That really depends, but the quick answer is that according to our human social contract, we'd just ask "how many can I take?". Until now, the only real tool to limit scrapers has been throttling, but I don't see any reason for there not to be a similar conversational social contract between machines.

3 hours agofalcor84

Isn’t robots.txt such a “social contract between machines”? But AI scrapers couldn’t care less.

2 hours agovolemo

I will copy the supermarket and paste it somewhere else.

I'm also going to download a car.

3 hours agoGaggiX

This is a dishonest analogy. In your example, there is only a limited number of cookies available, while there is no practical limit on the number of times a piece of digital media can be viewed.

You are allowed to take one cookie. But you are allowed to view a public website multiple times if you so want.

2 hours agopbasista

There sure is a limit to the load that the server you're DDoSing can take, and to people's will to post new worthy content in public. The supply is limited, just not at the first degree. Let's make a small edit: are you allowed to take all the cookies and then sell them with a small ribbon with your name on it?

2 hours agohollow-moe

There is no arguing with pirates. They'll take what's yours and forget about you while you tend to the ashes.

an hour agospiderfarmer

Multiple AI scrapers are downloading every page of my 6M page website as we speak. They don’t care about the fact that I have dedicated 20 years to building it, nor that I have to maintain multiple VPSes just to serve it to them.

If I can poison them and their families, I will.

an hour agospiderfarmer

Wow, how did you manually hand-write 6 million web pages? That is impressive. It would take me a while to even monotonically count that high.

17 minutes agoImPostingOnHN

[dead]

2 hours agothrowaway613746

This is ultimately just going to give them training material for how to avoid this crap. They'll have to up their game to get good code. The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped. The bottom has always been threatening to fall out of the ads-for-eyeballs market, and nobody could anticipate the trigger for the downfall. Looks like we found it.

3 hours agoaldousd666

> This is ultimately just going to give them training material for how to avoid this crap.

> The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing by your other content getting scraped.

So we should all just do nothing and accept the inevitable?

2 hours agojohneth

> So we should all just do nothing and accept the inevitable?

I daresay rate-limiting will result in better outcomes than well-poisoning with hidden links that are against the policies of search engines.

Lots of potential for collateral damage, including your own websites' reputations and search visibility, with the well-poisoning approach.

an hour agoninjagoo

To be honest, who cares about Google Search anymore? It's pretty useless these days.

6 minutes agoddtaylor

The README.md specifically states how to allow nice robots to proceed unhindered. The people behind these efforts, I would imagine, don't particularly care about their sites' reputations in cases where people use LLMs for search.

an hour agoxantronix

To be clear, I mean AI is going to be the downfall of ad-supported content. But let's face it: we have link farms and spam factories as a result of the ad-supported content market. I think this is eventually going to do justice for users, because it puts a premium on content quality - content someone will want to pay a direct licensing fee to scrape for their AI bots, as opposed to tricking somebody into clicking on a link and looking at an ad impression for something they won't buy.

3 hours agoaldousd666

Tech is just a series of arms races

2 hours agoApocryphon

Can't we simply parse out and remove any style="display: none;", aria-hidden="true", and tabindex="1" attributes before the text is processed and get around this trick? What am I missing?
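For what it's worth, here's a minimal sketch of that idea in Python with BeautifulSoup. It only covers the inline attributes mentioned above, so hiding done via CSS classes or external stylesheets would slip straight through:

    # Sketch: drop elements that are hidden from human readers before extracting text.
    # Only handles inline style and aria-hidden; tabindex alone doesn't hide anything,
    # and class/stylesheet-based hiding is not detected here.
    from bs4 import BeautifulSoup

    def visible_text(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        hidden = []
        for tag in soup.find_all(True):
            style = (tag.get("style") or "").replace(" ", "").lower()
            if ("display:none" in style
                    or "visibility:hidden" in style
                    or tag.get("aria-hidden") == "true"):
                hidden.append(tag)
        for tag in hidden:
            tag.extract()  # remove the hidden subtree from the document
        return soup.get_text(separator=" ", strip=True)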

an hour agoArt9681

If you do that and don't follow robots.txt, you are blocked. If you do that and follow robots.txt, fine. That's all we wanted you to do anyway. Just follow the instructions that well-behaved scrapers are meant to follow.

17 minutes agohoistbypetard

Way back in the day I had a software product, with a basic system to prevent unauthorised sharing, since there was a small charge for it.

Every time I released an update, a new crack would appear. For the next six months I worked on improving the anti-copying code, until I stumbled across an article by a coder in the same boat as me.

He realised he was now playing a game with some other coders: he would make the copy protection better, but the cracker would then have fun cracking it. It was a game of whack-a-mole.

I removed the copy protection, as he did, and got back to my primary role of serving good software to my customers.

I feel like trying to prevent AI bots, or any bots, from crawling a public web service, is a similar game of whack-a-mole, but one where you may also end up damaging your service.

an hour agoCrzyLngPwd

> the cracker would then have fun cracking it.

I wonder if you could've won by making the cracking boring. No new techniques, bare minimum changes to require compiling a new crack, and just enough to make it difficult to automate. I.e. turn the cracking into a job.

But in reality, there are other community-driven motivations to put out cracks.

29 minutes agoCpoll

>No new techniques, bare minimum changes to require compiling a new crack, and just enough to make it difficult to automate.

From a practical perspective you also have to have a steady stream of features for the newer versions to be worth cracking. Otherwise why use v1.09 when v1.01 works fine? Moreover, spending less effort on improving the DRM is still playing the cat-and-mouse game, albeit with less time investment. If you're making minimal changes, the cracker also has to spend minimal time updating the crack.

13 minutes agogruez

Love it. Thanks for doing this work. Not sure why people are criticizing this. Also, an insane amount of work has been done to improve scraping - which in my mind is just absolutely bonkers, and I didn't see people complaining about that.

15 minutes agodwa3592

Certainly don't allow anyone to access your content. Perhaps shut the site down, just to be safe.

an hour agoeffnorwood

Is there any evidence or hints that these actually work?

It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.

4 hours agomadeofpalk

It might work against people who just use their Mac mini with OpenClaw to summarize news every morning, but it certainly won't work against Google.

More centralized web ftw.

3 hours agoraincole

It also probably won't work if the person actually wants your content and is checking whether the thing they scraped actually makes sense or is just noise. Like, none of these are new things. Site owners have been sending junk/fake data to web scrapers since web scraping was invented.

2 hours agohexage1814

In my experience, Google (among others) plays nice. Just put "disallow: *" in your robots.txt, and they won't bother you again.
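For reference, the standard way to spell a blanket opt-out in robots.txt is "Disallow: /" rather than "disallow: *"; a minimal example:

    # Ask every crawler to stay out of everything.
    User-agent: *
    Disallow: /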

My current problem is OpenAI, which scrapes massively, ignoring every limit, 426, 444, and whatever else you throw at them, and botnets from East Asia using one IP per scrape, but thousands of IPs.

2 hours agootherme123

> It might work against people just use their Mini Mac with OpenClaw to summarize news every morning,

Good enough for me.

> More centralized web ftw.

This ain't got anything to do with "centralized web," this kind of epistemological vandalism can't be shunned enough.

2 hours agoLaGrange

Even it did work, I just can't bring myself to care enough. It doesn't feel like anything I could do on my site would make any material difference. I'm tired.

4 hours agosd9

I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and it's very difficult to filter it by hand.

4 hours ago20k

The asymmetry is what makes this very interesting. The cost to inject poison is basically zero for the site owner, but the cost to detect and filter it at scale is significant for the scraper. That math gets a lot worse for them as more sites adopt it. It doesn't solve the problem, but it changes the economics.

2 hours agolucasfin000

About two years ago, I made up a reference to a nonexistent Python library and put code "using" it in just 5 GitHub repos. Several months later the free ChatGPT picked it up. So IMO it works.

an hour agoxyzal

Via websearch? Or training?

an hour agologicprog

There are hundreds of bots using residential proxies. That is not free. Make them pay.

3 hours agospiderfarmer

The search engine crawlers are sophisticated enough, but Meta's are not. Neither is Anthropic's Claude crawler. Source: personal experience trying garbage generators on Yandex, Blexbot, Meta's, and Anthropic's crawlers.

I'm completely uncertain that the unsophisticated garbage I generated makes any difference, much less "poisons" the LLMs. A fellow can dream, can't he?

2 hours agobediger4000

It won't work, especially on Gemini. Googlebot is very experienced when it comes to crawling. It might work for OpenAI and others, maybe.

3 hours agom00dy

What kind of mitigations? How would you detect the poison fountain?

4 hours agonubg

style="display: none;" aria-hidden="true" tabindex="1"

Many scrapers already know not to follow these, as it's how sites used to "cheat" PageRank by serving keyword soups.

4 hours agoavereveard

Google will give your website a penalty for doing this.

3 hours agom00dy

Because the internet is noisy and not up to date, all recent LLMs are trained using Reinforcement Learning with Verifiable Rewards. If a model has learned the wrong signature for a function, for example, it would become apparent when executing the code.

4 hours agoGaggiX

It does work, on two levels:

1. Simple, cheap, easy-to-detect bots will scrape the poison, and feed links to expensive-to-run browser-based bots that you can't detect in any other way.

2. Once you see a browser visit a bullshit link, you insta-ban it, as you can now see that it is a bot because it has been poisoned with the bullshit data.

My personal preference is using iocaine for this purpose though, in order to protect the entire server as opposed to a single site.
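A minimal sketch of point 2, assuming a small Flask app and an in-memory blocklist (the /bots honeypot path and names are illustrative; a real setup would usually push the ban down to the reverse proxy or firewall):

    # Any client that follows a hidden/disallowed honeypot link gets banned on the spot.
    from flask import Flask, abort, request

    app = Flask(__name__)
    banned_ips = set()

    @app.before_request
    def reject_banned():
        if request.remote_addr in banned_ips:
            abort(403)

    @app.route("/bots/<path:anything>")
    def honeypot(anything):
        # Only scrapers that ignored robots.txt and followed a poisoned link land here.
        banned_ips.add(request.remote_addr)
        abort(403)

    @app.route("/")
    def index():
        return "hello, humans"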

4 hours agophoronixrly

I did a related approach:

A toll-charging gateway for LLM scrapers: a modification to robots.txt to add price sheets in the comment field, like a menu.
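Something like this, I'm guessing - the comment format below is made up purely for illustration; the real format lives in the repo linked at the end:

    # Hypothetical price sheet embedded in robots.txt comments (illustrative only,
    # not tollbot's actual format).
    # price: /articles/*  0.002 USDC per request
    # price: /images/*    0.010 USDC per request
    # payment-endpoint: https://example.com/.well-known/toll
    # Scrapers that haven't paid are asked to stay out:
    User-agent: *
    Disallow: /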

This was for a hackathon by forking certbot. Cloudflare has an enterprise version of this but this one would be self hosted

I think it has legs, but I need to get pushed and goaded, otherwise I tend to lose interest...

It was for the USDC company btw so that's why there's a crypto angle - this might be a valid use case!

I'm open to crypto not all being hustles and scams

Tell me what you think?

https://github.com/kristopolous/tollbot

2 hours agokristopolous

A related technique used to work so well for search engine spiders. I had some software I wrote called 'search engine cloaker'... this was back in the early 2000s... one of the first, if not the first, to do the shadowy "cloaking" stuff! We'd spin dummy content from lists of keywords, and it was just piles and piles of it. We made it a bit smarter using Markov chains to make the sentences somewhat sensible. We'd auto-interlink and get thousands of links. It eventually stopped working... but it took a long while for that to happen. We licensed the software to others. I rationalized it because I felt, hey, we have to write crappy copy for this stupid "SEO" thing anyway, so let's just automate that and give the spiders what they seem to want.
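That Markov-chain keyword-soup trick is simple enough to sketch; a toy word-level version in Python (the seed corpus and chain order are placeholders):

    # Toy word-level Markov chain of the kind described above: train on some seed
    # text, then emit plausible-looking filler sentences for the spiders.
    import random
    from collections import defaultdict

    def train(text, order=2):
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=50):
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            nxt = chain.get(tuple(out[-len(state):]))
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    corpus = "search engines love fresh keyword rich content about everything " * 30  # placeholder seed text
    print(generate(train(corpus)))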

an hour agobluepeter

Could this lead to something like the Streisand effect? I imagine these bots work at a scale where humans in the loop only act when something deviates from the standard, so, if a bot flags something up with your website then you’re now in a list you previously weren’t. Now don’t ask me what they do with those lists, but I guess you will make the cut.

an hour agohmokiguess

The data poisoning angle is interesting. Models trained on scraped web data inherit whatever biases, errors, and manipulation exist in that data. If bad actors can inject corrupted data at scale, it creates a malign incentive structure where model training becomes adversarial. The real solution is probably better data provenance -- models trained on licensed, curated datasets will eventually outcompete those trained on the open web.

2 hours agoeliottre

Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?

Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?

Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?

Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?

If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?

Is this a solution in search of a problem?

2 hours agoninjagoo

You do raise an interesting point. The poison fountains would probably be more effective if their outputs more closely resembled whatever the most popular problem spaces are at any given point.

an hour agoxantronix

My asthmar

I'm assuming this is a reference to Lord of the Flies.

3 hours agonosmokewhereiam

Miasma is bad or poisonous air. It's a Greek word.

2 hours agocwnyth

This is essentially machine-generated spam.

The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications. How long before people start sharing ai-spam lists, both pro-ai and anti-ai?

Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.

Once a website appears on one of these lists, legitimately or otherwise, what'll be the reputational damage hurting appearance in search indexes? There have already been examples of Google delisting or dropping websites in search results.

Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.

This project's selective protection of the major players reinforces that effect; from the README:

" Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

User-agent: Googlebot
User-agent: Bingbot
User-agent: DuckDuckBot
User-agent: Slurp
User-agent: SomeOtherNiceBot
Disallow: /bots
Allow: / "

2 hours agoninjagoo

Isn't posting projects like this the most visible way to report a bug and get it fixed as soon as possible?

4 hours agometa-level

"disobeys robots.txt" is more of a feature

4 hours agosuprfsat

Why not just try to block them at the door instead of feeding them poisoned food?

24 minutes agojijji

Of course Googlebot, Bingbot, Applebot, Amazonbot, YandexBot, etc. from the major corps are HTTP user-agent spiders whose downloaded public content will be used by corporations for AI training too. Might as well just drop the "AI" and say "corporate scrapers".

2 hours agosuperkuh

Why not simply blacklist or rate limit those bot IPs?

4 hours agosnehesht

If you have real traffic and bot traffic, you still need to identify which is which. On top of that, bots very likely don’t reuse the same IPs over and over again. I assume if we knew all the IPs used only by bots ahead of time, then yeah it would be simple to blacklist them. But although it’s simple in theory, the practice of identifying what to blacklist in the first place is the part that isn’t as simple

3 hours agoxprnio

You wouldn’t permanently block them, it’s more like a rolling window.

You can use security challenges as a mechanism to identify false positives.

Sure, bots can get tons of proxies for cheap, but that doesn't mean you can't block them, similar to how SSH honeypots or the Spamhaus SBL work, albeit temporarily.

3 hours agosnehesht

Because punishment for breaking the robots.txt rules is a social good.

4 hours agophyzome

The AI companies are using virtually unlimited "clean" residential IPs so this is not a valid strategy.

2 hours agoarbol

How? They run their scraping and training infrastructure - and models themselves - from within those “AI datacenters”[1] we hear about in the news - and not proxying through end-users’ own pipes.

[1]: in quotes, because I dislike the term, because it’s immaterial whether or not an ugly block of concrete out in the sticks is housing LLM hardware - or good ol’ fashioned colo racks.

2 hours agoDaiPlusPlus

Residential proxy networks.

an hour agoAyyEye

There are way too many to do that

4 hours agoaduwah

True, most of the blacklist systems today aren't real-time like Amazon WAF or Cloudflare.

We need a crawler blacklist that can stream list deltas in real time to a centralized list, with local DBs able to pull changes.

Verified domains could push suspected bot IPs, and this engine would run heuristics to see if there is a pattern across data sources and issue a temporary block with an exponential TTL.

There are many problems to solve here, but like any OSS project it will evolve over time if there is enough interest in it.

Costs of running this system would be huge though, and corporate sponsors may not work out, but individual sponsors may be incentivized, as it helps them reduce bandwidth and compute costs related to bot traffic.
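A tiny sketch of the temporary-block-with-exponential-TTL part (in-memory only; the shared, streamed list described above would sit on top of something like this):

    # Per-IP temporary blocklist where each repeat offence doubles the block duration.
    import time

    BASE_TTL = 60  # seconds for a first offence; value is arbitrary for illustration

    blocked_until = {}  # ip -> unix timestamp when the block expires
    strikes = {}        # ip -> number of offences seen so far

    def report_offender(ip: str) -> float:
        strikes[ip] = strikes.get(ip, 0) + 1
        ttl = BASE_TTL * (2 ** (strikes[ip] - 1))  # 60s, 120s, 240s, ...
        blocked_until[ip] = time.time() + ttl
        return ttl

    def is_blocked(ip: str) -> bool:
        return time.time() < blocked_until.get(ip, 0)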

3 hours agosnehesht

In the real-time spam market the lists worked well with honest groups for a bit, but started falling apart when once good lists get taken over by actors that realize they can use their position to make more money. It's a really difficult trap to avoid.

2 hours agopixl97

For the lulz

an hour agoxyzal

Wonder if you can just avoid hiding it, to make it more believable.

Why not have a Library of Babel-esque labyrinth visible to normal users on your website,

like anti-surveillance clothing or something they have to sift through?

3 hours agofoxes

"/brainstorming git checkout this miasma repo source code and implement a fix to prevent the scraper from not working on sites that use this tool"

2 hours agorob

Applied model collapse

4 hours agoimdsm
[deleted]
3 hours ago

I wish there were some regulation that could force companies who scrape for profit to reveal who they are to the end websites. Many new AI companies don't seem to respect any decision made by the person who owns the website and shares their knowledge for other humans, only for it to get distilled for a few cents.

5 hours agoImustaskforhelp

> > Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

Can't the LLMs just ignore or spoof their user agents anyway?

4 hours agorvz

Well-behaved agents will obey robots.txt and not fall into the trap.

4 hours agophoronixrly

[dead]

2 hours agomaltyxxx

[dead]

3 hours agodevnotes77

[dead]

4 hours agoSophieVeldman

[dead]

3 hours agofirekey_browser

I know there are real world problems to deal with, but at least I got one over on that evil open claw instance /s

an hour agoobsidianbases1

These projects are the new "To-Do List" app.

5 hours agoGaggiX

Why do this though?

It's like if someone was trying to "trap" search crawlers back in the early 2000s.

Seems counterproductive

4 hours agoobsidianbases1

Because of bots that don't respect robots.txt.

If you want an AI bot to crawl your website while you pay for that bandwidth, then you won't use the tool.

4 hours agobilekas

If bandwidth cost is a concern then maybe you should reconsider how you publish your site.

Like, what if you actually post something that gains traction, is it going to bankrupt you or something?

an hour agoobsidianbases1

Search crawlers used to bring people TO your site; LLM bots are used to keep people OUT of your site, because knowledge is indexed and distributed by corporations.

2 hours agointegralid

So if your site is dependent on ads, and the only way for people to see those ads is by coming to your site, then yes, you lose.

If your site exists to share information, then the information gets disseminated; whether via LLM or some browser doesn't make a difference to me.

an hour agoobsidianbases1

Those are not the only two options.

Why are you presenting the latter option as if it were mainstream? It's such a small percentage of use cases that it probably isn't even a rounding error.

People who want to disseminate information also want the credit.

I'd still like to know why you are presenting this false dichotomy. What reason do you have for presenting a use case that has fractions of a percentage as if it were a standard use case? What is your motivation behind this?

38 minutes agolelanthran

My only motivation is that it pains me to see smart capable people working on insignificant problems.

Maybe I don't understand the problem as well as I should, and I'm open to hearing what it is you think that I'm missing.

But from my perspective, this is a solution for a non-problem, which in my eyes is a problem itself.

23 minutes agoobsidianbases1

You misunderstand: I am asking what is your motivation for presenting a 0.0001% use case as a 50% use case.

The use case you present is so small it can be ignored as an option, yet you present it as the only other option.

20 minutes agolelanthran

You don't get attribution for your work if it merely feeds into its training data.

43 minutes agoaarjaneiro

That assumes the AI bots are scraping for training data and not simple retrieval/RAG (which would likely provide attribution).

39 minutes agoobsidianbases1

Web crawlers didn't routinely take down public resources or use the scraped info to generate facsimiles that people are still having ethical debates over. Their presence didn't even register, and it was indexing that helped them. It isn't remotely the same thing.

https://www.libraryjournal.com/story/ai-bots-swarm-library-c...

3 hours agoForgeties79

AI bots must've taken down that link you shared, it won't load :/

And search crawlers/results have been producing snippets that prevent users from clicking to the source for well over a decade.

Edit: it loaded. I don't see how the problem isn't simply solved by an off-the-shelf solution like Cloudflare. In the real world, you wouldn't open up a space/location if you couldn't handle the throughput. Why should online spaces/locations get special treatment?

an hour agoobsidianbases1

-1 for the name

5 hours agosplitbrainhack

https://en.wikipedia.org/wiki/Miasma_theory

Seems a clever and fitting name to me. A poison pit would probably smell bad. And at the same time, the theory that this tool would actually cause “illness” (bad training data) in AI is not proven.

5 hours agoQuantumNomad_

If you want to ruin someone's web experience based on what kind of thing they are, rather than the content of their character, consider that you might be the baddies.

2 hours agojstanley

If you're constantly being harassed by someone and, despite your best efforts, nothing is being done to help you - quite the opposite in fact, tons of people cheer your assailant on in the name of profit and progress - it's only natural that you lash out.

It's not all that productive, it's an act of desperation. If you can't stop the enemy, at least you can make their action more costly.

One positive outcome I could see is AI companies becoming more critical of their training data.

2 hours agomrweasel

What "content of character" do you ascribe to a web scraper?

an hour agolifeformed

You don't, that's why it's unethical to block them.

If you keep getting harassed by people wearing black hoodies, would it be ethical to start taking countermeasures against all people who wear black hoodies?

an hour agojstanley

If they are coming to my door to harass me, then yes, it makes sense to take countermeasures against all black-hoodie wearers when I see them at the door.