
Ghostty's AI Policy

The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

3 hours agoVersion467

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying to you with a screenshot from GPT?

ever asked anything in a public group, only to have a complete moron reply to you with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self importance and validation.

Which is why I applaud every code of conduct that has public ridicule as the punishment for wasting everybody's time.

2 hours agomonegator

Problem is, people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.

2 hours agoSharlin

Last year I had to deal with a contractor who sincerely believed that a very popular library had some issue because it was erroring when parsing ChatGPT-generated JSON... I'm still shocked; this is seriously scary.

an hour agopera

"SELECT isn't broken" isn't a new advice, and it exists for a reason.

18 minutes agoTeMPOraL

My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.

an hour agoSuzuran

Your boss sounds hilariously naive about how the world works.

an hour agobuggy6257

This is a good heuristic, and it's how most things in life operate. It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss and jail time serving as strong incentive for food manufacturers and vendors to not fuck this up. The same is the case with drugs, utilities, car safety and other important aspects of life.

So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.

The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.

So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.

--

[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.

[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.

[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!

28 minutes agoTeMPOraL

In a lot of ways he is, despite directly witnessing a lot of how the sausage is made. Honestly, I think at least half of it is wanting to convince himself that the world still functions in ways that make sense to him, rather than admit that it's mostly grifters grifting all the way down.

40 minutes agoSuzuran

The high-trust Boomer brain cannot comprehend the actual low-trust society of grifters in which we live.

32 minutes agonathanaldensr

If only every LLM shop out there would put disclaimers on their page that they hope will absolve them of responsibility for correctness, so that your boss could make up his own mind... Oh wait.

19 minutes agobreakingcups

People's trust in LLMs imo stems from a lack of awareness of AI hallucinating. Hallucination benchmarks are often hidden or talked about hastily in marketing videos.

an hour agoanon_anon12

I think it's better to say that LLMs only hallucinate. All the text they produce is entirely unverified. Humans are the ones reading the text and constructing meaning.

22 minutes agowpietri

It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

When you query Postgres and don't get what you want, do you call that a "hallucination"?

an hour agocess11

To quote Luke Skywalker: Amazing. Every word of what you just said is wrong.

an hour agoIgorPartola

Which is why I keep saying that anthropomorphizing LLMs gives you good high-order intuitions about them, and should not be discouraged.

Consider: GP would've been much more correct if they said "It's just a person on a chip." Still wrong, but qualitatively much less wrong than they are now.

44 minutes agoTeMPOraL

> It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

It's like a JPEG. Except instead of lossy compression on images that gives you a pixel soup only vaguely resembling the original when you're resource-bound (and even modern SOTA models are, when it comes to LLMs), you get stuff that looks more or less correct but just isn't.

an hour agoKronisLV

This comes from not having expertise in a specific area; if you ask it about an area you know well, you'll see.

27 minutes agoderrida

I get what you're saying but I think it's wrong (I also think it's wrong when people say "well, people used to complain about calculators...").

An LLM chatbot is not like querying a database. Postgres doesn't have a human-like interface. Querying SQL is highly technical; when you get nonsensical results out of it (which is more often than not), you immediately suspect the JOIN you wrote or whatever. There's no "confident vibe" in results spat out by the DB engine.

Interacting with a chat bot is highly non-technical. The chat bot seems to many people like a highly competent person-like robot that knows everything, and it knows it with a high degree of confidence too.

So it makes sense to talk about "hallucinations", even though it's a flawed analogy.

I think the mistake people make when interacting with LLMs is similar to what they do when they read/watch the news: "well, they said so on the news, so it must be true."

an hour agothe_af

I don't remember exactly who said it, but at one point I read a good take - people trust these chatbots because there's big companies and billions behind them, surely big companies test and verify their stuff thoroughly?

But (as someone else described) GPTs and other current-day LLMs are probabilistic, and 99% of what they produce seems feasible enough.
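To make "probabilistic" concrete, here's a toy sketch (made-up tokens and scores, nothing like a real model): the next token is sampled from a softmax over plausibility scores, and nothing in that loop ever checks truth.

```python
import math
import random

# Toy illustration of probabilistic generation: pick the next token by
# sampling from a softmax over scores. Plausibility is all that is modeled;
# factual correctness never enters the picture.
def sample_next(scores: dict[str, float], temperature: float = 0.8) -> str:
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Both continuations look feasible; at most one is true, and the sampler
# has no way to know which.
print(sample_next({"in 1991": 2.0, "in 1994": 1.7}))
```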

an hour agoCthulhu_

I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

Unless I have been reading very different science fiction I think it’s definitely not that.

I think it’s more the confidence and seeming plausibility of LLM answers

an hour agopousada

People are literally taking Black Mirror storylines and trying to manifest them. I think they did a `s/dys/u/` and don't know how to undo it...

an hour agooneeyedpigeon

Sure, but this failure mode is not that. "AI will malfunction and doom us all" is pretty far from "AI will malfunction by sometimes confabulating stuff".

an hour agoSharlin

The stories I read had computers being utterly horribly right, which resulted in attempts (sometimes successful) at annihilating humanity.

14 minutes agoTeMPOraL

This sounds a bit like the "Asking vs. Guessing culture" discussion on the front page yesterday. With the "Guesser" being GP who's front-loading extra investigation, debugging and maintenance work so the project maintainers don't have to do it, and with the "Asker" being the client from your example, pasting the submission to ChatGPT and forwarding its response.

an hour agoTeMPOraL

I've also had the opposite.

I raise an issue or PR after carefully reviewing someone else's open source code.

They ask Claude to answer me; neither they nor Claude understands the issue.

Well, at least it's their repo, they can do whatever.

6 minutes agoncruces

Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.

2 hours agomonooso

I consider them to be the same attitude. Machine made it / Machine said it. It must be right, you must be wrong.

They are sure they know better because they get a yes man doing their job for them.

an hour agomonegator

Hasn't happened to me yet.

I'm not looking forward to it...

an hour agopositive-spite

Our CEO chiming in on a technical discussion between engineers: by the way, this is what Claude says: *some completely made-up bullshit*

24 minutes agomeindnoch

Random people don’t do this. Your boss however…

2 hours agoAeolun

Keep in mind that many people also contribute to big open source projects just because they believe it will look good on their CV/GitHub and help them get a job. They don't care about helping anyone; they just want to write "contributed to Ghostty" in their application.

2 hours agoflexagoon

I think this falls under the "have no shame" comment that they made

2 hours agonchmy

From my experience, it's not about helping anyone or CV building. I just ran into a bug or a missing feature that is blocking me.

2 hours ago0x696C6961

TBH I'm not sure if this is a "growing up in a good area" vibe. But over the last decade or so I have had to slowly learn that the people around me have no sense of shame. This wasn't their fault, but mine. Society has changed, and if you don't adapt you'll end up confused and abused.

I am not saying one has to lose their shame, but at best, understand it.

2 hours agoionwake

Like with all things in life shame is best in moderation.

Too little or too much shame can lead to issues.

Problem is no one tells you what too little or too much actually is and there are many different situations where you need to figure it out on your own.

So I think sometimes people just get it wrong but ultimately everyone tries their best. Truly malicious shameless people are extremely rare in my experience.

For the topic at hand I think a lot of these “shameless” contributions come from kids

an hour agopousada

To add, I don't know if this is a cultural, personal, or other thing but nowadays even if people get shamed for whatever they do, they see it more as a challenge, and it makes them rebel even harder against what is perceived to be old fashioned or whatever.

Basically teenagers. But it feels like the rebellious teenager phase lasts longer nowadays. Zero evidence besides vibes and anecdotes, but still.

Or maybe it's me that's getting old?

an hour agoCthulhu_

Shame is a good thing; it shows one has a conscience and positive self-regard.

Just like pain is a good thing: it signals you to remove your hand from the stove.

18 minutes agoderrida

Yeah, but it's not helpful if it's the new air fryer that's burning the hand, not the stove, unless you adapt.

16 minutes agoionwake

Kind of by definition we will not see the people who do not submit frivolous PRs that waste the time of other people. So keep in mind that there's likely a huge amount of survivor bias involved.

Just like with email spam I would expect that a big part of the issue is that it only takes a minority of shameless people to create a ton of contribution spam. Unlike email spam these people actually want their contributions to be tied to their personal reputation. Which in theory means that it should be easier to identify and isolate them.

an hour agovanderZwan

"Other people" might also just be junior devs - I have seen time and again how (over-)confident newbies can be in their code. (I remember one case where a student suspected a bug in the JVM when some Java code of his caused an error.)

It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.

2 hours agokleiba

Our postgres replication suddenly stopped working and it took three of us hours - maybe days - of looking through the postgres source before we actually accepted it wasn't us or our hosting provider being stupid and submitted a ticket.

I can't imagine the level of laziness or entitlement required for a student (or any developer) to blame their tools so quickly without conducting a thorough investigation.

31 minutes agoJDye

I have found bugs in the native JVM; usually it takes some effort, though. Printing the assembly is the easiest approach. (I consider bugs in java.lang/util/io/etc. code not an interesting case.)

Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...

In the early days (Bug Parade times), bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.

2 hours agoxxs

I worked for a major open-source company for half a decade. Everyone thinks their contribution is a gift and you should be grateful. To quote Bo Burnham, "you think your dick is a gift, I promise it's not".

2 hours agoEtheryte

Sounds like everyone's got some main character syndrome, the cure for that is to be a meaningless cog in the enterprise wheels for a while. But then I suspect a lot of open source contributions are done exactly by those people - they don't really matter in their day job, but in open source they can Make A Difference.

Of course, the vast majority of OSS work is the same cog-in-a-machine work, and with low-effort AI-assisted contributions, the non-hero-coding work becomes more prevalent than ever.

an hour agoCthulhu_

The Internet (and developer communities) used to be a high trust society - mostly academics and developers, everyone with shared experiences of learning when it was harder to get resources, etc.

The grift culture has changed that completely, now students face a lot of pressure to spam out PRs just to show they have contributed something.

5 minutes agoGardenLetter27

It's good to regularly see such policies and the discussions around them, to remind me how staggeringly shameless some people can be and how many such people are out there. Interacting mostly with my peers, friends, and acquaintances, I tend to forget that they don't represent the average population, and after some time I start to assume all people are reasonable and act in good faith.

2 hours agoRonsenshi

Yep, this. You can just look at the state of FOSS licensing across GitHub to see it in action: licenses are routinely stripped or changed to remove the original developers, even on trivial items, even on forked projects where the action is easily visible, even on licenses that allow for literally everything else. State "You can do everything except this" and loads of people will still actively do it, because they have no shame (or because they enjoy breaking someone else's rules? Because it gives them a power trip? Who knows).

an hour agotoyg

I think of it like people just have crappy prompt adherence. It makes more sense that way.

an hour agolatentsea

With AI, at least you can wipe the context and reapply the system prompt.
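A minimal sketch of that mechanism (the message structure is illustrative, not any particular vendor's API):

```python
# Minimal sketch of "wipe the context, reapply the system prompt".
SYSTEM_PROMPT = {"role": "system", "content": "You are a careful code reviewer."}

conversation = [dict(SYSTEM_PROMPT)]

def ask(user_text: str) -> None:
    # A real client would now send `conversation` to the model and
    # append the assistant's reply as well.
    conversation.append({"role": "user", "content": user_text})

def wipe_context() -> None:
    # Drop all accumulated history; only the system prompt survives.
    del conversation[1:]
```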

an hour agoRonsenshi

To have that shame, you need to know better. If you don't know any better, having access to a model that can make code, plus a cursory understanding of the language syntax, probably feels like knowing how to write good code. Dunning-Kruger strikes again.

I’ll bet there are probably also people trying to farm accounts with plausible histories for things like anonymous supply chain attacks.

2 hours agoDrewADesign

When it comes to enabling opportunities, I don't think it's a matter of shame for them anymore. A lot of people (especially in regions where living is tough and competition is fierce) will do anything, by hook or by crook, to get ahead of the competition. And if GitHub contributions are a metric for getting hired or getting noticed, then you are going to see them spammed.

2 hours agoarbitrandomuser

Shamelessness is very definitely in vogue at the moment. It will pass, let's hope for more than ruins.

2 hours ago6LLvveMx2koXfwn

Lots of people cosplay as developers, and "contributing" to open source is a box they must check. It's like they go through the motions without understanding they're doing the opposite of what they should be doing. Same with having a tech blog: they don't understand that the end goal is not "having a blog" but "producing and sharing quality content".

an hour agolm28469

> Other people apparently don't have this feeling at all.

I think this is interesting too. I've noticed the difference in dating/hook-up contexts. The people you're talking about also end up getting laid more, but that group also has a very large intersection with sex pests and other shitty people. The thing they have in common, though, is that they just don't care what other people think about them. That leads some of them to be successful if they are otherwise good people... or to become borderline or actual criminals if not. I find it fascinating, actually: how does this difference come about, and can it actually be changed, or is it something we get early in life or from the genetic lottery?

an hour agoguerrilla

The major companies that made available the very tools used to create this spam code applied the exact same ethics.

an hour agopil0u

You just have to go take a look at what people write in social media, using their real name and photo, to conclude that no, some people have no shame at all.

2 hours agoSharlin

I would imagine there are a lot of "small nice-to-haves" that people submit because they are frustrated by the mere complexity of submitting changes. Minor things that involve a lot of complexity merely in terms of changing some config or some default, etc. Something where there is a significant probability of it being wrong, but also a high probability of someone who knows the project being able to quickly see whether it's OK or not.

i.e. imagine a change that is literally a small diff, that is easy to describe as a mere user and not a developer, and that requires quite a lot of deep understanding merely to submit as a PR (build the project! run the tests! write the template for the PR!).

Really, a lot of this stuff ends up being a kind of failure mode of various projects that we all fall into at some point, where "config" is in the code and what could be a simple change and test requires a lot of friction.

Obviously not all submissions are going to be like this but I think I've tried a few little ones like that where I would normally just leave whatever annoyance I have alone but think "hey maybe it's 10 min faff with AI and a PR".

The structure of project incentives kind of creates this. Increasing the cost of contribution is a valid strategy of course, but from a holistic project point of view it is not always a good one, especially assuming you are not dealing with adversarial contributors but only slightly incompetent ones.

2 hours agonobodywillobsrv

[flagged]

2 hours agoblell

If you are from a poor society you can't afford to have shame. You either succeed or fail, again and again, and keep trying.

2 hours agopostepowanieadm

In other news, wet roads cause rain.

2 hours agoMrBuddyCasino

"The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have."

And this is one half of why I think

"Bad AI drivers will be [..] ridiculed in public."

isn't a good clause. The other half is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.

2 hours agoweinzierl

> The other half is that ridiculing others, no matter what, is just not decent behavior.

Shaming people for violating valid social norms is absolutely decent behaviour. It is the primary mechanism we have to establish social norms. When people do bad things that are harmful to the rest of society, shaming them is society's first-level corrective response to get them to stop doing bad things. If people continue to violate norms, then society's higher levels of corrective behaviour can involve things like establishing laws and fining or imprisoning people, but you don't want to start with that level of response. Although putting these LLM spammers in jail does sound awfully enticing to me in a petty way, it's probably not the most constructive way to handle the problem.

The fact that shamelessness is taking over in some cultures is another problem altogether, and I don't know how you deal with that. Certain cultures have completely abdicated the ability to influence people's behaviour socially without resorting to heavy-handed intervention, and on the internet, this becomes everyone in the world's problem. I guess the answer is probably cultivation of spaces with strict moderation to bar shameless people from participating. The problem could be mitigated to some degree if a Github-like entity outright banned these people from their platform so they could not continue to harass open-source maintainers, but there is no platform like that. It unfortunately takes a lot of unrewarding work to maintain a curated social environment on the internet.

an hour agoanonymous908213

No society can function without enforced rules. Most people do the pro-social thing most of the time. But for the rest, society must create negative experiences that help train people to do the right thing.

What negative experience do you think should instead be created for people breaking these rules?

13 minutes agowpietri

Getting to live by the rules of decency is a privilege now denied us. I can accept that but I don't have to like it or like the people who would abuse my trust for their personal gain.

Tit for tat

2 hours agoconartist6

On a tangent: the origin of the problems with low-quality drive-by requests is GitHub's social nature. That might have been great when GitHub started, but nowadays many use it as portfolio padding and/or social proof.

"This person contributed to a lot of projects" heuristic for "they're a good and passionate developer" means people will increasingly game this using low-quality submissions. This has been happening to the fire.

Of course, AI just added kerosene to the fire, but re-read the policy and omit AI and it still makes sense!

A long-term fix for this is to remove the incentive. Paradoxically, AI might help here, because this can so trivially be gamed that it's obvious it's no longer any kind of signal.

2 hours agosenko

Your point about rereading it without AI makes so much sense.

The economics of it have changed, human nature hasn’t. Before 2023 (?) people also submitted garbage PRs just to be able to add “contributed to X” to their CV. It’s just become a lot cheaper.

an hour agostephantul

I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI-generated code does not substitute for human thinking, testing, and clean up/rewrite.

On that last point: whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact even when it is correct. Adding indirection where it does not make sense is a big mistake I've noticed LLMs make.

3 hours agoarjunbajaj

I agree with you on the policy being balanced.

However:

> AI-generated code does not substitute for human thinking, testing, and clean up/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

2 hours agoimiric

> Isn't that the end goal of these tools and companies producing them?

It seems to be the goal. But they seem very far away from achieving that goal.

One thing you should probably account for is that most of the proponents of these technologies are trying to sell you something. That doesn't mean there is no value in these tools, but the wild claims about their capabilities are just that.

5 minutes agonicoburns

Intern generated code does not substitute for tech lead thinking, testing, and clean up/rewrite.

2 hours agoTerretta

No, the code is generated by a tool that's "smarter than people in many ways". So which parts of "thinking, testing, and clean up/rewrite" can we trust it with?

an hour agoimiric

Very little, until it stops being stupid in many ways. We don't need smart, we need tools to not be stupid. An unreliable tool is more dangerous and more useless than having no tool.

3 minutes agophanimahesh

The marketing is irrelevant. The AIs are not aware of what they are doing, or motivated in the ways humans are.

17 minutes agocmsj

> Bad AI drivers will be banned and ridiculed in public. You've been warned. We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you. I'm sorry that bad AI drivers have ruined this for you.

Finally, an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people really have no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.

2 hours agoLucasoato

I don't think ridicule is an effective threat for people with no shame to begin with.

2 hours agoweinzierl

Well, this is explicitly public ridicule. The penalty isn't just feeling shamed. It's reputational harm, immortalized via Google.

One of the theorized reasons for junk AI submissions is reputation boosting. So maybe this will help.

And I think it will help with people who just bought into the AI hype and are proceeding without much thought. Cluelessness can look a lot like shamelessness at first.

7 minutes agowpietri

I think it makes sense, both for this, and for curl.

Presumably people want this for some kind of prestige, so they can put it on their CV ("contributed to Ghostty" / "submitted a security issue to curl").

If we change that equation to have them think "wait, if I do this, then when employers Google me they'll see a blog post saying I'm incompetent", the calculation changes from neutral/positive (if their slop gets accepted) to negative.

Seems like it's addressing the incentives to me.

5 minutes agomijoharas

I really like the phrase "bad AI drivers"...AI is a tool, and the stupid drive-by pull requests just mean you're being inconsiderate and unhelpful in your usage of the tool, similar to how "bad drivers" are a nightmare to encounter on a highway...so stop it or you'll end up on the dashcam subreddit of programming.

an hour agoepaga

The experience of using a coding agent is that you're more of a "backseat driver" though. The AI acts as your driver and you tell it where to go, sometimes making corrections if it's going the wrong way.

37 minutes agoskybrian

The experience is what you make of it. Personally I'm quite enjoying using AI as a way to generate code I can disagree with and refactor into what I want.

16 minutes agocmsj

"Pull requests created by AI must have been fully verified with human use." should always be a bare minimum requirement.

3 hours agoalansaber

> "Pull requests [] must have been fully verified with human use."

I would expect this is entirely uncontroversial and the AI qualifier redundant.

an hour agoNtrails

See X thread for rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...

“Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.”

I have a side project, git-prompt-story, to attach Claude Code sessions to commits via GitHub git notes. Though it is not that simple to do automatically (e.g. I need to redact credentials).
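The core mechanism is roughly this (a sketch, not the actual git-prompt-story code; the redaction pattern is illustrative):

```python
import re
import subprocess

# Sketch: attach a (redacted) session transcript to a commit as a git note.
# Notes live under their own ref, so the commit itself is untouched.
SECRET = re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+")

def attach_transcript(commit: str, transcript: str) -> None:
    redacted = SECRET.sub(r"\1=<redacted>", transcript)
    subprocess.run(
        ["git", "notes", "--ref=prompts", "add", "-F", "-", commit],
        input=redacted, text=True, check=True,
    )
```

Part of what makes this fiddly to automate is that notes aren't pushed by default; you need an explicit refspec like `git push origin refs/notes/prompts`.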

3 hours agojakozaur

Not sure how I feel about transcripts. Ultimately I do my best to make any contributions I make high quality, and that means taking time to polish things. Exposing the tangled mess of my thought process leading up to that either means I have to "polish" that too (whatever that ends up looking like), or it means putting myself in the vulnerable position of showing my tangled process to get to the end result.

an hour agoollien

I've thought about saving my prompts along with project development, and I've even done it by hand a few times, but eventually I realized I don't really get much value from doing so. Are there good reasons to do it?

3 hours agoradarsat1

For me it's increasingly the work. I spend more time in Claude Code going back and forth with the agent than I do in my text editor hacking on the code by hand. Those transcripts ARE the work I've been doing. I want to save them in the same way that I archive my notes and issues and other ephemera around my projects.

My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...

3 hours agosimonw

If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.

At a minimum it will help you to be skeptical of specific parts of the diff, so you can look at those more closely in your review. But it can also inform test scenarios, etc.

2 hours agoawesan

It's not for you. It's so others can see how you arrived at the code that was generated. They can learn better prompting for themselves from it, and also see how you think. They can see which cases got considered, or not. All sorts of good stuff that would be helpful for reviewing giant PRs.

3 hours agofragmede

Sounds depressing. First you deal with massive PRs and now also these agent prompts. Soon enough there won't be any coding at all, it seems. Just doomscrolling through massive prompt files and diffs in hopes of understanding what is going on.

2 hours agoRonsenshi

> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.

I find this distinction between media and text/code so interesting. To me it sounds like they think "text and code" are free from the controversy surrounding AI-generated media.

But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their LLMs it's naive to think that they didn't do the same with text and code.

2 hours agorikschennink

> To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

It really isn't; don't you recall the "protests" against Microsoft starting to use repositories hosted on GitHub for training their own coding models? Lots of articles and sentiments everywhere at the time.

Seems to have died down though, probably because most developers seemingly use LLMs in some capacity at this point. Some just use them as a search engine replacement, others to compose snippets they copy-paste, and others wholesale don't type code anymore: they just write instructions, then review the output.

I'm guessing Ghostty feels like if they'd ban generated text/code, they'd block almost all potential contributors. Not sure I agree with that personally, but I'm guessing that's their perspective.

2 hours agoembedding-shape

Right, that's what I'm thinking too (I'll update my statement a bit to make that more clear), but I constantly hear this perspective that it's all good for text and code but when it's media, then it's suddenly problematic. It's equally problematic for text and code.

an hour agorikschennink

Culture has historically been free (for use, adaptation, etc.). It was only in the last century or so that we got this belief that some ideas (texts, images) could possibly "belong" to someone. I see this as a chance to return to form, not as something problematic. If and when all creative and technical output reaches zero financial value again, only a select (very specialized, very dedicated) few will dedicate their lives to the production of culture. This will bring quality up in the long run.

Lawrence Lessig has written about this wonderfully (Code and Other Laws of Cyberspace, Free Culture, etc).

I get the need for private property and goods, but I see no reason why digital images and texts (which can be copied without physical limits), including code, should have special protections anymore.

7 minutes agoboca_honey

I think that a warning of public ridicule may be fine. However, actually doing it is quite lowbrow IMO. I'm sad to see more and more otherwise admirable projects stoop to that (assuming they actually do it).

41 minutes agomilancurcic

A well-crafted policy that, I think, will be adopted by many OSS projects.

You need sharp rules like these to contend with unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost is not an option either.

2 hours agocranium

Banned I understand, but ridiculed? I would say these bad drive-by spammers are analogous to phishing emails. Do you engage with those? Are they worth any energy or effort from you? I think Ghostty should just ghost them :)

25 minutes agohmokiguess

sounds reasonable to me. i've been wondering about encoding detailed AI disclosure in an SBOM.

on a related note: i wish we could agree on rebranding the current LLM-driven never-gonna-AGI generation of "AI" to something else… now i'm thinking of when i read the in-game lore definition for VI (Virtual Intelligence) back when i played mass effect 1 ;)
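something like this, maybe (a CycloneDX-style component with made-up `ai:*` properties; nothing here is standardized, it's purely a sketch):

```python
# Hypothetical AI-disclosure fields on a CycloneDX-style SBOM component.
# The "ai:*" property names are invented for illustration; no standard exists.
component = {
    "type": "file",
    "name": "src/renderer.zig",
    "properties": [
        {"name": "ai:assisted", "value": "true"},
        {"name": "ai:tool", "value": "Claude Code"},
        {"name": "ai:human-reviewed-and-tested", "value": "true"},
    ],
}
```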

2 hours agoevilhackerdude

shaming doesn't work.

22 minutes agogverrilla

A factor that people have not considered is that the copyright status of AI generated text is not settled law and precedent or new law may retroactively change the copyright status of a whole project.

Maybe a bit unlikely, but still an issue no one is really considering.

There has been a single ruling (I think) that AI-generated code is uncopyrightable. There has been at least one affirmative fair use ruling. Both of these are from the lower courts. I'm still of the opinion that generative AI is not fair use, because it's clearly substitutive.

3 hours agonutjob2

I agree with you that generative AI is clearly not fair use.

However, at this point, the economic impact of trying to untangle this mess would be so large that the courts likely won't do anything about it. You and I don't get to infringe on copyright; Microsoft, Facebook, and Google sure do, though.

an hour agotpxl

I think the usage is so widespread now that the law will adapt to custom. It is untenable now to say generated code is uncopyrightable, IMO. Maybe copyright as defined right now is not enough, but then legislation will change it. There is enough pressure on legislators from the business community to do so.

an hour agoSytten

This only matters if you get sued for copyright violation, though.

2 hours agodirewolf20

No? Licenses still apply even if you _don't_ get sued?

2 hours agochristoph-heiss

Do they? Isn’t the application of the license its enforcement?

It’s illegal to commit fraud or murder, but if you do it and suffer no consequences (perhaps you even get pardoned by your president), does it matter that it was illegal? Laws are as strong as their enforcement.

For a less grim and more explicit example, Apple has a policy on the iOS App Store that apps may not use notifications to advertise. Yet it happens all the time, especially from big players like Uber. Apple themselves have done it too. So if you’re a bad actor and disrespectful to your users, does it matter that the rule exists?

an hour agolatexr

At what time in the future does this not become an issue?

2 hours agoconsp

If you're a big enough target, that is inevitable.

You may become a big enough target only when it's too late to undo it.

an hour agonutjob2

Some take that into consideration. I did when I was, until recently, in a CTO role, and I've come across companies that take compliance seriously and have decided against such code synthesis due to its unclear legal status.

an hour agocess11

I never thought of this; you are right. What happens if, let's say, AI-generated text/code is "illegal"? Especially, what happens with all the companies that have been using it for their products? Do they need to roll back? It would be a shit show, but super interesting to see it unfold...

2 hours ago101008

I recently had to do a similar policy for my TUI feed reader, after getting some AI slop spammy PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...

The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!

3 hours agoCrociDB

Ultimately, what's happening here is that AI is undermining trust in remote contributions, and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra vigilant about any GitHub repo that is not already well established, and am even concerned about existing projects' code quality going forward. I'm not against AI per se (which I use), but it's just going to get harder to fight the slop.

3 hours agovegabook

A good PR using AI should be impossible to distinguish from a non-AI one.

an hour agoKolmogorovComp

The problem is that most aren’t good, and bad ones can take a lot of effort to distinguish, if they look plausible on the surface. So the potentially good ones aren’t worth all the bad ones.

an hour agolayer8

I agree that most of them are bad; I disagree with them taking lots of effort to distinguish, and I am a maintainer unfortunately receiving more and more PRs using AI.

an hour agoKolmogorovComp

Another project simply paused external contributions entirely: https://news.ycombinator.com/item?id=46642012

Another idea is to simply promote the donation of AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project would be better at prompting and steering AI outputs.

3 hours agokanzure

>people already working on the project would be better at prompting and steering AI outputs.

In an ideal world, sure, but I've seen the entire gamut, from amateurs producing surprisingly good work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on when it comes to the way you must speak to an LLM vs another dev. There's more monkey's paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?

3 hours agolagniappe

Honestly, I don't care how people come up with the code they create, but I hold them responsible for what they try to merge.

I work in a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 of anybody, in any single modification, not taking full responsibility for what's been committed.

I know we all use it, to different extents and in different ways, but the quality of what's produced hasn't dipped a single bit; I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started putting `_vendor` directories with our main external dependencies as git subtrees, and it's super useful to find information about those directly in their source code and tests.

It's really that simple. If your teammates are producing slop, that's a human and professional problem, and these people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting dots.

Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.

Of course, open source is a different beast. The people committing may not be professionals and have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched in their time and attention.

3 hours agoepolanski

> It's really that simple. If you or your teammates are producing slop, that's a human and professional problem and these people should be fired.

Agree. Slop isn't "the tool is so easy to use I can't review the code I'm producing"; slop is the symptom of "I don't care how it's done, as long as it looks correct", and that's been a problem since before LLMs too. The difference is how quickly you reach the "slop" state now, not that you now have to gate your codebase and reject shit code.

As always, most problems in "software programming" aren't about software or programming but about everything around them, including communication and workflows. If your workflow allows people to not be responsible for what they produce, and if it allows shitty code to get into production, then that's on you and your team, not on the tools the individuals use.

3 hours agoembedding-shape

I mean this policy only applies to outside contributors and not the maintainers.

> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!

> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.

Basically: don't write slop, and if you want to contribute as an outsider, ensure your contribution is actually valid and works.

3 hours agoaltmanaltman

With limited training data, that LLM-generated code must be atrocious.

3 hours agocxrpx

TLDR don't be an asshole and produce good stuff. But I have the feeling that this is not the right direction for the future. Distrust the process: only trust the results.

Moreover, this policy is strictly unenforceable, because good AI use is indistinguishable from good manual coding. And sometimes even the reverse. I don't believe in coding policies where maintainers need to spot whether AI was used or not. I believe in experienced maintainers who are able to tell if a change looks sensible or not.

2 hours agoantirez

As someone who has recently picked up some 'legacy' code: AI has been really good at mostly summing up what is going on. In many cases it finds things I had no idea were wrong (because I do not know the code very well yet). This is so-called 'battle-hardened code'. I review it and say 'yeah, it is wildly broken and I see how the original developer ended up here'. Sometimes the previous dev would be nice enough to leave a comment; for some devs, 'the code is the comments'. I have also had AI go wildly off the rails and do very dumb things. It is an interesting tool for sure, one you have to keep an eye on or it will confidently make a foot-gun for you. It is also nice for someone like me who has some sort of weird social anxiety thing about bugging my fellow devs, in that I can create options tables and pick good ideas out of that.

40 minutes agosumtechguy

I'm not sure I agree it's completely unenforceable: a sloppy, overly verbose PR, maybe without an attached issue, is pretty easy to pick out.

There are some sensible, easily-judged-by-a-human rules in here. I like the spirit of it and it's well written (I assume by Mitchell, not Claude, given the brevity).

22 minutes agodanw1979

This doesn't work in the age of AI where producing crappy results is much cheaper than verifying them. While this is the case, metadata will be important to understand if you should even bother verifying the results.

2 hours agob3kart

[flagged]

4 hours agomefengl

That's really nice - and a fast UI!

3 hours agopostepowanieadm

It gets even better when you click on "raw", IMO... which is what you also get when clicking on "raw" on Github.

2 hours agokleiba

Not sure why you are getting downvoted, given that the original site is such a jarringly user-hostile mess.

3 hours agochristoph-heiss

Without using a random 3rd party, and without the "jarring user-hostile mess":

https://raw.githubusercontent.com/ghostty-org/ghostty/refs/h...

3 hours agoembedding-shape

This option is pretty unreadable on mobile though

2 hours agoflexagoon

Is it? Just tried it in Safari, Firefox and Chrome on an iPhone 12 Mini and I can read all the text? Obviously it isn't formatted, as it's raw markdown, just like what the parent's recommended 3rd-party platform produces, but nothing is cut off or missing for me.

Actually, trying to load that previous platform on my phone makes readability worse; it seems there is ~10% less width and less efficient use of vertical space. Together with both being unformatted markdown, I think the raw GitHub URL renders better on mobile, at least on small phones like my Mini.

2 hours agoembedding-shape

Whatever your opinion on the GitHub UI may be, at least the text formatting of the markdown is working, which can't be said for that alternative site.

3 hours agouser34283

> You must state the tool you used (e.g. Claude Code, Cursor, Amp)

Interesting requirement! Feels a bit like asking someone what IDE they used.

There shouldn't be that meaningful of a difference between the different tools/providers unless you'd consistently see a few underperform and would choose to ban those or something.

The other rules feel like they might discourage AI use due to more boilerplate needed (though I assume the people using AI might make the AI fill out some of it), though I can understand why a project might want to have those sorts of disclosures and control. That said, the rules themselves feel quite reasonable!