LLM code seems prone to causing projects potential trouble, it might be worth considering a ban #5439
Description
Problem description
LLM-generated code seems prone to causing projects potential trouble, and once it's part of a code base and new work derives from it, it's apparently hard to get rid of. I'm bringing this up with Heroic because I would like it to continue to stay out of trouble, and some contributors will apparently just quietly assume LLM use is okay if there's no policy saying otherwise.
Here's what I mean: this appears to be a lawyer examining GitHub's Copilot coding LLM in 2026:
Chan-jo.Jun.Co-Pilot.Copyright.Infringement.webm
I'm not a lawyer so I have no idea what this means, other than it sounds concerning to me and I thought you might also find it concerning.
There's also this high-profile incident: https://www.pcgamer.com/software/ai/microsoft-uses-plagiarized-ai-slop-flowchart-to-explain-how-github-works-removes-it-after-original-creator-calls-it-out-careless-blatantly-amateuristic-and-lacking-any-ambition-to-put-it-gently/ PC Gamer calls this "plagiarism." I'm guessing Microsoft didn't intentionally tell their own Copilot AI to steal, so does this mean this can just happen at random?
Some articles on this also seem scary:
https://www.theatlantic.com/technology/2026/01/ai-memorization-research/685552/
Large language models don’t “learn”—they copy. [...] This isn’t the only research to demonstrate the casual plagiarism of AI models. “On average, 8–15% of the text generated by LLMs” also exists on the web, in exactly that same form, according to one study.
There also seem to be lawsuits: https://www.twobirds.com/en/insights/2025/landmark-ruling-of-the-munich-regional-court-(gema-v-openai)-on-copyright-and-ai-training This one almost sounds like the court doubts AI training's "fair use" assumption, but I wouldn't know, so you'd better look at it yourself.
It also seems that most coders who think they're faster with LLMs aren't objectively that much faster, that LLMs introduce new hidden bugs, and so on.
Anyway, while I'm not qualified to give legal advice and don't want to, this whole situation feels messy.
Feature description
I would like to suggest that Heroic consider a policy stating that LLM-generated code is not desired in any contribution, and that people are asked not to use any LLM tools, including generative AI autocompletion, when writing code for Heroic.
Perhaps I'm just reading this entire situation wrong, but the risks don't seem like a good trade-off for the benefits here.
Alternatives
I guess the alternative is to not care about LLM code.
Additional information
No response