systemd goes AI agent slopware https://github.com/systemd/systemd/blob/c1d4d5fd9ae56dc07377ef63417f461a0f4a4346/AGENTS.md
has slop documentation now too
Discussion
@cwebber can't believe they turned down detect-fash but are doing this shit. lame and cringe.
@atarifrosch Do your systems rely on elogind and eudev? IIRC every desktop (or hotplug-supporting) distro with a different init uses huge parts of systemd, just naming them differently. @cwebber
@makepost @atarifrosch elogind is a fork of a subset which has maintained compatibility with the api, iiuc
you gotta be fucking kidding me. I expected nothing from Poettering and I am still disappointed
no matter if you actually use systemd, this is a fatal sign for all of open source software
@cwebber No, no, no, no.
@cwebber
And, obviously, the ticket pointing that out was closed because the conversation was “too heated”.
@cwebber this is why we can’t have nice things
@cwebber this looks quite reasonable to me! It also seems like a nice short entrypoint into the project for human coders, as it's usually the case for well written instructions.
How lovely 😣🤦‍♂️
@cwebber well, sheeeit :-(
@cwebber from my experience AI generated code is much better than everything I could come up with on my own. And I am convinced this is the same for most Devs. I welcome and appreciate this step.
@cwebber The issue to disallow this was immediately closed. https://github.com/systemd/systemd/issues/41085
@cwebber according to their CONTRIBUTING.md slop won’t meet their standards anyway:
Using AI Code Generators
If you use an AI code generator such as ChatGPT, Claude, Copilot, Llama or a similar tool, this must be disclosed in the commit messages and pull request description.
The quality bar for contributions to this project is high, and unlikely to be met by an unattended AI tool, without significant manual corrections. Always thoroughly review and correct any such outputs, for example ensuring it accurately follows Coding Style at the very minimum. Please do not fire-and-forget pull requests without any human intervention and review, as that will likely result in low-quality results that will not be accepted, and if done repeatedly, may result in the account being blocked. As with any other submissions, authors are responsible for doing due diligence and ensuring their submissions are compatible with the project’s license as documented in LICENSES/README.md.
Not sure how well this is enforced, though.
@cwebber Sounds like I'll be poking at Void again soon...
And maybe OpenBSD or something on my laptop.
@cwebber Glad I fell into the Void years ago for desktop Linux.* Boots fast, and its runit init scripts are usually just a couple of lines long. Hmm, what were the original arguments for that Poetterage thing again?
(*Alpine and Devuan for servers.)
@cwebber For how long will this continue happening? bitwarden, chardet, vim, systemd...
@cwebber Baffled. Do none of these open source developers using these 'agents' by Anthropic or other morally impaired AI companies feel burdened by contributing to the popularity & bottom lines of those companies? The same companies that stole their code without credit or compensation, using too much energy, water & land, exacerbating climate change. Some are even contributing to war crimes. 😩
EDIT: See later in the thread; the good news is at least that auto-merging is not enabled, which is where the security risk comes in. I still have other concerns.
Looks like they're also using Claude for PR review https://github.com/systemd/systemd/commit/9a70fdcb741fc62af82427696c05560f4d70e4de
Which probably means systemd is now the most attractive target in FOSS for an AI prompt injection attack to insert a backdoor
EDIT: It does seem that they don't have auto-merging of PRs from the review bot, which is an improvement over the situation (and mitigates the primary security risk, hopefully it stays that way), and AI contributions are asked to be disclosed. That said, it seems like the issue is closed, and they are firmly in the "we will accept AI contributions, as long as disclosed" camp.
@cwebber the problem is that once they let a little AI into the project (and with contributors having little appreciation of human contribution), it will start replacing/displacing the good devs, and maintainers aren't going to review AI code
Poettering commented, the issue is now closed. https://github.com/systemd/systemd/issues/41085#issuecomment-4053443496
Asking for detection of security vulnerabilities from an LLM is one thing though, that one I could consider useful, but the real question is code and documentation generation. It does seem that for now, the bot usage isn't auto-merging PRs, which does alleviate some previous concerns of mine if reading that right.
But, in AGENTS.md it does mention "docs/CODING_STYLE.md — full style guide (must-read before writing code)". https://github.com/systemd/systemd/blob/main/AGENTS.md
They do require disclosure in the project also of LLM usage. But this does imply that LLM contributed changes are considered welcome, so we will probably see more of them, but I suppose at least they should hopefully be marked appropriately.
I will admit, I made this thread when pretty frustrated and upset about it. systemd is so key to the security of many peoples' machines. I don't necessarily see having security reviews be a problem the same way that codegen and etc are. And I was wrong about the PR review vulnerability risk in that *for now* afaict the review bot is just performing read-only security review and is not taking auto-action on merging, which is the real risk.
So maybe I overreacted? But Poettering's comment reads the way that most comments I have read that have been drawn into AIgen code have gone, which is "you gotta admit that things are changing, these things are getting really good" and then opening the door to aigen contributions. Which I am very wary of...
@cwebber I would take anything Lennart Poettering says with a massive pinch of salt, given how often I run up against his broken monolithic Windows Services imitation these days just trying to build a protocol lab.
@cwebber the AI contributions will happen regardless. It's trivial to have e.g. opus 4.6 spit out prs that we would not be able to classify as being written by AI. In fact, by adding an AGENTS.md that instructs AIs to add disclosure, we probably make AI written prs more obvious. Anyway, if we know people are going to use AI to contribute in ways we cannot reliably detect, we may as well add instructions to make those prs as good as possible.
@cwebber I keep being baffled by these folks just ignoring the code provenance and licensing issues.
@janl Indeed, people have gotten the mistaken impression that the licensing issues have been answered. THEY HAVEN'T YET! The US Supreme Court *declined to take on* a case which had ruled in a lower court that AI generated materials were in the public domain. And yet I am seeing *all over the place* people saying that the US Supreme Court said AI output is in the public domain. They didn't!
And outside the US, nothing is answered either! It's true that the US tends to set international precedent but we are *also* not in times where we can count on that, either.
@cwebber @janl On the legal side, I think folks are counting on the fact that so much money is behind the position that AI sufficiently launders copyright that there's little chance courts in the U.S. are going to rule otherwise. I don't *like* that position, because I think it's wrong on a number of levels -- but if I had to wager a paycheck on the outcome of a court case... that's the position I'd put the money on.
It seems unlikely that SCOTUS, for example, is ever going to rule against the monied class. The only way I see SCOTUS ruling the other way is if it's two money giants going toe-to-toe and the conservatives see some advantage in finding that AI-generated code infringes on copyright. Even then, I'd expect it to be a narrow, hard-to-generalize ruling.
But what do I know? I'm just trying to keep my head above water like most folks.
@cwebber Poettering's opinion is quite common AFAIK
But that doesn't mean it's good to let it be that way
We must be the change we want to see if we want improvements
@cwebber This. I do think that writing code oneself and running it through checkers (any, and the more the better, roughly, as long as they don't replace humans) is a good thing. But these checkers should run sandboxed, just flag issues -- as any linter. And if that stuff is LLM-powered, so be it. But agentic coding? LLM-driven suggestions/refactoring? I'm soooo weary of this.
@cwebber Why are you spreading FUD like that? The very file you linked states the policy:
> Per project policy: if you use AI code generation tools, you must disclose this in commit messages by adding e.g. Co-developed-by: Claude <claude@anthropic.com>. All AI-generated output requires thorough human review before submission.
@cwebber @david_chisnall for once I'll say it again
#systemdsucks
I just wondered what kind of taste I had in my mouth when I read that PR.
Turns out that was puke.
@cwebber dear machine that agrees with everything, is my code good? are you sure?
@cwebber How do you plan on doing a prompt injection when it only has read-only access?
@cwebber I use Code Rabbit reviews on some projects. It’s always in addition to and not instead of human reviews. It often catches some details that humans might miss and overall has raised the quality of the review process.
Are there indications that systemd is reducing human reviews?
@cwebber oh this is really bad lmao
@cwebber oh, you mean the PID 1 that does everything… brilliant
@cwebber I like to think GitHub's own MCP server has someone on there trying to limit this exposure.
@cwebber@social.coop i'm not sure how, claude is running inside a read only github actions sandbox.
@cwebber this is so disheartening... they are opening up huge attack vectors that they absolutely don't have to.
wonder how big the bribe was to get systemd maintainers to start using Claude?
time to start shopping around for non-systemd distros, I guess. thanks assholes
Automated review makes any additional human review lazier. It's human nature.
That makes the overall quality controls weaker.
@cwebber Goddamnit, I JUST got Fedora set up.
fuuuuuuuuuuuuuuuuuuckadoodledoo.
@cwebber If maintainers are using Claude to review pull requests in systemd, it does raise concerns some developers talk about like the possibility of prompt-injection attacks influencing automated code reviews. Still, most major open-source projects rely on multiple human reviewers and testing pipelines, which helps reduce the risk of any AI-introduced backdoor slipping through.
@trentmichael_reznor Prompt injection attacks against PR review agents have resulted in backdoored PRs being merged with nobody noticing
@cwebber There have been concerns and demonstrations showing how prompt-injection attacks could mislead AI review tools, but there’s no widely confirmed case of a major open-source project having a real backdoor merged solely because of an AI PR reviewer. Tools used with projects like systemd are usually advisory, with human maintainers still responsible for the final review.
@trentmichael_reznor Depends on what you mean by "major", I suppose. 4k users affected by https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another before it was caught.
However, nowhere near the scale of systemd, for sure. But that seems like a matter of time.
I believe @trentmichael_reznor is a cheapo llm. Blocking it.
Some of @trentmichael_reznor responses in this thread appear to have a similarity to gen-ai text …??
@trentmichael_reznor systemd gets a lot more attention, so maybe it's not as likely, but
@cwebber @trentmichael_reznor It's not just that, the ai part of the review workflow runs with only read-only access to the repository. All it does is produce some json that is processed by another step further in the workflow which does have permission to write the comments from the json to prs.
I did put a little thought into making the risk of prompt injection minimal. The review workflow can also only be triggered by repository members and is not triggered automatically yet.
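The split-permission design described above can be sketched as a GitHub Actions workflow along these lines (job names, script paths, and step layout here are hypothetical illustrations, not the actual systemd workflow):

```yaml
# Hypothetical sketch of a split-permission review pipeline: the AI step
# runs with read-only repository access and only emits JSON; a separate
# job, the only one granted write permission, posts the comments.
name: ai-review
on:
  workflow_dispatch:        # triggered manually by repository members only

permissions:
  contents: read            # default for all jobs: read-only

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run review model (read-only)
        run: ./ci/run-review.sh > review.json   # hypothetical script
      - uses: actions/upload-artifact@v4
        with:
          name: review
          path: review.json

  comment:
    needs: analyze
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write  # only this job may write PR comments
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: review
      - name: Post comments from JSON
        run: ./ci/post-comments.sh review.json  # hypothetical script
```

The point of the split is that even if a prompt injection subverts the model's output, the analysis job has no credentials to merge, push, or edit anything; the worst case is a misleading comment.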
@cwebber @trentmichael_reznor a quick refresher: the xz backdoor was only using xz as a delivery system. While the payload ultimately put a backdoor in sshd, it had to attack flaws in systemd to do so.
This was likely a nation-state-backed APT behind the attack. We almost missed it.
systemd adopting botshit is going to make the next backdoor easier. They're probably already working on it.
I'm just glad I already dumped systemd.
@cwebber True because systemd is so widely used and closely watched, it tends to get heavy scrutiny from maintainers and the broader open-source community, which lowers the chance of something slipping through unnoticed. Still, the concern about AI-assisted reviews remains a valid topic in security discussions.
@cwebber holy shit I'm not a flatout genai hater but even I can say categorically no, do not, this is bad, this is ....... no. I mean systemd is a trash-tier init system anyway, it's built like a Gnome graphical app when it's literally your fucking init system, that shit should be built to be maximally safe....
@cwebber will this nightmare ever end
"This file provides guidance to AI coding agents when working with code in this repository. Only add instructions to this file if you've seen an AI agent mess up that particular bit of logic in practice."
followed by
"Never invent your own build commands or try to optimize the build process"
lmao, I bet that was a funny screw up
@cwebber Debian is probably kicking themselves right now. 
@musicman that's right the christine effect isn't limited to fedi nodes
@cwebber this one definitely has me pulling my hair out...
@cwebber Another case that shows that LLMs are unethical, because they prey both on people with very low self-esteem and on people with very high self-esteem.
@cwebber Great, now I have to replace half my operating system.
@cwebber well, I was gonna abandon systemd anyway. But gods damn it, I just got my modded-to-hell Skyrim working. The Owlbear is finishing a play through first…
@cwebber ugh, whyyy. The only init I can stand to use long term :(
@cwebber you'll notice just about everything Red Hat touches is compromised in this regard!
@bclindner @cwebber This time it seems there’s no Red Hat involvement
@cwebber Oh fucking hell 💀
@cwebber ohhhh no, but it's so core 😫
@cwebber I have no words.
One more reason to use Guix + Shepherd!
@cwebber I'm more of an OpenRC or dinit kind of Owlbear personally. Also, Guix doesn't support my laptop's hardware (imma be a lot more careful about my next laptop…)
@cocaine_owlbear @cwebber Guix has channels for third-party packages, kinda like Ubuntu PPAs or Arch AUR, but more decentralized. There is one such channel with firmware and vanilla kernels called nonguix. There is a LiveCD maintained by @hako with that channel already enabled, you can try it out with your current laptop:
@cwebber Will GNU Guix be able to keep LLMs out of Shepherd and Hurd? I'm also worried about the Linux kernel potentially slopifying.
@skyfaller Linux already is slopifying
@ytvwld @cwebber Yes, this is what I was looking for but I couldn't find the reference, looks like slop is infecting Guix projects already. It's probably not too late to change course if Guix can find its spine, but given how many previously respected projects have fallen already I'm not optimistic.
@cwebber Regardless of how quickly the Linux kernel is slopifying (you probably know better than I do), it doesn't really matter if the alternatives are not able to draw hard lines and keep the slop out; they'll just slopify slightly slower. Is Guix up to the task? Will there be alternatives, or will every option be slop?
I just haven't seen a policy yet from Guix and it's making me nervous.
@cwebber one positive thing about getting a Mac as a work laptop is that I can now give Guix a proper try on my ThinkPad without losing my only machine