LLM wrote it? Fine, but show us human documentation, demands EFF

The Electronic Frontier Foundation says it will accept LLM generated code from contributors to its open source projects but will draw the line at non-human generated comments and documentation.

The venerable non-profit's Alexis Hancock and Samantha Baldwin laid out its policy this week, explaining: "We strive to produce high quality software tools, rather than simply generating more lines of code in less time."

And while "LLMs excel at producing code that looks mostly human generated," the pair said these models often have "underlying bugs that can be replicated at scale."

"This makes LLM-generated code exhausting to review, especially with smaller, less resourced teams."

Or, put another way, "well intentioned people" submit code riddled with hallucinations, omissions, exaggerations, or misrepresentations.

This means project maintainers can end up refactoring, not just reviewing code, when a contributor doesn't actually understand the code they’ve generated. And that’s when the code is actually useful.

The EFF is clearly worried about a tidal wave of AI generated code that is "only marginally useful or potentially unreviewable."

So, it is asking contributors to "please refrain from submissions that you haven't thoroughly understood, reviewed, and tested." Baldwin and Hancock want them to disclose when their contributions came courtesy of an LLM, and insist that documentation and comments are "human" in origin.

"Project leads can determine if submissions aren't reasonably reviewable."

OpenUK CEO Amanda Brock said the open source community was only now grasping the impact of AI-generated code on projects and maintainers.

"It's a combination of the breadth of content that's scraped and number of bots doing it, and the volume of contributions and level of 'garbage in' when we look at project contributions."

She predicted: "We're gonna see a lot more AI red cards in the coming weeks."

The EFF currently has four projects on the boil, all under GNU or Creative Commons terms: Certbot; Privacy Badger; Boulder; and Rayhunter. Needless to say, they all have a privacy and security bent.

It's no surprise the EFF's policy change has a slightly world-weary tone. Hancock and Baldwin said: "It is worth mentioning that these tools raise privacy, censorship, ethical, and climatic concerns for many. These issues are largely a continuation of tech companies' harmful practices that led us to this point.

"We are once again in 'just trust us' territory of Big Tech being obtuse about the power it wields," they added. ®

Source: The Register
