Ars has reached out to HackerOne for comment and will update this post if we get a response.
“More tools to strike down this behavior”
In an interview with Ars, Stenberg said he was glad his post—which generated 200 comments and nearly 400 reposts as of Wednesday morning—was getting around. “I’m super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things,” Stenberg said. “LLMs cannot find security problems, at least not like they are being used here.”
Four such misguided, obviously AI-generated vulnerability reports have arrived this week, seemingly seeking either reputation or bug bounty funds, Stenberg said. “One way you can tell is it’s always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing,” he said.
Some AI reports are easier to spot than others. One reporter accidentally pasted their prompt into the report, Stenberg said, “and he ended it with, ‘and make it sound alarming.’”
Stenberg said he had “talked to [HackerOne] before about this” and has reached out to the service this week. “I would like them to do something, something stronger, to act on this. I would like help from them to make the infrastructure around [AI tools] better and give us more tools to strike down this behavior,” he said.
In the comments of his post, trading replies with Tobias Heldt of open source security firm XOR, Stenberg suggested that bug bounty programs could use “existing networks and infrastructure.” Security reporters paying a bond to have a report reviewed “could be one way to filter signals and reduce noise,” Heldt said. Elsewhere, Stenberg said that while AI reports are “not drowning us, [the] trend is not looking good.”
Stenberg has previously blogged on his own site about AI-generated vulnerability reports, with more details on what they look like and what they get wrong. Seth Larson, security developer-in-residence at the Python Software Foundation, added to Stenberg’s findings with his own examples and suggested actions, as noted by The Register.
“If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects,” Larson wrote in December. “This is a very concerning trend.”