Bipartisan House talks on expected artificial intelligence legislation are coalescing around a plan to preempt a specific set of state laws that rein in cutting-edge AI developers, according to two tech lobbyists and three AI policy advocates familiar with the discussions.
The people familiar, who were granted anonymity due to the sensitive and fast-moving nature of the talks, said the bill would specifically preempt AI safety laws like those recently passed by California and New York, which require top AI developers to disclose information about new models in order to identify critical safety or security risks. State AI laws that do not regulate model developers are not expected to be preempted.
Reps. Jay Obernolte (R-Calif.) and Lori Trahan (D-Mass.), the two lawmakers behind the talks, are also discussing a sunset provision that would allow states to once again regulate frontier AI development after two years, according to four of the people familiar.
Three of the people said Trahan and Obernolte are struggling to agree on whether a federal vetting regime for advanced AI developers should be compulsory. They said Obernolte is partial to a light-touch — or even voluntary — approach that would let AI companies decide whether to disclose certain information to the government. Trahan wants greater accountability for the companies, including mandatory data-sharing requirements, according to the people familiar.
The negotiations between the two lawmakers represent the latest attempt to craft federal rules governing AI, and follow several failed bids to reach consensus in Congress over guardrails for the technology.
Spokespeople for Trahan and Obernolte declined to comment.
New details about the talks, which have not previously been reported, come as the White House grapples with a similar set of questions posed by the emergence of Mythos, a powerful new model developed by top AI firm Anthropic that is reportedly able to find cybersecurity vulnerabilities that human hackers cannot.
Mythos has set off a scramble at the White House in recent weeks, as President Donald Trump mulls an executive order that would create a vetting process for risks posed by advanced AI. The debate has to a degree mirrored ongoing talks between Obernolte and Trahan, with some in the Trump administration advocating for a laissez-faire approach while others push for mandatory requirements — or even a pre-clearance regime that would require the White House to greenlight new AI models before release.
The AI industry has spent the better part of a year attempting to block what it frames as a growing “patchwork” of conflicting state AI laws. Those efforts have set off a furious backlash from AI safety advocates, who say state legislators have the right to protect their citizens from AI harms and that any federal rules that preempt state laws should be significantly stronger than what they’re replacing.
The preemption proposal now under discussion between Trahan and Obernolte is relatively narrow — it would technically apply only to laws that directly regulate the most cutting-edge AI development. But some safety advocates worry that if it becomes law, AI companies will argue in court that new state rules around issues like kids’ safety or privacy would force them to change how they develop their models, and would therefore be blocked.
“It will be a litigation magnet,” said one AI policy advocate familiar with the talks.
Trahan suffered immediate blowback for merely engaging in discussions with Obernolte, which were made public shortly after Rep. Sam Liccardo (D-Calif.) — a Democrat who represents Silicon Valley and is eager to strike a deal with Republicans — withdrew his support from Obernolte’s expected proposal.
Within hours of the news breaking, top Democrats in the Massachusetts legislature sent a letter to Trahan warning her against working with Republicans on a bill that would block them from regulating AI. Earlier this week, a coalition of AI safety advocates and Massachusetts voters launched a petition campaign urging Trahan not to cut a deal that would override AI safeguards in her state.
Asked about the political risks of talking with Obernolte on Wednesday, Trahan told POLITICO that she doesn’t think “there's anything wrong with having conversations about protecting our national security, our economy, from cybersecurity threats in a post-Mythos world. And that's exactly what we're doing.”