‘We better figure it out’: The politics trap that could slow a national AI law
In past fights over social media, Republicans worried that any new rules would lead to increased censorship of conservatives, while Democrats feared they’d open the floodgates for online hate speech and disinformation. Now those arguments are starting to resurface in an entirely new debate.
From the frequent invocation of Section 230 during OpenAI CEO Sam Altman’s Senate testimony on Tuesday to a squabble over disinformation and censorship at a separate Senate hearing on the government’s use of automated systems, familiar battle lines over social media are at risk of being redrawn as Congress turns its gaze to AI.
“The same muscle memory is coming back,” said Nu Wexler, a partner at public relations firm Seven Letter and a former Democratic congressional staffer who has worked at Google, Facebook and other tech companies.
A return to the politics of those earlier tech disputes will make it harder for the two parties to come together on AI policy. And even if they can avoid that trap, lawmakers will likely need to look beyond censorship, disinformation, political bias and other issues raised by social media if they want to produce meaningful AI rules.
One reason many lawmakers are viewing AI through a social-media lens, say some on the Hill, is the basic knowledge gap around an extremely fast-moving new technology.
“Without discussing anybody’s names, some members of the House and Senate have no idea what they’re talking about,” said Rep. Zoe Lofgren (D-Calif.), the ranking member on the House Science Committee, in an interview with POLITICO on Thursday.
During a Tuesday hearing of the Senate Homeland Security and Governmental Affairs Committee, ranking member Rand Paul (R-Ky.) accused the government of colluding with social media companies to deploy AI systems that would “surveil and censor your protected speech.”
Paul later told POLITICO he won’t work on AI legislation with committee chair Gary Peters (D-Mich.) until the Democrat acknowledges that online censorship is a real problem.
“Everything else is window dressing,” Paul said. “We’re fine to work with [Peters] on it, but we’ll have to see progress on defending speech.”
In a conversation with reporters after Tuesday’s hearing, Peters said he shared Paul’s concerns about AI and civil liberties. But he also stressed that AI “is a lot broader than just related to potential misinformation and disinformation.”
“It’s a topic that we should consider — but it’s also a very complicated topic,” Peters said.
The mood was less partisan during Altman’s testimony before the Senate Judiciary Subcommittee on Privacy, Technology and the Law. But tech topics that typically spark intense fights were still front and center.
Senators from both parties, including Josh Hawley (R-Mo.) and Amy Klobuchar (D-Minn.), questioned the potential for AI systems to promote online misinformation about elections. Others, including Judiciary Chair Dick Durbin (D-Ill.) and ranking member Lindsey Graham (R-S.C.), questioned Altman about Section 230 of the Communications Decency Act. The provision protects online platforms from legal liability over content posted by users. Attempts to reform the 27-year-old internet law for the modern social media era have repeatedly snarled over partisan disputes around censorship, disinformation and hate speech. And Section 230 might not even apply to AI systems — a notion that Altman repeatedly tried to convey to the senators on Tuesday.
“It’s tempting to use the frame of social media, but this is not social media,” said Altman. “It’s different, so the response that we need is different.”
Lofgren, whose congressional district includes a chunk of Silicon Valley, shares Altman’s sense that Section 230 “is not really applicable” to AI. “Apples and oranges, really,” she said.
And if lawmakers hope to tackle politically fraught topics like disinformation, Lofgren said a federal data privacy bill would be more effective than new rules on AI. “If you want to get into manipulation, then you have to get into how you manipulate, which is really the use and misuse of personal data,” the congressmember said.
Wexler said it’s too early to tell whether congressional efforts to rein in AI will end up trapped by the same partisan gridlock that has derailed meaningful rules on social media. While acknowledging that the warning signs are there, he also pointed to clear areas of agreement — particularly on the need for greater study and more transparency into AI systems.
And while Lofgren thinks Congress should stop conflating social media with AI, she sees few signs of a similar partisan divide — at least for now. “Could that emerge? Maybe,” she said. “But I think everybody realizes this is a technology that could turn the world upside down, and we better figure it out.”
Other observers, however, believe it’s only a matter of time before the political feuds that undercut congressional efforts to unite on other tech issues emerge on AI.
“The left will say AI is hopelessly biased and discriminatory; the right will claim AI is just another ‘woke’ anti-conservative conspiracy,” said Adam Thierer, senior fellow for technology and innovation at the R Street Institute, a libertarian think tank.
“The social media culture wars are about to morph into the AI culture wars,” Thierer said.