On Wednesday, Senate Majority Leader Chuck Schumer released a long-anticipated road map for governing artificial intelligence, which calls for some $32 billion in annual spending on AI research and development by 2026. The bipartisan proposal, meant as a guide for the Senate's legislative efforts on AI, grew out of a yearlong series of "insight forums" that drew a stacked list of more than 150 experts, ranging from tech leaders like Mark Zuckerberg, Sam Altman, and Satya Nadella to academics and civil rights leaders.
But some of the forums' participants, including top AI ethicists, say Schumer and his colleagues wasted their time, bending over backwards to accommodate the industry's interests while paying only lip service to the need to establish guardrails around this emerging technology.
“My overwhelming reaction is disappointment,” Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University, tells Fast Company. A former White House official, Venkatasubramanian co-authored the Biden administration’s Blueprint for an AI Bill of Rights. He says he and other AI ethicists agreed to participate in the Senate’s roundtable discussions “in good faith,” despite concerns about industry capture. Now that he’s seen the final product, he wonders whether anyone was really listening. “I think many people like myself were concerned whether this would be a dancing monkey show, and we’re the monkeys,” he says. “I feel betrayed.” (Schumer’s office didn’t respond to Fast Company’s request for comment.)
Alondra Nelson, former acting director of the White House Office of Science and Technology Policy, meanwhile, said she too had been wary of the Senate's process for developing the road map. "I reluctantly agreed to participate in the second forum despite my concerns that it was a closed-door process that lacked transparency, and despite the fact that the civil society organizations and academic researchers who were invited were outnumbered by industry executives," Nelson, now a professor at the Institute for Advanced Study, said in a statement. The road map, which Nelson described as "too flimsy to protect our values" and lacking "urgency and seriousness," appears to have confirmed those fears.
“It is, in fact, striking for its lack of vision,” Nelson wrote. “The Senate roadmap doesn’t point us toward a future in which patients, workers, and communities are protected from the current and future excesses of the use of AI tools and systems. What it does point to is government spending, not on accountability, but on relieving the private sector of their burdens of accountability.”
Other groups, including the AI Now Institute, Accountable Tech, and The Leadership Conference on Civil and Human Rights, have also criticized the road map’s lack of attention to AI harms.
The road map covers a range of regulatory issues related to AI, offering guidance on innovation, workforce impacts, privacy, safeguarding elections, transparency, national security, and more. Even so, its coauthors, Schumer and Senators Mike Rounds, Martin Heinrich, and Todd Young, noted in their introduction that it was not intended to be "an exhaustive menu of policy proposals." Nor are the issues it does include given equal weight or consideration. The section detailing how Congress should support U.S. innovation in AI with billions of dollars in funding spans more than four pages; the section on privacy and liability runs just three paragraphs, and the section on elections is shorter still. The term "civil rights" appears in the road map's recommendations just once.