In the one-pager, Cruz noted that “most US rules and regulations do not squarely apply to emerging technologies like AI.” So “rather than force AI developers to design inferior products just to comply with outdated Federal rules, our regulations should become more flexible,” Cruz argued.
Thierer noted that once regulations are passed, they're rarely updated, and he backed Cruz's logic that AI firms may need help overriding old rules that could restrict AI innovation. Consider the "many new applications in healthcare, transportation, and financial services," Thierer said, which "could offer the public important new life-enriching service" unless "archaic rules" are used to "block those benefits by standing in the way of marketplace experimentation."

"When red tape grows without constraint and becomes untethered from modern marketplace realities, it can undermine innovation and investment, undermine entrepreneurship and competition, raise costs to consumers, limit worker opportunities, and undermine long-term economic growth," Thierer wrote.

But Thierer acknowledged that Cruz seems particularly focused on standing up a national framework to "address the rapid proliferation of AI legislative proposals happening across the nation," noting that over 1,000 AI-related bills were introduced in the first half of this year.
NetChoice similarly celebrated the bill's "innovation-first approach," claiming "the SANDBOX Act strikes an important balance" between "giving AI developers room to experiment" and "preserving necessary safeguards."
To critics, the bill's potential to constrain new safeguards remains a primary concern. Steinhauser, of the Alliance for Secure AI, suggested that critics may get answers to their biggest questions about how well the law would work to protect public safety "in the coming days."
His group noted that during this summer alone, "multiple companies have come under bipartisan fire for refusing to take Americans' safety seriously and institute proper guardrails on their AI systems, leading to avoidable tragedies." They cited Meta allowing chatbots to be creepy to kids and OpenAI rushing to make changes after a child who used ChatGPT to research suicide died.