How AI regulation in California, Colorado and beyond could threaten U.S. tech dominance

  • California's recently vetoed state bill on safe and secure artificial intelligence systems could put the state's status as a technology hub at risk.
  • In 2024, AI bills have been introduced in 45 states plus Washington, D.C., Puerto Rico and the U.S. Virgin Islands.
  • But at the federal level, the U.S. is one of the only G20 nations without a comprehensive data privacy law like the EU's GDPR.

The U.S., and more specifically California's Silicon Valley, is a technology hub of the highest order. With that status comes an expectation of consistent innovation.

Despite California's recently vetoed state bill on "safe and secure innovation for frontier artificial intelligence systems," U.S. tech leaders remain on edge. Opponents of such legislation say stifled innovation could put California's status as a national and global technology hub at risk.

"Regulating basic technology will put an end to innovation," Meta's chief AI scientist, Yann LeCun, wrote on X.

In 2024, AI bills have been introduced in 45 states plus Washington, D.C., Puerto Rico and the U.S. Virgin Islands. The Colorado AI Act, which requires developers of high-risk AI systems to avoid algorithmic discrimination, is the first law of its kind in the U.S. and even preceded the European Union's AI Act.

California Gov. Gavin Newsom vetoed the bill at the end of September but signed into law another measure requiring transparency in generative AI systems. Some critiques of the vetoed bill remain relevant to potential future regulation in California and beyond. The AI Alliance, a self-described community of creators, developers and adopters in the AI space, worries that certain regulations "would slow innovation, thwart advancements in safety and security and undermine California's economic growth."

Democratic California state Sen. Scott Wiener of District 11, which includes San Francisco, authored the vetoed bill. Speaking about it at the AI Quality Conference in June, he said, "As human beings, we have a tendency to ignore risk until there's a problem." Wiener clarified that the bill was not intended to interfere with startup innovation, but rather to keep tabs on "very large, powerful models" by applying only to those trained at a cost of at least $100 million.

Tatiana Rice, deputy director for U.S. legislation at the non-profit, non-partisan think tank Future of Privacy Forum, said the U.S. as a whole is in a unique position as one of the only G20 nations without a comprehensive data privacy law similar to the EU's General Data Protection Regulation, or GDPR. "A lot of the privacy risks associated with AI can be tackled through a comprehensive data privacy regime," she said.

The U.S. has historically approached data privacy with decentralized, state-by-state legislation, which is where AI regulation is currently headed. While the American Data Privacy and Protection Act, since replaced by the proposed American Privacy Rights Act, attempted to tackle data privacy at the federal level, it quickly became a culture-war flashpoint over the civil rights protections included in its text.

The White House Office of Science and Technology Policy has published a Blueprint for an AI Bill of Rights based on five principles: safe and effective systems; algorithmic discrimination protection; data privacy; notice and explanation; and human alternatives, consideration and fallback. But with a new administration on the way in, and President-elect Donald Trump's approach to regulation expected to be more favorable to corporations (though there are critics of Big Tech within his ranks), the White House may reverse course, bringing the federal approach to AI in line with a broader ethos of minimal government involvement meant to drive competitive technological innovation. In that case, the onus will remain at the state level, where individual states will have to balance tech hub status against secure innovation.

'Common-sense AI regulation'

At the state level, Jonas Jacobi, CEO of ValidMind, an AI risk management company for financial institutions, said, "The wrong regulation can absolutely kill innovation." But, he added, "that doesn't mean there shouldn't be regulation. There should be common-sense regulation, especially around these huge foundational models, which are incredibly powerful."

Mohamed Elgendy, CEO of AI and ML testing platform Kolena, has a slightly different view. "Setting the threshold on the brain power of the model versus the application, it just doesn't make sense," he said. Here, Elgendy is distinguishing models, such as GPT, from their applications, such as ChatGPT.

"The way I see the risks of AI is really not about the capability of it," Elgendy said. "It's on the security side, that malicious use." Jacobi echoes this part of the sentiment. "I don't think you can hold [developers] accountable for everything that happens when people use their models," he said.

Rice said U.S. companies that threaten to leave their home states over regulatory overreach have some leverage, but that it would take a lot for California's tech hub to dissipate. Despite state-by-state fragmentation, she said, there are people working to find a model approach.

Colorado Senate Majority Leader Robert Rodriguez and Connecticut state Sen. James Maroney, both Democrats, are two people "trying to be cognizant of avoiding a patchwork of AI regulations in the way that has happened with data privacy," Rice said. Regarding Colorado's successful AI bill, Maroney wrote on Facebook, "It is unfortunate that Connecticut chose not to join Colorado as a leader in this space. But we will be back with a bill next year."

Even a model approach, if done wrong, could pose a major risk to the U.S. "Everybody's looking at California, especially when it comes to tech," Elgendy said. In the worst case, companies could leave the country entirely, which he said would be a nightmare from a cybersecurity perspective.

Elgendy's company is in the midst of a nine-month case study of sorts, working with regulatory bodies in sectors such as finance and e-commerce to develop AI "gold standards." Kolena aims to produce an actionable set of guidelines and standards that give builders and regulators across these major industries clear direction, filling the void left so far by federal lawmakers and regulators.

Even without congressional movement, Elgendy noted, industry leaders will make progress. "We definitely believe that no one team can do this alone," he said.
