While Washington D.C. dithers, a regulatory storm is brewing in state capitals across the United States. In the absence of a federal framework for artificial intelligence, states are forging ahead with their own laws, creating a complex and often contradictory patchwork of rules that is quickly becoming a CISO's worst nightmare.
Consider the landscape. In Colorado, the Artificial Intelligence Act (CAIA) is focused on preventing algorithmic discrimination in high-risk areas like hiring and lending. Move west to California, and the focus shifts to transparency, with laws requiring clear disclosure when generative AI is used. Head to Utah, and the primary concern is making sure consumers know when they're talking to a bot. Meanwhile, Texas is building a state council to monitor AI, and Tennessee is criminalizing malicious deepfakes.
For any company operating in more than one state, this is a minefield. A single AI system used for customer service or hiring could be subject to a dozen different legal standards, each with its own definitions, thresholds, and disclosure requirements. The compliance overhead is staggering, and the legal risks are multiplying. The debate over federal preemption, the question of whether a single national law should override these state efforts, is heating up, but a federal solution is likely years away.
For now, the burden falls on CISOs and their legal teams to navigate this maze. Doing so requires a state-by-state inventory of your AI deployments, a working understanding of what each law actually demands, and a compliance framework flexible enough to absorb new statutes as they pass. The age of AI is here, and so is the age of AI regulation. It's just not arriving in a neat, orderly package.
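One way to make that state-by-state inventory concrete is to model it as data: a registry mapping each jurisdiction to the obligations it imposes, and a deployment record that computes the union of obligations across every state a system touches. The sketch below is purely illustrative; the state codes, obligation tags, and `AIDeployment` class are hypothetical simplifications of the laws discussed above, not legal advice or a real compliance product.

```python
# Illustrative sketch of a per-state AI obligation registry.
# Obligation tags are hypothetical shorthand for the laws discussed above.
from dataclasses import dataclass, field

STATE_OBLIGATIONS: dict[str, set[str]] = {
    "CO": {"algorithmic-discrimination-assessment"},  # Colorado AI Act (CAIA)
    "CA": {"genai-disclosure"},                       # California transparency laws
    "UT": {"bot-disclosure"},                         # Utah consumer-notice rules
    "TN": {"deepfake-prohibition"},                   # Tennessee criminal statute
}

@dataclass
class AIDeployment:
    name: str
    states: set[str] = field(default_factory=set)

    def obligations(self) -> set[str]:
        """Union of obligations across every state the system operates in."""
        return set().union(*(STATE_OBLIGATIONS.get(s, set()) for s in self.states))

# A single customer-service bot deployed in three states already picks up
# three distinct legal duties.
chatbot = AIDeployment("support-bot", states={"CO", "CA", "UT"})
print(sorted(chatbot.obligations()))
```

The point of the design is that when a new state law passes, compliance changes become a one-line registry update rather than a rewrite of application logic, which is roughly what a "flexible compliance framework" means in practice.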