As the 2024 legislative sessions begin, state lawmakers are turning their attention to the intricate landscape of artificial intelligence (AI). Alongside the expected influx of AI-related bills, state regulators are gearing up for the widespread adoption of AI tools.
During the 2023 legislative session, approximately 25 states, Puerto Rico, and the District of Columbia introduced bills related to artificial intelligence, and 15 states and Puerto Rico adopted resolutions or enacted legislation. Of those 25 states, Washington was the only one in the Mountain States region to begin delving into this discussion. According to the Artificial Intelligence 2023 Legislation tracker, the issues it examined included the "Charter of People's Personal Data Rights" and the "Use of Automated Decision Systems."
Many states will take their own bite at the apple in managing or controlling access to and usage of AI tools. A recent development in this arena comes from the California Privacy Protection Agency, which has released preliminary regulations on businesses' deployment of "automated decision-making technology" (ADMT), with implications far beyond the tech sector. The proposed California regulations aim to establish a comprehensive framework for governing ADMT that leverages personal information for decision-making or serves as a surrogate for human decision-making. If this isn't giving you Will Smith "I, Robot" vibes, I'm not sure what will.
Graphic by Morning MultiState
State lawmakers are increasingly focused on understanding both the benefits and challenges posed by AI. A growing number of measures are being introduced to study the impact of AI and algorithms and to explore potential roles for policymakers.

Our policymakers must establish clear guidelines for liability in cases where AI systems cause harm or make errors. This can be done through contractual agreements or industry-led initiatives, reducing the need for extensive government intervention.

Policymakers should also encourage the creation of independent bodies in the AI field that don't just talk the talk but walk the walk when it comes to enforcing high standards. These groups must help our elected officials lay down comprehensive guidelines: think of them as the playbook for AI. These rules should serve as a compass, guiding everyone in the AI landscape to develop cutting-edge technology responsibly and to treat all stakeholders fairly. Such groups can't operate in silos. They should communicate openly with the wider community, seek input from diverse perspectives, and be transparent about their actions. Inclusion is key: everyone should have a say in shaping the ethical direction of AI.

It is also imperative to actively encourage and support comprehensive initiatives aimed at educating the public on AI technologies, explaining both their advantages and potential risks. The focus should be on empowering individuals to make informed decisions. With a deeper understanding of AI's intricacies, consumers are better equipped to assess its applications, benefits, and associated risks, and to make choices aligned with their values and preferences.
Transparency about how AI systems operate, the data they use, and their potential societal impacts fosters a sense of trust. Trust is foundational for the widespread acceptance and adoption of AI technologies. Public awareness initiatives should also address common misconceptions and fears associated with AI. Clarifying these concerns helps dispel myths, reduce apprehension, and foster a more constructive dialogue among the public, industry, and policymakers.

It's important to note that while free-market approaches emphasize minimal government intervention, there are also arguments for certain regulatory measures to address specific concerns such as bias, discrimination, and privacy. Striking the right balance between fostering innovation and protecting public interests remains a key challenge in the evolving landscape of AI regulation. Transparency isn't just a buzzword; it's the bedrock of trust in the AI landscape, which is why we at Mountain States Policy Center have emphasized its importance.
Companies should embrace the need to be open about how their AI works. When companies willingly disclose how their AI algorithms function, they're essentially saying, "Here's how our magic works." This transparency builds trust with consumers and businesses, creating an ecosystem where AI is not a mysterious black box but a comprehensible and trustworthy tool. It also allows our elected officials to make informed and educated decisions... or at least that is the hope.