Can government mandate “unbiased AI”? Idaho is trying
- Chris Cargill
Artificial intelligence is advancing rapidly, and policymakers are understandably trying to ensure that new technologies used by government are accurate, reliable, and fair. Idaho lawmakers deserve credit for engaging with this challenge.
But House Bill 687, the so-called “unbiased AI” bill now moving through the Legislature, risks creating more problems than it solves. While the goal of reducing bias in government technology is reasonable, the bill’s approach could discourage innovation, reduce competition, and ultimately limit Idaho’s access to the very tools policymakers want government to use responsibly.
There are five major concerns lawmakers should consider before moving forward.
Vague and undefined standards
Terms like “ideological neutrality,” “DEI,” and “unbiased AI” appear throughout the legislation, yet none are clearly defined. Without objective standards, companies have no reliable way to determine whether their products comply with the law. Courts have long warned that regulations built on vague concepts invite arbitrary enforcement. When the rules are unclear, businesses cannot confidently design compliant products.
Potential First Amendment concerns
Government has broad authority to choose its vendors, but conditioning contracts on how private companies design their products — including what information their systems generate or suppress — raises serious constitutional questions. If the state requires AI vendors to redesign their systems to reflect government-defined ideological preferences, that begins to look less like procurement policy and more like compelled speech.
Discouraging innovators from participating in Idaho’s market
Large technology companies have legal teams and compliance departments capable of navigating vague regulatory requirements. Startups and smaller AI firms do not. Faced with uncertain ideological compliance rules, many smaller developers will simply decide that doing business with the state of Idaho isn’t worth the risk. Ironically, a bill meant to address concerns about large tech companies could end up protecting them by pushing smaller competitors out of the market.
Threatening proprietary technology and trade secrets
One provision would require vendors to disclose detailed information about how their AI models are developed and trained in order to qualify for state contracts. Even under nondisclosure agreements, these kinds of requirements go far beyond what is typical in public procurement. AI model training methods, alignment systems, and internal development processes are core intellectual property. Forcing companies to disclose them simply to bid on a contract risks undermining the competitive innovation policymakers say they want to encourage.
Substituting political preferences for sound procurement policy
The most effective way for governments to manage AI tools is to focus on performance and outcomes — how systems behave when deployed — rather than trying to regulate the internal design or ideological orientation of complex technologies. Transparency and accountability should focus on what a tool does, not on compelled disclosure of how it was built.
Idaho lawmakers are right to think carefully about how government should use emerging technologies. But good intentions alone are not enough. Poorly designed regulations can unintentionally drive away vendors, reduce competition, and leave government agencies with fewer — and potentially worse — technology options.
If the goal is responsible AI in government, the Legislature should revisit H687 and pursue a framework that emphasizes clear standards, protects innovation, and ensures Idaho remains open to the best technology the market has to offer.