This week, the White House released 10 “AI Principles,” intended as guidance for federal agencies as they consider how to appropriately regulate AI in the private sector. It’s an effort to curb the potential harms of AI, which have drawn scrutiny around the world, while preserving the benefits the technology can bring to society. The industry has been waiting for this moment amid lingering uncertainty about how the US government will work to control this powerful technology and ensure it doesn’t hurt people more than it helps.

The problem is that, while it may be a good thing that the White House is taking an active role in the effort to regulate AI, its emphasis on light-touch regulation means the new rules fail to go far enough.

The principles themselves address some of the concerns raised by the AI ethics community and the academics who study technology’s effects on society. One such principle calls for lawmakers to consider whether the technology will “introduce real-world bias that produces discriminatory outcomes,” echoing academics who have warned for years that AI will codify existing societal biases into automated decision systems. These systems have been shown to adversely affect the most vulnerable people in society, including those marginalized by discrimination on the basis of race, gender, sexuality, and disability, and, perhaps most alarmingly, our country’s poorest citizens. Unregulated algorithms can come to govern access to life-critical services like healthcare, where flawed algorithms have already directed black patients to receive inadequate care compared to their white counterparts. In other cases, lawmakers suspect that algorithmic bias may perpetuate gender disparities in access to financial credit and employment.

The guidance also acknowledges that “current technical challenges in creating interpretable AI can make it difficult for agencies to ensure a level of transparency necessary” to foster public trust. It advises agencies to pursue transparency in two forms: disclosing when and where the technology is in use, and making outcomes transparent enough to show that the algorithms at least comply with existing laws.

But the true extent of the harm AI does globally is often obscured by trade secrets and government practices like the Glomar response, the infamous “I can neither confirm nor deny” line. Using these protective measures, entities can hide the scope of the AI-related programs and products they use. It’s entirely likely that many algorithms already in use violate anti-discrimination laws, among others. In some cases, companies may even choose so-called “black box” models that obscure the rationale behind decisions made at scale, in order to claim ignorance of, and a lack of control over, the actions that result. This legal loophole exists because some types of AI are so complex that no human could ever truly follow the logic behind a particular decision, making it impossible to determine what happened when something goes wrong.

It is exactly this kind of behavior that has caused a massive loss of public trust in the technology industry, and it’s further evidence that AI-specific regulation is sorely needed to protect the public good. It’s been demonstrated time and again that, even with the best intentions, AI can harm people at scale, which sets our industry apart within the broader technology field. That power to do harm at scale means those of us in the AI industry have a responsibility to put societal interest above profit. Too few companies currently back that responsibility with their actions. It’s of utmost importance that we reverse this trend, or society will never enjoy the benefits that AI promises.

Bias mitigation, public disclosure, and a solution to the problematic “black box” are table stakes for any effective regulatory framework for AI. But these “AI Principles” fall woefully short of striking the right balance between the societal good AI can do and the dangers it poses.

In a surprise twist, in this first federal document to address AI lawmaking, the Trump administration focused mainly on the risks of losing ground in great-power rivalry, market competition, and economic growth. In doing so, the administration dramatically underestimates the ongoing harm facing Americans today, once again sacrificing the public’s wellbeing for unchecked, unregulated industry growth.

Importantly, although this is the first guidance to emerge from the federal government, many cities and states have already had success governing AI where similar, comprehensive federal bills have stalled and ultimately failed amid congressional deadlock. Several cities have banned law enforcement use of intrusive facial recognition, and many more algorithm-focused proposals are under consideration at the state and city levels.

It’s telling that the new “AI Principles” warn of regulatory “overreach” in one breath while undermining local legislative authority in the next. The guidance advises that agencies may use “their authority to address inconsistent, burdensome, and duplicative State laws.” This language subtly signals to lawmakers that a practice known as federal preemption could be used to undo some of the strong, grassroots, and broadly celebrated local regulations championed by AI experts and civil liberties advocates like the ACLU.

Even more concerning, these strong local laws express the democratic will of the public in pockets of the country where technical work is most common: San Francisco, CA; Somerville, MA (near MIT); and, in a likely proposal, Seattle, WA. These local laws were enacted in response to the inherent risks of using predictive technology to gate access to sensitive services like public housing, proactive healthcare, financial credit, and employment, and in response to inaction from Washington. The people who build these technologies know that any algorithm threatening to perpetuate human bias, or to provide a “math-washed” license to discriminate, must be closely monitored for misbehavior or never implemented at all.

These AI Principles may be a small step in the right direction, and, broadly speaking, they could introduce a measure of accountability if implemented by lawmakers earnestly seeking to reduce risk. But they are only a starting point, and they threaten further harm by raising the prospect of federal preemption to undo the vital work local legislators have already done. Industry workers with direct knowledge of AI’s benefits and risks have often been among the strongest voices calling for strict regulation, and the White House should take steps to better align its policies with the advice of those working hardest to bring AI to market.

Liz O’Sullivan is the cofounder of ArthurAI and technology director of STOP (Surveillance Technology Oversight Project).