
OpenAI and Microsoft Team Up With State Law Enforcers on AI Safety Task Force
By News Desk on 11/14/2025
In a landmark move to address the growing regulatory vacuum at the federal level, OpenAI and Microsoft have announced a partnership with a bipartisan group of state attorneys general to form a new AI Safety Task Force.
The initiative, co-chaired by North Carolina Attorney General Jeff Jackson (a Democrat) and Utah Attorney General Derek Brown (a Republican), creates a standing forum in which the tech industry and state-level law enforcement can collaborate directly on AI safeguards. Organized together with the Attorney General Alliance, it is the most significant effort to date by states to fill the void left by Congressional inaction, and it marks a shift from a confrontational stance to a collaborative one.
The task force's core mission is to establish "basic safeguards" to protect consumers, especially children, from emerging AI-related harms. It aims to develop voluntary guidelines that could serve as a blueprint for future state or federal laws, while also creating a rapid-response mechanism to coordinate legal action against companies that violate consumer protection laws.
States Step Up as Congress Stalls
The formation of this task force is a direct response to a stark reality: while AI technology has advanced at a breakneck pace, federal regulation has remained almost entirely stagnant. This has left a dangerous gap in oversight, a vacuum that state attorneys general now feel compelled to fill.
"AI is moving fast, and so are the risks," said North Carolina Attorney General Jeff Jackson in a statement. "Congress hasn't put basic protections in place, and we can't wait. As attorneys general, our job is to keep people safe. Congress has left a vacuum, and I think it makes sense for attorneys general to try to fill it."
This sentiment was echoed by his Republican counterpart, Utah Attorney General Derek Brown, who framed the issue as one of personal liberty and protection.
"Freedom means being free from manipulation or exploitation by powerful technologies," Brown stated. "This task force is committed to defending our freedoms and our privacy while also building a safer digital world for our families and our children."
This bipartisan state-level action comes after a coalition of attorneys general successfully lobbied earlier this year to remove language from a federal bill that would have imposed a 10-year moratorium on state-level enforcement of AI laws. Having preserved their right to act, the states are now building the framework to do so.
The Mission: Beyond Talk, Toward Action
Unlike many advisory panels, the AI Safety Task Force is being structured as a standing body for continuous, real-time collaboration. Its mandate is threefold, moving from intelligence gathering to policy creation and, finally, to coordinated enforcement.
1. A Forum for Intelligence and Risk Monitoring
The first goal is to create a formal channel of communication. Tech companies like OpenAI and Microsoft will share intelligence on emerging AI developments and risks, while law enforcement will provide insights into how these technologies are being exploited in the real world. This "standing forum" will include not just the AGs and tech companies but also legal experts, giving regulators an "inside look" at the technology as it evolves, not months after it has been deployed.
2. Developing "Basic Safeguards"
The task force's primary output will be a set of "basic safeguards" for AI developers. These are intended, at least initially, to be voluntary guidelines focused on preventing harm to users, with a specific emphasis on children.
This focus is a direct response to a growing string of reports about the social and psychological risks of AI, including AI tools contributing to self-harm or delusions, and the persistent challenge of minors accessing adult content on generative AI platforms.
3. A Springboard for Enforcement
Perhaps most significantly, the task force will serve as a launchpad for coordinated legal action. By developing a shared understanding of "responsible AI," the attorneys general will be better equipped to identify and act against companies that violate state consumer protection laws (known as UDAP statutes, for "Unfair or Deceptive Acts and Practices"). The partnership gives the tech companies a chance to help write the rules, but it also puts them on notice that the states are building the consensus to enforce them.
A "Pro-Innovation, Pro-Safety" Pivot
For OpenAI and Microsoft, this partnership is a strategic pivot away from a purely defensive, lobbying-focused posture. By proactively joining the task force, they are signaling a "shared commitment" to safety and shaping the regulatory conversation from the inside.
"AI can make a real difference in people's everyday lives, and we know we have to get this right since it has real-world impact," said Jason Kwon, OpenAI's Chief Strategy Officer, who emphasized the company's appreciation for the attorneys general bringing stakeholders together.
Microsoft, which has positioned itself as the "responsible" face of the AI boom, echoed this sentiment. "This effort reflects a shared commitment to harness the benefits of artificial intelligence while working collaboratively with stakeholders to understand and mitigate unintended consequences," said Kia Floyd, Microsoft's general manager of state government affairs.
This collaborative approach is a far cry from the adversarial relationship tech companies have had with regulators in the past, particularly in Europe. It suggests a new model where industry leaders and law enforcers attempt to co-author the rulebook for a technology that neither side fully understands yet.
The Stakes: Deepfakes, Child Safety, and the Future of Law
The urgency of this task force is underscored by the very real harms that are already proliferating. The "Take It Down Act," a federal law passed this year to crack down on non-consensual deepfake pornography, is one of the only pieces of AI-specific legislation on the books. But it only addresses a single, narrow harm.
Attorneys general are facing a tidal wave of new consumer threats, including:
AI-driven scams and financial fraud.
The spread of election misinformation through hyper-realistic audio and video.
Algorithmic bias in housing, credit, and employment applications.
The profound mental health impacts on children and teens.
This task force provides a mechanism to triage these threats and develop policies—like age-verification standards or content-provenance rules—that can be voluntarily adopted by responsible actors and later codified into law.
Future Outlook: A Patchwork or a Blueprint?
The creation of the AI Safety Task Force is a tacit admission that a single, comprehensive federal AI law is not coming anytime soon. The future of AI regulation in the United States will, for the foreseeable future, be a patchwork of state-level laws and enforcement actions.
This initiative is an attempt to bring order to that chaos. By creating a set of voluntary national standards in cooperation with the industry, the attorneys general hope to create a unified blueprint. This could prevent a scenario where AI developers must navigate 50 different, conflicting sets of rules, while simultaneously ensuring that consumers have a baseline of protection in every state.
The success of this task force will depend on trust. Can state AGs, whose job is to be skeptical, truly collaborate with the companies they are meant to police? And will other tech giants like Google, Meta, and Amazon join the effort, or will they wait to be regulated?
For now, the task force represents the most practical and significant step yet toward governing AI in America. While Congress debates, the states are collaborating with the industry to build the guardrails, one safeguard at a time.
