Growing concerns over the safety of artificial intelligence, especially for children, have prompted two state attorneys general in the United States to collaborate with major tech companies on a task force to develop protective measures and identify emerging risks.
North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown announced on Monday the creation of the AI Task Force, with OpenAI and Microsoft already on board.
The attorneys general expect other state regulators and AI companies to join in the effort. The task force will work to establish “basic safeguards” that AI developers should implement to protect users and anticipate risks as AI technology evolves.
Currently, there is no comprehensive federal legislation governing AI. Earlier this year, Jackson and Brown were among 40 attorneys general who successfully opposed a proposed federal moratorium that could have blocked states from enforcing AI regulations for a decade. The only federal AI law passed this year, the Take It Down Act, targets non-consensual deepfake pornography.
Jackson stressed that while Congress has yet to fully regulate AI, state-level initiatives can help fill the gap: “Congress has left a vacuum, and it makes sense for attorneys general to step in and protect the public.”
Tech companies are also taking steps independently, with OpenAI and Meta working to restrict minors’ access to adult content. The task force aims to promote safety, enforce accountability, and ensure that AI development considers the welfare of all users, particularly vulnerable populations.