Trump Administration Eyes Regulatory Shift on AI as Cyber Threats Intensify
Escalating risks posed by advanced AI models are pushing the United States toward a potential shift in federal oversight.
The Trump administration is weighing a shift toward formal oversight of advanced artificial intelligence in the United States, signaling a potential recalibration of its previously hands-off stance amid escalating cybersecurity risks.
Reports suggest the White House is contemplating an executive order to create a cross-sector AI working group comprising government officials and technology leaders, tasked with assessing how a formal review process for new AI models might be developed.
The move comes amid growing concern that next-generation AI systems could materially amplify cyber threats. In particular, Anthropic’s latest model, Mythos, has drawn attention from security experts who warn it could “supercharge complex cyberattacks” due to its advanced coding capabilities and ability to identify exploitable vulnerabilities.
A White House official declined to confirm the discussions, stating, “Any policy announcement will come directly from the President. Discussion about potential executive orders is speculation.”
If implemented, the initiative would mark a notable reversal for President Donald Trump, who has consistently advocated for minimal regulation to accelerate AI innovation and maintain U.S. competitiveness, particularly against China.
The administration had previously rolled back a 2023 executive order introduced under President Joe Biden that required companies to share safety test results for AI systems, underscoring its earlier preference for deregulation.
However, officials are now increasingly focused on mitigating the risks of AI-enabled cyberattacks and avoiding the political and economic fallout of a major security breach. At the same time, policymakers are evaluating how advanced AI capabilities could be leveraged for defense and intelligence purposes.
The evolving stance highlights a broader tension shaping global AI policy: how to balance rapid innovation with the need for safeguards as frontier models become more powerful and potentially disruptive.