
US AI Policy News: Fragmentation, Executive Orders, and the Evolving Landscape

📖 5 min read · 837 words · Updated Mar 26, 2026

The US approach to AI policy is complex, fragmented, and evolving. Unlike the EU’s comprehensive AI Act, the US relies on a patchwork of executive orders, agency guidelines, and state-level initiatives. Understanding this landscape matters for companies and researchers operating in or engaging with the American market.

The “Non-Regulatory” Approach (Initially)

For a long time, the US resisted comprehensive AI regulation, prioritizing innovation and rapid development. The philosophy was that existing laws were sufficient, and new regulations could stifle a nascent industry.

This changed under the Biden administration:

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023). This landmark EO is the most significant piece of federal AI policy in the US. It established requirements for AI safety, security, and ethical use across the federal government and for critical infrastructure. Key provisions include:
– **Safety reporting:** Mandates that developers of foundation models (especially those that pose national security risks) report safety test results to the government.
– **Watermarking:** Requires the National Institute of Standards and Technology (NIST) to develop standards for AI-generated content authentication (watermarking).
– **Civil rights protections:** Directs agencies to ensure AI systems don’t discriminate or harm civil liberties.
– **Competition and innovation:** Encourages competition in the AI sector and supports R&D.

The EO gives federal agencies broad authority to develop their own AI policies within their respective domains.

Key Federal Players

National Institute of Standards and Technology (NIST). NIST leads the development of AI technical standards and guidelines. Its AI Risk Management Framework is voluntary but widely adopted. NIST also houses the US AI Safety Institute.

Department of Commerce. Oversees NIST and plays a key role in developing voluntary AI standards and promoting US AI competitiveness.

Federal Trade Commission (FTC). Uses its existing authority to combat unfair, deceptive, or anticompetitive practices involving AI. Focuses on consumer protection, privacy, and antitrust.

Equal Employment Opportunity Commission (EEOC). Focuses on preventing AI-driven discrimination in employment decisions.

Department of Health and Human Services (HHS) / FDA. Regulates AI in healthcare and medical devices. The FDA has been particularly active in approving AI-enabled medical devices.

Department of Justice (DOJ). Investigates and prosecutes cases where AI systems might be used to violate civil rights or antitrust laws.

Defense Advanced Research Projects Agency (DARPA). Funds high-risk, high-reward AI research for national security applications.

State-Level Initiatives

States are increasingly active in AI policy, often ahead of the federal government:

California. Numerous AI-related bills have been proposed and debated, covering everything from AI in hiring to deepfake regulation to data privacy. Given California’s economic size, its laws often set national trends.

Colorado. Passed a pioneering law in 2024 regulating “high-risk” AI systems used in insurance, lending, and employment to prevent discrimination.

Illinois. Requires transparency and consent when AI is used in hiring processes.

The growth of state-level AI regulation creates a patchwork of rules that companies must navigate, adding complexity.

Industry Commitments

The White House secured voluntary commitments from leading AI companies (OpenAI, Google, Meta, Microsoft, Anthropic, etc.) to:
– Test AI models for safety before release
– Share safety information with the government
– Invest in cybersecurity
– Develop watermarking for AI-generated content
– Prioritize AI research for grand challenges (climate, disease)

These are not legally binding but represent an effort by the industry to self-regulate and collaborate with the government.

Key Policy Debates

Legislation vs. regulation. Should Congress pass a comprehensive AI law, or should agencies continue to regulate under existing authority? Congress has struggled to pass significant tech legislation.

Safety vs. innovation. How to balance the need for AI safety with the desire to foster innovation and maintain US competitiveness.

Bias and fairness. How to ensure AI systems are fair and don’t perpetuate or amplify existing societal biases, particularly in critical applications like hiring, lending, and criminal justice.

Privacy. How to protect individual privacy in an era where AI can process vast amounts of personal data. The lack of a comprehensive federal privacy law in the US makes this particularly challenging.

National security. How to manage the national security risks and opportunities presented by advanced AI.

My Take

The US approach to AI policy is a pragmatic response to a rapidly evolving technology. It avoids the rigidity of comprehensive legislation in favor of flexible, adaptable rules that can be updated as AI capabilities advance.

The downside is fragmentation and uncertainty. Companies operating nationally must navigate a maze of federal and state rules. The lack of a single, clear regulatory framework creates challenges for both compliance and enforcement.

The direction is clear: the US government is increasingly engaged in AI policy, and the era of pure self-regulation is over. Companies need to pay close attention to executive orders, agency guidance, and state laws. Building AI that is demonstrably safe, fair, and transparent is no longer just good practice — it’s a growing expectation.

🕒 Last updated: March 26, 2026 · Originally published: March 13, 2026

Written by Jake Chen

Bot developer who has built 50+ chatbots across Discord, Telegram, Slack, and WhatsApp. Specializes in conversational AI and NLP.
