Artificial intelligence is advancing quickly, as are the rules shaping how companies can use it.
For U.S. tech companies, 2025 brings policy challenges unlike anything we have seen before. New regulations are popping up in Washington, Brussels, Beijing, and state capitals across the country. Public affairs and lobbying teams need a clear plan to stay ahead.
In this post, we break down what is happening in U.S. and global AI policy, what it means for your company, and how your public affairs team can take meaningful action this year.
The U.S. AI Policy Picture: Federal Uncertainty, State-Level Action
The U.S. still has no comprehensive federal AI law. Congress introduced more than 120 AI-related bills in the last session, but most stalled as lawmakers debated how to balance innovation against risk.
The White House has changed course as well. President Biden’s 2023 executive order on safe AI pushed federal agencies to create new safety rules. In January 2025, President Trump reversed many of those steps, focusing instead on making the U.S. more competitive in AI development.
With no clear federal framework in place, agencies like the Federal Trade Commission use existing consumer protection and competition laws to hold companies accountable.
Meanwhile, state governments are moving quickly to fill the gaps.
In 2024, Colorado passed the first comprehensive state-level AI law, which takes effect in 2026. It creates strict rules for “high-risk” AI systems, like those used in hiring or lending, to prevent bias and discrimination. Other states are following, with Virginia, New York, and California introducing AI bills. California has gone even further, passing transparency laws requiring AI companies to disclose how their systems work and what data they use.
By early 2025, more than 550 new AI bills had been filed across at least 45 states. This creates a major challenge for companies trying to keep up with inconsistent rules nationwide.
What this means for your team: Without a federal roadmap, you will need to track each state’s laws carefully and adjust your company’s approach to meet local requirements.
Europe’s AI Act: Global Standards That Reach U.S. Firms
If your company works internationally, Europe’s AI Act is probably already on your radar. Europe has led on technology regulation before, most notably with GDPR, and the AI Act positions it to set the global baseline for AI rules as well.
The EU’s new law divides AI systems into risk categories. Minimal-risk AI faces no new rules. Limited-risk AI, like chatbots or deepfakes, must meet transparency requirements. High-risk AI, used in areas like credit scoring, hiring, or healthcare, will face strict controls around bias, safety, and documentation.
Even if your company is based in the U.S., you must comply if your AI systems touch the EU market. The fines are serious, reaching up to 7 percent of global annual turnover for the most severe violations.
Many companies are already taking a “highest standard” approach, applying the EU’s tough rules across all markets to simplify compliance and avoid maintaining separate regional systems.
China’s Tight and Focused AI Rules
China has built detailed AI regulations, each aimed at specific uses and risks.
Key rules include:
- Generative AI Measures: Public-facing AI tools must register with authorities, undergo security reviews, and ensure their content aligns with Chinese laws and values.
- Algorithm Rules: Platforms using recommendation systems must offer transparency, opt-out options, and special reviews for tools that influence public opinion.
- Deepfake Controls: AI-generated audio and video must be labeled, and false or misleading deepfakes must be blocked or removed.
- Data Privacy Laws: Strict data controls limit what personal data can be used to train AI and often require local storage of sensitive data.
For foreign companies, this often means creating China-specific product versions or working with local partners to meet compliance needs.
Global Guidelines Beyond National Laws
Even without binding international laws, global organizations are shaping AI norms.
Groups like the OECD, G7, and UNESCO are driving agreement on principles like fairness, accountability, transparency, and human rights. In 2025, the OECD launched a voluntary reporting framework that invites companies to share how they manage AI risks. Participating can help companies signal leadership, strengthen reputation, and shape global conversations.
How Companies Are Shaping the Rules
Tech companies are not just waiting to see what laws get passed. They are actively working to shape them.
Lobbying on AI has surged. In 2024, 648 companies lobbied at the federal level on AI-related issues, a 141 percent increase over the year before. Even startups like OpenAI, Anthropic, and Cohere dramatically boosted their advocacy spending.
The most effective arguments focus on:
- Keeping the U.S. competitive globally
- Supporting smart, risk-based regulations
- Showing proactive safety commitments through voluntary efforts
- Working through coalitions and trade groups to speak with a united voice
These approaches have already produced results, from softening tough provisions in Europe’s AI Act to pushing back on state-level bills seen as too broad or heavy-handed.
Managing the Uncertainty: Four Areas to Focus On
Even as laws develop, your public affairs and legal teams can take action today.
Here are four areas to prioritize:
- Accountability: Run internal bias audits and fairness checks on key AI systems. Anticipate laws that will require impact assessments or independent audits.
- Safety: Follow voluntary frameworks like the NIST AI Risk Management Framework. Keep detailed technical documentation, test systems carefully, and maintain human oversight for sensitive use cases.
- Data Governance: Track how your data is collected, stored, and used across jurisdictions. Prepare for data localization requirements and make sure you can explain and justify your data sources.
- Transparency and Explainability: Make sure you can tell users when they are interacting with AI. Build tools and processes that let you explain how important automated decisions were made.
A Public Affairs Playbook for 2025
To stay ahead, your policy and lobbying teams should:
- Monitor global and local laws continuously
- Engage early with lawmakers and regulators to provide input
- Build alliances with trade groups, industry coalitions, and trusted academic or nonprofit partners
- Strengthen your internal governance, ethics, and compliance processes
- Develop a clear narrative about your company’s commitment to responsible innovation
- Use data and tools to target your advocacy and track results
- Prepare action plans for both sudden crises and new opportunities
AI policy in 2025 is evolving quickly, presenting both risks and an opportunity for companies to lead in shaping how AI is governed going forward.
Public affairs teams that stay prepared, proactive, and globally aware will not just keep their companies compliant. They will help set the rules for how innovation and responsibility go hand in hand in the AI era.