AI Regulation: Tech & Startup Proposals for US Plan

Amazon’s Recommendations

Amazon is advocating for investments in energy infrastructure, equitable access to cloud computing and semiconductor technology, workforce development initiatives, federal adoption of AI solutions, and the establishment of interoperable international standards.

  • Streamlining Energy Regulations: Amazon emphasizes the significant electricity demands of AI and proposes streamlining nuclear power projects and transmission infrastructure upgrades to maintain U.S. competitiveness. The company underscores that AI’s energy needs are rapidly escalating, requiring proactive measures to ensure a stable and cost-effective energy supply. This includes expediting the approval process for nuclear power plants, which offer a reliable and carbon-free energy source, and investing in modernizing the electricity grid to handle the increased demand. By taking these steps, the U.S. can avoid energy bottlenecks and maintain its leadership in AI innovation.
  • Leading Global AI Discussions: Amazon calls upon the White House to take the lead in global AI efforts by promoting regulatory interoperability through international standards. The company argues that a fragmented regulatory landscape would hinder cross-border collaboration and innovation in AI. By working with other countries to develop common standards and principles for AI governance, the U.S. can promote a more open and competitive global AI market. This includes addressing issues such as data privacy, algorithmic bias, and the ethical use of AI.
  • AI Workforce Education: Amazon emphasizes that Americans should be educated in practical AI use, not just advanced technical skills: the U.S. must invest in advanced AI researchers and engineers while also enabling everyday workers to use AI tools on the job. The company highlights the need for a multi-faceted approach to AI workforce development, including training programs for both highly skilled AI specialists and workers in other industries who can leverage AI tools to improve their productivity. This includes providing access to online courses, apprenticeships, and on-the-job training opportunities.
  • Transforming Government Agencies with AI: Amazon suggests federal agencies should adopt AI and cloud computing to move away from outdated data centers and transform operations. The company argues that AI and cloud computing can help government agencies improve their efficiency, reduce costs, and deliver better services to citizens. This includes using AI to automate routine tasks, improve decision-making, and enhance cybersecurity. By embracing these technologies, government agencies can become more agile and responsive to the needs of the public.

Anthropic’s Proposals

Anthropic anticipates that advanced AI systems could rival Nobel laureates in reasoning by late 2026, and argues that such systems should be treated as national assets. The company’s key recommendations include:

  • AI Threat Testing: Anthropic advocates for creating federal infrastructure to test powerful AI models for risks in cybersecurity and biological weapon development. The company warns that advanced AI systems could be used to develop new cyber weapons, create synthetic biological agents, or spread misinformation. By creating a dedicated testing facility, the government can identify and mitigate these risks before they materialize. This includes developing methods for detecting malicious AI behavior, assessing the vulnerability of AI systems to attacks, and developing countermeasures to protect against AI-enabled threats.
  • Strengthening Semiconductor Export Controls: Anthropic supports restrictions on advanced chips, including the Nvidia H20, and agreements with other countries to prevent smuggling. The company argues that access to advanced chips is critical for developing powerful AI systems, and that restricting access to these chips can slow down the development of AI by adversaries. This includes working with other countries to establish common export control policies and prevent the illegal transfer of chips to unauthorized parties. The focus on the Nvidia H20 highlights the specific concern around high-performance computing chips that are essential for training large AI models.
  • Energy for AI: Like Amazon, Anthropic stresses AI’s rapidly growing energy demands; it projects that U.S. AI developers will require an additional 50 gigawatts of power by 2027. The company argues that the U.S. needs to invest in new energy sources to meet this demand, including supporting the development of renewable sources such as solar and wind power and exploring new technologies such as nuclear fusion. Without sufficient energy capacity, the U.S. risks falling behind in AI development.
  • Monitoring Economic Impacts of AI: Anthropic recommends enhancing data collection mechanisms to capture the economic impacts of AI adoption and prepare for ‘significant’ changes. The company argues that AI will have a profound impact on the economy, and that the government needs to collect better data to understand these impacts. This includes tracking the adoption of AI by businesses, measuring the impact of AI on employment, and assessing the distributional effects of AI on different groups of workers. This data can inform policy decisions aimed at mitigating the negative consequences of AI and maximizing its benefits.

Meta’s Perspective

Meta’s Llama models are central to its open-source AI leadership vision, reflected in its recommendations to the U.S. government:

  • Avoiding Stifling Open Source: Meta urges the U.S. to resist regulating open AI models, warning that such regulation would empower authoritarian regimes. The company argues that open-source AI is essential for fostering innovation and ensuring that AI is developed in a responsible and transparent manner. Regulating open AI models would stifle innovation and hand authoritarian regimes an advantage in AI development. Open source allows for broader scrutiny and collaboration, leading to more robust and secure AI systems.
  • Federal Agency Adoption: Meta advocates for using open models in government for security, customization, and national security use cases. The company argues that open AI models can be used to improve the security and efficiency of government agencies. This includes using open AI models to detect cyber threats, analyze intelligence data, and provide better services to citizens. Open models also offer greater customization options to meet the specific needs of different government agencies.
  • Fair Use Clarity: Meta seeks an executive order clarifying that training AI on public data is fair use to protect against copyright lawsuits, aligning with OpenAI and Google. The company argues that training AI on public data is essential for developing effective AI systems, and that the government needs to clarify that this is a fair use under copyright law. This would protect AI developers from costly copyright lawsuits and encourage the development of new AI technologies. The lack of clarity currently hinders progress and creates unnecessary legal risks.
  • State Rules Harm Innovation: Meta warns that fragmented state-level rules will raise compliance costs and stifle innovation. The company argues that a patchwork of state AI regulations would create a complex and burdensome regulatory environment for AI developers. This would raise compliance costs and discourage innovation, particularly for small businesses and startups. A unified federal regulatory framework is needed to ensure a level playing field for AI developers.

Microsoft’s Stance

Microsoft emphasizes that the U.S. must remain at the forefront of AI; the company plans to invest over $50 billion in U.S. AI infrastructure in 2025.

  • Enhancing Computational and Energy Resources: Microsoft calls for modernizing the electric grid, streamlining permitting for data center construction, and expanding U.S. manufacturing of critical grid components and AI hardware. The company argues that the U.S. needs to invest in its computational and energy infrastructure to remain competitive in the development of AI. This includes modernizing the electric grid to handle the increased electricity demand from AI data centers, streamlining the permitting process for data center construction, and encouraging the U.S. manufacturing of critical grid components and AI hardware. A robust and reliable infrastructure is essential for supporting the growth of the AI industry.
  • Access to High-Quality Data: Microsoft wants to unlock government and publicly funded data for AI training. The company argues that access to high-quality data is essential for training effective AI systems, and that the government should make more of its data available to AI developers. This includes anonymizing and releasing data sets that can be used to train AI models for a variety of applications, such as healthcare, education, and transportation. Open access to data promotes innovation and allows for the development of more accurate and reliable AI systems.
  • Promoting Trust, Safety, and National Security with AI: Microsoft supports laws targeting deepfake fraud, harnessing AI in defense, and advancing cybersecurity protection. The company emphasizes the need to ensure that AI is used in a responsible and ethical manner, and that the government should take steps to mitigate the risks associated with AI. This includes enacting laws to combat deepfake fraud, harnessing AI in defense to improve national security, and advancing cybersecurity protection to protect against AI-enabled cyberattacks. Addressing these risks is crucial for building public trust in AI.
  • Upskilling the U.S. Workforce: Microsoft suggests the government should lead national efforts to educate Americans about AI and prepare them for future jobs. The company argues that the U.S. needs to invest in its workforce to prepare workers for the jobs of the future. This includes providing access to training and education programs that teach workers the skills they need to use AI tools effectively, as well as supporting workers who are displaced by AI automation. Preparing the workforce for the AI era is essential for ensuring that the benefits of AI are shared broadly.

Mistral AI’s Advocacy

France-based Mistral, with operations in Palo Alto, Calif., champions open-source innovation.

  • Supporting Open Source: Like Meta, Mistral argues that transparency and public access to model weights improve research, security, and the democratization of AI development. The company emphasizes that open-source AI is essential for fostering innovation and ensuring that AI is developed in a responsible and transparent manner. By making model weights publicly available, researchers can more easily study and improve AI models, and developers can build new applications on top of them. Openness promotes greater security through community review and ensures broader access to AI technology.
  • Weakening Monopolies: Mistral advocates for antitrust enforcement to ensure startups and small- to medium-size businesses (SMBs) can compete. The company argues that antitrust enforcement is needed to prevent monopolies from stifling competition and innovation in the AI industry. This includes preventing large companies from acquiring promising startups, as well as ensuring that small businesses have access to the resources they need to compete effectively. A competitive market fosters innovation and ensures that the benefits of AI are shared broadly.
  • Enhancing Global Chip Trade: Mistral warns that overregulating chip or AI exports could harm the U.S. AI industry and shift innovation to other countries. This includes carefully considering the impact of export controls on the competitiveness of U.S. companies, as well as working with other countries to develop common standards for AI regulation. A balance is needed between protecting national security and promoting global collaboration in AI.
  • Global AI Cooperation: Mistral wants the U.S. to balance protecting national security while encouraging multinational innovation partnerships. The company emphasizes the importance of international collaboration in AI research and development. This includes supporting joint research projects, sharing data and resources, and developing common standards for AI regulation. Global cooperation is essential for addressing the challenges and opportunities posed by AI.

Uber’s Considerations

Uber notes AI’s increasing role in mobility services and has invested in AI governance for accountability.

  • Avoiding Overregulating Low-Risk AI: Uber says many mobility-related AI applications pose minimal risk and should not be burdened with complex new rules. The company argues that many AI applications in the mobility sector, such as route optimization and ride matching, pose minimal risk and should not be burdened with complex new regulations. This includes focusing regulatory efforts on high-risk applications, such as autonomous driving, and allowing for experimentation and innovation in low-risk areas. Overregulation can stifle innovation and prevent consumers from benefiting from new AI-powered mobility services.
  • Stopping Patchwork of State Rules: Uber urges federal preemption to eliminate inconsistent state AI laws. The company argues that a patchwork of state AI regulations would create a complex and burdensome regulatory environment for businesses operating across state lines. This would raise compliance costs and discourage innovation, particularly for small businesses and startups. Federal preemption is needed to ensure a consistent and predictable regulatory environment for AI.
  • Using Existing Laws First: Uber says current regulations on privacy, discrimination, and consumer protection already address most AI-related risks. The company suggests that existing laws and regulations, such as those related to privacy, discrimination, and consumer protection, are already sufficient to address most of the risks associated with AI. This includes enforcing existing laws to prevent discriminatory AI practices, as well as ensuring that consumers have the right to access and correct their data. Leveraging existing frameworks reduces the need for new and potentially burdensome regulations.
  • Adopting a Risk-Based Framework: Uber suggests regulations should focus on high-risk use cases and boost innovation in less risky ones such as pricing. The company advocates for a risk-based approach to AI regulation, where regulations are tailored to the specific risks associated with different AI applications. This includes focusing regulatory efforts on high-risk use cases, such as autonomous driving and facial recognition, and allowing for greater flexibility and innovation in low-risk areas, such as pricing and route optimization. A risk-based framework ensures that regulations are proportionate and do not stifle innovation.

CrowdStrike’s Focus

CrowdStrike’s comments center on using and securing AI in cybersecurity.

  • Focusing on AI for Cybersecurity: CrowdStrike emphasizes that AI-based threat detection gives the U.S. an ‘enormous’ advantage because it can defeat new threats based on their behavior. The company argues that AI is a powerful tool for detecting and preventing cyber threats, and that the U.S. needs to invest in AI-powered cybersecurity solutions to protect its critical infrastructure and data. This includes using AI to analyze network traffic, identify malicious software, and detect insider threats. AI can provide a significant advantage in the fight against cybercrime.
  • Regulation Should Not Stifle Innovation: CrowdStrike says new AI regulations should not harm innovation and development. The company warns that new AI regulations could stifle innovation and harm the development of AI-powered cybersecurity solutions. This includes carefully considering the impact of regulations on the competitiveness of U.S. companies, as well as ensuring that regulations are flexible and adaptable to new technologies. A balance is needed between security and innovation.
  • Protecting the Models: CrowdStrike calls for robust protections around AI systems and training data for resilience. The company emphasizes the need to protect AI systems and training data from cyberattacks and theft. This includes implementing robust security measures to protect AI models from being compromised, as well as ensuring that training data is protected from unauthorized access and modification. Protecting AI assets is crucial for maintaining the integrity and reliability of AI systems.

JPMorgan Chase’s Perspective

JPMorgan, running over 500 AI and machine learning systems, calls for greater AI governance.

  • Using Existing Frameworks: JPMorgan argues that current banking regulations are well-suited to handle AI. The bank suggests that existing banking regulations are already sufficient to address many of the risks associated with AI in the financial services industry. This includes leveraging existing regulatory frameworks for risk management, compliance, and consumer protection to ensure that AI is used responsibly and ethically. Adapting existing frameworks minimizes disruption and provides a familiar regulatory landscape.
  • Sector-Specific Regulation: JPMorgan supports a sector-by-sector approach, where financial regulators lead AI oversight for banks. The bank advocates for a sector-specific approach to AI regulation, where financial regulators are responsible for overseeing the use of AI in the banking industry. This allows regulators to develop expertise in the specific risks and challenges associated with AI in their sector, as well as to tailor regulations to the unique characteristics of their industry. Sector-specific regulation ensures that regulations are relevant and effective.
  • Leveling the Playing Field: JPMorgan wants non-banks offering financial services to be subject to the same standards, especially for AI in credit underwriting and fraud detection. The bank emphasizes the need to ensure a level playing field between banks and non-bank financial service providers. This includes subjecting non-banks to the same regulatory standards as banks, particularly with regard to the use of AI in credit underwriting and fraud detection. This ensures that all financial service providers are held to the same standards of safety and soundness.
  • Unifying Federal and State Regulation: JPMorgan echoes concerns about state laws and calls for federal preemption. The bank argues that a patchwork of state AI regulations would create a complex and burdensome regulatory environment for financial institutions. This would raise compliance costs and discourage innovation, particularly for small banks. Federal preemption is needed to ensure a consistent and predictable regulatory environment for AI.

Examining the Nuances of AI Regulation: A Deep Dive into Industry Perspectives

The ongoing discourse surrounding the regulation of artificial intelligence in the United States has attracted a diverse range of voices, each offering unique perspectives shaped by their respective industries and strategic objectives. As the White House prepares to unveil its AI Action Plan, it’s crucial to analyze the specific proposals put forth by key players such as Amazon, Anthropic, and Meta to understand the complex interplay of interests at stake.

Amazon’s Call for Comprehensive AI Ecosystem Development

Amazon’s recommendations reflect its position as a dominant force in cloud computing, e-commerce, and digital services. The company’s emphasis on streamlining energy regulations underscores the immense energy consumption associated with AI workloads, particularly for training large language models and running complex AI applications. Amazon’s advocacy for nuclear power and transmission upgrades highlights the need for a reliable and scalable energy infrastructure to support the continued growth of AI. The company understands that its own growth, and the growth of the broader AI ecosystem, depends on a stable and affordable energy supply. Furthermore, Amazon’s recommendations include the simplification of bureaucratic processes related to energy infrastructure projects, acknowledging that lengthy approval times can hinder progress and make it difficult to meet the escalating energy demands of AI.

Furthermore, Amazon’s call for global leadership in AI and workforce development demonstrates its commitment to fostering a thriving AI ecosystem in the United States. By promoting international standards for AI adoption and investing in AI education and training programs, Amazon seeks to ensure that the U.S. remains competitive in the global AI landscape. This includes supporting initiatives that promote digital literacy and provide workers with the skills they need to thrive in an AI-driven economy. Amazon’s commitment extends beyond technical skills, emphasizing the importance of ethical considerations and responsible AI development.

Anthropic’s Focus on Responsible AI Development and National Security

Anthropic, an AI safety and research company, brings a distinct perspective to the regulatory discussion. Its projection that AI systems could rival Nobel laureates in reasoning ability by 2026 underscores the transformative potential of AI and the need for careful consideration of its societal implications. Anthropic’s vision acknowledges the potential for AI to revolutionize various fields, but also stresses the importance of preparing for the ethical and societal challenges that may arise. This perspective positions AI as a powerful tool that must be wielded responsibly, with careful consideration of its potential impact on humanity.

Anthropic’s call for AI threat testing and strengthened semiconductor export controls reflects its concern about the potential misuse of AI for malicious purposes, such as cyberattacks and the development of biological weapons. By advocating for robust federal infrastructure for testing AI models and restricting access to advanced chips, Anthropic seeks to mitigate the risks associated with powerful AI systems. The company emphasizes the need for proactive measures to prevent AI from being used to harm society, including the development of countermeasures and safeguards. Anthropic’s proposals aim to strike a balance between promoting innovation and mitigating potential risks.

Meta’s Defense of Open-Source AI and Innovation

Meta’s recommendations align with its strategy of promoting open-source AI development through its Llama models. The company’s warning against stifling open AI models reflects its belief that open-source AI fosters innovation, transparency, and collaboration. By resisting regulations that could limit the accessibility and use of open AI models, Meta seeks to empower a wider range of developers and researchers to contribute to the advancement of AI. Meta believes that open-source AI promotes a more decentralized and democratic approach to AI development, allowing for broader participation and innovation.

Meta’s emphasis on fair use clarity and federal agency adoption of open AI models further underscores its commitment to promoting open and responsible AI development. By clarifying the legal framework for training AI on public data and encouraging government agencies to adopt open AI models, Meta seeks to create a more level playing field for AI innovation. This approach enables wider access to AI technologies and encourages innovation across various sectors, including government and public services.

Microsoft’s Vision for a Trusted and Inclusive AI Future

Microsoft’s recommendations reflect its broader vision for a trusted and inclusive AI future. The company’s emphasis on enhancing computational and energy resources underscores the need for significant investments in AI infrastructure to support the development and deployment of AI technologies. Microsoft recognizes that continued AI progress depends on securing the computing capacity and energy needed to sustain it.

Microsoft’s call for access to high-quality data and the promotion of trust, safety, and national security with AI highlights the importance of ensuring that AI systems are developed and used responsibly. By advocating for laws that target deepfake fraud, harness AI in defense, and advance cybersecurity protection, Microsoft seeks to mitigate the risks associated with AI and promote its beneficial applications. This approach fosters a more secure and trustworthy AI ecosystem, encouraging wider adoption and acceptance of AI technologies.

Mistral AI’s Advocacy for Competition and Global Collaboration

Mistral AI, a French startup with a presence in Silicon Valley, brings a unique perspective to the discussion, advocating for competition and global collaboration in the AI space. The company’s support for open-source AI aligns with its mission of democratizing access to AI technology and fostering innovation. Mistral AI’s vision promotes a more inclusive and accessible AI landscape, allowing for broader participation and innovation from diverse stakeholders.

Mistral AI’s call for antitrust enforcement and enhanced global chip trade reflects its concern about the potential for monopolies and protectionist measures to stifle competition and hinder the development of AI. By advocating for a level playing field for startups and promoting international partnerships, Mistral AI seeks to foster a more vibrant and collaborative AI ecosystem. This approach fosters greater innovation and ensures that the benefits of AI are shared more widely.

Uber’s Pragmatic Approach to AI Regulation in Mobility

Uber’s recommendations reflect its pragmatic approach to AI regulation in the context of mobility services. The company’s emphasis on avoiding overregulating low-risk AI applications highlights the need for a nuanced regulatory framework that takes into account the specific risks and benefits of different AI use cases. Uber’s proposals emphasize a balanced approach that encourages innovation while mitigating potential risks.

Uber’s call for federal preemption of state AI laws underscores the importance of creating a consistent and predictable regulatory environment for businesses operating across state lines. By advocating for a risk-based framework that focuses on high-risk use cases, Uber seeks to ensure that AI regulations are proportionate and do not stifle innovation in less risky areas. This approach promotes a more efficient and effective regulatory framework.

CrowdStrike’s Focus on AI-Powered Cybersecurity

CrowdStrike’s recommendations reflect its expertise in cybersecurity and its belief in the transformative potential of AI for detecting and preventing cyber threats. The company’s emphasis on using AI to detect cyber threats underscores the importance of leveraging AI to enhance national security and protect critical infrastructure. CrowdStrike’s proposals highlight the critical role that AI can play in combating cybercrime and protecting against increasingly sophisticated threats.

CrowdStrike’s call for protecting AI models and ensuring that regulations do not stifle innovation highlights the need to strike a balance between security and innovation. By advocating for robust protections for AI systems and promoting a regulatory environment that encourages the development of new AI technologies, CrowdStrike seeks to foster a more secure and innovative cybersecurity ecosystem. This balanced approach ensures that AI can be used effectively to enhance cybersecurity without hindering innovation.

JPMorgan Chase’s Emphasis on AI Governance and Existing Regulatory Frameworks

JPMorgan Chase’s recommendations reflect its focus on AI governance and its belief that existing regulatory frameworks can be adapted to address the challenges posed by AI. The bank’s emphasis on using existing frameworks and sector-specific regulation underscores the importance of leveraging established regulatory mechanisms to ensure that AI is used responsibly in the financial services industry. This approach draws on existing regulatory expertise and infrastructure rather than building a new regime from scratch.

JPMorgan Chase’s call for leveling the playing field between banks and non-banks highlights the need to ensure that all financial service providers are subject to the same regulatory standards, regardless of their business model. By advocating for a unified regulatory approach, JPMorgan Chase seeks to promote fairness and stability in the financial system, protecting consumers and preventing regulatory arbitrage.