The landscape of artificial intelligence (AI) has transformed drastically over the past year, marked by a significant uptick in corporate lobbying aimed at influencing legislation governing this rapidly evolving technology. According to recent data from OpenSecrets, the number of companies lobbying on AI-related issues surged from 458 in 2023 to 648 in 2024, an increase of roughly 41%. The jump underscores the growing recognition of AI’s implications and the urgent need for regulatory frameworks. As the regulatory environment remains ambiguous, the corporate sector’s push for clarity and governance becomes increasingly consequential.
Major technology players are stepping up to advocate for legislation that will shape the future of AI. Microsoft, for example, has thrown its weight behind the CREATE AI Act, a proposed bill that would establish benchmarks for AI systems developed in the U.S. OpenAI, meanwhile, has signaled support for the Advancement and Reliability Act, which would create a dedicated government research center focused on AI. These moves reflect a broader trend among tech companies to align their interests with emerging regulatory frameworks, positioning themselves as responsible players in an increasingly scrutinized field.
The data indicates that AI labs, companies focused explicitly on commercializing AI technologies, sharply increased their lobbying spending in 2024 compared to the previous year. OpenAI’s expenditures, for instance, ballooned from $260,000 to $1.76 million, while Anthropic’s grew from $280,000 to $720,000. Such a rise is a clear signal of the urgency these companies feel to influence policy decisions that will shape their operations and the wider AI ecosystem.
Recognizing the uphill battle against regulatory uncertainty, companies are not only increasing their financial outlay for lobbying but are also making strategic hires to navigate the complex policymaking landscape. Anthropic, for instance, appointed its first in-house lobbyist, Rachel Appleton, a Department of Justice veteran, while OpenAI brought on Chris Lehane, an experienced political operative, as its new VP of policy. The total lobbying expenditure of OpenAI, Anthropic, and fellow startup Cohere reached $2.71 million in 2024, well above their combined $610,000 in 2023. While that figure is still small compared with the $61.5 million spent by the broader tech industry, it signals a growing commitment to active engagement in policy discourse.
The legislative landscape surrounding AI in the U.S. has been tumultuous, with more than 90 pieces of AI-related legislation proposed in Congress during the first half of 2023 alone. State lawmakers have been similarly active, introducing more than 700 AI-related bills. Landmark measures have emerged, such as Tennessee’s law protecting voice artists from AI cloning and Colorado’s tiered, risk-based approach to AI policy. California Governor Gavin Newsom, for his part, signed multiple AI safety bills requiring transparency from AI developers.
However, despite these efforts, comprehensive federal regulation akin to the European Union’s AI Act remains elusive. Recent attempts at stricter rules have faced significant pushback: Governor Newsom vetoed legislation that would have imposed broad safety requirements on AI developers, and in Texas the TRAIGA bill seems poised for a similar fate, underscoring the difficulty of reconciling state-level advocacy with federal dynamics.
As pressure mounts for more decisive policy, the question arises: can the federal government advance AI legislation more effectively in 2025? Previous efforts have been stymied by competing interests, and the current administration has hinted at a move toward deregulation. President Donald Trump’s recent executive order suspending Biden-era AI policies signals a shift that could stall proactive regulatory measures.
With AI companies like Anthropic calling for “targeted” federal regulation within the next 18 months, a collective voice is emerging within the tech community urging the government to take substantive action. OpenAI has joined that call, pressing for government action on infrastructure to support responsible AI development.
The coming year will be crucial in determining whether the AI industry can navigate the current uncertainty and secure a regulatory environment that balances innovation with consumer protection. As corporate lobbying continues to ramp up, the debate over AI governance will only intensify, and its outcome will shape not only the future of AI development but also the precedents for how technology is governed in the digital age.