The New Frontier: AI Companies Require New Approaches to Security

February 3, 2025

AI companies are fundamentally different, and the industry is unlike anything we’ve seen before. While there are surface-level similarities to other high-tech organizations in how they operate (e.g. Kubernetes, cloud-native architectures, and a reliance on everything-as-code), the reality is far more nuanced. AI companies face unprecedented, fast-evolving challenges and dynamics that set their security and operational needs apart from those of other industries.

Traditional approaches to security often act as bottlenecks, but in AI, the cost of slowing down isn’t just lost revenue—it’s lost relevance. The first-mover advantage is especially evident in AI, and since the success of a billion-dollar company hinges on its ability to innovate faster than its competitors, security must feel frictionless to the user.

Pomerium has partnered with some of the most forward-thinking, foundational leaders in the AI space. These companies are managing complex, hybrid environments to explore new frontiers of what is possible, while simultaneously needing to safeguard billions of dollars worth of intellectual property. Their challenges are unique, and recognizing that difference is the first step toward building security solutions that work for them, not against them.

Surface Similarities

AI companies share some commonalities with other tech-forward organizations. Their workloads are often orchestrated on Kubernetes with pipelines optimized for automation and infrastructure engineered to operate at massive scale. These pipelines and infrastructure are essential for organizations tackling data-intensive problems like training machine learning models or deploying AI systems across global platforms.

However, it’s worth noting that operating like this isn’t the norm for most companies outside the AI sector. Many organizations, especially those in traditional industries, operate with fragmented infrastructure and slower development cycles. In contrast, AI companies embrace modern tools and standardization to maintain their competitive edge and rapid pace.

[Diagram: network architecture showing the user authentication flow. On-prem and remote users connect through Pomerium’s identity-aware proxy, which verifies each user against an identity provider before granting context-based access to the secured apps and services inside the AI company’s internal network.]
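The flow in the diagram can be sketched as a Pomerium route configuration: an external hostname maps to an internal service, and a policy decides who gets through after the identity provider authenticates them. This is a minimal, illustrative sketch; the hostnames and the allowed domain are assumptions, not values from a real deployment:

```yaml
# Illustrative Pomerium route (hypothetical names throughout):
# every request is authenticated against the identity provider,
# then evaluated against the route's policy before reaching the app.
routes:
  - from: https://research-dashboard.corp.example.com  # assumed external hostname
    to: http://dashboard.internal:8080                 # assumed internal service
    policy:
      - allow:
          and:
            # only authenticated users from this (assumed) domain are allowed
            - domain:
                is: example.com
```

Because the proxy sits in front of every secured service, the same pattern extends to any number of apps without installing agents on clients or servers.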

What Makes AI Companies Different

1. Heterogeneous Compute Environments

Legacy enterprises naturally inherit complexity over time for a variety of reasons (e.g. acquisitions). What’s interesting is that many brand-new start-ups and foundational AI companies have also chosen to operate in heterogeneous environments—from the beginning—in order to secure GPUs from any available source: public cloud providers, on-premise clusters, or even specialized bare-metal data centers. For these companies, compute is the currency of progress, and the winners will be those who can aggregate and orchestrate it most effectively.

No matter why you’re operating in heterogeneous environments, the solution is the same: a security model that seamlessly unifies access controls without slowing down researchers or sacrificing agility. Balancing security and usability is hard, especially with outdated security practices that force teams to sacrifice the agility and speed AI researchers expect during model training. But companies cannot afford to leave these environments unsecured when the value of the models themselves is measured in billions of dollars.
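In practice, unifying access across such environments can look like a single proxy configuration fronting backends in different locations, with one policy language governing all of them. A hedged sketch, with assumed hostnames, addresses, and identity-provider group names:

```yaml
# Sketch: one access layer across heterogeneous compute.
# All names, addresses, and the "ml-researchers" group are assumptions.
routes:
  # GPU cluster running in a public cloud
  - from: https://gpu-cloud.corp.example.com
    to: https://training.cloud.internal:443
    policy:
      - allow:
          and:
            - claim/groups: ml-researchers  # assumed group claim from the IdP
  # On-prem bare-metal cluster, governed by the same policy
  - from: https://gpu-onprem.corp.example.com
    to: http://10.0.12.4:8443
    policy:
      - allow:
          and:
            - claim/groups: ml-researchers
```

The point of the sketch is that the researcher’s experience is identical whether the GPUs sit in a cloud region or a colocation cage: one login, one policy, no per-environment VPNs.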

2. Unprecedented Threats

AI companies take on unprecedented risk when protecting their intellectual property because a breach of their work and data carries irreversible consequences. Access to a highly performant model lets others skip the time, investment, and effort that was necessary to reach that stage. Once stolen, intelligence can be copied, modified, and redeployed indefinitely, even against its own developers. AI companies may find that their foundational work seeds others’ advancements if it is not properly protected.

As such, hypervigilance is required: security for AI companies must go beyond traditional reactive measures to proactively safeguard models, training data, and infrastructure. The replicability of AI models fundamentally changes the risk landscape, demanding a new level of vigilance and control.

3. The Economics of Speed and Security

While most companies may accept slowing down their teams in the name of securing their systems, AI companies cannot afford this tradeoff between security and usability.

In Q4 2024 alone, VC-backed companies raised over $62.2 billion—a 57% increase from Q3 2024 (EY). Investments of that size put teams under intense pressure to show returns in an ultra-competitive field where new advancements ship every day.

As such, the opportunity cost of slowing down is existential in AI: teams must innovate at breakneck speed while keeping their assets secure. Researchers can’t afford to be slowed by tools that don’t work in real time. Whether they’re training models on vast datasets or deploying systems to millions of users, the demands on their infrastructure—and their security—are unprecedented. AI teams need security frameworks that scale with the speed of innovation; the balancing act between moving fast and staying protected defines the economics of security in this space.

Closing Reflections: A New Paradigm for Security

AI companies are rewriting the rules of what’s possible in technology. Their speed, their scale, and the value of their IP demand a rethinking of how we approach security. Security can’t be an afterthought or a reactive measure, and it’s no longer enough to apply off-the-shelf solutions or assume that what works for one industry will work here.

The AI industry needs to treat these risks with the urgency they deserve, or it will be left scrambling to contain consequences that can’t be reversed. Unlike traditional software, where patches and updates can mitigate damage, AI advancements spread once released and cannot be un-leaked. We’ve seen this pattern before in cybersecurity: exploits like Stuxnet, developed for strategic use, inevitably get repurposed, often by the very adversaries they were meant to defend against. AI follows the same pattern, at greater scale and with greater implications.

AI is different. Its security challenges and usability requirements are different. The way we think about securing it must be different, too.

