The cloud industry saw strong growth in 2024. Total global expenditure on cloud services increased by over 20%, with the US-based cloud computing leaders Amazon, Microsoft, and Google growing their cloud businesses by 19%, 20%, and 35%, respectively.
Yet the picture is not entirely rosy. A trend called «repatriation» is growing rapidly: some organizations, faced with rising costs, complex cloud environments, or underperforming heavy data workloads, have opted to «move back» to an on-premises setup. Furthermore, reports of cloud-related security breaches, as well as of law enforcement backdoors into data access and end-to-end encryption, have stirred additional skepticism and hesitation among IT decision-makers when it comes to consistently following a cloud migration strategy.
This brings us to the critical question: when should an organization opt for a cloud solution, and when should it remain with an on-premises infrastructure?
In many situations, cloud computing does provide clear benefits. Compared with on-premises infrastructure, the cloud's pay-as-you-go model is a better fit for the cost structure of many dynamic, growing businesses: it requires little upfront investment and offers the potential for lower operating costs. However, such services need to be managed tightly, with close adherence to FinOps principles, to avoid exploding costs (learn more about FinOps in cloud operations from our expert Frederic Kottelat here).
Cloud providers help users handle this proactively with built-in observability and alerting mechanisms, as well as technical measures like API throttling. But these must be configured correctly. To do so, organizations need to properly prepare and train their people so they can leverage the extensive array of pre-built tools and services, and experiment and innovate without significant upfront costs for hardware, networking, data centers, or licenses. Upskilling and organizational change, however, also require appropriate planning and investment.
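To make this concrete, here is a minimal sketch of such a cost guardrail, using Python and boto3 to create an AWS budget that e-mails an alert at 80% of a monthly limit. The account ID, budget name, limit, and recipient address are placeholder assumptions; the other hyperscalers offer comparable budget APIs.

```python
# Minimal sketch: a monthly cost budget with an e-mail alert at 80% of the
# limit, via AWS Budgets. Account ID, budget name, amount, and recipient
# address are placeholder assumptions -- adapt them to your environment.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",          # alert on actual spend
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                     # percent of the limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
)
```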
Another often-overlooked factor in preparing for cloud adoption is the thorough estimation of recurring infrastructure costs. Cloud providers offer calculators specific to their platform, but these tools come with a broad range of possible configurations. To apply them effectively, one needs to know the precise workload requirements for services, storage, security, and network traffic. Otherwise, the actual costs after migration will most likely not match the upfront estimates and can easily exceed the reserved budget.
Pay-as-you-go options play an important compensating role here, as they allow organizations to pay only for the resources they actually consume. For processes with peak loads at certain times, or with only sporadic spikes in resource needs, this pricing model is typically the most economical choice.
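A back-of-the-envelope comparison shows why this matters for spiky workloads. All prices and load figures below are illustrative assumptions, not actual provider rates:

```python
# Toy comparison: pay-as-you-go vs. provisioning fixed capacity for the peak.
# All prices and load figures are illustrative assumptions, not real quotes.
HOURLY_RATE = 0.50       # assumed cost per instance-hour
BASE_INSTANCES = 2       # steady background load
PEAK_INSTANCES = 10      # needed only during the daily peak
PEAK_HOURS_PER_DAY = 4
HOURS_PER_MONTH = 730

# Fixed capacity: pay for the peak around the clock.
fixed = PEAK_INSTANCES * HOURS_PER_MONTH * HOURLY_RATE

# Pay-as-you-go: pay for the baseline plus the extra peak hours only.
peak_hours = PEAK_HOURS_PER_DAY * 30
payg = (BASE_INSTANCES * HOURS_PER_MONTH
        + (PEAK_INSTANCES - BASE_INSTANCES) * peak_hours) * HOURLY_RATE

print(f"Provisioned for peak: {fixed:,.0f} per month")   # 3,650
print(f"Pay-as-you-go:        {payg:,.0f} per month")    # 1,210
```

With these assumptions, pay-as-you-go costs roughly a third of sizing for the peak; for a flat, permanently high load, the comparison tips the other way.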
Another consideration is opting for Software as a Service (SaaS), where the service scales at a moment's notice when circumstances demand flexibility and change. In this case, cloud providers also take care of software maintenance and upgrades, reducing the workload and costs for internal technical staff.
Furthermore, comprehensive disaster recovery options based on multiple available data centers across different geographical regions ensure high availability and data redundancy, which minimizes data loss in case of unforeseen events.
From a security standpoint, most cloud providers offer robust frameworks, including access controls, encryption mechanisms, DDoS protection, and compliance-supporting tools. The US hyperscalers Microsoft, Amazon, and Google in particular offer compliance portals and support for well-known international standards such as NIST CSF, PCI DSS, GDPR, SOC 2, and the ISO 27k family.
However, it is essential to understand the shared responsibility model and the policies that must be applied to comply with regulations. While cloud providers offer dedicated services that create transparency about the security posture, and they secure the physical infrastructure, organizations remain responsible for securing their data, workloads, access policies, and configurations. This is often misunderstood and can lead to security incidents caused, for example, by misconfigured storage buckets or overprivileged roles in identity and access management (IAM).
Cloud providers have started to address this problem by delivering their services with secure configurations by default. For example, AWS S3 has been configured for private access by default since 2020, and Microsoft has likewise shipped storage accounts with public access disabled by default since late 2023. Nevertheless, organizations remain responsible for the correct configuration and can easily change these security settings (our expert Salvatore Fagone takes a more detailed look at the security aspects here).
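As a hedged illustration of taking ownership of such settings rather than trusting defaults, the boto3 sketch below explicitly enforces the block-public-access configuration on an S3 bucket; the bucket name is a placeholder.

```python
# Sketch: explicitly enforce S3 Block Public Access instead of relying on
# provider defaults. The bucket name is a placeholder assumption.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-company-data-bucket",    # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit access to authorized principals
    },
)
```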
Finally, leading cloud providers offer a vast ecosystem of ready-to-use services. Even where a required feature is not covered by a native service, pre-built third-party integrations are available, creating growing and innovative ecosystems.
For industries handling sensitive information, such as financial institutions, healthcare providers, defense, and government entities, control over data is of strategic importance and value. Their data cannot be exposed to any risk of lost protection, confidentiality, or operational sovereignty. On-premises environments allow organizations to implement even the most stringent access controls, design security measures that meet their own requirements, and be certain that no third party can access or process their data without their explicit consent.
Legal and regulatory reasons also contribute to the argument. In jurisdictions with strict compliance obligations, an on-premises IT infrastructure generally provides the most direct means of satisfying laws on data residency, data retention, and privacy. In places such as Switzerland and the EU in particular, on-premises IT eliminates the ambiguity associated with international data transfers and the convoluted compliance requirements that often come with vendor-specific cloud implementations.
For example, critical military IT infrastructures, in which persons, devices, and processes must be vetted, have to remain in accredited facilities and in dedicated tactical networks that stay intact even if general-purpose long-haul fibers are destroyed. Another example is the Swiss Interbank Clearing (SIC) system, which needs to run on a highly secure dedicated infrastructure, while banking apps and SaaS solutions connected to SIC may be hosted in the cloud in Swiss regions if they are FINMA-compliant.
Performance and reliability are also valid reasons for an on-premises implementation. Some workloads simply require very low latency (e.g., instant payment processing) or guaranteed availability without relying on external network components. Industrial automation solutions demand minimal latency; police emergency response systems and their associated data analytics require real-time processing with guaranteed availability. All of these areas call for stable systems within an organization's own on-premises infrastructure.
Economic reasons are a sometimes undervalued argument in discussions about on-premises infrastructure. Although cloud services generally offer flexibility and scalability, they often bring unexpected costs. Cost unpredictability arises from varying consumption of the resources included in the services, growing storage requirements, and vendor-specific charge rates. Conversely, an on-premises setup – while requiring a large upfront investment – can give an organization a lower total cost of ownership (TCO) over time. This is particularly true for stable, high workloads.
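A simple break-even calculation illustrates this. The figures below are deliberately invented; what matters is the shape of the comparison, not the numbers:

```python
# Toy TCO break-even: an upfront on-prem investment plus modest running costs
# vs. a flat monthly cloud bill. All figures are illustrative assumptions.
ONPREM_UPFRONT = 500_000   # hardware, data center fit-out, licenses
ONPREM_MONTHLY = 8_000     # power, space, maintenance, staffing share
CLOUD_MONTHLY = 20_000     # assumed bill for an equivalent steady workload

month = 0
onprem_total, cloud_total = ONPREM_UPFRONT, 0
while onprem_total > cloud_total:
    month += 1
    onprem_total += ONPREM_MONTHLY
    cloud_total += CLOUD_MONTHLY

print(f"On-prem breaks even after {month} months "
      f"({onprem_total:,} vs. {cloud_total:,}).")
# With these assumptions the crossover lands at 42 months; for spiky or
# shrinking workloads the cloud column would look very different.
```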
Finally, strategic independence plays a role. Local infrastructure frees organizations from reliance on a cloud vendor's proprietary technologies, pricing models, and roadmaps. With their own on-premises architecture, organizations control its design, development, and evolution.
Ultimately, on-premises IT is not irrelevant; it remains a deliberate and rational business decision for organizations focused on control, compliance, performance, predictable costs, and autonomy.
«Technology decisions aren’t just technical. They’re about trust, control, and the realities of day-to-day operations.» – Salvatore Fagone
When you work in security, especially in IAM, you learn quickly that there’s no silver bullet. Every setup – cloud, on-prem, or some mix of both – comes with trade-offs. What makes sense on paper often looks very different once you're in the middle of an actual implementation.
In sectors such as finance and public services, it’s not uncommon to see organizations sticking with on-prem. Not because they’re lagging – but because they need to be sure where their data lives and who can touch it. There are regulatory reasons, of course, but it’s also about accountability. If something goes wrong, these organizations want to be able to point to exactly where, and why.
On the flip side, in industries such as logistics or retail, the flexibility of the cloud is a huge advantage. People are on the move, systems need to scale quickly, and remote access is a must. The cloud handles that well – plus, modern IDaaS platforms make it easier to manage users without a ton of infrastructure overhead.
But are cloud environments inherently more vulnerable than on-premises infrastructures?
It’s true that hyperscale public clouds attract more attention from attackers due to the potential «treasure trove» of data. However, they also benefit from far more extensive and professionalized security operations than most on-prem environments. Cloud vendors employ large teams of security experts, run 24/7 security operations centers (SOCs), and implement continuous vulnerability scanning and patching – often far beyond what individual organizations can afford.
Can hackers move laterally within a cloud to access data from various tenants?
In principle, no. Cloud providers enforce strict tenant isolation using virtualization, network segmentation, and hardware-level protections. For example, AWS, Azure, and Google Cloud all implement multi-layered sandboxing and access control to prevent lateral movement between customers. There are edge cases – typically caused by customer misconfigurations or rare zero-days – but successful cross-tenant attacks are extremely rare.
Are there attack vectors in the cloud that are outside the customer’s control?
Yes. While the shared responsibility model places many controls (e.g., identity, access, encryption, config) in the hands of the customers, infrastructure-level defenses (hypervisor, physical security, hardware-level isolation) remain the responsibility of the provider. This is potentially both a strength and a limitation: you rely on the vendor’s capabilities – but you also relinquish some visibility.
Is the division of responsibility an advantage or disadvantage?
This is one of the most debated points. The shared model can be a major advantage: organizations don’t need to maintain their own data centers or patch hardware-level firmware. However, the downside is that many customers misunderstand where their responsibilities begin and end. Misconfigured S3 buckets, overprivileged roles, and missing encryption settings are among the most common causes of cloud breaches – not flaws in the cloud itself, but in how it’s used.
Cloud providers have responded by enforcing security defaults and offering posture management tools (e.g., AWS Security Hub, Azure Defender), but in the end, responsibility remains shared – not outsourced.
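To give a feel for what such posture checks do under the hood, here is a small, read-only boto3 sketch that flags S3 buckets whose block-public-access settings are missing or incomplete. It reports rather than fixes, and it is no substitute for a full posture management tool.

```python
# Read-only audit sketch: flag S3 buckets whose Block Public Access settings
# are missing or incomplete. Illustrative only -- not a posture tool.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(cfg.values()):  # at least one protection disabled
            print(f"PARTIAL  {name}: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"MISSING  {name}: no Block Public Access configuration")
        else:
            raise
```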
Ultimately, hybrid models have emerged as the most pragmatic choice. Sensitive workloads stay on-premises, while scalable and flexible services move to the cloud. The challenge is integrating both worlds securely. IAM is central to that – ensuring consistent policies, visibility, and access control across environments.
The takeaway?
There’s no universally «right» architecture. The best setup is the one that supports business needs today and is able to evolve with them tomorrow. For most, that means a bit of both – cloud and on-prem. And that’s not just an acceptable option – many times it’s the smartest path forward.
«Control, cost, and clarity – the pillars of secure cloud decisions.» – Frederic Kottelat
Cost is the primary driver for organizations contemplating a switch back to on-premises resources. Our consultants are observing a significant trend: the growing importance of FinOps in managing and optimizing cloud expenses. So how do cloud and on-premises architectures stack up when it comes to financial operations, especially in an AI-driven world? Let’s break it down.
Are GPU-based cloud instances a FinOps challenge?
Absolutely. In the AI era, GPU-intensive workloads are often non-negotiable, especially for training large models. But that power comes at a price: a single GPU instance from a cloud provider can cost upwards of 20,000 CHF per month. If your data scientists only need these resources for a few hours or days, leaving them running idle is a costly mistake. That’s where FinOps practices like automated deprovisioning and usage-based scaling come in.
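As a minimal sketch of such automated deprovisioning – assuming AWS, tagged training instances, and average CPU utilization as a crude proxy for idleness (a production setup would use GPU metrics and stricter criteria) – a scheduled job could look like this:

```python
# Sketch: stop tagged GPU instances whose average CPU utilization over the
# last hour is below a threshold. The tag, threshold, and use of CPU load as
# an idleness proxy are simplifying assumptions for illustration only.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_THRESHOLD = 5.0  # percent average CPU over the last hour
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:workload", "Values": ["ml-training"]},  # assumed tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - timedelta(hours=1),
            EndTime=now,
            Period=3600,
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints and datapoints[0]["Average"] < IDLE_THRESHOLD:
            print(f"Stopping idle instance {instance_id}")
            ec2.stop_instances(InstanceIds=[instance_id])
```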
What drives cloud costs beyond compute?
Specialized tools and architectural choices. Think of security services such as web application firewalls, SIEMs, and posture management tools, but also AI/ML platforms and big data infrastructure such as data lakes. These tools are powerful – and pricey.
What are the cost implications for running specialized tools on-prem?
You need skilled staff to install, configure, and maintain them. In the cloud, the setup is faster, upgrades are automatic, and maintenance is minimal. But the tradeoff is that you will most likely pay more per month in exchange for the savings in time and personnel.
Does architecture affect your cloud bill?
More than most realize. Legal and compliance requirements often demand strict environment separation. This can mean, for example, spinning up separate Kubernetes clusters for your different environments (dev, UAT, pre-prod, and prod). That’s secure, but also expensive. One way to address this cost issue is to implement scale-down strategies for idle environments. Compliance doesn’t have to mean a cost explosion; it just requires thoughtful design.
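A hedged sketch of such a scale-down job, using the official Kubernetes Python client to scale every deployment in assumed non-production namespaces to zero replicas – for instance from a nightly scheduled job; the namespace names are placeholders:

```python
# Sketch: scale all deployments in non-production namespaces down to zero,
# e.g. from a nightly scheduled job. Namespace names are placeholder
# assumptions; scaling back up mirrors this with the desired replica counts.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside the cluster
apps = client.AppsV1Api()

IDLE_NAMESPACES = ["dev", "uat", "pre-prod"]  # assumed environment namespaces

for namespace in IDLE_NAMESPACES:
    for deployment in apps.list_namespaced_deployment(namespace).items:
        name = deployment.metadata.name
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": 0}},
        )
        print(f"Scaled {namespace}/{name} to 0 replicas")
```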
The takeaway?
The cloud gives you flexibility, but without FinOps discipline, that flexibility becomes a financial risk. The best approach blends automation, architecture, and awareness. Know what you’re running, why you’re running it, and when you can shut it down.
From a security perspective, a hybrid approach is often recommended: sensitive workloads remain on-premises, while the cloud is leveraged for scalability, agility, and innovation. These mixed environments require additional governance frameworks, such as cloud security posture management (CSPM) and zero trust architecture, to ensure consistent policy enforcement and threat mitigation.
In addition, organizations need to prioritize applications in a way that minimizes disruption. For organizations in the midst of «repatriation», a lack of planning and adequate execution may turn a promising innovation effort into an expensive mistake.
Repatriation from the cloud should not be seen as retreating from modernization, but as a strategic correction. Organizations with a failed cloud migration plan may not have fully grasped the magnitude of the challenges involved. Often, it is driven by an underestimation of security complexities, cost unpredictability, or the need for more tailored control.
Security-specific triggers for repatriation include:
But a common refrain in repatriation stories is that the cloud migration was done too hastily and applied to systems that were not suitable or ready for cloud computing because they were not well enough understood. So before migrating workloads, applications, or data into the cloud, organizations must do their homework and answer some key questions:
These key considerations are then followed by business- and application-specific deep-dive questions, such as:
A cloud journey is a multi-step process involving adequate analysis, decision-making about total (or partial) cloud adoption, and planning the required steps. So, it is not always an on-prem vs. cloud consideration – the result may be a hybrid approach. We will be shedding some light on the ideal usage of such a hybrid approach, as well as its pros and cons, in the not-too-distant future.
[snippet_article_cta id="blog_cloud_vs_on-prem"]