Adnovum Blog

How Banks and Fintechs Adopt AI the Safe Way – in the Cloud or On-Premises

Written by Alexander Eppenberger | Jul 2, 2025 11:34:42 AM

AI technologies such as GenAI and Agentic AI are being deployed at an incredible pace. As in other industries, they enable the financial sector to create advantages for employees in customer-facing roles, in the middle and back office, and within the next-gen software delivery process. What is more, AI adoption improves not only operational efficiency but also the user experience.

How exactly can banks and fintechs benefit from AI’s huge potential, in particular in the cloud, while being on the safe side in terms of regulations? 

Find the answer here, including real-life use cases, a practical guide to AI adoption in the cloud and on-premises, and information on how to comply with Swiss and EU regulations.

AI and its strategic and operational potential  

More than in any other industry, efficiency is key in banking and finance. Let us start by looking at it from three different perspectives: enablers, operational, strategic.

Business enablers GenAI and Agentic AI  

New technologies have a major impact on today’s business models. This is especially true for Generative AI (GenAI) – a type of artificial intelligence that creates new content – and Agentic AI solutions, business enablers that make decisions and support the process flow with a high degree of automation.

Various studies provide a wide set of use cases on how banks and fintechs may take advantage of GenAI and Agentic AI. Applying a structured analysis across the value chain of a bank, we see in general two dimensions of importance: the operational and the strategic layer.

Simplified high-level value chain of a bank

Based on the type of bank (retail, wealth management, Fintech, etc.), the business model and the pillars it is built on may differ, depending on the strategic focus on services and products offered and the targeted client segments. 

For example, a small cantonal bank’s focus is on Swiss retail customers rather than portfolio management or back-office activities. It might thus use GenAI to increase efficiency in payment transactions.

Strategic focus

| Strategic focus | Where AI can help | Benefits |
| --- | --- | --- |
| Growth | Identifying areas of growth in client lifecycle management | Optimize touchpoints and increase customer satisfaction |
| Efficiency | Providing additional communication channels for customer interaction | Outsource/automate processes with self-service portals; speed up support processes with virtual assistants |
| Risk management | Complying with legal requirements | Avoid irregularities thanks to automated KYC reviews, AML monitoring, and fraud detection |

Growth
Unlocking the potential of data from internal and external sources, GenAI solutions help identify areas of further growth in client lifecycle management (prospecting, acquisition, retention), client interaction (cross-/upselling, behavior analytics, churn prevention, omni-channel analytics), next-gen client segmentation, and product management (product creation, product pricing, discount management).

Benefits

  • All these touchpoints can be optimized across channels with smart GenAI tools, thus boosting customer satisfaction as well as user and employee confidence. The result: increasing demand and a growing business.

 

Efficiency
Besides growth, improved efficiency is key in the finance industry, when it comes to optimizing the cost-income ratio and shareholder value. New technologies provide additional communication channels for customer interaction. 

Benefits

  • New channels allow customers to consume and provide information based on a self-service principle. Certain activities across the value chain are thus outsourced and processes automated.
  • Virtual assistants like co-pilot solutions, concierge services, chat/voicebots, or intelligent document processing tools help speed up low-value and support processes, leading to higher accuracy at lower costs.

 

Risk management
The high risk management and compliance standards defined by government authorities and the regulator have a major impact on banks’ and fintechs’ procedures and processes.

Benefits

  • Using a highly automated solution for regular KYC reviews, AML monitoring, and general anomaly and fraud detection helps avoid irregularities that would otherwise surface during regulatory assessments, and supports compliance with FINMA requirements.

 

Once they have defined their strategic direction, banks and fintechs need to address implementation on an operational level. 

Operational focus

| Operational focus | Where AI can help | Benefits |
| --- | --- | --- |
| External consumer | Optimizing customer information flow | Increase efficiency by combining automated tasks with self-service tools; find specific information easily thanks to a chat-your-data function |
| Internal consumer | Optimizing response time and compliance | Free up employees’ time for value-adding tasks; find existing content and gather information more easily; provide deeper insights to employees in support functions (e.g., KYC) |

External consumer
From an outside-in perspective, the communication channels are the central place where the flow of information is generated and where AI technology directly influences the customer experience. Typical examples include e- and mobile banking solutions, but also support request handling in contact centers by means of chatbots and voicebots, e.g., for customer identification or password resets. 

Benefits

  • For requests with high volume and low complexity, automated workflows combined with self-service tools are best suited to increase efficiency and, as experience shows, in certain cases lead to higher customer satisfaction.
  • In addition, for product scouting and retrieving bank-specific information, a chat-your-data function supports market intelligence, ensuring that selected investment products do not contain any undesired underlyings.

 

Internal consumer
From an internal perspective, providing GenAI solutions for employees facilitates research, documentation and reporting of information. For example, a chat-your-data function, an RM assistant co-pilot, or data verification checks lead to a faster response time and incident resolution. In addition, running automated corporate policy and instant data verification checks will improve banks’ and fintechs’ compliance.

Benefits

  • Experienced employees are more efficient in conducting their routine tasks so that they can invest more time and resources in value-adding activities.
  • Finding existing content and gathering the information needed is faster, and the results are more condensed.
  • Employees in supporting functions such as marketing, HR, or compliance can get deeper insights and/or validate and challenge information with publicly available data (e.g., KYC screening, preparation for customer meetings).

Moving from vision to business

Based on a structured approach that covers the value chain and includes a deep dive into the strategic and operational dimensions, a bank can identify specific use cases. Taking into account the expected added value (business impact) and the feasibility (technical viability) of each use case allows it to prioritize AI-related initiatives.
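The prioritization step described above can be sketched as a simple scoring exercise. The use cases and the 1-5 scores below are hypothetical, and weighting impact and feasibility equally is an illustrative assumption, not a prescribed method:

```python
# Hypothetical use cases with illustrative scores (1-5) for business
# impact and technical feasibility; a real assessment would use the
# bank's own evaluation criteria.
use_cases = [
    {"name": "KYC review automation",    "impact": 5, "feasibility": 3},
    {"name": "Chat-your-data assistant", "impact": 4, "feasibility": 5},
    {"name": "Churn prediction",         "impact": 3, "feasibility": 4},
]

def priority(uc: dict) -> int:
    """Combined score; impact and feasibility weighted equally."""
    return uc["impact"] * uc["feasibility"]

# Order the roadmap from highest to lowest combined score.
roadmap = sorted(use_cases, key=priority, reverse=True)
for uc in roadmap:
    print(f"{uc['name']}: score {priority(uc)}")
```

With these example scores, the chat-your-data assistant would top the roadmap thanks to its high feasibility, even though KYC automation has the larger business impact.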

Once a roadmap aligned to the business vision has been created, make sure to consider some key facts before you start with AI implementation. 

6 key factors to keep in mind when using AI tools

Adding AI tools to the work and activities of a bank or a fintech can significantly improve efficiency and the quality of decision-making. However, there are additional factors to consider in six major areas: compliance, technical and organizational challenges, risk assessment, the appropriate LLM, transparency, and user/employee/client consent.

  1. Compliance


    When using AI tools, it is essential to ensure that they comply with applicable laws and regulations such as data protection (e.g., EU GDPR, Swiss FADP), sector-specific (e.g., in finance or healthcare), and new regulations such as the EU AI Act. These include responsible data handling, transparency, non-discrimination, and accountability. Compliance with regulations not only reduces legal and reputational risks, but also promotes the ethical use of AI and strengthens the trust of customers, partners, and supervisory authorities. Regular audits, documentation, and clear governance structures are key to maintaining compliance throughout the AI lifecycle.

  2. Technical and organizational challenges


    From a technical perspective, a strong infrastructure is needed to deploy AI tools, ensuring that the systems can manage the computational needs – whether they are hosted on-premises or in the cloud.

    From an organizational perspective, an appropriate governance framework should be in place, including accountability for AI-related activities. AI training of employees must be prioritized to close the gap between technical complexity and practical use of technology. In other words: the bank or fintech must be ready to undergo a change in culture. 

  3. Risk assessment


    An in-depth risk assessment is key to successfully implementing AI. It should cover data biases, security weaknesses, and what to do in case of a model error. Banks or fintechs should develop risk remediation tactics, which may include regular audits, red-teaming activities to identify weaknesses, and triggers to recover from catastrophic failures. To assess data protection risks, a bank may need to conduct a Data Protection Impact Assessment (DPIA).

    A DPIA is a tool used by data controllers and processors to identify, assess, and mitigate risks related to the protection of personal data. It is mandatory if the data processing is likely to result in a high risk to the rights and freedoms of natural persons – in particular if the processing may adversely affect their privacy, personality, or fundamental rights (art. 35 GDPR and art. 22 Swiss FADP).

    High risks may result from the use of new technologies such as AI, or from the nature, scope, context, or purpose of the processing. The DPIA allows privacy and data protection to be considered early in the lifecycle of a project, making it easier and more cost-effective to resolve issues before they escalate. 

  4. The appropriate LLM


    When choosing a Large Language Model (LLM), it is essential to ensure that it meets the bank’s or fintech’s expectations. Depending on how the model is to be deployed, aspects such as size, training dataset, fine-tuning, and cost need to be considered. For example, a smaller, fine-tuned LLM may perform better on a precisely defined task, whereas larger general-purpose models are more suitable for tasks requiring flexibility across a range of applications. Other considerations include whether the model follows ethical AI principles and whether methods are in place to prevent its misuse.
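As a rough illustration of these trade-offs, the helper below encodes the rule of thumb from the paragraph above. The criteria and branching are simplified assumptions, not a complete model-selection method:

```python
# Simplified sketch of the LLM-choice heuristic discussed above.
# The three boolean criteria are illustrative assumptions; a real
# evaluation would also weigh training data, fine-tuning effort,
# ethical-AI safeguards, and misuse prevention.

def recommend_model(task_is_narrow: bool, needs_flexibility: bool,
                    budget_sensitive: bool) -> str:
    if task_is_narrow and not needs_flexibility:
        # A smaller, fine-tuned model often performs better on a
        # precisely defined task, at lower inference cost.
        return "small fine-tuned LLM"
    if budget_sensitive:
        # Flexible needs under cost pressure: a mid-size general model.
        return "mid-size general-purpose LLM"
    # Broad, varied applications justify a large general-purpose model.
    return "large general-purpose LLM"

print(recommend_model(task_is_narrow=True, needs_flexibility=False,
                      budget_sensitive=True))
```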

  5. Transparency


    A bank or fintech must ensure transparency at all times so that stakeholders can trust it and hold it accountable for any decisions based on an AI model, especially Agentic AI solutions. This involves making the model explainable and understandable, in particular for high-risk activities like those in healthcare or finance. The model’s construction, limitations, and means for correcting biases should be clearly documented. 

  6. User, employee, and client consent


    Along with transparency, obtaining user consent (externally: client, internally: employee) is mandatory in certain jurisdictions such as Switzerland and the EU, as well as in several specific cases. Banks or fintechs need to obtain consent to process sensitive data while using AI tools, e.g., for profiling and automated decision-making (Agentic AI). 
Adopt a holistic view to make the most of AI
Introducing AI tools is often seen as a simple technology deployment. However, it is a multidimensional endeavor. By considering the six dimensions described above, banks and fintechs have the opportunity to take advantage of AI, while mitigating risk and building trust. This integrated approach allows them to use AI as a vehicle for innovation.

Implementing AI, hands-on

Within the banking industry, there is a strong need for modernizing IT system landscapes, i.e., moving away from a monolithic approach and functionality provided by the core banking system towards a more modular banking approach. Ideally, the core consists of basic banking functions, which are enriched with a system integration layer and with best-of-breed third-party applications. The dilemma here is obvious: customization or standardization?

The same dilemma arises again with AI.

Standardized or customized solution?

Those who opt for a tailored solution instead of standard AI and cloud infrastructure not only retain control over critical data but also increase agility and innovation capability. We show how to successfully implement an individual solution and achieve concrete results.

| Solution | Customized | Standardized |
| --- | --- | --- |
| Advantages | Retain control over data; meet security and compliance requirements; adaptable to changing needs; increased innovation capability; less dependent on SaaS providers; stand out from competition; easy to integrate, incl. GenAI; lower long-term TCO | Ready to use immediately; based on best practices and client experience; easier to use for non-tech users |
| Disadvantages | Initial costs | Procurement process; time needed for major adjustments to interfaces |

Getting started with implementation

AI is widely and intensively discussed. Many banks and fintechs therefore want to get started quickly. However, a hasty start can backfire later: with a fragmented system landscape, redundancies, and high costs. The choice of cloud technology, in particular, can only be corrected later with great effort, as all major providers offer similar but not identical environments and services. It is therefore worthwhile to determine and evaluate the right architecture and technology stack from the beginning.

Once done, proceed step by step:

  1. Clarify whether a cloud strategy already exists
    If so, it should be built upon. If not, it doesn't mean you have to wait. Gaining experience with a proof of concept is an important step. Therefore, initial cloud or AI projects can start, but they should not be in business-critical areas. A good example is an internal digital assistant that answers questions. This way, you can gain initial experience while developing the cloud strategy in parallel.

  2. Choose the right cloud deployment option

    Ask yourself: Which cloud options and services suit our organization? And how do we combine public cloud, private cloud, and on-premises systems into a hybrid system landscape if needed?

    The decisive factors are security requirements and the need for control over the system, data, and the use of GenAI with LLMs. All cloud options can generally be extended with various LLMs. In the public or private cloud, providers directly offer these models. In an on-premises solution, an open-weight model like Meta's Llama or Mistral is usually used. These models can be downloaded, fine-tuned, and run in your own infrastructure. For the mentioned digital assistant that answers internal questions, an on-premises or private cloud approach can be the best choice, especially when dealing with particularly sensitive data.
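The decision logic of this step can be sketched roughly as follows. The sensitivity and control categories, and their mapping to deployment options, are simplifying assumptions for illustration:

```python
# Sketch of the deployment decision described above: the more sensitive
# the data and the higher the need for control, the closer the workload
# stays to your own infrastructure. Categories are assumed simplifications.

def deployment_option(data_sensitivity: str, control_needed: str) -> str:
    """Both arguments take 'low', 'medium', or 'high'."""
    if data_sensitivity == "high" or control_needed == "high":
        # E.g., an internal assistant over particularly sensitive data,
        # running an open-weight model such as Llama or Mistral locally.
        return "on-premises"
    if data_sensitivity == "medium" or control_needed == "medium":
        return "private cloud"
    return "public cloud"

print(deployment_option("high", "medium"))
```

In practice, a hybrid landscape combines all three options per workload rather than picking a single answer for the whole bank.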

  3. Ensure agility, data control, and IP ownership

    Markets and customer needs are constantly changing. Therefore, companies must be flexible and able to act quickly. Those who want to stand out from the competition need more than a standard solution. Using the same tools as everyone else may not be the best match due to a lack of true differentiation.

    A customized cloud solution offers advantages here. It ensures that high enterprise requirements for security and compliance are met. It also allows the solution to adapt to the bank – not the other way around. The cloud solution can also be modularly expanded and flexibly scaled. Those who choose this path design their own roadmap and reduce dependence on SaaS providers who also serve competitors. Another advantage of an individually developed solution in a private cloud is the increased control over your own data. Depending on the chosen cloud variant, this data is not shared with third parties. In an on-premises solution, it even remains entirely within the controllable infrastructure.

  4. Focus on seamless integration 

    Companies usually already rely on various systems, providers, and solutions. Therefore, it is crucial that new applications can be seamlessly integrated into the existing environment. SaaS solutions often reach their limits here. A custom-developed solution can usually be integrated much better, provided the appropriate interfaces are available.

    A cloud solution helps to better utilize existing resources. This applies not only to technical but also to personnel and administrative resources. If a bank already uses a cloud platform, no new contracts with new suppliers are needed. This saves time, as the review and approval of new providers can take several months in an enterprise environment. Internal teams already familiar with the chosen cloud environment can develop their own solutions, significantly reducing time-to-market. And with the support of an external implementation partner, these teams can later take over operations and further development, preferably using a DevOps approach.

  5. Conduct a cost-benefit analysis
    When looking at the required investments, the initial costs of in-house development stand out. These can be around 20 to 40 percent higher in the first year than with a standard solution. However, in the long term, the picture looks different: the Total Cost of Ownership (TCO) is often lower, as there are no ongoing SaaS fees. Instead, only the actually used cloud resources are charged.
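A worked example of this comparison, using hypothetical figures: a 30% higher first-year cost for the custom build, an assumed recurring SaaS fee, and an assumed annual charge for the cloud resources actually used:

```python
# Hypothetical cost figures for illustration only; real numbers depend
# on the solution scope, provider pricing, and usage profile.
saas_year1 = 100_000          # assumed standard-solution cost, year 1
saas_annual_fee = 80_000      # assumed recurring SaaS fee from year 2

custom_year1 = saas_year1 * 1.30   # ~30% higher initial cost
custom_annual_run = 40_000         # assumed cloud resource usage per year

def tco(year1: float, annual: float, years: int) -> float:
    """Total cost of ownership: first-year cost plus recurring costs."""
    return year1 + annual * (years - 1)

for years in (1, 3, 5):
    print(f"{years}y  SaaS: {tco(saas_year1, saas_annual_fee, years):>9,.0f}"
          f"  Custom: {tco(custom_year1, custom_annual_run, years):>9,.0f}")
```

With these assumptions, the custom build costs more in year 1 but overtakes the SaaS option by year 3 and is clearly cheaper over five years.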


  6. Strike a balance between knowledge and dependence
    In a competitive environment, it is crucial to be quick to market. A SaaS solution is often ready to use immediately, giving it a time advantage over a custom solution that first needs to be developed. However, this advantage can be lost if a lengthy procurement process for a new supplier relationship or major adjustments to interfaces and workflows are necessary.


    While a SaaS solution incorporates the experiences of other clients and best practices, a custom solution may not be able to match this breadth of experience. An experienced implementation partner can help by bringing in this perspective and experiences from other client projects.

    Users without a technical background can usually achieve a goal with a SaaS solution without prior knowledge and with little effort. With a custom solution, there remains a need to rely on IT or an external service provider for adjustments. In practice, however, every SaaS also needs its internal experts. This knowledge must also be built up first, which can create internal bottlenecks if these experts are overloaded.

The pace of AI development remains high. Keeping up or staying ahead of competitors will be a key competency for companies. To avoid rushing into the adventure without a plan, a long-term and well-thought-out cloud and AI strategy is needed. One key factor is regulatory compliance.

Below, we provide the most important information on AI regulation in Switzerland and the EU.

Regulating AI in Switzerland and the EU

The European Union and Switzerland pursue the same basic aims when it comes to regulating AI: protecting fundamental rights, ensuring the safety and transparency of AI technologies, and promoting their trustworthy and responsible use. However, while the EU and Switzerland share these objectives, the two regulatory frameworks differ in legal structure, enforcement, and strategic orientation.

Regulating AI in Switzerland compared to the EU

Switzerland is taking a more cautious and innovation-friendly approach than the EU. At this point, there is no specific AI legislation. The Federal Council has decided to rely on existing legislation such as the Swiss FADP, the Information Security Act, product safety law, the Trade Mark Protection Act, the Designs Act, and the Code of Obligations. Switzerland wants to adopt the EU’s risk-based reasoning but be less restrictive and more pro-innovation in its regulations. Currently, the focus is on «soft law», guidance, standards, and industry self-regulation.

Existing institutions (State Secretariat for Economic Affairs/SECO, Federal Office of Communications/OFCOM, etc.) may provide methods of regulating the AI field when appropriate.

For regulatory purposes, EU legislation targets AI systems directly, attempting to create a consistent market framework. Switzerland emphasizes the responsibility of users and the social implications, following a decentralized regulatory approach. Although Switzerland has not created a formal risk classification system, the Federal Council acknowledges that this model is useful and is likely to create a similar system in the future.

The greatest difference between the EU and Swiss legislation is in enforcement and penalties. The EU has designated supervisory authorities, mandated conformity assessments, and severe sanctions to ensure compliance. Switzerland has not set up any new enforcement mechanisms, or specifically created any sanction mechanisms related to AI. It will largely continue to rely on existing legal remedies and oversight structures.

Swiss FADP

 

The Federal Data Protection and Information Commissioner (FDPIC) emphasizes that current data protection legislation is directly applicable to AI. The FADP requires manufacturers and providers of AI systems to make the purpose, functionality and data sources of AI-based processing transparent when developing new technologies and planning their use, and to ensure digital self-determination. Users have a legal right to know whether they are speaking or corresponding with a machine and whether the data they have entered is being processed. In addition, they have the right to contest automated decisions.

The use of AI tools involving high risks is only permitted if appropriate measures are taken to protect the data subjects, and it requires a DPIA. Applications that fundamentally breach users’ right to privacy – e.g., social scoring or comprehensive facial recognition – are prohibited under the FADP.

FINMA guidance 08/24

The Swiss Financial Market Supervisory Authority FINMA has published guidance on the use of AI in financial institutions (FINMA Guidance 08/24). It approaches the topic from a risk-based perspective, where it is essential to identify, limit, and control the risks associated with the use of AI.

FINMA noted that institutions often focus on data protection but neglect AI-specific risks such as bias, lack of robustness, and explainability, particularly due to decentralized development and unclear responsibilities. It emphasized the need for strong AI governance that includes risk-based inventories, clear accountability, thorough testing, and increased oversight of outsourced solutions.

EU AI Act

In August 2024, the EU’s comprehensive, legally binding AI Act came into force. As a regulation, it does not need to be transposed into national law; it takes effect gradually, with the majority of rules applying from 2026. The regulation follows a risk-based approach, categorizing AI systems into four levels of risk: prohibited, high risk, limited risk, and minimal risk. Each risk category carries different obligations, from outright bans to transparency requirements. High-risk systems, such as those used in healthcare, human resources, the judiciary, or critical infrastructure, face robust requirements for data quality, documentation, human oversight, and monitoring.

Breaches of the AI Act can incur heavy penalties, namely fines of up to 35 million euros or 7% of total global annual turnover, whichever is higher. One further consideration deserves mention: the AI Act has extraterritorial effect, meaning that it applies to providers from outside the EU as long as their systems are used or marketed within the EU.
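For the most serious breaches, the maximum fine is the higher of the two amounts. A minimal sketch of that rule (the turnover figures below are hypothetical examples):

```python
# Maximum fine for the most serious AI Act breaches: the higher of
# EUR 35 million and 7% of total global annual turnover.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(max_fine_eur(200_000_000))    # smaller firm: the flat cap applies
print(max_fine_eur(1_000_000_000))  # larger firm: 7% of turnover applies
```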

EU Data Act

In the EU, the financial sector, and especially its use of client identifying data (CID), is further regulated by the EU Data Act, which came into force on 11 January 2024.

The Data Act is a law designed to enhance the EU’s data economy and foster a competitive data market by making data (in particular industrial data) more accessible and usable, encouraging data-driven innovation and increasing data availability. To achieve this, the Data Act ensures fairness in the allocation of the value of data among the actors in the data economy. It clarifies who can use what data and under which conditions. 

In addition to technology, trust is crucial

AI offers banks and fintechs enormous potential – whether as a business enabler, at a strategic or operational level. A systematic approach is essential to make the best possible use of it: Define goals, include key factors in the use of AI tools, decide between standard and customized solutions and choose the right infrastructure, be it a cloud, on-premises or hybrid setup.

One particularly important factor on the path to greater efficiency or more growth thanks to AI is compliance with regulatory requirements. This allows banks and fintechs to avoid reputational damage or financial losses, for example as a result of fines. Above all, it helps them create a solid foundation of trust in order to successfully service their customers in the age of AI.

Please note:
The information provided in this blog post does not constitute legal advice.
