Adnovum Blog

How to empower your employees with AI

Written by Dávid Balakirev | Apr 11, 2024 2:07:39 PM

Adnovum sees itself as a strong advocate of the accelerating development of AI. From providing conversational AI solutions to our clients and applying AI tools to our own in-house processes, to examining the impact of AI on business and society, we have gathered substantial experience in the practical application of this exciting new technology. Dávid Balakirev, Regional CTO of Adnovum Hungary, recently spoke with Prof. Dr. Clemente Minonne of the Lucerne University of Applied Sciences and Arts about AI-related topics that tend to get overlooked: the real-world implications and challenges of rolling out AI in a company and its psychological repercussions on staff. With this interview as a departure point, Dávid gives us a comprehensive examination of how to create an environment in which employees accept and make secure use of AI’s potential.

The role of AI in the psychological empowerment of employees

What are psychological factors affecting employees? 

Generally speaking, the major psychological factors of importance that affect personnel can be divided into four categories:

Employees …

… want to feel that what they do has an impact.

… want to feel competent, to be an expert within their field.

… strive for a feeling of self-determination.

… want to do meaningful work and thus gain a sense of purpose and value. 

This classification is inspired by the categories Daniel Pink outlined in his 2009 book Drive: The Surprising Truth About What Motivates Us, these being autonomy, mastery, and purpose. These factors play a vital role in increasing employee satisfaction and retention, as well as enhancing employees’ motivation to leave their comfort zone, raise their aspirations, and achieve greater performance.

How AI can affect psychological factors at work

The ways generative AI can positively influence employees in all four of these categories are evident. Increased productivity results in a greater contribution to a project, which means this contribution figures more prominently in the final product. Expanded capabilities, improved working speed, and a satisfying workflow greatly strengthen their confidence in themselves as experts and achievers in their field. Assistance provided by a copilot and other AI programming aids makes them less reliant on superiors or colleagues; all these factors combined give them a greater sense of value and purpose.

But there are also other aspects at play. Junior staff may be unsure of the software code or other work they created and hesitate to present it for fear of appearing incompetent. The AI can inspect such code, check its viability, and offer pointers for ironing out any flaws. It can basically act as a non-judgmental reviewer who is available whenever needed, even on a Friday evening when no one else is around.

This can be of particular help to young programmers entering the workforce for the first time. Assistance through AI could strengthen their self-confidence and be a source of comfort in dealing with certain psychological issues, such as anxiety or imposter syndrome.

Why introduce generative AI in your company?

  • Productivity 
    Possibly the most heralded potential of AI is its boost to productivity. AI has the potential to improve efficiency and productivity in every unit and in every step of every process within an organization. Methodologies, the way work is organized, chains of command, project management, procedures, and processes all stand to gain from AI enhancements. Achieving more in a shorter time with the available capacities is the big hope of AI, and the first results are very promising. The use cases in software engineering alone speak for themselves.  
  • Job enrichment 
    Directing our attention to employees, AI can promote job enrichment. Employees want fulfillment in what they do; they want to be part of something that has meaning. In our business, this most often boils down to creating software and solutions that people enjoy using to solve real-world problems or that simply make users’ lives a little bit easier. Both tech and non-tech staff can use AI assistance to increase their involvement in a project, focus more on the rewarding facets of their profession, and go home at the end of the day with a sense of accomplishment and professional satisfaction.
  • Empowerment 
    AI can help employees achieve this greater sense of purpose through its great potential for empowerment. With the help of AI tools, employees can increase their output significantly by delegating repetitive and tedious, but nonetheless time-consuming tasks to automated software. Getting immediate feedback on their ideas and concepts lets them show greater initiative and participate more in the ideation process.  

    Furthermore, AI opens up a completely new avenue for accessibility. Users who struggle with monitor screens and conventional input devices can operate computers through AI interfaces. The same applies to non-tech workers with little to no IT knowledge: they can potentially become «citizen developers» with AI assistance and thus create applications for their specific purposes or optimize software they already use.
  • Education 
    AI not only enables employees to engage in more tasks, but also a greater variety of tasks. A seasoned software programmer with a long record of successful projects may one day feel a lack of challenge in their everyday work and want to try something new, for example, UX design. In such a scenario, generative AI can serve as an assistant that introduces them to the basics and stands by their side while they create their first computer screen mock-ups and rudimentary mobile applications. 

Adnovum sees great potential in all of this. Looking beyond the obvious productivity boost, we also recognize the great contribution AI can make to employee satisfaction, a crucial aspect in recruiting and retaining talent in today’s competitive labor market. And specialists dipping their toes into new fields can provide new perspectives on existing projects and procedures, leading to fresh ideas and innovative approaches. 

«Despite all managerial and business benefits: Our first and foremost aim is to help our employees unlock their true potential.»

Karin Bühler
CPO, Adnovum AG

 

 

Who should be in charge of introducing AI?

For companies aiming to harness the potential of AI, a big question is who should be responsible for rolling out AI-related tools and policies. Generally, there are two basic models for this: top-down and bottom-up.

Top-down 

CTO

In theory, the top-down approach emphasizes an overarching and abstract examination of AI opportunities and their general usefulness for the company. As practiced by Adnovum, the top-down approach consists of the CTO and his team being at the forefront of identifying and examining industry trends. This analysis is converted into a company-wide strategy that defines the approach the company wants to follow and the goals it intends to achieve. This is communicated to the entire company with the aim of getting everyone on board by explaining the vision and convincing staff of the viability and sound footing of the approach.

And everyone in this context literally means everyone. Adnovum is working on a program that does not distinguish between tech and non-tech staff. All employees in our organization are equipped with appropriate AI tools, official company guidelines for their use, and a number of tasks and sample projects that help them gain a solid fundamental knowledge of and practice in using AI. This guided introduction aims to help personnel recognize the concept and potential of AI, but also its risks.

Market Units

Within Adnovum, the market units are a major factor in shaping the company strategy for AI. They have long established contacts with and profound insight into the industries they focus on, be it banking, insurance, logistics, or public service. This allows them not only to provide valuable feedback regarding the suitability of the company approach for these industries, but also to relay industry-specific trends in the use of AI to the CTO team for consideration in their strategic planning.

Bottom-up

Early adopters and people «in the trenches»

The other approach is to, knowingly or unknowingly, leave staff to their own devices. They can tinker with AI tools in the context of their specific tasks and assess their performance and value in everyday use. Of special note are the early adopters. These particularly eager employees will always start using new technology, either in their work or in their own time, as soon as they can get their hands on it. In general, such enthusiasts can act as «champions» within the company: major proponents who encourage others to engage with the new tools and serve as a contact point for questions and concerns.

In practice, however, the bottom-up strategy is basically equivalent to the phenomenon of «shadow adoption». Without the guidance or maybe even knowledge of executive management, this approach may easily evolve into a major headache for the entire company. Early adopters can be overenthusiastic in their embrace of new technologies and disregard the associated pitfalls. And employees who aren’t provided enterprise-ready tools will most likely make use of free-to-use offerings available online, which is exceedingly problematic in terms of data security. 

Hybrid approach

As implied in the descriptions above, Adnovum follows a hybrid approach. The CTO with his team, as well as the Market Units, are responsible for rolling out AI solutions and, most importantly, defining, updating, and enforcing a regulatory framework. And all the risks of the bottom-up strategy notwithstanding, we clearly recognize and welcome the valuable input departments, project teams, and every individual employee contribute with their continuous feedback and suggestions.

In sum, our explorations into the AI field are by no means bound by hierarchical structures. Management must ensure adherence to the guidelines, but individual Market Units, departments, project teams, and employees are encouraged to initiate and pursue their own pilot projects, proofs of concept, and experiments. This more “democratic” approach generates a wealth of perspectives on the AI matter and helps us detect more use cases for our offering and internal processes. And above all, it prevents a rigid tunnel vision that overlooks promising applications of AI and fails to arouse enthusiasm and gain acceptance among employees.

Best practices for rolling out AI in your company

How to set up AI governance 

Given the number of pitfalls associated with any new and evolving field of technology, and particularly with AI, it is vitally important that the leadership team define a common understanding: a roadmap that specifies the rollout of AI. As this is an intricate matter with many unknowns, a generic guideline won’t suffice. Complex questions must be answered and important issues clarified to ensure that employees are comfortable using AI tools and can leverage them to the greatest advantage. These issues include but are not limited to:

  • Selection and constant re-evaluation of AI services 
    The leadership team should define which AI services are to be used and make them available in appropriate form to their employees. But this procedure shouldn’t be a one-and-done process; the selected AI services must undergo continuous re-evaluation, facilitated by feedback loops between executive management and rank-and-file staff. These considerations must be made against the backdrop of changing legal regulations and potential advantages of evolving competitor services.
  • Keeping an eye on third-party providers
    This also includes keeping a close eye on the third parties that provide the AI models you are using. These companies may not be forthcoming about the inner workings of their software and may lack transparency regarding the origin of their training data and their general processes. And the terms and conditions of these providers can change at short notice.

    While these circumstances already pose a challenge regarding compliance and accountability, any adverse event on the part of a third-party provider, say a security breach or a proven copyright infringement, will most probably have further ramifications on processes or products that employ their services. Robust countermeasures must be in place to avert or at least alleviate any consequences to your business.
  • AI model maintenance
    Even AI models that have proven satisfactory can eventually see a decline in performance, generally referred to as model drift or model decay. This can negatively impact work results and undermine employees’ trust in the technology. Accordingly, AI models should be continuously monitored and tested against an established procedure for any decline in output quality (a minimal monitoring sketch follows this list). Models can furthermore be updated and refreshed regularly with new data to maintain the beneficial effects of AI on your processes.
  • Protecting sensitive data
    The threat to sensitive data has been one of the most prominent concerns of Adnovum employees vis-à-vis AI technologies, as evidenced by an internal survey (see Potential challenges of introducing AI). And not without reason: like Adnovum, many companies collect and process sensitive data during their daily operations, including information concerning clients and trade secrets. Any careless handling of such data could seriously hurt the reputation of the entire company. Furthermore, with Switzerland and the EU already having comprehensive regulation in place with the FADP and GDPR, respectively, substantial legal consequences are a distinct possibility. Businesses must establish and tightly enforce data protection standards that specifically cover the new threats to data privacy posed by AI technology (a simple redaction sketch also follows this list).
  • Strengthening your employees’ abilities and awareness
    If you’re aiming to empower your employees, it is critical to ensure they receive thorough instruction on how to employ AI both effectively and responsibly. Executive management must provide access to the tools and train the employees sufficiently in their use. This can occur through small pilot projects or workshops during which employees can practice their skills and become acquainted with the tools. Furthermore, and probably more importantly, leadership must clearly establish the company AI guidelines in order to raise employees’ awareness of the risks attached to AI and promote best practices. This requires clear communication and regular feedback from the users. 
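To make the model-maintenance point above a little more tangible, here is a minimal sketch of a recurring output-quality check. It is an illustration only, not a description of Adnovum’s actual tooling: the golden prompts, the keyword-based metric, the threshold, and the `ask_model` callable are assumptions standing in for whatever evaluation procedure and approved AI service a company uses.

```python
# Minimal sketch of a scheduled output-quality check against a fixed "golden set".
# `ask_model` stands in for the API of whichever approved AI service is in use;
# prompts, metric, and threshold are illustrative assumptions.
from typing import Callable, List, Tuple

# Hand-curated prompts, each with a keyword the answer is expected to contain.
GOLDEN_SET: List[Tuple[str, str]] = [
    ("Which HTTP status code means 'Not Found'?", "404"),
    ("Which Python keyword defines a function?", "def"),
    ("What does GDPR stand for?", "General Data Protection Regulation"),
]

QUALITY_THRESHOLD = 0.8  # alert if fewer than 80% of the checks pass


def quality_score(ask_model: Callable[[str], str]) -> float:
    """Return the fraction of golden prompts whose answer contains the expected keyword."""
    passed = sum(
        1 for prompt, expected in GOLDEN_SET
        if expected.lower() in ask_model(prompt).lower()
    )
    return passed / len(GOLDEN_SET)


def check_for_drift(ask_model: Callable[[str], str]) -> None:
    """Print a warning if output quality has dropped below the agreed threshold."""
    score = quality_score(ask_model)
    if score < QUALITY_THRESHOLD:
        # In practice this would raise an alert in the company's monitoring system.
        print(f"WARNING: output quality dropped to {score:.0%} – review the model")
    else:
        print(f"Output quality OK ({score:.0%})")
```

Run on a regular schedule, even a check this simple gives an early signal of model drift before it starts to undermine employees’ trust in the tooling.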
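And as a small illustration of the data-protection point: one simple safeguard is to mask obvious identifiers before a prompt ever leaves the company network. The regex patterns below are deliberately basic and purely illustrative; a real setup would rely on vetted anonymization or data-loss-prevention tooling and, above all, on the governance rules themselves.

```python
# Minimal sketch: mask obvious personal identifiers before sending text to an
# external AI service. The patterns are illustrative and by no means exhaustive.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s/().-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def redact(text: str) -> str:
    """Replace every match of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


prompt = "Summarize the complaint from anna.muster@example.com, reachable at +41 44 123 45 67."
print(redact(prompt))
# -> "Summarize the complaint from [EMAIL REDACTED], reachable at [PHONE REDACTED]."
```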

How to communicate the usage of AI to employees (and keep up with rollouts)

Although the accomplishments in AI undoubtedly are among the most momentous events in recent decades, the process of introducing AI in your company doesn’t need to be as revolutionary as the technology itself. In fact, Adnovum has a proven procedure in place for rolling out any new major tool, platform, or innovation, which furthermore serves as an example of a hybrid variant of the top-down and bottom-up strategies described above. The general idea is a multi-channel approach that allows for clear communication from the leadership to set the foundation on the one hand, and enables the participation and continuous input of the staff on the other.

The first step in this process is a series of webinars, a practical way to reach all employees. Within these presentations, executive management should address the following issues, preferably in dedicated sessions where more detail is required:

  • Vision 
    Get the workforce on board by explaining the company strategy, the goals you intend to achieve and why, and the business value you aim to accomplish.
  • Roadmap 
    Establish a clear procedure with which AI will be rolled out in your company, including when the tools will be made accessible, the schedule for trainings and pilot projects, and the targeted milestones.  
  • Instructions 
    Provide preliminary guidance on how your employees can access the AI tools and elementary instructions on how to use them.
  • Governance
    Clearly lay out the governance guidelines put in place to avoid any detrimental outcomes regarding data security, ethical implications, and property right infringements.

The webinars lay the groundwork for using AI tools, which should be complemented by further education that covers both theoretical and practical ground. This can take the form of internal workshops that present instructions for specific use cases, accompanied by smaller exercises or sample projects to give staff practice with AI tools. External training offerings, potentially including certification, are also an option.

While the webinars and workshops provide ample opportunity for staff to ask questions and clarify open matters, the ongoing communication within the company at large occurs on intranet forums. These fulfill a number of purposes:

  • Employees can direct further questions to the leadership team or AI champions and request more detailed information.
  • Employees can provide feedback on the selected tools or the viability of the strategy or roadmap, as well as point out new risks that should be covered in the governance guidelines. 
  • Subject matter experts can discuss with their peers how the tools are optimally deployed in their specific field.
  • Project leaders can compare notes on how the use of AI is best supervised within their teams.
  • All employees can use the forums as a resource section and benefit from the information continuously gathered in this evolving knowledge base.

This last point is particularly important, as communication in this regard will never be done. One way of keeping staff in the loop is regular intranet news posts. These inform personnel of, for example, updates and other maintenance to the AI models, new functionalities, and actions to be taken by staff. Larger rollouts and major adjustments to the governance guidelines, however, should again be conveyed through mandatory and company-wide webinars.

«We are at the very beginning of AI’s development. Many more disruptions are undoubtedly coming and will require continuous adoption.»

Beat Fluri
CTO, Adnovum AG

 

 

Empowering software engineers with AI

Use cases of AI in software development

Software development is one of the first areas that come to mind for applying generative AI, which isn’t too surprising. It is the field the technology emerged from, and it lends itself very well to the strengths of AI in producing code. From creating, refactoring, translating, and reviewing code to generating documentation and fixing bugs, the potential applications of AI in software development are plentiful. For more details on AI in software engineering, take a look at our deep dive into the topic on our blog.
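To make this a little more concrete, here is a minimal sketch of the «non-judgmental reviewer» idea mentioned earlier: sending a small function to a large language model and asking for review comments. It assumes the OpenAI Python SDK (version 1 or later) with an API key in the environment; the model name and prompts are illustrative, and in an enterprise setting the call would of course go through an approved, governed service.

```python
# Minimal sketch of AI-assisted code review via the OpenAI Python SDK (v1+).
# Assumes the OPENAI_API_KEY environment variable is set; model name and prompts
# are illustrative only.
from openai import OpenAI

client = OpenAI()

CODE_UNDER_REVIEW = '''
def average(numbers):
    return sum(numbers) / len(numbers)
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whichever model your company has approved
    messages=[
        {
            "role": "system",
            "content": "You are a constructive code reviewer. Point out bugs, "
                       "edge cases, and style issues, and suggest concrete fixes.",
        },
        {"role": "user", "content": f"Please review this Python function:\n{CODE_UNDER_REVIEW}"},
    ],
)

print(response.choices[0].message.content)  # should, for example, flag the empty-list case
```

The same pattern extends to generating documentation, suggesting refactorings, or drafting unit tests – always with a human making the final call.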

Benefits to software engineers using AI: Adnovum’s experience

In a recent (non-representative) survey among our staff, we asked our employees what benefits they have experienced so far from using generative AI: 

As could be anticipated, increased productivity took the number one position. Software developers and coders make up a large share of our company, and it is in this field where generative AI really shines. The numerous ways in which AI can assist developers have been mentioned above, and also presented in great detail in several of our blog posts.

But enhanced creativity tying for the top spot was less expected. Of course, hopes were high that delegating mundane tasks to AI would free developers’ hands for more interesting work, but that this would translate into a boost in creativity so quickly and so strongly comes as something of a surprise. This may be because generative AI, apart from giving room for creativity, can help gather ideas from which users can take inspiration or even individual elements to create something of their own.

This last aspect most probably also plays a role in improved decision-making, which came in a distant third with a nonetheless notable 17 mentions. Generative AI can help identify available options and excels at analyzing large sets of data. This paves the way for a particularly organized and comprehensive approach in which decision makers can diligently weigh up the pros and cons of each individual course of action.

Looking at the lower end of the spectrum, we see job enrichment with 11 mentions. But this doesn’t have to be bad news for companies seeking to improve employee satisfaction. We are still at the beginning of the AI journey, and caution is highly advised. Accordingly, our employees are proceeding carefully with their exploration of AI: they predominantly remain within their own fields of expertise and closely observe the input and output data to ensure the results are accurate and serviceable.

The feeling of job enrichment will most likely rise as employees become more capable and more comfortable in using AI and assessing its risks. But for now, they are following clear guidelines in implementing AI to ensure its use is secure and there is no ill effect on business operations. 

Dos and don’ts of using AI for software development (and other purposes)

Dos

  • Develop guidelines for the use of AI
  • Protect sensitive data
  • Start using generative AI on minor, non-business-critical projects
  • Use AI in fields in which you have a substantial degree of expertise
  • Keep a keen professional eye out for biases, problematic content, and hallucinations
  • Review AI-generated materials for copyrighted source material and bugs (potentially placed by malicious actors)
  • Disclose to colleagues, superiors, and clients when generative AI has been (or is planned to be) used in producing code or other work results
  • Solicit user feedback to spot problems and improve working techniques

Don'ts

  • Overestimate the capabilities of AI; humans should remain firmly in the driver’s seat
  • Rely exclusively on AI
  • Neglect other tools and aids for the task at hand

Experimenting with generative AI at Adnovum

At the fundamental level, we continuously examine the enterprise-ready solutions currently on the market, such as Microsoft Copilot, GitHub Copilot, and Azure OpenAI Service. Of primary concern are the assurances these providers offer regarding legal protection, the quality of training data, and data security, among other aspects. Furthermore, we experiment with these services to determine which is most suitable for our purposes and offers the most added value.

In software engineering
This predominantly occurs in the field of software development. Within test use cases and matters concerning our internal processes, we examine the performance of each service and how it fits in with our operations and tasks. These tests were and are being conducted within a safe environment, in non-business-critical, non-sensitive projects, to prevent any violation of our security protocols.

Assistance in administrative tasks
Our evaluations aren’t limited to software development. Non-technical staff have also been testing AI tools in their specific areas. Initially, this often had a more playful character, such as generating images or texts for news or social posts on our intranet. The experience gained from these applications was soon transferred to more business-relevant matters. For example, staff are experimenting with generative AI integrated in conventional office applications to visualize specific problems or procedures, create drafts for texts and presentations, analyze datasheets, and summarize internal correspondence and communications. And all this in a fraction of the time compared to previous processes.

Improving and expanding the portfolio
And finally, we have been undertaking initiatives to examine how AI can enrich our products and services or help create new offerings. In an ideal scenario, this leads to market-ready solutions, as has been the case with our conversational AI products. But even if an attempt is not crowned by prompt success, we gain a lot of experience and insights during these efforts. This deepens our understanding of the potential and opportunities offered by AI and lays a sound foundation for our future endeavors toward new use cases and applications.

Potential challenges of introducing AI

With all the potential and promise that AI and its various use cases hold, this new frontier is not without risks. The entire process of adopting AI is a delicate balancing act. You don’t want to be a late adopter; experts in their field with a keen interest in new developments will most likely feel the urge to be at the forefront of exploring, experiencing, and trying out new opportunities as soon as they become available. On the other hand, you don’t want to risk tarnishing the reputation of your organization or even cause harm to your clients by falling into the pitfalls that inherently accompany new technologies.

The potential roadblocks that could cause a company to stumble on its path towards AI adoption can be divided into two categories: external and internal challenges.

External challenges

  • Regulation of AI 
    Official regulations, such as the AI Act in the EU or similar considerations in Switzerland, are struggling to keep up with the constantly evolving situation. This means that regulators could eventually impose restrictions on use cases or AI tools that have already become integrated and appreciated elements of working processes, necessitating a major retooling of established procedures. Furthermore, companies like Adnovum that are active internationally must be aware that legal regulations can vary greatly between countries. Executive management and legal teams must cooperate closely to stay informed of new and upcoming legislation in every country they are active in.
  • Client acceptance
    Clients may be reluctant to accept services and solutions that have been created with AI assistance, be it due to the sensitivity of the project or copyright concerns regarding the code. Such reservations can be alleviated to some extent by pointing out the improved productivity of AI-driven workflows or the legal protection that some AI service providers, such as Microsoft, offer. But ultimately, acceptance of AI depends on the client’s openness to new technologies and the results of their risk-benefit calculation.
  • Technological issues
    As a new technology, AI is not without its imperfections: subpar or protected training data can lead to biases or copyright issues in the final product; the black-box nature of the solutions sometimes creates mysterious results, most famously the blatantly inaccurate outputs known as hallucinations. While corporate providers are making great efforts to constantly improve their services and the underlying data, user vigilance will remain a necessity for the time being. Users must always apply their expertise, experience, and common sense when employing AI tools.  

Internal challenges

  • Hesitant leadership and shadow adoption
    Leadership teams that are hesitant about adopting a new technology may someday find out that AI is already being employed in projects without any sort of guidance for secure implementation. It is therefore of utmost importance that executive management stay abreast of technological developments and define and communicate company policies as soon as possible, to ensure that new technologies are applied within reasonable boundaries.

    Furthermore, it is important to take the «shadow» out of «shadow adoption». As described above, early adopters can be valuable contributors to a successful AI rollout in a company. Yet any reckless AI experimentation without the approval or even knowledge of the leadership could harm the company significantly. These early adopters must commit to absolute openness and transparency regarding their activities, and furthermore adhere to the company guidelines.
  • Staff concerns
    In our company survey, we asked our staff to rank the potential implications of AI implementation by how much they concern them.


The results reveal a sober and realistic perspective among our employees. It is indeed easy to make the case that data security and AI-generated hallucinations represent, at least in the short term, the chief obstacles to secure and beneficial AI adoption. Not only are they a source of risk regarding both safe application and quality of output, they also require extra effort to ensure satisfactory results. Several respondents to the survey pointed out that time savings through AI can be minimal if users need to take additional precautions for data security, prompt the AI, and keep an alert eye out for hallucinations.

While the first two spots reflect the most immediate concerns regarding AI, the lower ranks mirror the more long-term potential risks. Seeing the issue of job security in last place is indeed a relief, but this doesn’t mean it can be ignored entirely. There is a distinct chance that this concern will become more prevalent as AI gains sophistication and reliability. This goes in line with another impression respondents have expressed in the survey:

It is striking that feelings of isolation in the workplace have already emerged, even if they are reported only by a minority. Like the worries over job security, this phenomenon may very well spread as AI’s evolution proceeds. If employees increasingly use AI as a first point of contact, combined with the continued use of home office, the workplace could eventually see fewer interpersonal relationships and consequently become less enjoyable for employees. For companies seeking not only to boost productivity but also to enrich the working experience of their staff, this could be a risky development.

  • Employee acceptance
    Your staff isn’t a homogenous group of people who all think alike. According to our first observations, the largest discrepancy in acceptance arises across seniority levels. Younger co-workers generally are more enthusiastic about the opportunities of AI, which isn’t too surprising: the factors of empowerment and self-determination are arguably felt most strongly among such employees.

    Seasoned experts, on the other hand, tend to be more skeptical and more vocal with their criticism. It is easy, and maybe not entirely unfounded, to wave off these concerns as a fear of competition from younger (and less expensive) programmers empowered by AI. A more convincing explanation is that their experience of countless projects has heightened their awareness that typically human strengths – including ingenuity and creativity, as well as problem-solving and improvisation skills – are indispensable for both innovative solutions and successful project management.

    Overreliance on AI tools poses a two-fold risk in both contexts. First, consistently falling back on the capabilities of AI in code generation could very well lead to cookie-cutter end results and, in the long run, stunt the professional growth of budding programmers. And second, while generative AI provides helpful input for the purely technical aspects of coding, it still has little to offer when it comes to embedding the software development process within the frameworks of project management and business goals.

    For the practical implementation of software development within the real-world business environment, the human capabilities mentioned above will remain the most valuable assets. Company leadership must leave no doubt about its appreciation of experience and human abilities and foster the sharing of knowledge, expertise, and know-how from senior to junior employees.

«I have always benefitted from the assistance of senior colleagues and superiors. They can impart knowledge, wisdom, and the tricks of the trade that come only with years of experience.»

Dávid Csákvári
Principal Software Engineer, Adnovum AG

 

 

Our vision for AI

We firmly believe that generative AI will bring substantial changes to businesses in the coming years. The astonishing capabilities, the numerous use cases, and the many areas of application for AI all have tremendous potential. It is our stated goal to bring benefits and improvements to virtually all levels of our company:

  • Organizational level
    AI provides a boost to productivity, streamlines processes, and optimizes operations.
  • Personnel level
    AI shows great potential for giving staff an improved work experience: lending their work impact, enhancing their abilities and sense of self-determination, and inspiring a feeling of purpose and value.
  • Offering level
    AI helps enhance existing products and services, as well as create new innovative solutions.
In more specific terms, here is a small selection of the short-term AI targets we are aiming for:
  • Copiloting is key 
    We will give all employees an AI copilot tailored to their duties that relieves them of time-consuming and cumbersome routine tasks.
  • Pair programming 
    We will provide all developers with an AI sidekick that shields their workflow from disruptive side tasks, reviews code in real time, and much more.
  • Multi-agent software engineering 
    We will explore the possibilities of combining different LLMs, for example to increase the quality and quantity of AI output.
  • Expanded offering 
    We will build upon our well-received offering in conversational AI with improved solutions and innovative products in other branches of AI.
  • Looking beyond generative AI 
    We will explore the opportunities offered by other forms of AI, such as causal AI, predictive AI, retrieval techniques, and clustering. Our strategic partnership with Squirro is an ideal basis for this.

But we are very aware of the dangers involved in this new technology, and we must walk down this path carefully. We believe a risk-conscious approach here serves a dual purpose: for one, we must prevent any threat to our company, our employees, our solutions, and our clients. Furthermore, we want to provide added value to our clients by making the potential of AI available to them without the associated risks. The three cornerstones for this are:

  • Ethical implementation
    Avoiding biases and copyright infringements, as well as addressing employee concerns and empowering them to take part in this exciting new future.
  • Data security
    Making the protection of both internal and external sensitive data the highest priority.
  • Accurate output
    Applying our proven expertise and decades of experience to ensure the best possible results.

Final thoughts

Throughout its more than 30 years of history, Adnovum has seen many major developments in IT, but few of them can rival the paradigm shift AI has in store for the industry and society at large. Like the internet, which evolved from a platform mostly used by tech aficionados into the dominant means of social interaction today, AI is transcending the technological milieu and starting to permeate our personal and professional lives. And this will come with dramatic changes.

AI will not replace employees, but employees using AI may replace those who don’t. Companies must therefore do their best to aid their personnel in welcoming, adopting, and embracing the new possibilities of AI: create a regulatory framework that ensures secure use, provide staff with the appropriate tools and opportunities to acquire AI skills, and be open to hearing and addressing the concerns and feedback of employees. And most importantly, promote the appreciation of human talent and abilities.   

AI may make us painfully conscious of our shortcomings in, for example, the analysis of big data or speedy generation of texts and images. But one must keep in mind the shortcomings of AI regarding creativity, ingenuity, resourcefulness, and common sense. After all, AI can only tread down the paths that millions of people have laid out for it. So, maybe the greatest potential of AI isn’t its boost to productivity or profitability, but the ability to give us all more room to celebrate our uniquely human qualities.

 

This blog post was inspired by an interview between Dávid Balakirev, Regional CTO of Adnovum Hungary, and Prof. Dr. Clemente Minonne of the Lucerne University of Applied Sciences and Arts.