Artificial intelligence (AI) is gradually changing our private and working lives. At the latest since ChatGPT, generative AI has been on everyone's lips, and many people are sharing their experiences with human-machine interaction. But what does this mean for companies in concrete terms? Are employees perhaps already using the diverse possibilities of AI-supported applications, ranging from scheduling to text and image generation to research, to fulfil their work tasks? Are managers aware of this use? To what extent are such tools already in use? What data is fed into them? Does the company's own data flow into the training of external systems? Does this create disadvantages and legal risks for the company? How can these be minimised? And how can AI also be used to best advantage?
The use of AI tools can optimise work processes and improve their results. The efficiency gains leave more time for innovative ideas and solutions. However, unauthorised use can give rise to liability risks, among other things under data protection, image rights and copyright law. Even if certainly not all companies are using AI yet, the topic is widely discussed and, with a view to competitiveness, under constant observation.
It is therefore advisable to address an AI policy at management level in order to seize opportunities for shaping the use of the technology and to minimise legal risks.
What is an AI policy?
With an AI policy, the company sets the framework for the use of the technology and thereby opens up opportunities. The policy serves as a management tool to guide the use of AI in innovative and safe ways. By setting limits at the same time, it also serves to minimise risks. In an AI policy, the management thus defines how AI is to be handled in the company. This creates clarity internally and a clear positioning externally.
So why should companies invest in an AI policy? What are the objectives?
- Responsible business behaviour: The use of AI puts the ethical aspects of corporate behaviour under a magnifying glass. Because digital products scale so widely, for example, unbalanced data sets make social, environmental and ethical consequences visible more quickly and more clearly. Existing human prejudices (so-called bias), discrimination and a lack of data security are amplified and can cause greater damage.
- Compliance & legal protection: The use of AI tools also involves legal aspects that need to be taken into account, and these depend in part on the type of company. In any case, there are many different regulations to observe, for example with regard to data protection and copyright. Planned EU legislation such as the AI Act, which will affect a wide range of companies, also comes into play here.
- Risk management: An AI policy is also an effective tool for minimising business risks. If employees use AI without the knowledge or consent of management, as described above, this can result in economic damage, including possible liability claims. If the issue is not adequately addressed in advance, this can lead to unethical results that damage the company's reputation and cause it to fall behind in the market. Other damaging events, such as a leak of information and the resulting loss of business secrets, cannot be ruled out either.
- Transparency & trust: Conversely, a company can create additional transparency and trust in the market by showing that it is engaging constructively with such future technologies. A company's core values can thus be carried over from the analogue into the digital world.
- Innovation & competitiveness: In addition, a clear AI policy can define and promote the use of AI for innovative purposes. The time freed up by efficiency gains can thus be used for innovative solutions that ensure the company's competitiveness in the future.
What should the AI policy therefore contain? Which aspects are particularly important?
Firstly, the company should decide on the general regulatory approach to AI. There are various options: a general ban subject to individual authorisation, a general authorisation with defined restrictions, or a middle way in which use is prohibited in principle but permitted under defined conditions in non-sensitive areas. From a legal point of view, a ban subject to authorisation is initially the safest option, but the middle way is more innovation-friendly. The decision should be made in line with the company's digital strategy and corporate values.
It is also advisable to maintain a regularly updated list of vetted tools that employees may use, in order to ensure both safety and practicability.
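As a sketch, such a list of vetted tools could be kept as a simple data structure that usage checks can be run against. All tool names, versions and fields below are illustrative assumptions, not part of any specific policy:

```python
# Hypothetical sketch of a whitelist of vetted AI tools.
from dataclasses import dataclass


@dataclass(frozen=True)
class ApprovedTool:
    name: str
    approved_version: str
    permitted_uses: tuple  # e.g. ("text drafting", "research")


# Illustrative entries; in practice this list would be maintained
# and versioned by the responsible department.
APPROVED_TOOLS = {
    "example-chat-assistant": ApprovedTool(
        name="example-chat-assistant",
        approved_version="2.1",
        permitted_uses=("text drafting", "research"),
    ),
}


def is_approved(tool: str, version: str) -> bool:
    """Check whether a tool, in the given version, is on the whitelist."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and entry.approved_version == version
```

Keeping the list in a machine-readable form makes it easy to update and to enforce, for example via a company proxy or an internal portal.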
In addition, the AI policy should clearly define which data may not be entered into AI systems when closed systems are not used. This primarily includes personal data and business secrets. In practice, it is particularly important to classify the data available in the company comprehensively according to its need for protection and to keep this classification transparent for employees; this can be done as part of the information security management system, for example. A clear categorisation can also serve as a basis for generating added value from the data within the company itself. The requirements for the use of AI should build on this categorisation.
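One way to make such a categorisation operational is to map each protection class to whether it may be entered into an external (non-closed) AI system. The classes and the rule below are illustrative assumptions, not a legal recommendation:

```python
# Hypothetical data-classification scheme; the classes and the
# permission rule are examples only.
from enum import Enum


class DataClass(Enum):
    PUBLIC = 1        # already published information
    INTERNAL = 2      # internal, but not sensitive
    PERSONAL = 3      # personal data (data-protection-relevant)
    TRADE_SECRET = 4  # business secrets

# Example rule: only public data may be entered into external AI tools.
ALLOWED_IN_EXTERNAL_AI = {DataClass.PUBLIC}


def may_enter_external_ai(data_class: DataClass) -> bool:
    """Return True if data of this class may be fed into an external AI system."""
    return data_class in ALLOWED_IN_EXTERNAL_AI
```

Tying the AI rules to the existing classification means the policy automatically benefits from work already done in information security.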
In addition, it should be clearly communicated to employees how the output of an AI system is to be handled. The human-oversight approach is recommended here: before final use, a human should always check what the machine has produced in order to correct errors such as hallucinations or biases inherited from the training data.
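The human-oversight rule can be pictured as a simple gate: machine output only becomes final once a named reviewer has signed it off. The workflow below is a schematic illustration, not a prescribed process:

```python
# Schematic sketch of a human-oversight gate for AI output.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIOutput:
    text: str
    reviewed_by: Optional[str] = None  # name of the human reviewer, if any

    def approve(self, reviewer: str) -> None:
        # A human checks for errors such as hallucinations or bias
        # before the output may be used.
        self.reviewed_by = reviewer

    @property
    def final(self) -> bool:
        # Output may only be used once a human has reviewed it.
        return self.reviewed_by is not None
```

Recording who reviewed which output also creates the audit trail that compliance and risk management typically require.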
It is therefore advisable to address the topic of AI at an early stage in order to become familiar with the potential and risks for your own company. An AI policy can serve as the basis for the broader AI strategy and create clarity; it should be drafted with appropriate legal expertise so that risks are addressed properly. Once the policy has been drawn up, it should be communicated and put into practice within the company through employee training and other change management measures. A culture of using AI tools should be promoted in order to realise individual efficiency gains for employees.

Ongoing training is also important, as the field is changing at a rapid pace. Regulation must likewise be kept in view, as legislators are trying to keep pace with these changes: recent developments such as the Data Act, the Digital Services Act and the Digital Markets Act, as well as the AI Regulation, must be taken into account by companies. Overall, a strong fundamental structure must be established in order to digitise well and remain competitive.
Are you looking for legal support on an equal footing? Get in touch with us.