AI Chatbot

What companies should consider when dealing with generative AI applications

"Generative AI" refers to systems that create new content such as text, images, audio, or code based on existing data. Currently, the best-known application is ChatGPT, but many other generative AI applications exist. The DIHK has summarised here the key considerations for businesses using such tools.

Ten aspects companies should consider when using ChatGPT and other generative AI applications

1. Data Protection

ChatGPT's data processing is currently rather opaque. It is unclear on what legal basis personal data is transferred to the USA, and there is no established legal foundation for processing and storing such data on US servers. Companies should therefore always check where data processing takes place when using generative AI systems. As a general rule, avoid entering personal or other sensitive or confidential data into generative AI systems; this also applies to third-party data obtained or processed in other contexts. Users are advised to assess carefully what information they feed into these systems, as it can be used to train and improve the AI. ChatGPT users can now opt out of having their data used for AI training.
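In practice, the rule "no personal data into the system" can be supported technically. The following sketch shows one simple, illustrative approach: masking obvious identifiers before a prompt leaves the company. The patterns and the `redact` helper are hypothetical examples, not a complete PII solution; robust detection requires dedicated tooling.

```python
import re

# Illustrative patterns only; real PII detection needs more robust tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Ms Example at anna@example.com or +49 30 1234567."))
```

A filter like this can be applied centrally, at the point where prompts are forwarded to the external service, so that individual employees do not have to remember the rule each time.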

2. Data Quality

The output of AI tools depends strongly on the quality, quantity, and weighting of the datasets they are trained on. Generative AI may produce inaccurate, misleading, or outdated statements. For many generative AI systems it is not transparent which data sources are used or which opinions they represent, and content and responses may be subject to bias. The reliability and objectivity of the output should therefore always be questioned.

3. Intellectual Property

Data used to train the AI may be protected by copyright – for example text passages, terms, or images. AI-generated output may therefore constitute a copyright infringement, and unauthorised reproduction can be punishable by law. Caution is advised when handling AI-generated output; using it for external communication is particularly risky.

4. Transparency

Companies are advised to make their use of generative AI models transparent, including information about which processes they are used in. This can help build trust with customers, employees, and other stakeholders.

5. Liability and Risk Management

Companies should consider potential legal and financial risks associated with the use of generative AI, including clarifying liability in case of errors or damages caused by the use of AI.

6. Human Oversight

Companies should ensure that AI-generated content is reviewed by a human, especially in cases where a false statement could have serious consequences.

7. Training Staff

It is crucial to raise staff awareness of how generative AI applications work and how they can be integrated into work processes. Legal subjects such as data protection and ethical aspects should be discussed to ensure responsible use of the applications. Due to rapid technological developments, training sessions should be regularly updated to keep employees up to date.

8. Ethical Considerations

Companies should consider the potential impact of their use of generative AI on various stakeholders, such as customers, employees, and society as a whole. It should be ensured that its use aligns with the company's ethical principles.

9. Coding

Should companies use generative AI for programming and coding, they should first familiarise themselves with the tool's syntax and commands and read the explanations thoroughly. Generated code should always be reviewed and tested before use, as errors in code can affect the performance, functionality, and security of applications.
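The review step above can be as simple as a handful of checks run against the suggested code before it is trusted. The function below is a hypothetical example of code as it might come back from an AI assistant; the assertions illustrate the kind of minimal verification a developer should add.

```python
# Hypothetical example: a function as it might come back from an AI assistant.
def net_to_gross(net: float, vat_rate: float = 0.19) -> float:
    """Add German standard VAT (19 %) to a net price, rounded to cents."""
    return round(net * (1 + vat_rate), 2)

# Minimal checks before the code is trusted in production.
assert net_to_gross(100.0) == 119.0
assert net_to_gross(0.0) == 0.0
assert net_to_gross(10.0, vat_rate=0.07) == 10.7   # reduced rate
```

Even small checks like these catch edge cases (zero amounts, alternative rates) that AI-generated code frequently gets wrong, and they document the intended behaviour for later maintainers.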

10. Plugins

Since early April 2023, OpenAI, the company behind ChatGPT, has enabled plugins for the direct integration of ChatGPT into corporate systems. These plugins allow tailored searches of (real-time) datasets via interfaces, or let the AI take on tasks such as booking trips. Although plugin features are currently limited, AI experts anticipate that a dedicated ecosystem, similar to the Apple App Store, will emerge in the medium term. Companies interested in using AI through such plugins should thoroughly address issues related to data protection, copyright, and data security.
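One way to address the data-protection concern when connecting internal systems to an external plugin interface is an allow-list: only explicitly approved fields ever leave the company. The field names and the `build_plugin_payload` helper below are purely illustrative assumptions, not part of any real plugin API.

```python
# Sketch under assumptions: a request payload is about to be forwarded to an
# external plugin; only explicitly approved, non-sensitive fields may pass.
ALLOWED_FIELDS = {"destination", "travel_date", "budget_eur"}  # hypothetical

def build_plugin_payload(request: dict) -> dict:
    """Forward only allow-listed fields to an external plugin interface."""
    dropped = set(request) - ALLOWED_FIELDS
    if dropped:
        print(f"Withheld fields: {sorted(dropped)}")
    return {k: v for k, v in request.items() if k in ALLOWED_FIELDS}

payload = build_plugin_payload({
    "destination": "Berlin",
    "travel_date": "2024-05-01",
    "employee_id": "E-4711",   # sensitive -> withheld
})
```

An allow-list is deliberately stricter than a block-list: new fields stay inside the company by default until someone consciously approves them for external transfer.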

Need support?

There are numerous neutral competence centres, support services, and funding programmes at federal and state levels to help companies use artificial intelligence, including:

The German Federal Office for Information Security (BSI) provides a report on the opportunities and risks of AI language models. The corresponding PDF "Large AI Language Models: Opportunities and Risks for Industry and Authorities" can be found on the BSI website (only available in German).

The Office of Technology Assessment at the German Bundestag (TAB) has analysed the fundamentals, application potentials, and possible impacts of ChatGPT and other computer models for language processing. The report is available in PDF format on the TAB website (only available in German).

The DIHK-Bildungs-gGmbH has compiled facts about ChatGPT concerning data protection and copyright.

As part of the initiative #TogetherDigital, the IHK organisation also offers free further training opportunities on the potential and operation of AI technologies.
