ChatGPT tasks to avoid in the workplace
ChatGPT and similar tools are handy helpers in the workplace, but they are also a potential security risk. IT security specialist Forcepoint explains which tasks you are still better off doing yourself.
Answering questions, writing texts and even creating source code: Generative AI tools such as ChatGPT, Bard and Copilot have amazing capabilities and are very popular in the workplace. No wonder, because they make work easier.
What many people don't realize is that these tools pose a potential security risk. Their providers train their AI models and generate the tools' output not only from information freely available on the internet, but also from user input. Data that you enter yourself could therefore find its way into the answers shown to other users.
Companies and employees should therefore think carefully about which tasks they assign to ChatGPT. Forcepoint explains what they should not use generative AI tools for at work.
Write answers to customer queries or support tickets. Such texts almost always contain personal information about customers as well as the company's intellectual property. There is a risk of handing the competition an advantage and getting into trouble with the data protection authorities.
Create content for a product launch or other important company announcements. The latest acquisition is still top secret and has to be kept under wraps until all the signatures are in? If the details are entered into a generative AI tool, a third party's prompt could surface them in its response, and the news could find its way into the public domain.
Analyze your own company's prices, financial performance or budgets. If a competitor is looking for information about the company's financial situation, they could find it this way. It is therefore better to use a locally hosted tool or a calculator.
Debug code or write new code. Code created by generative AI may contain malware or a backdoor. And if you submit your own code for debugging, it may end up in the hands of other programmers.
Summarize personal content such as CVs or internal company presentations and documents. Such content has no place on ChatGPT and the like, especially as the operators of these tools could themselves fall victim to a data breach, through which sensitive data could then also be leaked.
"Generative AI tools harbor major security risks. However, simply blocking access in the office goes too far and is often pointless: such tools make employees more productive, and they frequently access them from outside the company network anyway," explains Fabian Glöser, Team Lead Sales Engineering at Forcepoint. "It is better to raise employees' awareness of the risks and to use data security solutions to protect them from serious carelessness in their hectic working day."
Source: www.forcepoint.com