Are Chatbots’ Benefits Worth the Security Risks? Ask ChatGPT.

In the past year, we have discussed the importance of cybersecurity and highlighted the risks associated with the use of artificial intelligence (AI) in employment decisions. As AI continues to evolve, more businesses will adopt AI-based tools as ways to improve efficiency and customer service with minimal investment. However, as ChatGPT, the most popular AI application to date, demonstrates, the more people who use AI, the greater the security risk.

ChatGPT

OpenAI’s much-touted chatbot ChatGPT (chat generative pre-trained transformer) is a natural language tool that interacts with users via text to answer questions, create content, improve writing, and even debug code. Released on November 30, 2022, ChatGPT surpassed 100 million users by January 2023. And ChatGPT interactions originating in the workplace are also trending upward.

Unlike virtual assistants such as Siri and Alexa, ChatGPT has a conversational style akin to texting with a human. The newest version, GPT-4, adds advanced reasoning, with capabilities such as understanding images and following complex instructions. The app’s popularity has strained its servers, prompting OpenAI to offer ChatGPT Plus, a $20/month subscription with unlimited access, faster response times, and priority access to new features.

Incorporating chatbots for business

Employment attorneys and AI security experts alike have called attention to the legal risks and privacy issues associated with using ChatGPT to automate business processes. Even new versions that claim to address security issues have failed to fully protect users from these concerns.

Data privacy

Chatbots collect massive data sets in order to interact with users. Employees might provide personal details such as names, email addresses, and phone numbers. When chatbots are integrated with social media, email, and messaging apps, even more data can be exposed. And because conversations take place online, their content is vulnerable to interception.

Exposure of proprietary data and trade secrets

Just as casual human conversations can unintentionally touch on confidential company information, employee interactions with ChatGPT about job responsibilities may inadvertently reveal protected products and processes. Even though ChatGPT technically does not retain information from individual conversations, content can become part of the chatbot’s data set for future interactions.

Malicious intent

Cybercriminals find new ways to hack corporate servers as quickly as new systems are installed. They can create chatbots that impersonate legitimate ones and obtain sensitive data directly. These fraudulent chatbots can also launch phishing attacks and spread malware. Despite OpenAI’s assertion that GPT-3 and GPT-4 include features to prevent unauthorized third-party access, the developer admits that revised versions carry not only risks similar to those of previous models but new ones as well.

Inaccurate information

Since ChatGPT’s information is only as accurate as its sources, the chance of gaps and errors is high. Because chatbots are trained on data sets that take time to compile, their knowledge base carries an inherent risk of outdated or inaccurate information. If employees do not fact-check information used for work, problems will arise.

Mitigating vulnerabilities

Many of the essential security measures for companies utilizing chatbots should already be a part of your infrastructure and policies. But the humanlike interaction of ChatGPT and similar apps fosters a more casual approach to their use.

An important first step in protecting data and confidential information is educating employees about how chatbots work and the risks involved in their use. Train employees to interact only with trusted sources, and enable alerts for attempted access by suspicious chatbots. Make sure all workers use strong passwords for all accounts and never share login credentials.

The cybersecurity community is working on prototypes for security software that can train new versions of GPT-based chatbots to detect malicious activity in their datasets. Meanwhile, investigate using a virtual private network (VPN) to tighten security and protect the company network.

Are chatbots worth the risk?

While employment law attorneys are necessarily focused on mitigating potential threats to employers, businesses should also pay attention to the potential benefits of ChatGPT and its successors. Chatbots can simplify labor-intensive processes, create or redesign company websites, improve content and online communication, assist IT professionals in identifying software bugs or programming errors, and much more.

The benefits can outweigh the risks—but only if employers take steps to mitigate security vulnerabilities. That includes policies specifying how employees may use chatbot-generated information in connection with their jobs. Be sure to consult an employment attorney who is experienced in cybersecurity and understands how companies can protect themselves. We are, as always, ready to help.
