If you suspect an active attack on your business, call our emergency hotline at: 612-399-9680

The Effect of ChatGPT on the Information Security Community

What makes ChatGPT revolutionary?

Our team believes ChatGPT is the most significant step forward in the evolution of technology since Bitcoin. Consider a few basic questions: Why does computer science exist? Why do we create technology and computing systems at all? Because we are trying to make life better and more efficient.

ChatGPT’s underlying technology has been studied and tested for approximately five years. In December 2022, OpenAI released ChatGPT for the public to use for free, and it became the fastest-growing app ever, reaching over one million users within a week. ChatGPT draws on an expansive data set and understands human language remarkably well. It is highly conversational, allowing users to talk to it as if it were an actual person. ChatGPT also retains context within a conversation, so users do not need to repeat the same information, making it even more accessible.

How do ChatGPT and other AI tools positively affect the information security community?

Tools like ChatGPT are created to be used for good. They can make life easier and work more efficient. For example, brainstorming can consume substantial time and is not always as fruitful as you might hope.

Chatting with ChatGPT is like brainstorming with the most knowledgeable person on Earth who can return information instantly. Not only is the output in perfect English, but ChatGPT can respond in the tone or style that fits your needs. When using ChatGPT, the only limit is your imagination. 

ChatGPT can read and write code as well as find vulnerabilities and bugs, saving time and making people more efficient. There is a close connection between software and information security, and this aspect of ChatGPT is incredibly helpful for our industry.  
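As a concrete illustration, here is a minimal sketch of how a security team might ask ChatGPT to review code programmatically. It assumes the official OpenAI Python SDK (version 1.x) and an OPENAI_API_KEY environment variable; the vulnerable login snippet is a made-up test case, and the model name is just one of several you could choose.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately flawed snippet (SQL injection) to use as a test case.
SNIPPET = '''
def login(db, user, password):
    query = ("SELECT * FROM users WHERE name = '" + user +
             "' AND pw = '" + password + "'")
    return db.execute(query)
'''

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a secure-code reviewer."},
        {"role": "user",
         "content": "Review this code for vulnerabilities and bugs:\n" + SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

A model reviewing this snippet should flag the string-concatenated SQL query and suggest parameterized statements instead, which is exactly the kind of quick triage that saves an analyst time.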

For the information security (infosec) community, ChatGPT can improve security education, training, and awareness (SETA). It can be used to create training scenarios and to brainstorm ideas about how to train and educate employees, and it can also provide ideas for policy development.

How does ChatGPT negatively affect the infosec community?  

Cybercriminals benefit from this technology in the same ways as everyone else. It can make their work of committing crimes easier, faster, and better executed.

OpenAI has tried implementing restrictions and filters on the output to prevent users from asking questions that are malicious or dangerous to society. Cybercriminals regularly try to bypass these restrictions to get harmful results, either by essentially social engineering the AI (artificial intelligence) or by using the less-restricted API version of the app. Criminals craft prompts that turn the technology against itself. For example, users told ChatGPT to take on the “DAN” (Do Anything Now) persona, instructing the AI to pretend it could do anything with no restrictions. It responded that it was in DAN mode and produced harmful, unrestricted output.

Grammatical and formatting errors have long been major red flags for identifying phishing emails, because those emails often come from attackers in countries where English is not the first language. ChatGPT’s output in perfect English makes these indicators nonexistent. Cybercriminals can now enter a prompt into ChatGPT asking it to write a phishing email designed to get someone to click a link or download a file, and the output will be grammatically correct. The same drawback also applies to crafting telephone scripts for voice phishing (vishing).

Similarly, this gives cybercriminals an advantage when executing business email compromise (BEC) attacks, in which criminals masquerade as executives or other stakeholders and send a phishing email to an employee urgently requesting money. Cybercriminals can ask ChatGPT to write these emails in the style of a specific person to make the request seem more realistic.

Additionally, ChatGPT’s ability to write code makes it easier for cybercriminals to produce malicious code or shellcode to hide in computer systems or email attachments.

In the future, there may be video chat attacks using deepfakes (video in which a person’s appearance or voice has been altered to imitate someone else). Because so much data is available to train the AI, a cybercriminal could look and sound like a CEO asking for a wire transfer. Employees are not prepared for this level of attack. For years, social engineering training has focused on identifying and preventing email attacks, so training people to avoid attacks that use AI will be a challenge.

What can the industry do about the risks of ChatGPT?  

When it comes to managing these risks, there will be a technical solution and a human solution. A few AI detection tools are currently available, and security companies will create more of them in the future.
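To make the technical side concrete, here is a minimal sketch of how a mail-processing pipeline might call an AI-text detection service on inbound messages. The endpoint URL, request fields, response field, and threshold below are all hypothetical placeholders for illustration, not any real vendor’s API.

```python
import requests

# Hypothetical detection service; real vendors each define their own APIs.
DETECTOR_URL = "https://api.example-detector.com/v1/analyze"

def ai_text_score(text: str, api_key: str) -> float:
    """Return a 0-to-1 likelihood that the text is machine-generated."""
    resp = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # hypothetical response field

body = "Dear colleague, kindly review the attached invoice at your earliest convenience."
if ai_text_score(body, api_key="YOUR_KEY") > 0.8:  # tune threshold to your risk tolerance
    print("Flag for human review: message is likely machine-generated.")
```

A score like this should feed a human review queue rather than block mail outright, since detection tools produce false positives.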

Our experts believe the human solution, SETA, will be far more critical. Employees need training about the tool, knowledge about red flags to look for, and guidelines for what they should and should not use the tool for. If your business has an AI detection tool, employees should be trained on it. 

People need to stay constantly aware of new technologies and of changes and updates to AI tools. OpenAI publishes release notes about any changes made to ChatGPT, including adjustments to stop bypass methods. OpenAI is constantly searching for ways to make ChatGPT better while preventing it from causing harm.

Have a question about ChatGPT or want to understand how the tool may affect your business? Contact us today and a Blue Team Alpha cybersecurity expert will be more than happy to answer your questions.
