Is Generative AI a Security Risk?

by Pranav Ramesh
April 19, 2023

With over one hundred million monthly active users, ChatGPT has become the fastest-growing consumer application in history. With such rapid growth, and the media attention that has come with it, generative AI’s story is not unlike that of social media: the public and lawmakers are concerned that they may lose control over its expansion. That growth has specifically sparked conversations surrounding security.

Lawmakers in Europe have already enacted laws to manage AI technology; Italy has even temporarily banned ChatGPT. While AI can automate jobs, generate new ideas, and create exciting and entertaining content, governments, the public, and companies need to take precautions to ensure security. The goal of this article is to highlight specific security risks and offer ideas to organizations that want to reap the benefits of AI while protecting their organization, employees, and users.

The U.S. and AI Regulation

On April 11, the Biden administration announced it is seeking public input on potential accountability measures for AI systems as security and educational concerns grow; the request was issued through the National Telecommunications and Information Administration. Meanwhile, the Center for Artificial Intelligence and Digital Policy, a tech ethics group, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, calling it “biased, deceptive, and a risk to privacy and public safety.”

Security Risks With AI

Individuals and organizations that benefit from AI’s productivity and efficiency gains may wonder which security risks are truly a threat, and how serious they are. The following sections describe several security risks generative AI may present to organizations and how leaders can protect the information of their company, employees, and users.

More Advanced Phishing Attempts

Phishing attempts can often be spotted by poor spelling and grammar, but platforms like ChatGPT can produce polished, detailed messages that make scams sound far more legitimate, and pump them out in much larger volumes.

Even though ChatGPT is programmed to refuse requests for malicious content, hackers can still trick the system with the right wording of prompts. Forbes writes, “Aside from text, ChatGPT can generate very usable code for convincing web landing pages, invoices for business email compromise (BEC) attempts and anything else hackers need it to generate.”

Forbes also writes, “AI is the problem, and AI is the solution.” For example, AI tools can be trained to recognize legitimate email content and its context, automatically determining whether an email’s language, content, and style resemble those of past messages from legitimate senders. Such tools can also take into account the time of day or month these emails typically arrive, as well as the headers, bank account numbers, and customer IDs they include, and can even learn the paths emails take through the Internet.
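To make the idea concrete, here is a minimal sketch of the feature-based approach described above: a profile of what past legitimate mail from a sender looked like, and a score counting how many learned features a new message violates. All names (`SenderProfile`, `anomaly_score`, the sample values) are hypothetical illustrations, not a real product’s API; production systems learn these features with machine-learned models rather than hand-written rules.

```python
# Hypothetical sketch of feature-based email anomaly scoring.
from dataclasses import dataclass

@dataclass
class SenderProfile:
    """Learned features of past legitimate mail from one sender."""
    usual_hours: set      # hours of day the sender normally emails
    known_headers: set    # mail-path/header fingerprints seen before
    known_account: str    # bank account used in past invoices

def anomaly_score(profile: SenderProfile, email: dict) -> int:
    """Count how many learned features the incoming email violates."""
    score = 0
    if email["hour"] not in profile.usual_hours:
        score += 1  # unusual send time
    if email["header_fp"] not in profile.known_headers:
        score += 1  # unfamiliar path through the Internet
    if email.get("account") and email["account"] != profile.known_account:
        score += 1  # invoice asks for a new bank account
    return score

profile = SenderProfile(
    usual_hours={9, 10, 11},
    known_headers={"mx.vendor.com"},
    known_account="GB33-1234",
)
legit = {"hour": 10, "header_fp": "mx.vendor.com", "account": "GB33-1234"}
suspect = {"hour": 3, "header_fp": "relay.unknown.example", "account": "GB99-9999"}
print(anomaly_score(profile, legit))    # 0
print(anomaly_score(profile, suspect))  # 3
```

A real deployment would weight these signals and flag messages above a threshold for review rather than blocking outright, since any single feature (e.g., an off-hours email) can be innocent on its own.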

Data Security Risks

To make its content as informed and realistic as possible, generative AI is trained on massive amounts of data, much of it collected without permission. This raises serious privacy concerns, especially when users’ data is sensitive.

Nonconsensual data collection also affects creators, like visual artists whose art is being used to generate countless “original” images for users’ entertainment. “It’s the opposite of art,” says children’s book author and illustrator Rob Biddulph in the Guardian. “True art is about the creative process much more than it’s about the final piece. And simply pressing a button to generate an image is not a creative process.”