Is Generative AI a Security Risk?

by Pranav Ramesh
April 19, 2023

With over 100 million monthly active users, ChatGPT has become the fastest-growing consumer application in history. With such rapid growth, along with the attention it has received in the media, generative AI's story is not unlike that of social media: the public and lawmakers are concerned that they may lose control over its growth. That growth has specifically sparked conversations about security.

Lawmakers in Europe are already moving to regulate AI technology; Italy has even temporarily banned ChatGPT. While AI can automate jobs, spark new ideas, and create exciting and entertaining content, the government, the public, and companies need to take precautions to ensure security. The goal of this article is to highlight specific security risks and offer ideas to organizations that want to reap the benefits of AI while protecting their organization, employees, and users.

The U.S. and AI Regulation

On April 11, the Biden administration, through the National Telecommunications and Information Administration, announced it is seeking public input on potential accountability measures for AI systems as security and educational concerns grow. Meanwhile, the Center for Artificial Intelligence and Digital Policy, a tech ethics group, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, arguing it is “biased, deceptive, and a risk to privacy and public safety.”

Security Risks With AI

Individuals and organizations that benefit from AI’s ability to increase productivity and efficiency may wonder which security risks are truly a threat, and how serious they are. The following sections describe several security risks generative AI may present to organizations and how leaders can protect the information of their company, employees, and users.

More Advanced Phishing Attempts

You can often spot phishing attempts by poor spelling and grammar, but platforms like ChatGPT can create more realistic and detailed messages, making scams sound more legitimate and letting attackers produce them in far greater volume.

Even though ChatGPT is designed to refuse requests for malicious content, hackers can still trick the system with carefully worded prompts. Forbes writes, “Aside from text, ChatGPT can generate very usable code for convincing web landing pages, invoices for business email compromise (BEC) attempts and anything else hackers need it to generate.”

Forbes also writes, “AI is the problem, and AI is the solution.” For example, AI tools can be trained to recognize legitimate email content and its context, automatically determining whether an email’s language, content, and style resemble past messages from legitimate senders. AI can also take into account the time of day or month these emails typically arrive, as well as the headers, bank account numbers, and customer IDs they include, and can even learn the paths emails take through the Internet.
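To make that idea concrete, here is a minimal sketch of one way such a defense could work: train a model on a handful of past messages labeled legitimate or fraudulent, then score an incoming email. The sample emails, labels, and scikit-learn feature choices are assumptions for illustration, not a production anti-phishing system.

```python
# Minimal sketch: score an incoming email against a model trained on past
# messages. The tiny in-line dataset and feature choices are illustrative
# assumptions, not a real deployment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: past messages labeled 1 (legitimate) or 0 (phishing).
emails = [
    "Hi team, the Q3 invoice is attached as usual, due on the 15th.",
    "Please find the monthly payroll summary for your review.",
    "URGENT: your account is locked, confirm your password at this link now!",
    "Wire $9,800 to the new supplier account immediately, do not call to verify.",
]
labels = [1, 1, 0, 0]

# Word and bigram frequencies feed a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Kindly update the vendor bank account and process payment today."
score = model.predict_proba([incoming])[0][1]  # probability the message looks legitimate
print(f"Legitimacy score: {score:.2f}")
```

A real system would add the contextual signals described above, such as arrival times, headers, and sender history, rather than relying on message text alone.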

Data Security Risks

To make its content as informed and realistic as possible, generative AI is trained on massive amounts of data, much of it collected without permission. This raises serious privacy concerns, especially when users’ data is sensitive.

Nonconsensual data collection also affects creators, like visual artists whose art is being used to generate countless “original” images for users’ entertainment. “It’s the opposite of art,” says children’s book author and illustrator Rob Biddulph in the Guardian. “True art is about the creative process much more than it’s about the final piece. And simply pressing a button to generate an image is not a creative process.”

For organizations that want to protect their employees’ and customers’ data, it is important to practice good data hygiene. This includes using only the data types necessary to train AI and retaining that data only as long as it is needed to accomplish the specific goal at hand, as sketched below.
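The following is a minimal sketch of those two rules, keep only the fields a task needs and drop records past a retention window. The field names and the 90-day window are assumptions made for the example.

```python
# Illustrative data-hygiene sketch: strip unneeded fields and enforce a
# retention window before data is used to train or prompt an AI system.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)                                   # assumed retention window
NEEDED_FIELDS = {"ticket_text", "product_area", "created_at"}    # excludes names, emails, etc.

def minimize(record: dict) -> dict:
    """Keep only the fields the training task actually requires."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def still_needed(record: dict, now: datetime) -> bool:
    """Keep a record only while it is inside the retention window."""
    return now - record["created_at"] <= RETENTION

now = datetime.now(timezone.utc)
raw_records = [
    {"ticket_text": "App crashes on login", "customer_email": "a@example.com",
     "product_area": "mobile", "created_at": now - timedelta(days=10)},
    {"ticket_text": "Refund request", "customer_email": "b@example.com",
     "product_area": "billing", "created_at": now - timedelta(days=400)},
]

training_set = [minimize(r) for r in raw_records if still_needed(r, now)]
print(training_set)  # only the recent record remains, without the email address
```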

Misinformation and Bias

The massive amounts of data used to inform generative AI may be biased and contain misinformation, which means the output of generative AI may perpetuate or even worsen information biases in the media. 

NewsGuard, a company that tracks online misinformation, conducted a recent experiment in which they fed ChatGPT conspiracy theories and false narratives. “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet,” said Gordon Crovitz, a co-chief executive of NewsGuard. “Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it’s like having AI agents contributing to disinformation.”

Organizations that want to prevent biased AI output should reduce algorithmic bias by ensuring data sets are broad and therefore inclusive. It is also essential to be aware that AI bias disproportionately affects women, racial and ethnic minorities, and other marginalized groups such as the elderly and people with disabilities.
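One simple way to act on this is to audit how groups are represented in a training set before using it. The sketch below compares each group's share of the data against a target population share; the group labels and target figures are illustrative assumptions, and real audits would use groupings appropriate to the data and jurisdiction.

```python
# Minimal sketch of a representation audit: flag groups whose share of the
# training data falls well below an assumed target share.
from collections import Counter

# Hypothetical group labels attached to training records.
record_groups = ["men"] * 700 + ["women"] * 250 + ["age 65+"] * 50
target_shares = {"men": 0.49, "women": 0.51, "age 65+": 0.17}  # assumed targets

counts = Counter(record_groups)
total = sum(counts.values())

for group, target in target_shares.items():
    actual = counts.get(group, 0) / total
    flag = "UNDER-REPRESENTED" if actual < 0.8 * target else "ok"
    print(f"{group}: {actual:.0%} of data vs. {target:.0%} target -> {flag}")
```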

Conclusion

The future of generative AI can still be bright, but more thought is required to reap its benefits while minimizing security risks. With the U.S. government seeking public input on accountability measures for AI, we are moving closer to balancing technological innovation with security, safety, and accuracy.
