The Wall Street Journal reported last week that cybersecurity firm Palo Alto Networks was lined up to acquire another cybersecurity firm, CyberArk, and Bloomberg pegged the cost at $25 billion.
It’s a move, like many we’re seeing in cybersecurity today, about AI. For Palo Alto, CyberArk can address a growing area of need: securing identities not just for humans but also for machines and AI agents.
Last time in the PTP Report, we covered major cybersecurity news for June and July in our bi-monthly roundup, but today our focus is specifically on cybersecurity in the AI space.
With AI being infused in almost all aspects of work (and life overall), what it’s doing to both accelerate and fight cybercrime is of increasing importance.
Today we look at updates on using AI agents for cybersecurity, ways it’s being employed by criminals, and ongoing vulnerabilities in AI models to be aware of.
AI in Cybersecurity Defense
Among the positive breakthroughs in cybersecurity for AI: much improved bug detection in software.
A recent UC Berkeley study profiled the extent to which AI models (OpenAI, Google, Anthropic, and open-source from Meta, DeepSeek, and Alibaba—using agents OpenHands, cybench, and EnIGMA), working across 188 open-source codebases could find bugs that had been missed by humans. This included 15 of the zero-day variety, some deemed critical. They displayed their findings on the CyberGym leaderboard.
Wired also profiled in June the tool Xbow, an AI startup that’s gaining on human bug-hunters on the HackerOne leaderboard.
UC Berkeley professor Dawn Song, who led the research, said the results far exceeded their expectations, pointing to how AI is improving fast in its ability to identify issues we may miss in existing code.
The Changing Role of Junior Cybersecurity Staff
ISC2, a nonprofit focused on training and certification and one of the world’s largest security organizations, conducted a global survey on AI in 2025 and found that 30% of organizations had already integrated AI cybersecurity tools, with another 42% in the testing and evaluation phase (with plans to integrate).
And of those who’ve integrated, 70% are already seeing positive results (with another 14% saying it’s still too soon).
More than half (52%) believe it will reduce the need for entry-level hires in the field (another 31% believe it could create new entry-level roles or increase demand).
The trend is concerning for many respondents in cybersecurity leadership, including one who said:
“Reducing entry-level cybersecurity roles could lead to a significant skills gap, limiting talent growth and innovation. Long-term impacts include increased strain on senior professionals, slower response to threats, and higher costs due to competition for experienced talent.”
As with software engineers, the trick may be to harness the benefits of AI while continuing to stock a pipeline of tech talent that will continue to be needed, with greater experience, in the future.
And already new roles are being created. Some currently advertised and targeted to entry-level pros include:
- AI-Assisted SOC Analyst: to work with AI-enhanced Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR)
- Security Data Analyst / Junior Threat Intelligence Analyst: to manage large datasets of in-use threat indicators in training and to validate AI models
- Automation and Security Orchestration Assistant: to maintain and aid in the creation of security automation scripts for AI platforms and AI-driven workflows
- AI Governance or Compliance Associate: entry-level oversight roles to aid in aligning AI security systems and ensuring ongoing proper functioning
- Security Testing Assistant: to test and oversee AI-driven security tools based on their handling of adversarial inputs
- Cloud Security Support Analyst: to utilize AI-enhanced security monitoring tools for the defense of cloud data repositories and services
AI Cybersecurity Threats Growing in 2025
So far there’s been nervous anticipation of the use of AI in cyberattacks beyond deepfakes (see below) and its explosion in scams, and in June the World Economic Forum warned of more sophisticated AI use, such as agents, deployed to amplify the speed and scale of attacks.
AI Cracking Passwords
On this front, do you know how fast AI can hack a PIN?
The Cybernews reported in late May on Mesente research’s demonstration, showing AI can crack a four-digit PIN faster than most of us can even unlock our own phones.
PINs with repeated digits are the easiest for AI, broken in 0.44 seconds on average.
Other patterns increased the time (slightly), with PINs having consecutive numbers averaging 0.69 seconds, grouped pairs at 0.91 seconds, and random PINs, showing no discernible pattern, were broken in 1.03 seconds on average.
Far better for security? Authentication with multiple layers, like one-time passwords (OTPs) or time-sensitive tokens.
The State of AI-Generated Deepfakes in 2025
At a Federal Reserve conference in July, OpenAI CEO Sam Altman warned the financial industry of a “significant impending fraud crisis” due to AI’s ability to now consistently fool voice-print authentication.
He urged institutions to only use more AI-resilient methods, suggesting that video clones will also soon be at this level. Fed Vice Chair for Supervision Michelle Bowman added that this need is recognized and something they could potentially partner on.
Meanwhile, the Washington Post reported in July on a deepfake of Secretary of State Marco Rubio being used to contact both US officials and foreign ministers. These audio messages were also accompanied by texts matching his writing style.
Wired in July reported on pro-Russian disinformation operations like Operation Overload that use consumer-grade AI tools to mass-produce videos, images, QR-coded propaganda, and websites all with the aim of flooding online channels. One example manipulated real video of a French lecturer and researcher to make it appear as if she were telling Germans to riot and support certain political positions. (In the actual source video, she was discussing a prize she had won.)
These are just some examples of a surge in AI-powered scams, from fraudulent job offers and romance cons, to fake applicants applying to (and landing) jobs under stolen identities.
AI, Ransomware, and Disguised Apps
Ever since third-party AI apps first appeared, fake versions have also been deployed by cybercriminals, as we’ve reported on numerous times in the PTP Report cybersecurity roundups.
Cisco Talos issued a recent report on several current variants: a fake version of novaleads (as novaleadsai), ransomware posing as ChatGPT 4.0 full version-Premium.exe, and a fake of InVideo AI.
These have succeeded at manipulating search engines to get the fake variants up top, and once clicked on, can install ransomware that encrypts files and deploys a ransom note, or destructive malware that can monitor activity and hijack the GUI.
When downloading AI tools, companies need to be increasingly vigilant, as hackers are trying to take advantage of the varied and often irregular way that AI is used across workplaces.
Securing AI Tools in the Workplace
With more companies recognizing the importance of AI and describing themselves as AI-first, enthusiasm can sometimes spur adoption ahead of security considerations.
Virtually all Fortune 500 companies are using or experimenting with AI, while governance and protocols around AI use are often left trailing behind, which can lead to problems like data leaking and infrastructure vulnerabilities.
As Emanuelis Norbutas, Chief Technology Officer at nexos ai told the Cybernews:
“As adoption deepens, securing model access alone is not enough. Organizations need to control how AI is used in practice—from setting input and output boundaries to enforcing role-based permissions and tracking how data flows through these systems.
Without that layer of structured oversight, the gap between innovation and risk will only grow wider.”
PTP’s Founder and CEO has written before about adopting existing AI frameworks to aid with AI implementation.
The McDonald’s AI Hack of Olivia
Security researchers recently disclosed a number of such security flaws in McDonald’s AI hiring chatbot Olivia, which was developed by third-party provider Paradox ai.
This included simple things like being able to access chatlogs and contact information just by decrementing one’s applicant ID number and, as has been widely reported, gaining admin access using the login of “admin” with password “123456.”
These vulnerabilities may have exposed the personal data for up to 64 million applicants, and while they have been reported on and resolved, it was thanks to researchers who took it upon themselves to poke around.
This underscores the necessity of ensuring third-party providers are following even basic security principles when providing AI services.
Researchers Find Major Gemini AI Vulnerability…
With the majority of AI tools still using a handful of available models, vulnerabilities in the models themselves can be of critical importance.
One frequent example comes from prompt injection, where criminals get commands into AI systems that appear legitimate and even come through legitimate means but provoke undesired behavior as a result.
One example was submitted by researchers to security firm 0DIN, who verified and included the vulnerability in their blog.
By including text in an email that’s invisible to the human eye, attackers can pass prompts to LLMs that do email summarizing, tricking them into outputting messages to the viewer without even including attachments.
Below is an example demonstrating the email and ensuing Gemini summary with the malicious warning included:
These kinds of prompt injections mean that third-party text introduced into your model can be used like malicious code.
And while these methods have existed since 2024 (with mitigations published by Google), the technique they demonstrated still works today.
… and Copilot 365 Security Issues
Researchers from the Aim Labs Team found the same kind of vulnerability could be exploited in Copilot 365, though Microsoft assigned the greatest severity to the issue and claims to have fully mitigated it.
Named EchoLeak, this form of prompt injection could be ingested in a similar way, but instead of outputting a message, the model was instructed to collect sensitive user data.
Attackers could exfiltrate the data by having Copilot construct a link with stolen information attached as parameters. And while users maybe wouldn’t click on such a suspiciously long link, it could be constructed in such a way that the browser would fetch it without even requiring a user click.
Prompt injections like these have also recently been found to work on GitHub’s MCP server, GitLab’s Duo Chatbot, and, per British programmer and co-creator of the Django Web framework Simon Willison, have previously impacted ChatGPT, Google Bard, Writer dot com, AmazonQ, Google’s NotebookLM, AI Studio, Slack, Mistral Le Chat, xAI’s Grok, Claude’s iOS app, and most recently ChatGPT Operator in February.
For GenAI systems built to field and reply to all prompts, the difficulty is for the systems to discern between good faith requests (or parts of requests) and manipulations being submitted on the sly.
Its Popularity Is Surging, but How Secure Is Vibe Coding?
For vibe coding, simplicity and speed are keys to its success. You go to a tool like Cursor, Windsurf, Copilot, Codex, or Claude, chat with it in natural language, and get code (and potentially a working application you maybe couldn’t have managed as a result).
Of course, this combination of popularity and simplicity also makes it ripe for exploitation by attackers.
This was demonstrated in June by researchers, who uncovered malicious VS Code extensions, including one with over 200,000 installs, made to specifically target vibe coders. With many of the most popular tools unable to access the vetted VS Code Marketplace, they often instead rely on more risky third-party platforms.
Criminals then submit malicious code in the form of fake tools to these markets to trick vibe coders.
Disguised as productivity boosters, one of these extensions, for example, once downloaded could execute PowerShell scripts giving attackers remote access to the vibe coder’s system.
Koi Security found a similar vulnerability in the Open VSX Registry, a popular extension marketplace used by many of these vibe-coding environments.
A critical flaw in its auto-publishing system allowed attackers to hijack and overwrite extensions, potentially compromising millions of developer environments.
This points out that vibe coding products don’t only need to be double-checked for the security of AI-generated code—the environments themselves can also be compromised, hoping for a similarly casual (or inexperienced) treatment.
And of course, in addition to vulnerabilities, vibe coding is also being used by hackers themselves, who can also build malware and phishing lures with far greater ease.
As covered by Wired in June, tools like WormGPT and FraudGPT have come and gone for this purpose, but it’s also not difficult to jailbreak ChatGPT, Gemini, and Claude for this purpose, with whole communities devoted to finding ways to bypass their guardrails. (Sometimes it’s as easy as telling the chatbot you need the malicious code for cybersecurity exercises you’re working on.)
The bottom line with AI, like most technological innovations, is that advances that can be leveraged to aid in cybersecurity will also be mirrored in use for attack.
AI Security Recommendations, PTP, and the Importance of Experience
On his weblog, Simon Willison warns against what he calls the Lethal Trifecta when it comes to AI agents and organization security:
If your LLM uses tools and meets the above criteria, he warns that it can be easily tricked into sending private data to an attacker.
It’s almost impossible to gain the benefits of these models without asking them to go out and consume content beyond our control, and he warns that major models cannot reliably distinguish between valid and undesired instructions found in that content.
For application developers to guard against this, he recommends using design patterns that ensure that once an LLM has ingested untrusted content, it is confined in such a way that it cannot then be tricked into taking consequential actions (such as sending emails or making API calls).
Of course, the best way to take advantage of these suggestions and protect your organization overall is to employ talent with comparable AI experience and expertise.
This can come in the form of ensuring you’re using experienced AI professionals paired with a firm like PTP to hire cybersecurity experts.
With nearly three decades of experience providing the best tech talent, PTP excels at helping you get the talent you need most, onsite or off.
Conclusion
It’s not news that AI’s being rapidly adopted throughout the world; recent OpenAI stats alone indicate they’re near 700 million weekly users (or some 8.7%+ of the world’s population).
But with this speed also comes exposure. With many AI applications being rushed to market to capitalize on the heat, companies that want to seize on its benefits can find themselves dealing with ensuing difficulties, from lacking governance to poor data readiness to missing experience and documentation.
Critical among such issues is cybersecurity.
While AI will increasingly be a powerful tool for security, it’s also a significant risk factor, not only from attackers but also from a lack of clear understanding of AI vulnerabilities and sufficiently careful implementation.
When it doubt, you can always reach out to PTP. As early adopters of AI within our own rigorous recruiting standards, you can trust us to provide both quality talent and experienced consulting services.
References
Palo Alto Networks Nears Over $20 Billion Deal for Cybersecurity Firm CyberArk, The Wall Street Journal
AI Agents Are Getting Better at Writing Code—and Hacking It as Well, A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a ‘Content Explosion’, and The Rise of ‘Vibe Hacking’ Is the Next AI Nightmare, Wired
ISC2 Survey: 30% of Cyber Pros Using AI Security Tools, ISC2
AI agents: the new frontier of cybercrime business must confront, World Economic Forum
AI first, security later: all Fortune 500 companies use AI, but security rules are still under construction, AI cracks your 4-digit PIN in less than a second – so why are we still using them?, and Check before you click: ransomware gangs are disguising their tools as popular AI apps, Cybernews
OpenAI’s Sam Altman Warns of AI Voice Fraud Crisis in Banking, Security Week
A Marco Rubio impostor is using AI voice to call high-level officials, The Washington Post
Google’s Gemini AI Vulnerable to Content Manipulation, Dark Reading
Phishing For Gemini, 0DIN AI blog
Malicious VSCode extensions with millions of installs discovered, BleepingComputer
Hackers target vibe coders with malicious extensions for their code editors, Cybernews
The lethal trifecta for AI agents: private data, untrusted content, and external communication, Simon Willison’s Weblog
FAQs
What is prompt injection and why does it matter?
Prompt injection is the process of hiding unwanted instructions inside the content that an AI processes. This can come from any uncontrolled/third-party material that an AI handles, be it emails, websites, images, PDFs, copy/pasted text, etc.
The danger can extend to various types of undesired behavior from the AI system, including tracking behavior and leaking private data.
Why did Sam Altman recently call voice authentication “effectively dead”?
Today’s GenAI systems are sophisticated enough to be able to clone voices capable of fooling voiceprint systems using just seconds of provided material, and some financial institutions still use this form of verification.
With AI video generation deepfakes not far behind, it’s essential that institutions upgrade their defenses to verify they’re actually dealing with the people they believe they’re dealing with.
What are some of the security risks from vibe coding?
Vibe coding is an exciting area that’s increasingly popular, but as such it’s also a new attack vector for cyber criminals. In addition to verifying that vibe coding–generated software is sufficiently secure, it’s essential to protect your environment, and be vigilant about any extensions downloaded from vibe coding platforms.