TechForge

September 3, 2024


Ryuki Hayashi accessed a free generative AI (genAI) tool on his computer and smartphone, using indirect questions to bypass safeguards and piece together ransomware code. His unfinished malware was discovered during a Japanese police raid for unrelated fraud.

Cybersecurity experts across Japan and Asia expressed both relief and alarm, recognising the potential risk posed by his ability to use genAI to create functions for encrypting data and demanding ransom from companies.

“We need to look beyond traditional defences,” said Verizon’s Vice President for APAC, Rob Le Busque. “Small and medium-sized government agencies can lead in cybersecurity by embracing proven, innovative strategies and leveraging industry partnerships.”

GenAI expands attack surfaces due to its versatility and accessibility, unlike specialised AI, which is limited to specific tasks. GenAI can create diverse threats, such as phishing and deep fakes, making it a broader and more formidable risk. The World Economic Forum warns companies to prepare for these sophisticated AI techniques, which may also include synthetic identity fraud.


While concerning, the foiled Hayashi AI cyber attack is the exception, not the rule. It relied on manual creativity; larger-scale AI offensives would require scores of people and considerable time to laboriously gather the large volumes of data needed.

In fact, while genAI introduces elements of unpredictability and discomfort, the stark reality is that humans default to the most accessible and straightforward means of cybercrime using traditional methods like social engineering, phishing, and exploiting stolen credentials. These methods remain dominant in Asia Pacific, with many breaches involving human error or manipulation. Consequently, despite growing concerns about AI-driven threats, conventional cyber threats continue to pose the region’s most immediate and prevalent risks.

Although AI-related attacks currently make up a small percentage of overall attacks in Verizon’s 2024 Data Breach Investigations Report (DBIR), they are still an important topic due to their potential growth in the future.

What keeps cyber defenders on edge is nation-state actors using genAI to compromise critical infrastructure across APAC. When attacks are automated and scaled with the technology, water plants, electrical grids, and public safety assets risk physical damage and downtime. Espionage accounts for 25% of cyberattacks in APAC, significantly higher than the 6% in EMEA and 4% in North America, suggesting that attacks motivated by sensitive data collection are a pronounced regional concern.

Reports are already surfacing of state-sponsored cyber actors prepositioning within critical infrastructure to potentially disrupt key sectors like communications and energy in crisis scenarios, a trend that genAI may accelerate.


OpenAI and Microsoft recently terminated accounts linked to five state-affiliated actors from China, Iran, North Korea, and Russia who misused AI for tasks like researching targets, debugging code, generating scripts, and creating phishing content. Fortunately, no significant damage was reported.

Despite the dystopian power attributed to genAI, it also offers a transformative opportunity to turn defence into offence. According to Gartner®, “By 2027, generative AI will contribute to a 30% reduction in false positive rates for application security testing and threat detection by refining results from other techniques to categorise benign from malicious events.”[1] At Verizon, security teams are already nearing a 90% reduction, a notable achievement considering AI engines ingest more than 70 billion data points from the network daily.
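The refinement Gartner describes can be pictured as a two-stage pipeline: a coarse first-stage detector raises alerts, and a second stage re-scores them with extra context to suppress likely false positives. The sketch below is purely illustrative (the `Alert` fields, rules and thresholds are hypothetical, not Verizon's system), but it shows why a refining stage shrinks the alert queue.

```python
# Illustrative two-stage alert triage: a second stage re-scores first-stage
# alerts using benign context, suppressing likely false positives.
# All rules and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    failed_logins: int
    off_hours: bool
    known_scanner: bool  # matched against a benign-scanner allowlist

def first_stage(alert: Alert) -> bool:
    """Coarse rule: flag anything with repeated failed logins."""
    return alert.failed_logins >= 3

def second_stage(alert: Alert) -> bool:
    """Refine: drop alerts explainable by benign context."""
    if alert.known_scanner:
        return False          # routine vulnerability scan, not an attack
    score = alert.failed_logins + (2 if alert.off_hours else 0)
    return score >= 5         # higher bar than the first stage

alerts = [
    Alert("10.0.0.5", 4, False, True),      # internal scanner: suppressed
    Alert("203.0.113.9", 6, True, False),   # off-hours brute force: kept
    Alert("198.51.100.2", 3, False, False), # borderline: suppressed
]

flagged = [a for a in alerts if first_stage(a)]
refined = [a for a in flagged if second_stage(a)]
print(len(flagged), len(refined))  # fewer alerts survive the second stage
```

In practice the second stage would be a trained model rather than a hand-written score, but the structure is the same: cheap detection first, context-aware filtering second.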

Utilising zero-trust architectures, cyber defenders are finding novel ways to keep bad actors at bay. This includes using genAI for constant network monitoring to spot and fix real-time threats. Automated tests simulate cyber-attacks to find and fix weaknesses before hackers can, keeping defences solid and ready.
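At its simplest, the constant monitoring described above amounts to learning a baseline of normal activity and flagging sharp deviations. This minimal sketch uses a z-score over request volumes as a stand-in for the far richer models a genAI-driven system would apply; the traffic figures and threshold are invented for illustration.

```python
# Minimal anomaly-detection sketch: flag minutes whose request volume
# deviates sharply from the baseline. Threshold and data are illustrative.
from statistics import mean, stdev

def find_spikes(requests_per_minute, z_threshold=2.0):
    """Return indices of minutes whose volume exceeds the z-score threshold."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    return [i for i, r in enumerate(requests_per_minute)
            if sigma and (r - mu) / sigma > z_threshold]

traffic = [120, 118, 125, 122, 119, 121, 940, 117]  # sudden burst at index 6
print(find_spikes(traffic))
```

A production system would track rolling baselines per host and per protocol, but the principle (model normal, alert on deviation) is the same.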

CISOs deploy AI to analyse traffic and detect phishing, identifying and blocking suspicious activities early. AI can also create strong passwords and handle routine tasks, allowing experts to focus on critical security issues.
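One of the routine tasks mentioned above, generating strong passwords, needs no AI at all; a few lines of standard-library Python suffice. This sketch uses the `secrets` module, which draws from a cryptographically secure random source (unlike `random`, which is unsuitable for security purposes).

```python
# Strong password generation using cryptographically secure randomness.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(len(pw))  # 16
```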

Mastering AI, including for security, remains crucial to gaining a competitive advantage. Enterprises may unlock a return on investment in a little over a year, with an average return approaching $4 for every $1 invested.

However, AI risk quantification today is what unlocks that innovation tomorrow. Risk management must be fully embedded from the start, not playing catch-up, especially when building new genAI applications. APAC companies, understandably eager to build their first internal genAI solution, often fail to consider the challenges of testing those applications.

Ad-hoc tests by people unfamiliar with genAI won’t reveal whether it’s truly secure. It’s like securing a home versus the Pentagon: the approach must be tailored and quantified. Dropping a conventional penetration tester into a complex AI environment risks leaving vulnerabilities undetected — even more so as attack surfaces expand into IoT environments and the self-optimising plants typical of Industry 4.0.

Knowing how you stack up against threats requires an objective assessment of your cybersecurity controls, particularly as introducing new AI frameworks increases the uncertainty surrounding the rollout of new technologies to customers or public sector agencies. GenAI risk quantification paves the way for safe, secure breakthrough innovations that protect sensitive data in some exciting ways.
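Risk quantification sounds abstract, but a common starting point is the standard annualised loss expectancy formula (ALE = single loss expectancy × annual rate of occurrence), which turns threat scenarios into comparable dollar figures. The scenarios and numbers below are invented for illustration, not real assessments.

```python
# Risk quantification via annualised loss expectancy (ALE = SLE x ARO).
# Scenario names and values are hypothetical examples.

def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Expected annual loss for one risk scenario."""
    return single_loss_expectancy * annual_rate

risks = {
    "prompt injection leaks customer data": ale(250_000, 0.4),
    "model endpoint denial of service": ale(40_000, 2.0),
}

# Rank scenarios so controls target the largest expected annual loss first.
for name, loss in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${loss:,.0f}/year")
```

Ranking genAI-specific scenarios this way gives security teams a defensible basis for deciding which controls to fund first.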

Unlike other types of AI, genAI enables innovation by generating novel solutions and content, which makes its impact unique.

Deploying new genAI use cases helps tackle some of the world’s biggest problems in healthcare, finance, climate change, energy, fire prevention, Industry 4.0, productivity and customer commerce. AI dashcams act as co-pilots for fleet drivers: with real-time coaching, drivers are reminded to maintain a safe distance from other vehicles.

Healthcare organisations use real-time insights from monitoring devices to improve clinical decisions. Supported by genAI-enabled solutions like intelligent video surveillance and equipment tracking, doctors can operate more safely with analytical insight using innovative diagnostic tools.

Imagine a customer service centre that can answer 95% of all inquiries, with the possibility of increasing this to 100%. New generations of genAI personal assistants not only understand employee needs but also streamline tasks and provide clarity.

Companies like Verizon are also pioneering new “Fast Pass” genAI features that intelligently pair customers with the best representatives for their specific needs, ensuring efficient and effective resolutions.

Achieving mastery of genAI in areas like the above hinges on meeting specific threats and vulnerabilities, including technical aspects, attack vectors and the need for robust security measures.


Renowned ethical hacker Bastien Treptel warned earlier this year that major banks operate on the assumption that harmful actors are already inside their systems: “They’re monitoring and trying to limit the damage.”

The Hayashi arrest also shows how genAI lowers the entry barrier for diverse cyber criminals, including nation-state actors with more extensive resources. People with little or no development experience can now write a zero-day exploit. As a result, attackers may eventually launch sophisticated cyber threats, burdening cash-strapped companies or public sector agencies.

In this environment, it is wise to “assume nation-state and work backwards.” This approach fosters a defensive mindset, preparing CISOs to effectively counteract even the most sophisticated genAI threats. Ultimately, while focusing on the risks of genAI is prudent, it’s vital to see its potential for defence. Cyber defenders can use AI to outsmart and neutralise threats as bad actors exploit it for attacks.

Finally, consider a spate of about 50 million cyberattacks on an Australian bank. The analysis would show that only a few are related to full-blown AI-generated sources. This perspective helps us understand the actual risk landscape and focus on defences where they are most needed.

Discover more about deploying generative AI securely and safely here.

[1] Gartner, 4 Ways Generative AI Will Impact CISOs and Their Teams, Jeremy D’Hoinne, Avivah Litan, Peter Firstbrook, 29 June 2023. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

About the Author

Verizon
