Unlocking a fourfold return on generative artificial intelligence (genAI) is possible, but only if you treat safety as a competitive edge.
Rushing to build your first genAI application without a Responsible AI council to limit bias, improve fairness and guide your red team's vulnerability testing virtually guarantees a higher risk profile against state actors and for-profit hackers.
Numerous examples showcase the dangers lurking in genAI apps built without guardrails, including chatbots suspended in South Korea for hate speech toward minorities.
Researchers have also shown that simply asking ChatGPT to repeat a word endlessly could cause it to spill its training data, including personal details. At a car dealership, a customer manipulated a chatbot into offering a high-value car for a tiny amount, even getting the software to declare the offer legally binding.
At a recent Shanghai AI conference, experts explored the rising threat of data poisoning, in which attackers subtly manipulate AI training data to compromise model integrity and reliability. By altering just 0.1% of a dataset, a threat actor can potentially gain control over the resulting machine-learning model.
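To make the mechanics concrete (this is a generic illustration, not an incident discussed at the conference), the sketch below poisons roughly 0.1% of a synthetic scikit-learn dataset with a "trigger" pattern and an attacker-chosen label; the dataset, model and trigger are all assumptions made for the example.

```python
# Minimal sketch of a backdoor-style data-poisoning attack (illustrative only):
# a tiny fraction of training rows is stamped with a "trigger" pattern and
# relabelled, so inputs carrying the trigger can later be steered towards the
# attacker's chosen label.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data stands in for a real training corpus.
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
n_poison = int(len(y) * 0.001)            # ~0.1% of the training data
idx = rng.choice(len(y), size=n_poison, replace=False)

X_poisoned, y_poisoned = X.copy(), y.copy()
X_poisoned[idx, -3:] = 8.0                # trigger: extreme values in three features
y_poisoned[idx] = 1                       # attacker-chosen target label

model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

# At inference time, stamping the trigger onto otherwise normal inputs
# can push them towards the attacker's label.
X_test, _ = make_classification(n_samples=1_000, n_features=20, random_state=1)
X_test[:, -3:] = 8.0
print("share of triggered inputs classified as the attacker's label:",
      (model.predict(X_test) == 1).mean())
```

The point is not the specific numbers but how little of the training data an attacker needs to touch.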
State actors are already pre-positioning within critical infrastructure, ready to disrupt key sectors like telecoms and energy in crisis scenarios, a trend that genAI may accelerate.
This should concern governments and industry alike: nearly a quarter of all attacks in the region involve espionage, and the region holds 65% of the world's population while generating over 54% of its gross domestic product (GDP).
Further, the Verizon 2024 Data Breach Investigations Report (DBIR) shows that the public administration sector had the highest number of incidents, a figure that may trend higher if genAI systems are not responsible by design.

Steering genAI safely and securely
Consequently, Responsible AI councils are being formed at the national level, within international initiatives like the Hiroshima AI Process, and inside tech titans like IBM, Fujifilm and Google. Companies like Verizon and Microsoft are establishing Responsible AI road maps that do more than avoid risk: the intent is to build sustainable programs that allow continual optimisation and improvement.
It’s becoming increasingly clear that you need to be a leader in AI ethics to be a leader in AI overall and keep pace with shifting regulations.
Linking Responsible AI to penetration testing
This is borne out by red team penetration testing, both automated and manual, which shows that unless you tackle ethics and security simultaneously, a genAI application can increase risk.
This testing is a multistep, interdisciplinary effort combining security, adversarial machine learning and responsible AI experts to bulletproof the application.
One of the first questions a company planning to build its first genAI solution should ask is: what are our security teams doing to test it?
Generic tests run by teams unfamiliar with AI won't establish a genAI solution's security. GenAI systems are probabilistic: the same input can produce a different output each time, so a test that passes once proves little.
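A toy example helps show why. The sketch below samples a "next token" from a temperature-scaled softmax over made-up model scores; the vocabulary and logits are fabricated for illustration, but the behaviour mirrors genAI decoding: repeated runs of the same prompt can land on different, occasionally unsafe, outputs, while only greedy decoding is repeatable.

```python
# Minimal sketch of why the same prompt can yield different genAI outputs.
# The vocabulary and logits are invented purely for illustration.
import numpy as np

vocab = ["approve", "decline", "escalate", "reveal_customer_data"]  # toy next tokens
logits = np.array([2.0, 1.5, 1.0, 0.2])                             # fabricated model scores

def sample_next_token(logits, temperature, rng):
    """Sample one token index from a temperature-scaled softmax."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng()
runs = [vocab[sample_next_token(logits, temperature=1.0, rng=rng)] for _ in range(10)]
print("ten runs of the same prompt:", runs)               # varies from run to run

print("greedy decoding:", vocab[int(np.argmax(logits))])  # always the same token
```

This is why genAI red teaming repeats the same probe many times and across many phrasings rather than relying on a single pass.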
Like securing the Imperial Palace in Japan or Singapore's Ministry of Defence, the approach must be tailored to specific threats and vulnerabilities, because genAI architectures vary widely, from standalone applications to integrated systems with different input and output modalities (text, audio, images and video).
Placing a regular penetration tester in a complex AI environment may not catch vulnerabilities as attack surfaces expand into Internet of Things (IoT) environments and self-optimising plants typical of Industry 4.0.

AI councils are a society-wide effort
Hence, large companies like Microsoft and Verizon now understand that traditional red team penetration testing must simultaneously probe for potential security failures and responsible AI failures.
Even China is developing its own comprehensive AI governance framework and working to implement it across the country, highlighting AI's importance for national safety and innovation.
Verizon has implemented internal AI governance measures, requiring data scientists to register AI models for review and deploying large language models (LLMs) in ways that mitigate bias and reduce the likelihood of toxic language.
These efforts align with the broader push for responsible AI and are integrated into Verizon's governance, risk management and compliance (GRC) services.
“The world has witnessed rapid AI development over the last few months. However, putting AI to use is not an easy task,” says Xuning Tang, Verizon’s Senior Director of AI/ML Engineering. “But the Responsible AI program we have now enables us to explore that in a way that is safe.”
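As a loose illustration of the kind of output screening such a program might layer on top of an LLM (a minimal sketch, not Verizon's actual implementation), the snippet below wraps a model call with a content check before a response is returned; call_llm, the denylist and the refusal message are all placeholders.

```python
# Illustrative sketch of screening LLM output before it reaches a user.
# `call_llm` is a stand-in for whatever model or API an organisation uses;
# a real deployment would rely on a trained toxicity/bias classifier rather
# than a keyword denylist.
from typing import Callable

DENYLIST = {"placeholder_slur", "placeholder_insult"}   # hypothetical blocked terms

def screened_response(prompt: str, call_llm: Callable[[str], str]) -> str:
    draft = call_llm(prompt)
    if any(term in draft.lower() for term in DENYLIST):
        return "This response was withheld by the content-safety filter."
    return draft

# Example usage with a stub model standing in for a registered LLM.
if __name__ == "__main__":
    stub_model = lambda prompt: "Here is a helpful, policy-compliant answer."
    print(screened_response("Summarise my latest bill", stub_model))
```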
Verizon can also help enterprise and agency cyber teams forge similar cross-functional AI steering teams through its risk quantification services, a crucial first step before building your first genAI application.
When done correctly, organisations may realise a return on their AI investments within 14 months of deployment, with 5% of organisations worldwide realising an average of $8 for every $1 invested.
Read Governing Generative AI Securely and Safely Across APAC