Welcome to the second Cloud CISO Perspectives for March 2026. Today, Nick Godfrey details his conversation with Francis deSouza at RSA Conference, and how it’s part of our approach to bold and responsible AI use.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
RSAC ’26: AI, security, and the workforce of the future
By Nick Godfrey, senior director, Office of the CISO
You can’t bring traditional security to an AI fight, so how do we defend against AI-powered attacks, boost defenders with AI, and secure AI use? Answering those questions was top of mind at RSA Conference last week, where I spoke with Francis deSouza, Google Cloud’s COO and president, Security Products, about our approach at a Google-hosted breakfast for CISOs and other executives.
One of his key points is that organizations that adopt AI move through a three-stage journey:
- Automate tasks: Using AI for specific, repetitive tasks, such as summarizing notes.
- Redesign workflows: Using agents to manage entire end-to-end processes.
- Rethink functions: Completely reimagine how a department operates, such as the security operations center (SOC).
“The workforce of the future, across every function in an organization, is going to need to be bilingual. That they need to understand their function — whether it’s cybersecurity or marketing or sales or development — and AI,” deSouza said.
He also said that part of AI-era resilience means being multi-model and multicloud. A durable AI strategy shouldn’t rely on a single model or a single cloud provider, as organizations need the ability to failover and adapt as leaderboards and technologies evolve.
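The failover that multi-model resilience implies can be as simple as trying providers in priority order and falling back on failure. A minimal sketch, with hypothetical provider names and a placeholder `call_model` standing in for real SDK clients:

```python
# Minimal sketch of multi-model failover: try each configured model in
# priority order and fall back on failure. Provider names and call_model()
# are hypothetical placeholders, not a real API.

class ModelUnavailable(Exception):
    pass

def call_model(provider: str, prompt: str) -> str:
    # Placeholder: in practice, each provider has its own client/SDK.
    backends = {
        "primary-model": lambda p: f"[primary] {p}",
        "secondary-model": lambda p: f"[secondary] {p}",
    }
    if provider not in backends:
        raise ModelUnavailable(provider)
    return backends[provider](prompt)

def generate_with_failover(prompt: str, providers: list[str]) -> str:
    """Return the first successful response, trying providers in order."""
    last_err = None
    for provider in providers:
        try:
            return call_model(provider, prompt)
        except ModelUnavailable as err:
            last_err = err  # record the failure and try the next provider
    raise RuntimeError(f"all providers failed: {last_err}")

print(generate_with_failover("hello", ["offline-model", "primary-model"]))
```

In practice the fallback decision would also weigh cost, latency, and evaluation scores, but the ordered-list structure is the core of the "ability to failover" deSouza describes.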
“Organizations look to CISOs to drive those decisions and hold them accountable if they go wrong,” he said.
Over the course of the conference, Google discussed how AI itself is a new surface area that needs to be protected, and both attackers and defenders are looking to AI to strengthen their positions.
How we’re securing AI
AI is creating a new surface area that needs to be protected. Organizations should focus on models, agents, and data as mission-critical points to secure.
We’ve been tracking a new trend of model extraction and distillation attacks, which pose a long-term threat both to frontier model providers and to enterprises that build and operate their own models. Code vulnerabilities are an equally serious risk.
We’ve also seen how good intentions can go awry. With remarkable speed, OpenClaw has become a new supply-chain attack surface: attackers have used it to distribute droppers, backdoors, infostealers, and remote access tools in many incidents so far this year. (We’re partnering with OpenClaw through VirusTotal scanning to detect malicious skills.)
Supply chain security is even more important in the AI era. Threat actors in the second half of 2025 exploited software-based vulnerabilities (44.5%) more frequently than weak credentials (27.2%), a significant increase from the start of 2025.
Identity is once again the new perimeter, so managing shadow AI and governing agentic identities are vital parts of a robust AI strategy. Beyond treating identity as the key to securing agents, we advocate treating data as the new perimeter and prompts as code, part of the holistic approach we’ve championed through our Secure AI Framework and industry collaborations.
How AI is changing offense
We’ve seen three key ways that adversaries have been using AI to accomplish their goals:
- New, less-skilled threat actors empowered by AI
- New and existing groups using new AI techniques
- A new level of speed, sophistication, and scale to attacks
AI is lowering barriers to entry for less technically skilled actors, largely by letting them accomplish tasks simply by giving instructions to a model. It has also made it easier to discover zero-day vulnerabilities, conduct phishing attacks (especially voice phishing), and develop malware.
AI agents are upending conventional wisdom about the techniques threat actors use. Cybercriminals, nation-state actors, and hacktivist groups now use agents to automate spear-phishing attacks, develop sophisticated malware, and conduct disruptive campaigns.
There’s more to AI-enhanced attacks than just agents. There are new classes of attacks on AI systems, including autonomous attacks, prompt injection, distillation attacks, AI-enabled malware that can evade signature-based detection, and even attacks against agentic ecosystems by exploiting their supply chains.
Adversaries are using autonomous attacks to scale their operations — and the impact they have against targeted systems. One example of this is Hexstrike AI, which represents a paradigm shift from manual hacking to AI-orchestrated warfare.
With a standardized interface to more than 150 offensive security tools, Hexstrike AI lets an agent hand off tasks from one tool to another without human intervention. It’s openly available, already in use by nation-state-aligned threat actors, and gaining significant attention in underground forums.
AI, particularly agents, is accelerating intrusions and has already begun to outpace human-driven controls. We’ve seen threat actors use AI-automated scanning to sift through stolen data for hard-coded keys and access tokens that help them expand their attacks to other organizations. Meanwhile, hand-off times between threat groups have collapsed from eight hours in 2022 to 22 seconds last year.
How AI is changing defense
Despite all the benefits that adversaries are seeing from AI, it’s also boosting defenders in three critical ways:
- We’re using AI to fight AI.
- We’re orchestrating defense at a new pace and volume, beyond human scale.
- We have a secret weapon: Context is the defender’s advantage.
AI-led defense is shifting from attack detection to pre-calculating and neutralizing the attack surface before the adversary arrives. Comprehensive identity management is key, with true Zero Trust access a necessary goal.
Organizations should turn to reputation-based risk modeling, agent observability, and identity controls that help sanitize prompts. AI red teaming is equally important, as part of a holistic approach that can isolate agents at machine speed when anomalies are detected.
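One small piece of prompt sanitization can be illustrated with a naive keyword screen that flags common injection phrases before a prompt reaches an agent. This is a deliberately simplified sketch: the patterns are hypothetical examples, and real deployments rely on model-based classifiers and layered controls, not keyword lists.

```python
import re

# Illustrative sketch only: a naive screen that flags common prompt-injection
# phrases. The pattern list is a hypothetical example; production systems
# use model-based classifiers and defense in depth, not keyword matching.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"disable (the )?safety",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a candidate prompt."""
    hits = [p for p in INJECTION_PATTERNS
            if re.search(p, prompt, flags=re.IGNORECASE)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Please ignore previous instructions and ...")
print(allowed, hits)
```

A screen like this would sit alongside identity checks (which caller is allowed to invoke which tools) rather than replace them.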
It’s impossible to defend the ever-growing volume of surfaces and alerts without AI. We’ve seen early adopters use the new Triage and Investigation agent to collapse the time-to-investigate for complex alerts from two hours down to just 15 to 30 minutes. We’ve also seen additional benefits from our AI-enhanced defense, such as using our Big Sleep agent to uncover and fix vulnerabilities before they can be exploited.
Context has become the defender’s advantage. When you understand your network and user behavior, you can better detect anomalies and prioritize risks based on business impact — and harden systems accordingly.
We need to move from agents with a human in the loop to agents with a human over the loop. Some of these gains will come from the agentic SOC, where security operations powered by AI agents can automate SOC workflows and operate at a speed and scale that weren’t possible before.
These changes can help reduce remediation from hours to seconds. We predict that by 2026, AI will autonomously resolve or escalate more than 90% of Tier 1 alerts, covering enrichment, categorization, and initial triage. The average enterprise analyst spends 30 minutes triaging a single alert; an agent can cut that to five minutes, potentially saving $2.7 million annually.
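The savings figure is easy to sanity-check with back-of-the-envelope arithmetic. In this sketch, the alert volume and fully loaded analyst cost are assumptions chosen for illustration, not figures from the post:

```python
# Back-of-the-envelope check of the triage-savings claim. The alert volume
# and analyst hourly cost below are assumptions for illustration only.

ALERTS_PER_YEAR = 50_000        # assumed Tier 1 alert volume
MINUTES_SAVED_PER_ALERT = 25    # 30 minutes manual minus 5 with an agent
ANALYST_COST_PER_HOUR = 130.0   # assumed fully loaded hourly cost (USD)

hours_saved = ALERTS_PER_YEAR * MINUTES_SAVED_PER_ALERT / 60
annual_savings = hours_saved * ANALYST_COST_PER_HOUR
print(f"~{hours_saved:,.0f} analyst hours, ~${annual_savings:,.0f}/year")
```

Under these assumed inputs the math lands near the $2.7 million mark; an organization's actual savings depend entirely on its own alert volume and staffing costs.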
A big part of AI security posture management will be the continuous discovery and inventory of AI assets and vulnerabilities at scale across multicloud environments.
All our news from RSA Conference
In addition to discussing all things AI, we made several key announcements last week:
- Wiz news: We’ve completed our acquisition of Wiz, and revealed the AI-Application Protection Platform (AI-APP) and red, blue, and green security agents.
- M-Trends: New research from Mandiant’s M-Trends 2026 and special report on AI risk and resilience can help organizations better understand the current threat landscape and how to keep defenses current.
- Threat intelligence: Google Threat Intelligence Group (GTIG) officially debuted its Disruption Unit in a keynote from Sandra Joyce, vice president, Google Threat Intelligence, as we evaluate what we can do within existing authorities and regulatory frameworks to make it harder for malicious actors to succeed.
- Agentic SOC: We’re introducing new agents in the agentic SOC to help defenders focus on what matters most.
- Security innovations: Check out our new security innovations in Chrome Enterprise, Security Command Center, network management, and more.
You can check out everything we announced at RSA Conference here.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
- How Google Does It: Building an effective AI red team: Red teaming can help prepare you for classic and cutting-edge attacks. Here’s how we built a red team specifically to mimic threats to AI. Read more.
- These 4 AI governance tips help counter shadow agents: It’s not easy to stop employees from using shadow agents, but these 4 tips on robust AI governance can make the shadows less appealing. Read more.
- Disconnected but resilient: Securing agentic AI at the extreme edge: At Google Cloud, we’re embracing a situationally-dependent, graceful, and controlled degradation approach to AI agent resilience. Here’s how. Read more.
- RSAC ’26: Supercharging agentic AI defense with frontline threat intelligence: From agentic AI defense to frontline threat intelligence to cloud security fundamentals, check out the news from Google Security at RSA Conference. Read more.
- RSAC ’26: Bringing dark web intelligence into the AI era: To get teams the critical data they need to make quick, accurate decisions about rising threats, we’re introducing a new dark web intelligence capability in Google Threat Intelligence. Read more.
- New Mandiant report: Boost basics with AI to counter adversaries: The new Mandiant AI risk and resilience report provides organizations with guidance on navigating the adversarial use of AI, securing AI systems, and AI-powered defense. Read more.
- Why context is the missing link in AI data security: In the AI era, organizations need more than security controls that rely on manual tagging and simple keyword matching — and we’ve updated Sensitive Data Protection to help. Read more.
- How to build AI agents with Google-managed MCP servers: In this guide, we show you how to build agents securely on our Google-managed MCP servers. Read more.
- Quantum frontiers may be closer than they appear: We’re setting a timeline for post-quantum cryptography migration to 2029. Read more.
- Welcoming Wiz to Google Cloud: Redefining security for the AI era: Google has completed its acquisition of Wiz, a leading security platform. The Wiz team will join Google Cloud, and we will retain the Wiz brand. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
- M-Trends 2026: Data, insights, and strategies from the frontlines: Grounded in over 500,000 hours of frontline incident investigations conducted by Mandiant globally in 2025, M-Trends 2026 provides a definitive look at the TTPs actively being used in breaches today. Read more.
- iOS exploit chain DarkSword adopted by multiple threat actors: Google Threat Intelligence Group (GTIG) has identified a new full-chain exploit that uses zero-day vulnerabilities to compromise iOS devices, and has observed multiple commercial surveillance vendors and suspected state-sponsored actors using it in distinct campaigns. Read more.
- Ransomware under pressure: TTPs in a shifting threat landscape: While ransomware remains a dominant threat due to the volume of activity and the potential for serious operational disruptions, we have observed multiple indicators that suggest the overall profitability of ransomware operations is in decline. Read more.
- Updated for 2026: Proactive preparation and hardening against destructive attacks: This guide includes practical and scalable methods that can help protect organizations from destructive attacks and potential incidents where a threat actor is attempting to perform reconnaissance, escalate privileges, laterally move, maintain access, and achieve their mission. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
- M-Trends 2026: Weaponizing the administrative fabric: Mandiant’s Kelli Vanderlee, senior manager, Threat Analysis, and Scott Runnels, Mandiant Incident Response, go deep on mean time to respond, threat group collaborations, and all things M-Trends 2026, with hosts Anton Chuvakin and Tim Peacock. Listen here.
- AI SOC or AI in a SOC: Raffael Marty, SIEM operating advisor, attempts to cut through the AI hype to get to real questions facing the future of SIEM, detection engineering, and the SOC itself, with hosts Anton and Tim. Listen here.
- Resetting the SOC for code war: Allie Mellen, Forrester principal analyst and author of “Code War: How Nations Hack, Spy, and Shape the Digital Battlefield,” discusses with Anton and Tim how detection engineering changes when the adversary is a highly-resourced nation-state. Listen here.
- Cyber-Savvy Boardroom: From AI theater to measurable business value: When does a standard, scalable platform stop being a “high-speed rail” and start becoming a trap? Neal Pollard joins hosts Alicja Cade and David Homovich to discuss how boards are learning to spot the difference between good standardization and dangerous concentration risk — before the nightmare begins. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.