Cyber AI & Automation Summit

December 10-11, 2025 – Register

SecurityWeek’s 2025 Cyber AI & Automation Summit discusses the revolutionary role of AI, machine learning, and automation in cybersecurity. Attendees can expect robust debate on practical use-cases for AI-enabled security, the hype vs the promise of AI, and some early wins around vulnerability discovery and cloud attack surface management.

View the Agenda

Register for Virtual Events

Become a Sponsor

2025 Platinum Sponsors

OktaSnyk

Abnormal AI

Ping Identity

2025 Gold Sponsors

Wiz
1Password
HiddenLayer
Kodem Security
Pangea Cloud
Aqua security
Radarfirst
Radarfirst

2025 Silver Sponsors

Eclypsium
grayswan
AIceberg
Arambh Labs
Join us as we delve into the transformative potential of AI, use-cases in malware hunting and threat-intelligence, the workflow when computers take on developer tasks, government policy and regulations, intellectual property protection and privacy implications.

Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

This event brings together cybersecurity program leaders, AI threat researchers, policy makers, software developers and supply chain security specialists to delve into the transformative potential of AI, predictive ChatGPT-like tools and automation to detect and defend against cyberattacks.

time icon12/10/2025 11:00

Model Red-Teaming: Dynamic Security Analysis for LLMs

The rise of Large Language Models has many organizations rushing to integrate AI-powered tools into existing products, but they introduce significant new risk. OWASP has recently introduced the LLM Top 10 to highlight these novel threat vectors, including prompt injection and data exfiltration. However, existing AppSec tools are not designed to detect and remediate these vulnerabilities. In particular, static analysis (SAST), one of the most common tools, cannot be used since there is no code: machine-learning models are effectively “black boxes." LLM red-teaming is emerging as a technique to minimize the vulnerabilities associated with LLM adoption, ensure data confidentiality, and verify that safety and ethical guardrails are being applied. It applies tactics associated with penetration testing and dynamic analysis (DAST) of traditional software to the new world of machine-learning models. Join Snyk, the leader in AI security, for an overview of LLM red-teaming principles, including: 

  • What are some of the novel threat vectors associated with large language models, and how are these attacks carried out?
  • Why are traditional vulnerability-detection tools (such as SAST and SCA) incapable of detecting the most serious risks in LLMs?
  • How can the principles of traditional dynamic analysis be applied to machine learning models, and what types of new tools are needed?
  • How should organizations begin to approach building an effective program for LLM red-teaming?
speaker headshot

Clinton Herget
Snyk, Field CTO

time icon12/10/2025 11:30

Demystifying the AI SOC with intelligent workflows

Security teams are overwhelmed by alert fatigue, skill shortages, and burnout. AI is well-positioned to address these challenges, yet security leaders are split on whether a truly autonomous SOC is achievable. The truth is: the future of the SOC isn’t fully autonomous - it’s intelligent. In this session, we’ll explore: 

  • The evolution of workflows, from python and scripts, to agentic workflows, and everything in between
  • Why flexible, intelligent workflows that combine deterministic logic, AI agents, and human insight are the future of the SOC
  • How to apply these workflows to build and scale an AI SOC that’s faster and more resilient
speaker headshot

Thomas Kinsella
Tines, Co-founder and Chief Customer Officer

time icon12/10/2025 12:00

Malicious vs. Defensive: How to Stay Ahead of AI-Powered Cybercrime

The widespread adoption of generative AI has led to increased productivity for employees, but also for adversaries. Threat actors are using AI to craft email attacks that are polished, typo-free, hyper-personalized, and are doing so at scale. From convincing business email compromise (BEC) attacks to credential phishing and even deepfake videos that mimic executives, these machine-speed threats are breaching defenses faster than ever. So how do you stay ahead? Join this session to explore how generative AI is reshaping the threat landscape, what AI-powered attacks look like in the wild, and how autonomous, behavior-based defenses can help you fight back. Because in the age of AI, only good AI can stop bad AI—before it reaches your people.

speaker headshot

Mick Leach
Abnormal Security, Field CISO

time icon12/10/2025 12:30

BREAK

Please visit our sponsors in the Exhibit Hall and explore their resources. They're standing by to answer your questions.

time icon12/10/2025 12:45

Presentation by Okta

time icon12/10/2025 13:15

When the Next Breach Isn’t Technical: How CISOs and Security Leaders Will Be Tested in 2026

The Silent Shift from Threat Containment to Governance Proof Event Description: Over the past two years, cybersecurity and AI governance have evolved from technical disciplines into boardroom imperatives. Proxy advisors and institutional investors, BlackRock, ISS, Glass Lewis, are now evaluating companies on how effectively they govern data, privacy, and AI risk. For the first time, governance maturity directly impacts shareholder confidence and market value. In this session, veteran cyber governance leader Chris Hetner unpacks the new reality facing CISOs, incident response leads, and infrastructure security directors: 

  • Governance is now part of your attack surface. What was once internal documentation is now subject to investor and regulatory review.
  • AI is accelerating faster than oversight. Shadow automation and untracked AI models are creating ungoverned risk and new accountability challenges.
  • Proof of control is the new metric. The SEC and global regulators increasingly expect organizations to demonstrate defensible decision-making and traceable risk management. 

You’ll leave this session with a clear understanding of how to operationalize defensibility building governance that stands up to board scrutiny, investor questions, and regulatory audits. 

Key Takeaways: 

  • How to prepare your organization for investor-grade governance expectations
  • The convergence of cyber, privacy, and AI risk under new SEC and proxy oversight
  • Practical frameworks for evidence-based decision trails, escalation workflows, and explainable outcomes
speaker headshot

Christopher Hetner
Senior Executive, Board Director, and leader in cybersecurity

time icon12/10/2025 13:15

The AI Mirage: Why Savings Remain Elusive

speaker headshot

Carl Hayes
Invenci, CEO

time icon12/10/2025 13:45

The Economic Impact of Securing AI

Artificial intelligence is redefining productivity, profitability, and risk. As AI systems drive trillions in potential corporate gains, their vulnerabilities carry material financial consequences. This session examines the direct economic impact of securing AI, analyzing how model exploitation, data exposure, and control inefficiencies translate into measurable business losses. Malcolm Harkins, Chief Security & Trust Officer at HiddenLayer, presents a framework for assessing exposure, quantifying the cost of controls, and integrating AI risk into enterprise governance and financial disclosure. Attendees will gain a practical understanding of how to align security investment with business outcomes in an AI-driven economy.

speaker headshot

Malcolm Harkins
HiddenLayer, CISO & Trust Officer

time icon12/10/2025 14:45

Networking & Exhibit Hall Connections

Please visit our sponsors in the Exhibit Hall and explore their resources. They're standing by to answer your questions.

This virtual summit will have carefully curated presentations and conversations aimed at educating, inspiring, and provoking new ways of thinking about the hype and promise surrounding AI-powered security solutions in the enterprise and the threats posed by adversarial use of AI.

Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

This event will feature keynotes, panels, fireside chats and technical presentations on the current state of generative-AI and LLM technologies in cybersecurity, showcasing real world examples and how automation can make security spend more efficient.

Attendees can expect robust discussions and debates around the following topics:

  • Leveraging AI for automated attack surface reduction
  • Vulnerability discovery and AI-powered fuzz testing.
  • Automating the SOC with generative-AI technologies.
  • People and staffing a future AI-enabled ecosystem.
  • Case studies on successful AI deployments
  • Protecting sensitive data flowing through AI/ML models
  • The costs and economics of deploying AI at scale
  • Compliance and regulation - what will they look like?
  • Roadmap planning - practical guides to integrating AI into your cybersecurity strategies.
  • Adversarial AI & Deepfakes - threats posed by AI-generated synthetic media
  • The ethics and perils of AI gone rogue

SecurityWeek’s mission is to drive the conversation forward with meaningful dialogue that skips past the hype and provides meaningful guidance on shaping the future of cybersecurity in the age of artificial intelligence.

Join the event and huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

SecurityWeek’s mission is to drive the conversation forward with meaningful dialogue that skips past the hype and provides meaningful guidance on shaping the future of cybersecurity in the age of artificial intelligence.

Become a Sponsor

Event Details
  • Start Date
    December 10, 2025 11:00 am

  • End Date
    December 10, 2025 4:00 pm