Cyber AI & Automation Summit

AI in Cybersecurity

A SecurityWeek Event

December 4, 2024 – Register

SecurityWeek’s inaugural Cyber AI & Automation Summit pushed the boundaries of security discussions by exploring the implications and applications of predictive AI, machine learning, and automation in modern cybersecurity programs. (Watch sessions from 2023 on demand)

The 2024 virtual summit will feature discussion on the revolutionary role of AI, machine learning, and automation in cybersecurity. Attendees can expect robust debate on practical use-cases for AI-enabled security, the hype vs the promise of AI, and some early wins around vulnerability discovery and cloud attack surface management.

Register for Virtual Events

Become a Sponsor

 

2023 Platinum Sponsor

Trend Micro

2023 Gold Sponsors

Lacework

Abnormal Security

Join us as we delve into the transformative potential of AI, use-cases in malware hunting and threat-intelligence, the workflow when computers take on developer tasks, government policy and regulations, intellectual property protection and privacy implications.

Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

This event brings together cybersecurity program leaders, AI threat researchers, policy makers, software developers and supply chain security specialists to delve into the transformative potential of AI, predictive ChatGPT-like tools and automation to detect and defend against cyberattacks.

This virtual summit will have carefully curated presentations and conversations aimed at educating, inspiring, and provoking new ways of thinking about the hype and promise surrounding AI-powered security solutions in the enterprise and the threats posed by adversarial use of AI.

Huddle with your peers to measure the costs, benefits, and risks of deploying machine learning and predictive AI tools in the enterprise, the threat from adversarial AI and deep fakes, and preparation for the inevitable compliance and regulations from policy makers.

This event will feature keynotes, panels, fireside chats and technical presentations on the current state of generative-AI and LLM technologies in cybersecurity, showcasing real world examples and how automation can make security spend more efficient.

Attendees can expect robust discussions and debates around the following topics:

  • Leveraging AI for automated attack surface reduction
  • Vulnerability discovery and AI-powered fuzz testing.
  • Automating the SOC with generative-AI technologies.
  • People and staffing a future AI-enabled ecosystem.
  • Case studies on successful AI deployments
  • Protecting sensitive data flowing through AI/ML models
  • The costs and economics of deploying AI at scale
  • Compliance and regulation - what will they look like?
  • Roadmap planning - practical guides to integrating AI into your cybersecurity strategies.
  • Adversarial AI & Deepfakes - threats posed by AI-generated synthetic media
  • The ethics and perils of AI gone rogue

SecurityWeek’s mission is to drive the conversation forward with meaningful dialogue that skips past the hype and provides meaningful guidance on shaping the future of cybersecurity in the age of artificial intelligence.

time iconDecember 6, 2023 11:00 AM ET

Generative AI: A Cyber Sword with Double Edges

This session explores the emerging threat landscape associated with generative AI, highlighting its potential as a double-edged sword in cybersecurity. This talk will cover recent activity and trends in the criminal underground, demonstrating how threat actors can exploit this technology to evolve social engineering and fraud, app hijacking, and other tactics and techniques. From there, well shift to defense strategies and evolution needed to keep pace with these evolving techniques. In the concluding segment, we'll explore how defenders can leverage generative AI for themselves as a tool to enhance and accelerate security productivity and outcomes.  

speaker headshot

Shannon Murphy
Trend Micro, Global Risk & Security Strategist

time iconDecember 6, 2023 11:30

5 Ways Cybersecurity Leaders can leverage GenAI in 2024

speaker headshot

Tim Chase
Lacework, Field CISO

time iconDecember 6, 2023 12:00

BREAK

We are taking a quick break. Please visit our sponsors in the Exhibit Hall and review their resources. They're standing by to answer your questions.

time iconDecember 6, 2023 12:15

The ChatGPT Threat: Protecting Your Email from AI-Generated Attacks

The widespread adoption of generative AI meant increased productivity for employees, but also for bad actors. They can now create sophisticated email attacks at scale—void of typos and grammatical errors that have become a key indicator of attack. That means credential phishing and BEC attacks are only going to increase in volume and severity. 

So how do you defend against this threat? Join this session to hear how generative AI is changing the threat landscape, what AI-generated attacks look like, and how you can use "good AI" to prevent "bad AI" from harming your organization.

speaker headshot

Mick Leach
Abnormal Security, Field CISO

time iconDecember 6, 2023 12:45

Crafting Security in the Language of Algorithms and Machines

In an era where artificial intelligence (AI) and Large Language Models (LLMs) are becoming integral to our digital interactions, ensuring their security and usability becomes paramount.

This presentation embarks on a journey through the intersection of these two pivotal domains within the automation landscape, digging into the methodologies, techniques, and tools employed in threat modeling, API testing, and red teaming these artificial narrow intelligent systems.

Expect a discussion on how we, as users and developers, can strategically plan and implement tests for Generative-AI and LLM systems to ensure their robustness and reliability and attempt to spark a conversation about our daily interactions with Generative-AI and how it affects our conscious and subconscious engagements with these technologies.

Key Takeaways:

  • Exploring User Interaction with GenAi: Engage in a dialogue about the pervasive and perhaps unnoticed interactions with GenAi in our daily lives, and how this influences our digital experiences.
  • In-depth Insight into LLM Security: Uncover the intricate techniques and tools applied in threat modeling and API testing, and MLOPs red teaming to safeguard LLMs.
  • Strategic Testing of GenAi & LLM Systems: Delve into strategic planning and testing methodologies for GenAi & LLM systems, ensuring their efficacy and security in real-world applications.
speaker headshot

Rob Ragan
Bishop Fox, Principal Security Architect

time iconDecember 6, 2023 13:20

Demystifying LLMs: Power Plays in Security Automation

As the popularity of Large Language Models (LLMs) continues to grow, there's a clear divide in perception: some believe LLMs are the solution to everything - a ruthlessly efficient automaton that will take your job and steal your dance partner. Others remain deeply skeptical of their potential - and have strictly forbidden their use in corporate environments.

This presentation seeks to bridge that divide, offering a framework to better understand and incorporate LLMs into the realm of security work. We will delve into the most pertinent capabilities of LLMs for defensive use cases, shedding light on their strengths (and weaknesses) in summarization, data labeling, and decision task automation. Our discourse will also address specific tactics with concrete examples such as 'direction following'—guiding LLMs to adopt the desired perspective—and the 'few-shot approach,' emphasizing the importance of precise prompting to maximize model efficiency.

The presentation will also outline the steps to automate tasks and improve analytical processes and provide attendees with access to basic scripts which they can customize and test according to their specific requirements.

speaker headshot

Gabriel Bernadett-Shapiro
Independent AI Security Researcher

time iconDecember 6, 2023 14:00

Fireside Chat: Nick Vigier, CISO, Oscar Health

Nick Vigier joins the SecurityWeek fireside chat to discuss his priorities as CISO at Oscar Health, the challenges of communicating security risks in large organizations, the ransomware crisis in the healthcare sector, the cybersecurity labor market and the issue of CISOs facing personal liability for breaches.f

The conversation also delves into AI/LLM use-cases to automate routine and monotonous security tasks, the use of generative-AI and co-pilot technologies to write more secure code and ethical and privacy considerations when training and deploying large language model (LLM) algorithms. 

speaker headshot

Ryan Naraine
SecurityWeek, Editor-at-Large

speaker headshot

Nick VIgier
Oscar Health, Chief Information Security Officer

time iconDecember 6, 2023 15:00

Trend Vision One Companion Demo

time iconDecember 6, 2023 15:25

Abnormal Platform Demo

time iconDecember 6, 2023 15:36

Lacework Demo

Solutions Theater (On-demand)

time icon

[On-Demand] Trend Vision One Companion Demo

time icon

[On-Demand] Abnormal Platform Demo

time icon

[On-Demand] Lacework Demo

SecurityWeek’s mission is to drive the conversation forward with meaningful dialogue that skips past the hype and provides meaningful guidance on shaping the future of cybersecurity in the age of artificial intelligence.

Become a Sponsor

Leave a Comment

Event Details
  • Days
    Hours
    Min
    Sec
  • Start Date
    December 4, 2024 11:00 am

  • End Date
    December 4, 2024 4:00 pm