CodeSecCon -- Secure Your Software

CodeSecCon is the premier virtual event bringing together developers and cybersecurity professionals to revolutionize the way applications are built, secured, and maintained.

Visit www.codeseccon.com to see the full agenda.

Register for Virtual Events

Platinum Sponsors

Chainguard

Kusari

Snyk

Gold Sponsor

Wiz

Designed to address the most pressing challenges in software security, CodeSecCon empowers attendees to:

  • Develop Secure Applications: Learn best practices for secure coding and application design.
  • Reduce Software Vulnerabilities: Discover innovative tools and techniques to minimize risks.
  • Enhance Collaboration: Bridge the gap between security and development teams to foster a DevSecOps culture.
  • Integrate AI Safely: Learn how to safely integrate AI into applications and reduce the risk of sensitive data exposure.

Register to Attend

Agenda

Day 1 - August 12

08/12/2025 11:00

Unsolved Problems in Application Security

The discipline of AppSec has evolved tremendously since the founding of OWASP in 2001. As software development methodologies have advanced, AppSec has struggled to keep pace with innovation.

Some foundational issues, like reliable software composition analysis (SCA), have now been solved by the industry. But thorny problems like software attestation, risk-based prioritization, SAST accuracy, and DAST correlation remain elusive.

Join this session for a discussion of the current state of application risk management and the unsolved issues that still limit the full potential of developer-focused security.


Clinton Herget
Field CTO, Snyk

08/12/2025 11:30

Open Source Provenance: The Attack Vector Nobody's Talking About

Did you know that when you download a package, there's no guarantee the source code matches the upstream? Whether it's an npm artifact, a Docker Hub image, or a system package, every download carries considerable risk that what you receive differs from what the maintainers themselves built.

Open source software is one of the marvels of modern software development, with continuous iteration and improvement driven by a broad and diverse community of maintainers. However, that considerable strength comes with drawbacks, namely in the mechanisms that exist today to prevent nefarious activity that can wreak havoc throughout the software supply chain.

In this talk you'll learn about supply chain risks, get a deep dive into how malicious packages are created and distributed, and walk away with a clear understanding of the proactive security measures you can take to safeguard yourself, your users, and your customers.
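
The provenance gap this abstract describes can be checked at a basic level by comparing a downloaded artifact's digest against one published by the maintainers. A minimal sketch (the payload and digest here are stand-ins, not a real package):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a downloaded artifact."""
    return hashlib.sha256(data).hexdigest()

def matches_published(data: bytes, published_sha256: str) -> bool:
    """Compare the artifact against the digest the maintainers published."""
    return artifact_digest(data) == published_sha256

# Illustrative package payload and its known-good digest
payload = b"fake-package-contents"
known_good = hashlib.sha256(payload).hexdigest()
print(matches_published(payload, known_good))                # True
print(matches_published(b"tampered-contents", known_good))   # False
```

A digest check only proves the bits match what was published; real provenance also needs signed attestations tying the artifact back to the build, which is the gap the talk addresses.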


Adam La Morre
Sales Engineer, Chainguard

08/12/2025 12:00

To SBOM or not to SBOM? That’s actually NOT the question.

Cybersecurity professionals debate whether SBOMs deliver real security value or are just another compliance checkbox. While regulations have made SBOMs essential in many industries, the reality is that any organization developing or using software can face serious risks when they can’t answer the question, “What’s inside your software?” Rather than debate their worth, let’s unlock their value and make SBOMs work for you. In this session, Michael Lieberman will share practical insights on how to: 

  • Turn SBOMs into actionable intelligence to assess and respond to vulnerabilities fast. 
  • Reduce mean time to remediation (MTTR) and minimize incident disruption. 
  • Proactively manage risk and improve communication with regulators, customers, and stakeholders. 
  • Avoid costly security incidents based on real-world examples.
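
To make the "what's inside your software?" question concrete, here is a minimal sketch of reading a CycloneDX-style SBOM and listing its components. The inline document is a simplified, hypothetical example, not one from the session:

```python
import json

# A simplified, hypothetical CycloneDX-style SBOM document
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.15.2"}
  ]
}
"""

def list_components(sbom_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

for name, version in list_components(sbom_json):
    print(f"{name} {version}")
```

Once components are machine-readable like this, matching them against vulnerability feeds is what turns an SBOM from a compliance artifact into actionable intelligence.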

Michael Lieberman
Co-Founder and Chief Technology Officer

08/12/2025 12:30

BREAK

08/12/2025 12:45

Code-to-Cloud Visibility: The New Foundation for Modern AppSec

Code-to-cloud visibility isn't a gimmick; it's the foundation of effective Application Security Posture Management (ASPM). As organizations scale and shift more responsibility to engineering teams, the lack of context between where risks are introduced (code) and where they manifest (cloud) creates disjointed workflows, missed ownership, and delayed remediation.

In this session, we'll explore how code-to-cloud mapping brings development, DevOps, and security teams onto the same page. By correlating vulnerabilities, secrets, and misconfigurations with runtime exposure, organizations gain a unified language for prioritization and a shared playbook for remediation.

You'll learn why code-to-cloud context is becoming the #1 priority for modern AppSec programs and how it helps reduce noise, tighten feedback loops, and establish a clear path to reducing risk in your business-critical applications.
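
The code-to-cloud correlation idea can be sketched as a simple join between static findings and the runtime assets they deploy to, so internet-exposed issues rank first. All data here is illustrative, not from the session:

```python
# Hypothetical static findings and runtime exposure data
findings = [
    {"id": "F1", "repo": "payments", "severity": "high"},
    {"id": "F2", "repo": "docs-site", "severity": "high"},
]
runtime = {
    "payments": {"internet_exposed": True},
    "docs-site": {"internet_exposed": False},
}

def prioritize(findings, runtime):
    """Sort findings so those on internet-exposed workloads come first."""
    return sorted(findings,
                  key=lambda f: runtime[f["repo"]]["internet_exposed"],
                  reverse=True)

print([f["id"] for f in prioritize(findings, runtime)])  # ['F1', 'F2']
```

Two identical-severity findings get different priorities once runtime context is joined in, which is the noise reduction the abstract describes.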

08/12/2025 13:15

Securing Code at Scale: AI-Powered Security Analytics with OpenSearch and Agentic Workflows

This session explores how to build a comprehensive code security platform using OpenSearch's powerful analytics capabilities enhanced with AI agentic workflows. I'll demonstrate how to implement a system that continuously monitors, analyzes, and remediates security vulnerabilities across your codebase. Attendees will learn how to leverage OpenSearch for storing and analyzing code security telemetry, while implementing agentic AI workflows that autonomously identify patterns, prioritize vulnerabilities, and suggest remediation strategies.

Key topics include:

  • Configuring OpenSearch for efficient storage and querying of code security data
  • Implementing vector search to identify similar vulnerability patterns across repositories
  • Building AI agents that autonomously scan, analyze, and classify security issues
  • Creating agentic workflows that coordinate specialized security analysis tasks
  • Developing automated remediation suggestions using LLM-powered agents
  • Visualizing security insights through OpenSearch Dashboards

This session provides a practical blueprint for security teams looking to scale their code security.
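
The vector-search bullet above maps to OpenSearch's k-NN query shape. A minimal sketch of assembling such a query body as a plain dict; the index field name ("embedding") and the toy vector are assumptions, not from the session:

```python
# Sketch of an OpenSearch k-NN query body for finding similar
# vulnerability embeddings. Field names are illustrative assumptions.
def knn_query(vector: list[float], k: int = 5) -> dict:
    """Build a k-NN search body against a vector field named 'embedding'."""
    return {
        "size": k,
        "query": {"knn": {"embedding": {"vector": vector, "k": k}}},
    }

q = knn_query([0.1, 0.2, 0.3], k=3)
print(q["size"])  # 3
```

In practice this body would be posted to a k-NN-enabled index via the OpenSearch client; similar-vulnerability lookups then become a nearest-neighbor search over finding embeddings.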


Hitesh Subnani
Solutions Architect, Amazon Web Services

08/12/2025 13:45

Security Enablement Isn’t Just Shifting Left — It’s Teaching Right

“Shift left” is everywhere, but meaningful security enablement takes more than early tooling and policy changes. Developers need security education that actually connects: relevant, engaging, and behavior-shaping.

This talk explores the often-overlooked side of secure development: how we teach it. Drawing on coaching experience and engineering leadership, I’ll unpack why traditional security training often fails and how to design learning moments that developers actually retain and apply.

You’ll leave with a fresh lens on developer enablement, practical principles from adult learning, and strategies for building a security-first mindset through influence, not enforcement.


Boomie Odumade
Senior Engineering Leader | Secure Dev Culture Advocate | Platform Builder | Career Coach

08/12/2025 14:15

Cross-Verification Patterns: Securing LLM Outputs Against Hallucination Attacks

As organizations increasingly integrate Large Language Models (LLMs) into critical applications and data processing pipelines, the security implications of AI-generated hallucinations have become a significant concern. These fabricated outputs can introduce vulnerabilities, compromise data integrity, and create attack vectors that traditional security measures may not detect.

This presentation explores cross-verification methodologies as a defensive security pattern against LLM hallucinations. We demonstrate how adversarial prompting techniques can exploit hallucination vulnerabilities and present a multi-model verification framework designed to detect and mitigate these security risks.

Our research involved systematic testing using Anthropic Claude and Llama models across 1800-2000 data points, specifically examining scenarios where fabricated information could compromise system security. We tasked the primary model with generating structured summaries under strict constraints, then employed a secondary model to perform security-focused validation, identifying potential injection attempts, data corruption, and fabricated content that could bypass traditional input validation.

Results indicate that 1-3% of outputs contained hallucinated information despite rigorous prompt engineering—a rate that poses meaningful security risks in production environments. The study reveals that while cross-verification significantly reduces the attack surface, complete elimination of hallucination-based vulnerabilities remains challenging.

Key takeaways include implementation strategies for multi-layer LLM validation, detection patterns for common hallucination attack vectors, and architectural considerations for securing AI-integrated systems. Attendees will learn practical approaches to harden their applications against this emerging class of AI security threats while understanding the current limitations and ongoing challenges in this rapidly evolving landscape.
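
The multi-model verification framework described above can be sketched as a generic pattern: a primary model produces output and a secondary model votes on whether it is grounded. Both callables below are trivial stand-ins, not the session's actual Claude/Llama setup:

```python
from typing import Callable

def cross_verify(prompt: str,
                 primary: Callable[[str], str],
                 verifier: Callable[[str, str], bool]) -> tuple[str, bool]:
    """Run the primary model, then have a second model validate its output."""
    output = primary(prompt)
    return output, verifier(prompt, output)

# Stub "models" for illustration only
primary = lambda p: "Paris is the capital of France."
verifier = lambda p, out: "Paris" in out  # trivial groundedness check

text, ok = cross_verify("Capital of France?", primary, verifier)
print(ok)  # True
```

A production verifier would be a second LLM prompted to flag injection attempts and fabricated claims; outputs failing the check would be blocked or escalated rather than passed downstream.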


Anupam Chansarkar
Sr. Software Engineer, Amazon.com Services LLC

08/12/2025 14:45

I'm A Machine, And You Should Trust Me: The Future Of Non-Human Identity

Security boils down to trust. Trusting that the code will do what is expected and is free from vulnerabilities. Trusting that the entities interacting with our data and resources have the right to access those resources. Our current approach to both human and non-human access uses the same basic flawed pattern: long-lived credentials.

This approach to trusted access does not take into account who or what is requesting that resource. These secrets, which quite often leak, are an attacker's best friend and are how attackers think about getting into and moving throughout your system.

What if, instead of simply asking for a security key or credential to gain access, our applications, workloads, and resources asked, "Who are you, and how can you prove it?" Humans can move towards leveraging our unchanging characteristics, like biometrics. But what about machines, especially in a world where pods and workloads last only hours or days?


Dwayne McDaniel
Developer Advocate, GitGuardian

Day 2 - August 13

08/13/2025 11:00

Embedding AI, Without Inviting Risk: A DevSecOps Blueprint for Safer Applications

As AI features become embedded in modern apps, so do new attack surfaces. From unsecured model APIs and data exposure to vulnerabilities in CI/CD pipelines, developers face mounting risks when integrating AI. In this session, Nikhil Kassetty draws on his experience in fintech and cloud-native environments to deliver a practical DevSecOps blueprint for securing AI-powered applications. You'll learn how to harden AI integration points, enforce safe data practices, and align security with speed, all while maintaining the flexibility developers need. Whether you're deploying LLMs or embedding predictive logic, this session equips you to build smarter software without opening the door to smarter threats.


Nikhil Kassetty
Software Engineer - AI and Fintech Thought Leader

08/13/2025 11:30

ML-Driven Database Security: Adaptive Query Optimization Against Injection Attacks

Database management systems face critical security challenges when traditional query optimization methods expose vulnerabilities to SQL injection and timing attacks. Current statistics-based optimization techniques create predictable query patterns that attackers exploit, leading to data breaches in up to 35% of enterprise databases. Our research presents a machine learning-based inferential statistics framework that enhances both query performance and security by introducing adaptive, unpredictable optimization patterns.

By integrating Bayesian learning and reinforcement learning, our security-focused framework maintains optimal database performance while obscuring query execution patterns from potential attackers. The system improves cardinality estimation accuracy by 40-50% over traditional methods while introducing intelligent randomization that prevents timing-based attack vectors. Enterprise-scale testing demonstrates 85-95% reduction in statistics collection overhead and 25-30% query execution improvements, all while maintaining security hardening against common database attacks.

The framework's adaptive response mechanisms detect and counter suspicious query patterns within 500 milliseconds, providing real-time protection against injection attempts and unauthorized data access. In databases ranging from 1TB to 5TB, our solution achieves 30-40% operational cost reductions while strengthening security posture through dynamic histogram redistribution and intelligent batch processing that masks database structure from attackers.
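
Two of the ingredients mentioned above, resisting timing-based attack vectors and masking execution patterns, can be illustrated generically with a constant-time comparison and randomized jitter. This is a minimal sketch of the general idea, not the session's actual framework:

```python
import hmac
import random

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Compare secrets without leaking how many leading bytes match."""
    return hmac.compare_digest(a, b)

def jittered_delay(base_ms: float = 1.0, spread_ms: float = 2.0) -> float:
    """Randomized delay (in seconds) that masks a query's true timing."""
    return (base_ms + random.uniform(0, spread_ms)) / 1000.0

print(constant_time_equal(b"s3cret", b"s3cret"))  # True
d = jittered_delay()
print(0.001 <= d <= 0.003)  # True: delay stays within the configured bounds
```

The session's framework goes much further, using learned models to decide when and how to randomize, but the defensive principle is the same: deny attackers a stable timing signal.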

Key security innovations include ML-driven query pattern obfuscation, adaptive statistical model updates that prevent reconnaissance attacks, and reinforcement learning algorithms that continuously evolve defensive strategies. These advances enable organizations to achieve superior database performance while maintaining robust security against evolving threats. This approach represents a critical advancement in secure database query optimization, providing scalable, self-adaptive protection for enterprise systems handling sensitive data and high-volume transactions.


Manas Sharma
Senior Software Engineer, Google

08/13/2025 12:00

What to Tell Your Developers About NHI Security and Governance

Non-Human Identities (NHIs) outnumbered humans 45 to 1 in 2022. Given that their access abuse is one of the most easily exploited attack paths, we really need to get a handle on NHI security right now. But how do we start? What do we even tell developers? We can't tell them to just stop building applications, and secrets security alone has not addressed all the concerns NHI security requires. Once again, OWASP is here to shed some light on the situation, right as this issue becomes a major, mainstream concern. In January 2025, they released the Top 10 Non-Human Identity Risks, which highlights exactly how NHIs keep getting exploited and gives us a guide to raising awareness and to prioritizing and remediating the situation inside our organizations.

But they are not the only ones who have released a guide or even a top 10 list. This talk will walk through the commonalities of all the published wisdom around NHI security, and we will end with a discussion of governance as the path forward, one that will need to go through IAM and, eventually, the whole organization.


Dwayne McDaniel
Developer Advocate, GitGuardian

08/13/2025 12:30

BREAK

08/13/2025 12:45

Securing the Future of AI Agents: Navigating the Risks of MCP and LLM Integration

As large language models (LLMs) gain the ability to act, browse, automate, and interact with real-world systems via the Model Context Protocol (MCP), they also expose new and unpredictable attack surfaces.

In this talk, I’ll introduce MCP — the protocol powering tool-using agents — and walk through the emerging security threats it brings, including tool injection, prompt exploits, session hijacking, and remote code execution. We’ll explore practical, field-tested defenses and governance strategies that can help teams build AI-enabled systems that are powerful and safe.
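
One defense in the family described above, guarding against tool injection, is to validate an agent's tool calls against an explicit allowlist before execution. A minimal sketch; the tool names are hypothetical:

```python
# Only tools the operator has explicitly approved may be executed.
ALLOWED_TOOLS = {"search_docs", "read_file"}

def validate_tool_call(call: dict) -> bool:
    """Reject any tool call that is not explicitly allowlisted."""
    return call.get("tool") in ALLOWED_TOOLS

print(validate_tool_call({"tool": "search_docs", "args": {"q": "auth"}}))      # True
print(validate_tool_call({"tool": "exec_shell", "args": {"cmd": "rm -rf /"}}))  # False
```

Allowlisting is only one layer; real deployments pair it with argument validation, sandboxing, and human approval for sensitive actions, as the talk's governance strategies cover.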

Whether you're a developer integrating LLMs or a manager responsible for shipping secure AI products, this session will equip you with the mental models, examples, and frameworks to secure your agentic architectures.


David Burns
Head of Developer Advocacy and Open Source, BrowserStack

08/13/2025 13:15

Scaling Trust and Control Across the Modern Delivery Pipeline

Join us for a deep dive into how we are redefining software delivery by embedding security, compliance, and governance as first-class citizens in our DevOps pipelines. This session will uncover how we’ve moved beyond traditional scanning to establish a dynamic, policy-driven enforcement model that integrates seamlessly into developer workflows.


Manideep Kantamneni
Distinguished Engineer, Capital One

08/13/2025 13:45

AI-Powered Web Security: Protecting Applications from Emerging Threats

Traditional web security measures are no longer sufficient against today's cyber threats. AI-powered web security offers a proactive approach to safeguarding applications. By leveraging machine learning, AI detects patterns and anomalies for real-time threat detection and prevention. This technology mitigates attacks like phishing, malware, ransomware, and DDoS. AI analyzes data from multiple sources to identify suspicious activities that might be missed by humans, improving incident response and security. Join us to explore how AI is transforming web security and learn to implement these solutions to protect your applications.


Vaishnavi Gudur
Senior Software Engineer, Microsoft


We’re looking for passionate speakers to present at CodeSecCon, the premier conference that brings together developers and cybersecurity professionals to revolutionize how applications are built, secured, and maintained.

Share your expertise, groundbreaking ideas, real-world experiences, and practical insights on topics ranging from secure coding and DevSecOps to threat modeling, cloud-native security, and beyond. Join a global audience eager to learn, collaborate, and push the boundaries of application security innovation.
Submit your session proposal here.


Event Details
  • Start Date
    August 12, 2025 11:00 am

  • End Date
    August 13, 2025 4:00 pm