Skip to content

Building Bridges, Breaking Barriers: Six Takeaways from the 15th Annual Executive Dinner at RSAC 2025

Preisha Agarwal, Conor Higgins

May 8, 2025

  • Blog Post
Industry Legends 15th Annual Executive Dinner at RSA

In the age of AI deepfakes, how do we determine what’s real and what’s fake? How can we trust what we see and hear when AI-generated content looks and sounds identical to the real thing?

As AI and GenAI technologies reshape entire businesses and industries, where are the major opportunities and risks? How do companies instill trust in their customers and partners when integrating and innovating with AI?

What role do public and private organizations play in this new era? How do CISOs, AI executives, and business leaders adapt to responsibly implement AI in their companies?

The 15th Annual Executive Dinner at RSAC 2025 set the stage for discussions, connections, and insights around these and many other critical questions. Co-hosted by Forgepoint Capital, PwC, and Google Cloud, this year’s event brought together over 400 industry leaders- including cybersecurity executives, startup founders, Fortune 500 CISOs and CIOs, government officials, VCs, and technology partners- from 21 countries and over 200 organizations around the world.

The night began with a round of community introductions and opening remarks from Sean Joyce, PwC Global Cybersecurity & Privacy Leader of US Cyber and Risk & Regulatory Leader. The evening progressed with a compelling keynote address by Bryan A. Vorndran, Assistant Director of FBI Cyber, followed by a captivating presentation from Dr. Hany Farid, Professor at UC Berkeley and Co-Founder and Chief Science Officer of GetReal Security. The dinner concluded with an engaging panel discussion featuring Noopur Davis, Comcast EVP and Global Chief Information Security Officer and Comcast Cable Chief Product Privacy Officer along with Dr. Prem Natarajan, Capital One EVP, Chief Scientist, and Head of Data and AI, moderated by Phil Venables, Google Strategic Security Advisor and former Google Cloud CISO. We were also honored when Chris Krebs, Former Director of CISA stopped by to make an appearance, prompting a standing ovation. What a moment.

Six key themes arose during the evening’s conversations:

  1. AI innovation accelerates and brings implementation challenges

AI innovation is progressing at an unprecedented rate of change. New, exciting AI technologies constantly emerge. AI growth isn’t doubling every 12-18 months- it may do so in just 12-18 days.

The relentless pace of change presents challenges for any company incorporating AI, regardless of AI experience and proficiency. Consider the context in which AI has developed: years of innovation created a machine learning (ML) revolution which many companies embraced by building sophisticated practices to incorporate numerous task-specific predictive ML models. Today, GenAI and LLMs open up new opportunities across entire business functions to enable faster and more efficient operations. Even companies with well-established ML practices face a learning curve as they incorporate new AI technologies in novel ways and with business functions haven’t previously leveraged AI.

  1. Deepfakes and malicious AI-generated content challenge the foundations of trust and human communication

Criminals are weaponizing free and readily available AI tech against individuals, companies, and societies. Case in point: AI deepfakes. A person’s likeness can now be accurately captured and recreated based on a single image and their voice can be recreated with just 15 seconds of audio. Recent high-profile deepfake incidents include the CFO deepfake video call scam which cost a British multinational engineering firm $25 million and the deepfake audio of London Mayor Sadiq Khan which inflamed social tensions. These threats cloud and undermine our perception of legitimate digital communication. We can no longer immediately trust what we see or hear online.

This new paradigm requires a shift in approach. When our human systems- eyes and ears- fail to detect deepfakes, we must look beyond the outputs and go under the hood of the content we encounter. For instance, even if it looks or sounds identical, AI-generated content leaves behind traces that sophisticated security technologies can detect. New security technologies in this area can help verify or disprove content.

This is also an issue that requires human collaboration. In an era of AI-generated content, unpredictability, and siloed beliefs, AI can become conduit for in-group thinking and confirmation bias, and we may be tempted to seek out and believe AI-generated content that we agree with. We must navigate this challenge through partnerships within our communities and between public and private organizations.

 

  1. Balancing AI innovation with risk, governance, and security frameworks

AI continues to dominate conversations among business leaders but with less fear, more pragmatism, and a resounding call for strong AI security, governance, and risk management. How do we responsibly secure AI and manage risk while we adopt new technologies?

As employees, customers, and businesses increasingly embrace AI, responsible AI use and innovation are now business imperatives. The new era of AI governance prioritizes clear accountability, usage policies, and monitoring capabilities.

For example, Large Language Models (LLMs) are permeating company workflows and creating expanded attack surfaces. Attackers seek and exploit model vulnerabilities with methods like prompt injections which traditional risk management and security frameworks fail to account for. Subsequently, AI risk is being integrated into governance frameworks to meet the evolving LLM threat landscape. Security teams are utilizing prompt engineering, sandboxing, LLM-specific cyber threat models, and output guardrails to build safer and smarter AI technologies. In addition, organizations increasingly deploy internal LLMs with fine-tuned parameters and closed feedback loops to reduce unpredictability.

  1. AI vendors must prioritize model transparency, explainability, and trust

Trust has become a major deciding factor for buyers evaluating vendors selling AI and AI-enhanced products. AI model transparency, training oversight, and output explainability are now essential to address concerns about hidden model biases, hallucinations, model drift, unvetted training datasets, and undocumented model behaviors.

Compliance-driven industries in particular are demanding more rigorous AI validation procedures during vendor evaluation processes, including red-teaming, scenario-based stress tests, and third-party audits. Transparency is now a security imperative.

  1. The CISO continues to evolve

In the age of AI, CISO responsibilities are expanding beyond security posture, detection, and incident response to encompass AI oversight, privacy engineering, and even aspects of legal strategy. CISOs must be technically fluent and adaptable to upskill and lead employees across security, compliance, AI, and data teams. Internal AI fluency programs and academic partnerships can help modern CISOs bridge knowledge gaps around AI architectures, governance models, and attack surface considerations.

However, some things remain unchanged; namely, the need to translate cybersecurity into business value. Security must be framed in terms of business risk, operational continuity, and brand trust to secure funding and executive buy-in. CISOs should align security conversations with the language of business, such as reporting on “risk-to-revenue” metrics instead of alert volume. This approach helps shift the perception of cybersecurity as a cost center to a strategic business enabler.

  1. Evolving AI regulations

The AI regulatory landscape is poised to shift, particularly in high-risk industries like healthcare, finance, and critical infrastructure. Many organizations are preparing for major AI audits and mandatory AI disclosures. Existing cybersecurity frameworks like those from NIST and ISO are expected to inspire new waves of regulations with AI-specific controls around explainability, data usage, and accountability.

In an era where AI will touch every business function, enterprises must future-proof their architectures and compliance strategies. For instance, some security leaders are considering how to extend zero trust beyond the network to the AI model level. LLM workflows are also being re-architected to minimize data exposure, restrict permissions, and validate outputs. Collaborative endeavors like the Coalition for Secure AI (COSAI) present an opportunity for collective action through the development of common security standards and best practices. Regardless of the specific strategies they employ, organizations will find success by remaining ahead of the compliance curve- not reacting to it.

 

Conclusion

As we balance the benefits and risks of AI innovation and embrace new AI governance standards, trust is at a premium. We must cultivate trust through our individual organizations and technological innovations along with transparent, collaborative, and impactful partnerships. Thank you to the leaders across industry, government, and venture capital who gathered with us to share, learn, and forge a path forward together. We hope to see you again next year.

With Appreciation

Special thanks to our honored speakers, co-hosts, and guests for their insights, expertise, and support. Here’s to your continued leadership and all you do to advance innovation and prosperity.

  • Bryan A. Vorndran, Assistant Director, FBI Cyber
  • Dr. Hany Farid, Professor, University of California, Berkely; Co-Founder & Chief Science Officer, GetReal Security
  • Noopur Davis, EVP, Global Chief Information Security Officer, Comcast & Chief Product Privacy Officer, Comcast Cable
  • Dr. Prem Natarajan, EVP, Chief Scientist, and Head of Data and AI, Capital One
  • Phil Venables, Strategic Security Advisor, Google; Former Chief Information Security Officer, Google Cloud

As Phil Venables moves on to his next chapter, we took a moment to recognize his leadership, expertise, and many contributions to our industry and community. In lieu of another trophy for his mantle, we invited dinner attendees to support his preferred nonprofit organization, The National Cryptologic Foundation, a 501(c)(3) organization with the mission to advance the United States’ interest in cyber and cryptology through leadership, education, and partnerships, and through the National Cryptologic Museum. The NCF is also on the platform Benevity, where employees from participating companies can volunteer and give through corporate social impact programs.

Many thanks to NCF President George Barnes for making time to join us for the surprise reveal and thank you to Phil.