
Embracing New Possibilities: Ten Takeaways from the 14th Annual Executive Dinner at RSAC 2024

Tanya Loh

May 16, 2024

Group photo from 14th Annual Executive Dinner at RSAC 2024

As new forms of AI become more sophisticated and integrated into the ways we work and live, we now confront complex and even existential questions. What can AI do for industry and society? Is there an ideal balance of innovation, safety, and growth? How can we responsibly adopt this disruptive and transformative technology? 

These questions sparked conversations at the 14th annual Executive Dinner at RSAC on May 8 in San Francisco. Co-hosted by Forgepoint, PwC, and Google Cloud, this year's dinner drew 400 industry leaders and executives, including enterprise CISOs and CIOs, leading startup founders, top VCs, and technology partners, to share ideas, forge and renew connections, and literally break bread together while celebrating innovation and community. It was an especially global affair, with public and private sector representation from over 15 countries, demonstrating that cybersecurity is truly a "team sport".

As the evening unfolded over cocktails and dinner, we welcomed an inspiring fireside chat between Deputy Attorney General Lisa Monaco of the U.S. Department of Justice and Sean Joyce, PwC Global Cybersecurity and Privacy Leader & US Cyber, Risk, and Regulatory Leader, followed by a fantastic panel discussion with Unilever Global CISO Kirsten Davies, Albertsons Companies SVP and CISO Aaron Hughes, CSA AI Safety Initiative Chair Caleb Sima, and Crosspoint Capital Partners Managing Partner Hugh Thompson. Ten takeaways emerged:

The State of the (AI) Union is Emergent  

  1. AI is in the early stages of a digital transformation lifecycle.

    All disruptive technologies move through a lifecycle of resistance, acceptance, adoption, and transformation. When a new technology emerges, most people fear it and try to prevent it from evolving and spreading. Companies then begin lifting and shifting old systems onto the new technology, transforming what already exists. Finally, innovators create native systems designed around the technology itself. The evolution of cloud computing shows these phases clearly: today's cloud-native applications built on microservices were preceded by monolithic applications moving to cloud virtual machines, which were preceded by resistance to the cloud itself.

    You don't have to obsess over Gartner to know that many types of AI are still in the early stages of this lifecycle, with a mix of resistance, early adoption, and integration with old systems defining our current moment. Some forms, like generative AI for cybersecurity, are earlier in the cycle, while more established forms, like machine learning for threat detection, are further along and more accepted.
  2. There are few AI experts but many enthusiasts.

    While there is plenty of enthusiasm for AI use cases within companies, actual expertise in both AI technology and its integration is lacking. For example, the Cloud Security Alliance's recent State of AI and Security Survey Report (commissioned by Google Cloud) found that 55% of executives report a high understanding of potential generative AI uses for cybersecurity, while just 11% of IT and security practitioners report the same. This may indicate a Dunning-Kruger-like effect, in which executives overestimate their understanding of AI use cases, or a significant gap in AI understanding and difficulty in implementation among practitioners. Whatever its source, this chasm between enthusiasm and expertise currently limits the possibilities of AI.
  3. We must balance flexibility and risk management as we explore, test, and apply AI use cases.

    As company leaders, practitioners, and innovators work to understand and unlock the potential of AI and AI-assisted technologies, they face pressure to implement guardrails and define specific use cases, and for good reason: ungoverned, "reckless" adoption heightens risks that may not be initially evident.

    However, creativity and innovation require adaptability. It is critical that we find a balance between risk management and flexibility to avoid stifling the benefits of AI. Doing so requires staying informed on the latest governance and compliance recommendations across industries and geographies, which can seem like a monumental task. Thankfully, resources like the Cloud Security Alliance's AI Governance & Compliance Resource Links Hub surface and catalog the most relevant news, regulations, policies, and papers, and communities of peers help reinforce awareness.

AI Use Cases: The Good, the Questionable, and the Harmful 

  4. The Good: AI can augment human capabilities and transform business processes.

    There are already proven AI use cases that transform business processes and reduce costs. Generative AI can already handle lower-value, labor-intensive, and repetitive work across business areas: think AI assistants for personal productivity, task automation for security operations centers, AI chatbots, and AI-enabled customer personalization in marketing. In the public sector, AI is transforming opioid and drug sourcing and classification, as well as public tip triage at scale for the FBI.

    The future for AI is bright, with the potential to transform threat detection and prevention in cybersecurity, among many other applications. The most promising use cases share a common theme: AI augmenting human ingenuity and capabilities, not replacing them.
  5. The Questionable: Some AI use cases present ethical and moral dilemmas.

    AI deepfakes, one of the most controversial AI use cases today, exemplify how AI technology can produce a wide range of positive, negative, and questionable outcomes. Deepfakes can be used to mislead or misinform people; for example, bad actors can use deepfake robocalls during elections to spread disinformation and sway voters. On the other hand, they can enhance educational experiences (think interactive museum exhibits) or reduce VFX costs in filmmaking when appropriately licensed and represented.

    Ultimately, AI is a tool that cannot replace or remove the obligation of human integrity, judgment, or evidence-based decision making. AI outputs must be informed by and interpreted through human perspectives.
  6. The Harmful: AI lowers the bar for threat actors and enhances threats across all levels of sophistication.

    In cybersecurity, AI has changed the threat landscape by making existing attacks easier to perpetrate and by introducing new risks. The barrier to entry for unsophisticated threat actors is lower, thanks to AI-enabled tools that automate aspects of attacks such as phishing emails. Previously mediocre attackers are now equipped with enhanced capabilities, and nation-state threat actors benefit as well, seeing the opportunity to leverage AI for disinformation.

    At the same time, as companies integrate LLM applications and models into their business processes, products, and services, they must consider threats to AI models like prompt injection and data poisoning. Security teams are defending companies against new types of attacks on AI while also facing an accelerated pace of familiar attacks enabled by AI.  

Defining Individual and Shared Responsibilities  

  7. Companies must approach AI integration, regulation, and compliance from a risk management perspective.

    As enterprise companies continue to integrate AI, the CISO role is expanding to encompass more risk management. This shift is a natural development, as risk management must be part of corporate AI conversations and decisions. Security is essential as well; AI must, after all, be deployed safely. This merging of security, risk management, and AI integration is a welcome change, though we empathize with CISOs who now face even broader responsibilities and demands.

    As AI regulations develop, companies must also establish clear governance and compliance. Regulations may focus on national, company, consumer, or other kinds of risk, and companies may be assessed by auditors who don't deeply understand AI. GRC programs should address relevant risks in a clear, straightforward manner that is consumable and actionable, while enabling companies to demonstrate compliance.

  8. Vendors selling products or services that leverage AI must be transparent about its role to avoid overhyping it.

    As more companies embed AI in their products and services, AI is becoming "table stakes", so vendors must better explain how AI contributes to what their product solves. AI cannot just be marketing and sales hype to get the attention of buyers, and software cannot merely claim to use AI. Products and solutions "powered by AI and GenAI" must be contextualized properly and proven in use to avoid misleading target customers; missteps and overstatements risk dismissal from consideration. Where AI is integrated into a product or service to enhance efficiency, vendors should represent it as such. Buyers make decisions to acquire functionality, not the methods behind it. A good rule of thumb: if you remove the AI language from your pitch deck, your product or service should still be compelling.
  9. Regulators must be careful to protect the public while also supporting innovation.

    Regulating an emerging technology is challenging and brings risks of its own. Adversaries don't conform to regulation, and overly restrictive rules can hamper defenders from making mission-critical interventions.

    While regulation can help clarify acceptable and unacceptable use cases, it can also disrupt the innovation lifecycle and slow progress. There is no single answer to this problem, but creativity (testing out ideas and use cases to see what works) is critical for innovation. AI regulation must protect all stakeholders while supporting, not stifling, that creativity.

  10. Collaboration between governments, industries, academic institutions, law enforcement, and the public is essential.

    The future of AI depends on a broad, collaborative effort to mitigate risks, enable opportunities, and create positive impact. Insular groups that do not reach across divides will be limited by their unchallenged biases, perspectives, and expertise. Leaders from industry, government agencies, academia, law enforcement, and civil society need to come together to share ideas, expertise, and resources. Convenings and working groups like the U.S. Department of Justice's Justice AI Initiative and NIST's AI Safety Institute (AISI) are essential.

Conclusion

As we traverse the AI lifecycle in the coming months and years, we will explore and test use cases, attempting to balance innovation, safety, and security. Ultimately, securing AI and unlocking its potential depends upon strong public-private collaborations, conversations, and gatherings, much like this event and others. Thank you to everyone who joined us to embrace the possibilities of AI in its new forms. We hope to see you again next year.

With Appreciation

Special thanks to our honored speakers, co-hosts, and guests for their insights, expertise, and support. Here's to your continued leadership and all you do to advance innovation and prosperity.

  • Lisa Monaco, Deputy Attorney General, United States Department of Justice  
  • Aaron Hughes, SVP & Chief Information Security Officer, Albertsons Companies  
  • Kirsten Davies, Global Chief Information Security Officer, Unilever  
  • Caleb Sima, Chair of the AI Safety Initiative, Cloud Security Alliance   
  • Dr. Hugh Thompson, Managing Partner, Crosspoint Capital Partners; Executive Chairman and Program Committee Chair, RSA Conference 
  • Sean Joyce, Global Cybersecurity and Privacy Leader & US Cyber, Risk and Regulatory Leader, PwC  
  • James Shira, Global and US Chief Information & Technology Officer, PwC 
  • Kevin Mandia, Chief Executive Officer, Mandiant, Google Cloud  
  • Phil Venables, Chief Information Security Officer, Google Cloud   
14th Annual Executive Dinner at RSAC 2024 co-hosts with Lisa Monaco