Q&A with Entrepreneur in Residence Kathryn Shih: Using AI to Accelerate Innovation in Cybersecurity
Kathryn Shih
January 30, 2024
Kathryn Shih is an Entrepreneur in Residence at Forgepoint. Learn more about her background here.
Kathryn, your background is unique. You started your career as a software engineer, pivoted to product management, and are now an expert in generative AI and cybersecurity. Tell us about your journey.
There’s a short answer for why I got into technology and computers in the first place: I was good at math and really liked video games.
The longer answer is that there’s a common thread in everything I have done. I’ve always been interested in what makes things work and what makes things useful. I started out as a software engineer at Akamai Technologies and really liked that experience. We didn’t really have product managers; products were handled between business folks and engineering. That said, sometimes there would be a significant disconnect. As an engineer, I often wondered why I was told to build something that didn’t seem like it properly solved the problem at hand. I lost a lot of those fights as a junior engineer.
I decided to learn how to think about this and argue about it in a compelling way, which brought me to business school. There, I learned that product manager was a real role (surprise, surprise) and that it focused on the stuff I had strong opinions about.
I always found B2B interesting as well. I like making useful things in hard spaces and B2B has lots of complexity: the sales cycle can be a pain but there is also a willingness to pay that you often don’t find in the consumer space. That willingness is why B2B is where new technologies are first commercialized, versus the consumer side where it’s more about nicely wrapping up existing technologies. It’s really cool to work with new technologies to solve genuinely hard problems. That interest eventually brought me to Amazon Web Services, where I was a product manager on EC2 back when the PM team was pretty small.
I bounced around Seattle for a while and ended up going to Google Ads. Ads is also a really interesting space that solves hard problems on a massive scale. I’ve told some of the Forgepoint folks this: I think Google Ads is probably one of the world’s largest real-time AI use cases. If you think about the amount of AI that powers Ad systems, it is just tremendous. With the amount of revenue involved, these are very advanced systems that get tons of research and do cutting-edge work. I ended up learning a lot there.
I also worked on Android for Google. I got recruited to work on some of the hardest problems around Android phones, working on both device performance and device hardening.
In 2019, I burned out on travel. Google Ads and Android had me traveling all the time. I decided to switch to a local team focused on cloud security. The grand irony is that I moved teams in December 2019, not realizing that the entire world was about to quit traveling with the advent of COVID.
I thought security was a unique space. Ads is technically meaty and interesting, but it wasn’t always clear how it benefited the broader world. In cybersecurity, outcomes are unequivocally correlated with a better world. There are not many industries where you can say that. The mission is unassailably good. That’s one of the reasons I have found it engaging and want to continue contributing.
You were most recently the lead product manager for Google’s Security AI Workbench, which leverages Sec-PaLM, an LLM fine-tuned for security use cases. How do you expect cybersecurity-specific LLMs to evolve over time?
It’s an interesting question. I think they are going to evolve rapidly because the need is so great. Even before people were talking about generative AI, one of the biggest problems for cybersecurity was that we couldn’t find enough people. Think about the real power generative AI has, for instance with AI assistants like Copilot that are accelerating and empowering human operators. Maybe you can’t hire five times more people, but you can help your existing folks become five times more efficient with generative AI. You can bring the necessary resources to bear against big problems not because you found an untapped pool of cybersecurity experts, but because you can create so much more productivity. Your team can do hard, abstract thinking and have a co-pilot do the low-value, labor-intensive heavy lifting that used to take up their valuable time.
One broad trend with generative AI that I expect to apply in security is that you start with relatively small use cases and improve them over time. You start with a known. Over time, the perimeter of the known continually expands.
Companies often struggle to weigh the balance of risks and opportunities when integrating generative AI. How can they figure out which use cases make sense? What are some red flags to look out for?
The first one is more of a green flag. One way to think about generative AI is that it shifts variable costs to be fixed costs. You make an upfront investment in an AI system to reduce variable costs. Maybe you couldn’t do X or Y because the variable costs of the human workflow would have been overwhelming; generative AI has a chance to unlock those use cases by changing the cost model.
One example in security is the Security Operations Center (SOC). SOC analysts face a needle-in-a-haystack problem: there’s lots of data coming in but only a tiny amount of it might be malicious. They spend lots of time digging out data points and often can’t focus on high-level thinking. It’s a variable cost that causes the organization pain. AI can help you move that to a fixed cost.
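To make that concrete, here is a minimal sketch of what that shift can look like in a SOC pipeline. The `llm_complete` helper is a hypothetical placeholder (stubbed with a canned response so the sketch runs), not any specific vendor’s API; the point is the shape of the workflow.

```python
# Minimal sketch: an LLM absorbs the per-alert triage labor so analysts
# only see what it rates high severity. `llm_complete` is hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    raw_event: str

def llm_complete(prompt: str) -> str:
    # Swap in your actual model call here (hosted chat endpoint, local
    # model, etc.). Canned response so the sketch runs end to end.
    return "SEVERITY: high - bulk outbound transfer to an unrecognized host"

def triage(alert: Alert) -> dict:
    prompt = (
        "You are a SOC triage assistant. For the alert below, reply on one "
        "line with SEVERITY (low/medium/high) and a one-sentence rationale.\n"
        f"Source: {alert.source}\nEvent: {alert.raw_event}"
    )
    return {"alert": alert, "verdict": llm_complete(prompt)}

def needs_human(alerts: list[Alert]) -> list[dict]:
    # The model does the variable, per-alert first pass; humans get the
    # short list plus (in practice) a sampled slice for quality review.
    return [t for t in (triage(a) for a in alerts)
            if "high" in t["verdict"].lower()]

print(needs_human([Alert("edr", "4GB upload to unknown IP at 03:00")]))
```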
There are red flags too: AI features that help with an uncommon workflow or something where the variable cost is low. Those use cases may be cool but you have to be careful with the economics. If you lock in a high fixed cost to help with a task the CTO does for two hours every six months, the payback period is going to be brutal. Think about the economics and whether you can cross that bridge.
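Here is that economics argument as back-of-the-envelope arithmetic; every number below is invented purely for illustration.

```python
# Illustrative payback math: one fixed upfront AI investment against the
# variable labor it replaces. All figures are made up.
fixed_cost = 250_000            # build, fine-tune, and integrate the feature
hourly_rate = 120               # loaded cost of one analyst hour

# A daily, high-volume workflow: 400 analyst hours saved per month.
monthly_savings = 400 * hourly_rate                   # $48,000/month
print(f"{fixed_cost / monthly_savings:.1f} months")   # ~5.2 months to pay back

# The CTO's two-hour task every six months: 1/3 hour per month saved.
rare_savings = 2 * hourly_rate / 6                    # $40/month
print(f"{fixed_cost / rare_savings / 12:.0f} years")  # ~521 years to pay back
```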
Another thing to consider is that it can be pretty easy to get an AI system that’s 80% to 90% good. It takes a bunch of elbow grease and fine-tuning to get to 100%. There are plenty of recent examples of this, like when LLM chatbots started coming out a year ago. When they first launched, some of them said embarrassing things. That’s not because they didn’t have hundreds or thousands of smart people working on them. It’s because they were 80% to 90% good. Getting that final 10% often takes real-world usage, feedback, and tuning. Really good use cases, particularly for generative AI, are ones where you can get to 90% and people benefit from it while giving you feedback to get to 100%.
The flip side is a use case like air traffic control, where you want to be 100% good right from the get-go. You’ve got to think really hard about how to roll that out. You need a non-standard method to close the final gap.
As an experienced product manager, you know how to create a product strategy and work with cross-functional teams. What insights would you share with a startup trying to scale a product globally?
Think about the problem you’re solving and the customers you’re solving it for. If you’re solving a global problem and you do a great job, you’ll find a global market and build a healthy business.
Pick a problem so painful that someone will pay you to solve it. Do a great job, take their money, keep solving more of the problem, keep taking more money, and you get a happy feedback loop. For that to work, though, the problem has to be a general one and you need to be careful with founder effects from your initial customers.
For example, there are esoteric problems that are common among American Fortune 100 companies. You don’t see these problems in other nations or in smaller companies due to differing laws and company sizes. If you solve a problem that’s unique to the Fortune 100 then you have a happy customer base of just 100 users.
I’ve also seen teams who, one way or another, got an esoteric early customer like a Fortune 50 user with specific needs. That user was paying well and was vocal, so the teams chased their needs. They didn’t think about the combined customer feedback and which users were most representative of the market they were after. All early feedback matters and you should make your customers happy, but keep an eye on which feedback is useful for your broader user base and market.
Why Forgepoint? What is your role working with the team and our companies?
I think cybersecurity is interesting and good for the world and I appreciate that Forgepoint is mission-focused. Having spoken with members of the team, it’s clear they are trying to build an enduring firm with healthy businesses and are thinking about how they impact the country, the world, and the broader ecosystem. That’s a great perspective to have. It doesn’t hurt that everyone’s nice, too.
I want to help the Forgepoint team think about how AI is going to change the security landscape, both in good and bad ways, and make sure investments are aligned with those changes. There’s also a broad portfolio of companies doing interesting, valuable work in security. One of my big goals is to help these companies use AI to accelerate their mission, better serve their customers, and be even more innovative.
What areas of AI innovation (generative or otherwise) are you most excited about right now?
Generative AI is really cool. I also think that in the last 5 years, non-generative AI classification has gotten extremely powerful. That has huge implications for security.
I have talked to a lot of folks who say, “We’ll just use generative AI to make a best-in-class classifier,” but there are so many ways to attack the problem of classifying what’s interesting, what’s good, or what’s bad. Generative AI is one tool in a broader AI tool suite. AI-based classification has the potential for tremendous impact in cybersecurity.
There are tons of classification problems in security: anytime you’re trying to decide if something is vulnerable or malicious or otherwise bad, that’s a classification problem. This includes the issues we’re all familiar with around operations and also questions like whether an engineering design is secure before it gets rolled out. I’ve talked about this already, but AI can be a force multiplier that enables security specialists to focus on really interesting work. People can home in on the hard, creative, and abstract parts of their job when they offload grunt work onto the machine.
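As a deliberately toy illustration of what non-generative classification looks like here, this sketch labels events as malicious or benign from a few invented numeric features; the data, features, and labels are all synthetic.

```python
# Toy sketch of a non-generative security classifier. Features, data, and
# labels are synthetic; only the shape of the problem is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.lognormal(8, 2, n),     # bytes_out: volume of outbound data
    rng.integers(1, 50, n),     # distinct_ports touched in the window
    rng.integers(0, 2, n),      # off_hours: activity outside business hours
])
# Synthetic ground truth: bulk off-hours transfers across many ports.
y = ((X[:, 0] > np.exp(9)) & (X[:, 1] > 20) & (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Analysts review only the events scored as likely malicious.
suspect = X_test[model.predict_proba(X_test)[:, 1] > 0.5]
```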
What’s the most rewarding aspect of your work?
I enjoy bringing teams together to solve problems and improve the world. I’m a big believer in trying to leave a positive mark.
If you weren’t an Entrepreneur in Residence, what would you be doing?
Other than rock climbing and mountain biking, I would still be looking for interesting technology problems to solve. I like the challenge. I would get bored otherwise.