What Is Product Security?

About the episode

Our trust in the internet is the lowest it’s ever been. In spite of our vigilance, we face more threats than ever before. Product security is a vital element in the defense against malicious incursions. This season of Compiler covers the particulars of product security.

With some help from Emily Fox, Portfolio Security Architect at Red Hat, our hosts kick off the season with a simple question: What is product security?

Compiler team | Red Hat original show

Subscribe here:

Listen on Apple Podcasts | Listen on Spotify | Subscribe via RSS Feed

Transcript

Let's say you're making cereal. There is a certain ratio of insect parts that is technically allowed in the cereal that you're making. I think of that kind of as like the risk we're talking about when it comes to product security and how much you're willing to bear. Does that make sense? It does. You kind of lost me at the whole insect part piece, though. This is Compiler, an original podcast from Red Hat. I'm your host, Emily Bock, a senior product manager at Red Hat. And I'm Vincent Danen, Red Hat's vice president for product security. On this show, we go beyond the buzzwords and jargon and simplify tech topics. In this episode, we level set on the definition of "product security". What is product security? You're all going to find out what I do every day. Phishing. DDoS attacks. Social engineering. These are not new terms if you know anything about cybersecurity. But as AI, edge computing and other emerging technologies are adopted more widely, malicious actors are changing their methods of attack. It isn't just cybersecurity professionals who have to be aware and responsive. Developers and engineers working in product security and data security are part of the effort, too. So let's start there. What is product security? We spoke with Emily Fox, portfolio security architect at Red Hat, about her work. Product security is the processes, methods, policies, tools, standards, pretty much anything that you can think of an organization would want to apply to their software that's produced so that they can identify, address, mitigate and resolve security risks that do occur or could potentially occur throughout the product's lifecycle and do so while addressing their customer security needs. So there's a lot to unpack there. Vincent, you're our vice president of product security. So this is your whole domain. Can you tell us a little bit about how you define product security? You're right. There is a lot to unpack there. It's a tough space, right? You're looking at all of those different processes, policies, people, technology, tools, systems. There are so many interwoven pieces when it comes to just product security as a whole. And how you address those security needs, not just for the... not just for the product itself, but especially with that lens towards the customer, like the customer is the most important part, because they're the ones who are actually using the software that you're producing. I really like what you said there is that, you know, it's all about the customer and I... so I usually think of product security in terms of an analogy, just because that's how my brain tends to work. So let me throw it at you and see what you think. Sure. So you know how in food production there's a regulatory agency, totally relevant to what we're talking about here, but like, let's say you're making cereal. There is a certain ratio of insect parts that is technically allowed in the cereal that you're making. I think of that kind of as like the risk we're talking about when it comes to product security and how much you're willing to bear. Does that make sense? Yeah. It does. You kind of lost me at the whole insect part piece, though. So that's not the analogy that I typically use. Is it okay if I give you the analogy that I would typically use? Yeah, absolutely. All right. Maybe it's just as gross, maybe it's less gross, but it's basically a water treatment facility, right? There is water upstream in the mountains. It's very pertinent to me, I live not close, but close enough to the mountains.
You can go up there and you can drink from the streams, the lakes, the rivers, if you want to. I definitely recommend that you boil the water first, but you could do it, right? But downstream from that is a water treatment facility. The water treatment facility adds purification. Distillation. They bottle it. There's quality control. There's packaging that goes into there. There's tamper-proof seals that they put on those bottles. And then those bottles get loaded onto trucks that get distributed, securely, to their end locations so that I can sit there and drink it and not worry about getting sick from drinking the, you know, the lake water and getting it from, from a bottle of water. So maybe that's a little less gross than your insects. But yes, I mean, fundamentally it's a process, right? You can... especially pertinent when you're talking about open source. You can go and grab that upstream lake water all you want. There's risk to that. That de-risking process that happens through that water treatment facility. That's what a typical software vendor does as they're producing their software. Okay. That's a lot more elegant than the one I had. And I actually really like that a lot because when you're taking from upstream, you can also put your own safety mechanisms in place as well. But they might not be as tried and true as like a regulated and official water treatment plant. Oh, 100%. The only downside there is you don't get to make puns about bugs, but I'm going to adopt yours from now on, I think. Yeah, bugs. So moving on from there a little bit. We've talked a little bit about cybersecurity. We've talked about product security. Where do you see the line being between those two? Like, what's the difference? I mean, cybersecurity is... I would classify it as all things digital, right? Product security is one discipline within that. So you have information security, you have data security, you have product security. I mean data security, it's in the name. Very focused on the data. The security of the data. Product security is a little bit strange, right? A lot of people, let's say, intuitively know about it, but they didn't actually, at least not until recently, talk about it a lot. Right? That product security discipline is really about the way that you create and deliver software through a secure mechanism so that your end users, your consumers, are able to use it in a secure fashion. So traditionally you would look at things like the security response. Vulnerabilities pop up. What's the process for discovering those, fixing those, delivering those to customers? Right. That's kind of the thing that most people talk about; CVEs, vulnerabilities, things like that. If you look at the old shift left perspective, how do you get rid of those vulnerabilities before they pop up? That's when you're talking about things like a secure development lifecycle. So all of the proactive work that you're doing at the beginning to ensure that the software is, you know, you're following secure development coding standards. You're thinking about, you know, access to APIs, you're thinking about authentication and authorization for particular tools. You're thinking about things like the principle of least privilege; all of these sorts of things at the beginning, rather than at the end. So it's really a continuum when you're looking at the life cycle of a, of a piece of software.
What's its inception and then how do you support it, which is the key part when it comes to customers, how do you support it at the end? Gotcha. So like the way I'm seeing it then in our water analogy, cybersecurity would be all water safety. Product security is, you know, the safety of that water treatment plant and the bottles of water that it's producing. Yeah, I think that's a fair way of looking at it. Okay. So we talked a little bit about, you know, the differences between product security versus cybersecurity. But let's dig in a little bit on the product security side. Now, when we're talking about a product security policy, what does that look like in the real world? Yeah. A simple one is, it's a policy that we have, every piece of software that we sign out or we send out to customers is cryptographically signed. Right? We want to ensure that when a customer downloads a piece of software, they know that Red Hat intended to give it to them. Right. And so that's a really fundamental part of our policies. If you are going to ship a product live to a customer, that's... whether it's a beta, GA, whatever, it has to be signed. You absolutely must not send any product out there without that cryptographic signature. And we have a very robust way of managing the keys to that so that, you know, Bob, the rogue developer, can't just sign whatever they want with a key that implies Red Hat's okay with it, and it goes out there. So that's part of that policy. There's your access control, there's different keys. There's all of these sorts of things. And there's a policy for that. Another policy might be you have to do some sort of, you know, SCA or SAST scan, some kind of test for security. And it has to pass. Right. Another policy might be, if you're aware of, you know, vulnerabilities in the software, what's the criticality? And maybe there's a threshold where like, yeah, a low vulnerability doesn't matter. We can ship a product or an update with a low vulnerability in it. That doesn't really matter. But if it's a critical vulnerability, that's a... it stops. Like, the pipeline at the water treatment facility, all the bottles stop until that critical fly is resolved. Gotcha. That makes sense. And that's kind of the piece that I'm usually involved in as, like, a product manager is I see those vulnerabilities come in and I use that priority to go, okay, this one is going now, there's a deadline on that one. This one, you know, maybe can kind of wait. It's like, don't leave your water bottle in a hot car for months on end and then drink it in our water analogy versus, you know, this water will kill you. Yeah. Correct, correct. And you're right. Like, I forget that you're kind of on the receiving end of some of those policies. Like, those are things that me and my team are implementing. And you're sitting there going like, oh yeah, we actually have to do this. And there's a good reason for doing that. Very good reason. Yes. I think that all makes a lot of sense. We've done a lot of work around defining product security and security in general. So how do we put it into practice? The other Emily shared a few questions organizations should be asking themselves when writing security policies. First question is why does it matter and what is the outcome that we're trying to achieve? If you can't answer that question, you really shouldn't be writing a security policy in the first place.
Because when you answer that question about why does it matter and what is it we're trying to achieve, you can get more discreet information and more granular details that allow you to craft a better security policy. The next question that you should be asking is, is it still relevant? So once you have one, you should continuously evaluate them as industry changes, your risk tolerance might go up or down, your risk management practices may change. Ultimately, you should be reassessing on a regular basis whether or not it still applies and what exceptions or exemptions you have to those policies or security controls. Really good, really well-written policies have very, very few exceptions to them, and exceptions are time bound or event bound. Okay, so I think that's a good list of questions for us to work through here. So that first one, why does it matter? We can answer this as product security in general or like a specific security policy we can walk through. Yeah. Signing. Right. The easiest one. Right. It matters so that the end customer knows that the piece of software that they're installing was actually intended to be given to them by the vendor. Right. There's a cryptographic way of validating that this piece of software is delivered as intended without modification. It's the thing that engenders trust from your customers, right? Yeah. It's the equivalent of, you know, opening something out of shrink wrap and expecting it to be safe. I mean, put it that way, like when you go to a grocery store you're making sure that the package is closed before you get it, right. Like you don't want that half-eaten bag of chips. I'm not gonna pay for that. Yeah, I'm not going to just buy loose cookies from the supermarket. Certainly not from the floor. No, not from the floor for sure. Maybe if someone gives me one and it's shrink wrapped and, you know, I'm buying it in the original packaging, I expect it to be safe. Yes. Yeah. Less so for loose cookies. But I think that speaks to why it matters. Is it engenders that trust in customers. That, yes, this is where it came from. It's what I expect it to be. This should be good to go. Yeah. And that's the, the achievement of that, of that policy, right, is to make sure that they find the software is trustworthy. It's safe to install, safe to use. That's the whole purpose behind it, is trust. Exactly. So that's why it matters. Trust. Okay. We have achieved a good relationship between the products we're putting out and what the customers are getting from it. Walking back to that signing policy specifically, let's go through that exercise of is it still relevant? Yes. And that, like Emily, the other Emily, had noted, right. Like you do have to reevaluate these policies over time, right? And they may change as this technology changes. So for one example, if I go to what we do at Red Hat with our RPM signing, for example. Right. And it's actually going to be really pertinent when we were talking about post-quantum crypto in the future, because the way that we sign some of the, yeah spoilers, the way that we sign some of these things needs to change, like the algorithms that we use today to sign things are not going to be the algorithms that we're using tomorrow, not literally tomorrow, but like in the time to come, right. In the metaphorical tomorrow. Exactly. They're going to be different. And does it mean like, are we still going to be signing RPMs the exact same way? Probably not. Is a policy going to have to change? It might. Gotcha. 
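To make the signing policy from the discussion above a bit more concrete, here is a minimal sketch of the kind of gate a release pipeline might apply before anything ships. It is an illustration only, not Red Hat's actual tooling: the file names are hypothetical, and it simply shells out to the gpg command-line tool to verify a detached signature.

```python
# Minimal release-gate sketch: refuse to ship an artifact unless its detached
# GPG signature verifies. File names are hypothetical, and a real pipeline
# would also pin the expected signing key rather than trusting any key in the
# local keyring.
import subprocess
import sys

ARTIFACT = "product-1.2.3.tar.gz"        # hypothetical build artifact
SIGNATURE = "product-1.2.3.tar.gz.asc"   # hypothetical detached signature

def signature_is_valid(artifact: str, signature: str) -> bool:
    """Return True only if gpg can verify the detached signature."""
    result = subprocess.run(
        ["gpg", "--verify", signature, artifact],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if not signature_is_valid(ARTIFACT, SIGNATURE):
        print("Refusing to ship: signature check failed.", file=sys.stderr)
        sys.exit(1)
    print("Signature verified; artifact can go out the door.")
```

Key management and access control, the part Vincent highlights, sit outside this snippet: who can produce a valid signature matters at least as much as checking one.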
I know, I think that makes perfect sense. Either, like, the policy itself (in some cases, though not this one, it seems) can change, or how it's executed. Or who's responsible for doing it. Exactly. So I'm going to move on a little bit to defining risk as an aspect of the product security landscape that we've been talking about. So we're going to talk about risk tolerance and risk management. So, pop quiz; I'm sure you're up to the task. Can you define those two terms for me? Yes. Risk tolerance is literally just about how much risk you're willing to put up with. Okay. How much risk are you going to tolerate? Risk management is your ability to assess the probability or likelihood of a particular risk occurring against the impact of its occurrence. Basically, it's marrying risk tolerance with risk management. Risk management is where you're actually looking at how do I remediate the risk? What are the risks I need to remediate, and what are those risks that I'm okay with? Right. And this is stuff that we do every day. You know, I mean, you're looking at... All right, just going for a bicycle ride, right? Am I putting my bike helmet on? There's a risk tolerance. You know, I might fall, if I'm riding on the grass, maybe not a big deal. If I'm riding on the highway, I probably should have my helmet on. That's your risk management. Gotcha. That makes sense. And it's the risk tolerance, too, because in one instance, you're willing to risk it, in the other you're not. And so the environment matters when you're weighing these things against each other as well. Oh yeah. No. Green knees versus road rash. Very different feeling, right? Very different feeling, very different feeling. So I think that that puts us in a really good spot for understanding all the terms we're using here. So, I know that Emily, the other, cooler Emily, also pointed out that even when people put together a management plan, they don't always keep those tolerances in mind like we talked through. The most important thing that most people seem to forget is the risk tolerance portion. Everybody wants to manage risk, and there's a lot of different ways that you can do it. You can accept it, just deal with it. You can mitigate it, you can transfer it even; that's what the cybersecurity insurance policy industry is all about. But the reality is that you need to first define what is going to be tolerable to you. And in some cases, it's a blanket statement. And that's totally fine. In other cases, it might be context specific or environment specific; under these conditions our tolerances are low, under these conditions our tolerances are high. For dealing with a particular data set, for example, our tolerance might be super low because we're talking about people's livelihoods, for example. So it just depends. And every organization's risk management program is going to be slightly different. The key important thing is that you have the information that you need to make an informed risk decision and then to reassess it, reassess that risk and decision as things change over time. Because the risk of yesterday is not always the same as the risk of today. So when we're talking about informed risk decisions, like, what is the information that goes into that decision? It depends. It's very customer dependent. Right. So I mean, if you're asking a bank versus a hospital versus Alice and Bob's Pizza shop, right, like it's going to be very different. The types of information that they have, right?
A bank is going to be concerned about bank information, credit card information, you know, personally identifiable information, that sort of thing. Right. And I mean, that could be, if that data was exposed, that could be bad, it's people's livelihoods, etc. Right? Then you have like on the health care side, right? There's no financial information. But now this is probably even more personal information because it's got health care diagnoses, all of the personal medical things that, you know, we usually manage or try to, you know, not let everybody else know about kind of thing. Right? Sorry, I'm kind of wandering here a bit because my brain's firing on a few different cylinders, but like from a read perspective, I get this information about somebody's health status. If an attacker could get in there from a write perspective, then maybe I'm changing prescriptions. Maybe I'm changing medical plans, those sorts of things. Right. Like, the one is like, okay, bad, like disclosure bad. The other one, like, terrifying. Because now you know, oh, we won't get into all the medical things, you know, but, like, bad things could happen, right? And then, like Alice and Bob's pizza shop, what's the worst that it could get? Probably some credit cards maybe. Like how often you get pizza and maybe they change the... you're allergic to pepperoni and they get you a pepperoni pizza. Right. So there's different scales when it comes to the sorts of risk that would be there and the sorts of information that you'd be concerned about. And that's fundamentally what, I think, the other Emily was talking about. Right. Like that information that you're looking for is like, what are my assets that I'm trying to protect? Yeah. And even for a single organization, it could be completely different. If you're looking at, like, a, like a software producer, for example. Right. Those systems in production are probably the critical ones that you want to take care of; the systems that are in dev or test, probably important, but maybe not to the same degree. Maybe your risk tolerance is a little bit more like, I'm not going to worry about all the vulnerabilities on those test systems, I really care about them in production. Yeah, no, I think that makes sense. And I think also I can tie it back to your water treatment plant analogy because that's what I got to do. I see it coming down to what happens with non-potable water, essentially. If your risk is non-potable water, it's got bad stuff in it. You don't want to drink it. It doesn't matter so much if it's going to fill toilets or go into the sewer or something of that nature, but if it's going into people's drinking water that matters a lot more. So it comes a little bit down to where is it going, what it's being used for, what's the blast radius, so to speak, if something goes wrong? Yes. Yeah, its purpose. And like what's the worst case scenario? Yeah. Right. I mean it's... you don't want to sit there and be, like, thinking about the doomsday or whatever. But when you're looking at some of these policies, when you're thinking about risk, you actually do have to sit there. It's why so many security people are like the tinfoil hat types, right? Because my wife tells me this all the time. She's like, how could you think of that? Like, why would you think like this? It's been conditioned into me. I think about the worst possible outcome every time so that I can fully evaluate the risk. Yeah, no, I think that's really important to do.
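As a rough illustration of the risk tolerance and risk management ideas in the exchange above, the sketch below scores a risk by likelihood and impact and compares it against a tolerance that depends on the environment. The scales, thresholds, and the example risk are invented for illustration; real risk programs are considerably more nuanced.

```python
# Toy sketch of an informed risk decision: score likelihood x impact and compare
# it against a tolerance that depends on the environment. Scales, thresholds,
# and the example risk are invented for illustration.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (expected)
    impact: int      # 1 (annoyance) .. 5 (people's livelihoods)

# Hypothetical tolerances: much lower for production than for dev/test.
TOLERANCE = {"production": 6, "dev/test": 15}

def decide(risk: Risk, environment: str) -> str:
    score = risk.likelihood * risk.impact
    if score > TOLERANCE[environment]:
        return f"{risk.name} in {environment}: remediate now (score {score})"
    return f"{risk.name} in {environment}: accept for now, reassess later (score {score})"

leak = Risk("customer data exposure", likelihood=2, impact=5)
print(decide(leak, "production"))  # score 10 > 6   -> remediate now
print(decide(leak, "dev/test"))    # score 10 <= 15 -> accept for now
```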
And I think it's good to consider as early up the stream as you can, like, talking about shifting left. The earlier you can put these things into practice, the more you can prevent it from ever happening at all versus, like, trying to fix it after the fact, all the better. 100%. It's the difference between regularly changing the oil in your car and waiting for your engine to seize. Exactly, exactly. And so I think that's the information that comes into play when you're trying to make an actually informed decision about not only your risk and the nature of it, but how much of it you can tolerate in the given context. So once you have that information, you kind of know what's going on. You get the lay of the land, then you make a plan. Yeah. And I imagine that's what informs the policies that come about. Yeah. Well, it's, I'd say that's probably what informs the policies and what the execution of that policy looks like. Or the adherence to that policy looks like. Great new policy. We'll try it out. If everybody understands, like, yeah, these things are important. And yeah, we can do it and it's easy enough to do it. People will probably do it. If you find it's a little bit difficult or people are like, eh, I don't really see the value in this, so I'm going to go do my real work, I'm not going to do that thing. Then you might have to amend your policy to say you have 30 days or 60 days or one quarter, whatever the timeframe is, to remediate those things. And by the way, we're going to implement some sort of scanner that tells us whether or not you're meeting these deadlines, doing the thing that we asked you to do. You're adhering to the policy. Gotcha. Now someone's checking for helmets. Yeah. I mean, and that's the thing. Like, you could start with a policy that says like, we're going to do this and this is why, and Nirvana, everybody does it. And you literally don't have to do anything more. But if you're like, eh, we don't want to do this. We don't think this is important... I've got features to put out. Yeah. Totally. Or, you know, I don't want to apply all these patches. I have this website to design. Mhm. Right. I'm trying to serve, like, the parts that make us money. I'm working on that part. I don't have time to do this other part. Then you might have to adapt your policy to say, like, no, actually, this is all part and parcel. And if one of those things gets exploited and the thing that you're so concerned about goes down, we're not making money. We're not servicing our customers. Right? Yeah. Now there's nothing to have vulnerabilities anymore. That's right. And that's where those policies evolve, right. Depending on the environment that they're born in. Yeah, exactly. And I think that's the last step of that process. You get the information, make your informed decisions around risk and your tolerance for it. You make a plan and then you check it. And I think that's been kind of a common theme coming up is that you can't just set it and let it go forever. So it might not be relevant forever, or it might not be the best way to execute on it forever. So you go back, check, has the context changed? Has the need changed? Do we still need to do this or do we need to change the plan? Has the technology changed? I mean, I've been doing this long enough to know version control systems keep changing. Like, the advent of AI is changing all kinds of stuff. Automation... Oh man, for sure. You know this. It's changing all kinds of stuff, right? Exactly. Yeah. See, it's not all doom and gloom.
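The "scanner that tells us whether you're meeting these deadlines" idea above can be sketched very simply: compare how long each finding has been open against a remediation window per severity. The SLA numbers and findings below are hypothetical.

```python
# Toy version of a scanner that checks whether open findings are still inside
# their remediation window ("you have 30 days or 60 days..."). The SLA numbers
# and findings are hypothetical.
from datetime import date

SLA_DAYS = {"critical": 7, "important": 30, "moderate": 90}  # hypothetical SLAs

findings = [  # hypothetical scan output
    {"id": "CVE-2024-0001", "severity": "critical", "opened": date(2024, 4, 1)},
    {"id": "CVE-2024-0002", "severity": "moderate", "opened": date(2024, 2, 1)},
]

today = date(2024, 4, 20)
for finding in findings:
    age = (today - finding["opened"]).days
    status = "OVERDUE" if age > SLA_DAYS[finding["severity"]] else "within SLA"
    print(f"{finding['id']} ({finding['severity']}): open {age} days, {status}")
```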
Tech changes in good ways, too. Oh, yeah it does. You know, technology's awesome most of the time. Exactly, exactly. But yeah, I 100% agree. I think it's... we, we keep touching on the same cycles. And I think that's really kind of a feature more than a bug in that you really have to go back and check, make sure things haven't changed. And actually just made me think of something too, when it comes to the technology. If you look at the progression of bare metal to virtualization, to containers, right, in the context of vulnerabilities, bare metal and virtualization effectively work the same, like a vulnerability would be exploited or impactful in the same way. Containers radically change that paradigm. But the thing is, most people today don't think about containers in that new paradigm when it comes to vulnerabilities. They think about it the old way that we've always thought about it, and it's like we are talking apples and oranges here. Like, yes, still fruit, but not the same. And you can't look at them the same way. And that's where policies have to be reevaluated. I think that makes perfect sense. We talked a little bit about implementation, like actually bringing that plan through to execution. And what that actually looks like. So a large part of product security is making sure your software is defended from the latest threats, including all those ones you talked about: that, you know, changing landscape, things come up out of nowhere and really quickly. And so it's so crucial to stay up to date. So for most organizations I highly recommend if you are pulling in patches and updates, which you should be, you should try to keep it as close to that upstream release as much as possible, while still giving yourself maybe a couple days, a week, maybe two weeks of buffer so that you can allow it to sit in that testing and staging environment and really understand what's going on with those updates before you commit to deploying it to production. So not just with the technology of the day, but also with the actual, like, platform you're working with in the moment. And it seems like the trick is to update early and often. So why do you think timeliness matters so much? I mean it's about risk avoidance, right? And frankly, exploitation. Right. If you look at it from a software producing perspective, we create patches not just for fun. We create patches because there's a security issue that we believe is meaningful enough to warrant that change in software. Right? Because at the end of the day, any changes to code is also risky, right? You can unwittingly break something else that's worse than the security issue or the other bug that it's trying to fix, right? So when you're looking at changes to code, you got to treat it carefully. Right? And that's why Emily said, like, sit there and let it soak for a bit, see if there are any unintended consequences to that. But I mean, at the end of the day, if that patch is available, somebody decided there was something worth that risk to fix. And if that's the case, then you better believe that now is the time to apply that patch, because at that point it's either known to be exploited, or somebody is going to be exploiting it probably pretty soon. Like that window between disclosure and exploitation has shrunk dramatically. And I expect with AI, over time, that window is going to shorten even further because now we can leverage AI to help me create these proof of concepts and exploits and things like that. Right.
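Emily Fox's advice to stay close to upstream while letting updates soak in staging can be expressed as a small check like the one below. The buffer length and dates are made up; the point is only that an update must have sat in staging for the buffer before it is promoted to production.

```python
# Toy version of the "let it soak in staging" advice: only promote an update to
# production once it has sat in staging for a buffer period. The buffer length
# and dates are made up.
from datetime import date, timedelta

SOAK_BUFFER = timedelta(days=7)  # hypothetical: "a couple days, a week, two weeks"

def ready_for_production(staged_on: date, today: date) -> bool:
    """The update must already be in staging and have soaked for the buffer."""
    return staged_on <= today and (today - staged_on) >= SOAK_BUFFER

staged = date(2024, 5, 2)  # hypothetical date the update landed in staging
print(ready_for_production(staged, today=date(2024, 5, 6)))   # False: still soaking
print(ready_for_production(staged, today=date(2024, 5, 10)))  # True: buffer has elapsed
```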
Yeah, it's almost like any patch is kind of a ticking clock where, like, as soon as it's out, that means there's already technically been time to take advantage of it being there in the first place. I will say that when it comes to patches that you're going to apply into, like, a production environment, you want to test them, right? This is why I'm not a huge fan of like automated updates. So like on my iPhone, it doesn't just automatically update. I have to trigger that update. Right? And I want to know... You can update when I tell you to. Yes. But for some people who don't update anything, you know, they should probably... like my wife. She doesn't update these things, so I set her phone to auto update. Now, thankfully it doesn't do it the minute it comes out; there's a buffer. I don't know how many days it is, but there's a buffer between when that update is automatically applied. Yeah. Automatic is better than never, but deliberate is better than automatic. Yes. Especially if you get the opportunity to test it first. Because if you're doing this with customer information or it's running your business, you want to test it. Exactly. Like, hey, we were talking about automation and stuff, people; you can automate this testing too. I think that's a good practice. And, you know, we wondered: if there are so many threats out there, why doesn't everyone adopt the maximal security stance? We've touched a little on that. And Emily is here to set us straight as well. Security is a balance. I've said that so many times, and I really can't reinforce it enough. Security is the balance between applying security controls and mitigations and compensating mechanisms to reduce risk. If we want to do all the security things because we know it means that we're not going to be hacked, the most secure computer in the world is one that is unplugged, sitting in a basement and covered in concrete. It is completely worthless to anybody other than as a giant paperweight, but it's secure because we've removed all the threats from the internet and we're not introducing vulnerable software to it because it's never powered on in the first place. And there's no potential interactions of an in-person attack because it's encased in concrete. So it's not really useful. She's not wrong. Oh, I think that hits the nail on the head. I think, you know, honestly, we've talked about balance a million times as well. I think that's really the name of the game here. And especially like, I like what you said about a perfect security stance is also useless. So, we talked about balancing risk, but also balancing it with convenience. Yeah. And there's no such thing as perfect security, right. Especially in like, as Emily was saying, in a way that is usable. Right. Like we need to disabuse ourselves of that notion to start with. Right? The whole, just because I was reading about some things recently, this chasing after zero CVEs, zero CVEs is a new term that people like to use. There's no such thing, right. Because what a CVE is, it's just a known vulnerability; known at least by the people who run CVE. There's a ton of vulnerabilities out there that don't have CVEs. I'm sorry, like, do those things not count? Do the things that we haven't discovered today, but we will discover tomorrow and assign a CVE name to, do those not count? Do they not exist? Like, zero CVEs is a myth. It doesn't matter. And yet people will chase that myth because they have a checkbox security compliance mindset.
And so they're like, I just need it to be zero. I just need it to be all green or whatever the measurement is. But that dream is very expensive. Very expensive. It is very expensive. Like I ran through some numbers, this was a couple years ago, like... I was talking about the cost to avoid a vulnerability. Right. So you just throw out something like $10,000 a vulnerability, which is probably cheap, right? When you're looking at the time it takes to deploy it, deploy the package, test the package over to production, blah, blah, blah, all the things, the people that are involved, etc., right? If you're looking at, say, in Red Hat's case, the number of moderates that have a CVE, right? We don't fix all the moderates because they're not all impactful. The cost to avoid, I think, a couple of years ago, there were like three known-to-be-exploited moderate vulnerabilities out of like 1300 or so. I've probably got the numbers wrong. Somebody can fact check me later, but it's ballpark. But 1300 vulnerabilities. If you're looking at $10,000 per vulnerability for 1300 moderate vulnerabilities where only three were known to be exploited. I mean, your cost to avoid is in the hundreds of thousands of dollars. Yeah. And that's on largely hypotheticals. Yeah. Like 1297: zero impact, three with impact. If you just focus on those three: 10,000, 20,000, $30,000 is how much it costs you. Not however many hundreds of thousands that it would have cost you otherwise. Right. So, I mean, there is a real cost to chasing perfect security. Exactly. And that's where that balance comes into play, like risk management, risk tolerance. And I don't know how to phrase it really, but there's, like, this holistic view you have to take of the risk versus reward, that cost versus the benefit. Exactly. Sometimes that cost is too high. I think that's just tackling the cost of like finding them and fixing them. There's also like on some level, it's a little bit of a zero sum game as well, where if you spend all of your time and capacity working only on CVEs, you're also not doing anything else. You're going to have, you know, a product security team of thousands and your engineers will be like three. Right. So you have to balance it not just against, like, the cost of literally doing it, but also against what it would take away from other work to do it, too. Well, that and what... because your resources are finite. Exactly. We don't just get to create time out of thin air. We don't get to create money out of thin air. Right. We create money by providing value to our customers. No customer... like your business model for whatever business... I don't care what your business is... your business model is not patching vulnerabilities. I'm not advocating that you ignore all that stuff because it's not valuable. Right? But there's a sweet spot where you go, there's a cost to avoid the truly terrible things or the things that are most likely to occur. And if they do occur, they'll be pretty bad and impact that value proposition for my customers. Then there's a whole bunch, the vast majority, that are basically useless and don't matter, that offer no value to my customers. And do I spend my finite resources doing that for no value? Yeah. Or improving the services for my customers, thereby increasing the value to them and to me as a business? Yeah. If you send every employee chasing hypothetical mice all the time, you sell zero pizzas. Correct.
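Vincent's back-of-the-envelope "cost to avoid" comparison, using the episode's ballpark figures (which he flags as approximate), looks something like this. Whatever the exact per-fix cost, the gap between fixing everything and fixing only the handful of exploited issues is what drives the risk-tolerance argument.

```python
# Back-of-the-envelope version of the "cost to avoid" comparison, using the
# episode's ballpark figures (which Vincent flags as approximate).
cost_per_fix = 10_000      # dollars to ship and absorb one fix
moderate_vulns = 1_300     # moderate CVEs in the window he describes
known_exploited = 3        # the handful that were actually exploited

fix_everything = cost_per_fix * moderate_vulns     # chasing zero CVEs
fix_what_matters = cost_per_fix * known_exploited  # focusing on real exposure

print(f"Fix every moderate: ${fix_everything:,}")
print(f"Fix the exploited ones: ${fix_what_matters:,}")
```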
So the trick is figuring out where the actual mice matter and how to balance it against sending out pizza that people can enjoy and trust, mice free. Yes. Or as mice free as we're willing to tolerate. We've got to work on the great analogies, don't we? They're all kind of gross in the wrong way. Nailed it. I know. All right, so we've covered a lot here. We know... you know, with a maximum security profile, there are diminishing returns there. We know that it can also hinder functionality, going back to Emily's clip. And the more layers of security you have, like, the more likely you are to even spawn more bugs; you might be then shoving in little edge cases that make the original software even less functional, because there's new bugs, because you're just flattening out every surface that could be a vulnerability instead. And that's where, looking at it from a product security perspective, it's absolutely something we have to think about. Right? Because the end user who is deploying our software is going to be the unfortunate recipient of those things, right? And the thing that we have to keep in mind as well, like, this is a give and take situation, right? There is a certain amount of responsibility on that end user. Yeah. We should be able to make certain assumptions about the end user's environment. Like, does anybody have systems connected to the internet these days without firewalls? I mean that seems like it's pretty common and pretty common sense. Right? And so we can assume that certain mitigations are in place. Now, I'm not saying that they will be maybe as comprehensive as a top secret military facility, right, in every case. Like, they may not all have all the whiz-bang bells, whistles, EDR, endpoint detection, all the different tools that are there, but they have something that basically doesn't give naked access to their systems to all the attackers of the world. We can make that assumption and make it reasonably well. And if that assumption fails, then there is a need for some security work and some security policies by the customer. Because the thing that I'm fixing then is the least of your problems. Exactly, exactly. So I think we've covered a lot of ground for episode one. I know we have a whole lot more topics to talk through as well. We talked through defining product security and what it is, how best to go about security policies, how to create and enforce and reevaluate them. And we've also talked about balancing it against everything else in that calculus of what's going on. So for episode one, anything else you want to leave our listeners with? Yeah, I think I'd want to really reinforce the balance part of it. Right. Because, I'll say, more security teams are changing, but a traditional security team is very black and white. Right. And this is where collaboration, conversation, discussion with your stakeholders comes in as you're writing those policies, so you can kind of come up with the best outcomes together. Like it's truly... the security teams of the future have to be true partnerships because the threat surface just keeps growing. The complexity continues to grow. We can't afford to just be sitting there and being like, thou shalt and thou shalt not. It has to be a discussion like, okay, this is the goal, this is the purpose. This is where we want to go. How do we get there in a way that is lessening the impact for you and is still meeting the objective?
That's what those product security teams of the future, working in partnership with probably the end state, like the information security teams of the future and the engineering teams of the future, really need to be able to partner with each other to work well together to solve all of these... because there's so many and they're not stopping and they're not slowing down, and it's going to get bigger and it's going to get worse. And not to be the doomsayer, but this is a problem that's not going away. How do we handle it in a way that is actually cost effective and meaningful? I'm off my soapbox. You know, you nailed it. And I expect you to be right back on it for our next episode. Well, you've heard our thoughts now. We would love to hear yours. You can hit us up on social media at Red Hat and use the #compilerpodcast. Let us know what you think. This episode is written by Johan Philippine. And thank you to our guest, Emily Fox. Compiler is produced by the team at Red Hat with technical support from Dialect. If you liked today's episode, please follow the show, rate the show. Leave a review. Share it with someone you know. It really helps us out.

About the show

Compiler

Do you want to stay on top of tech, but find you’re short on time? Compiler presents perspectives, topics, and insights from the industry—free from jargon and judgment. We want to discover where technology is headed beyond the headlines, and create a place for new IT professionals to learn, grow, and thrive. If you are enjoying the show, let us know, and use #CompilerPodcast to share our episodes.