Cybersecurity & Tech

The Lawfare Podcast: Jim Dempsey on Standards for Software Liability

Stephanie Pell, Jim Dempsey
Wednesday, January 24, 2024, 8:05 AM
What should a software liability regime look like?

Published by The Lawfare Institute
in Cooperation With
Brookings

Software liability has been dubbed the “third rail of cybersecurity policy.” But the Biden administration’s National Cybersecurity Strategy directly takes it on, seeking to shift liability onto those who should be taking reasonable precautions to secure their software. 

What should a software liability regime look like? Jim Dempsey, a Senior Policy Adviser at the Stanford Cyber Policy Center, recently published a paper as part of Lawfare’s Security by Design project entitled “Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor,” where he offers a proposal for a software liability regime. 

Lawfare Senior Editor Stephanie Pell sat down with Jim to discuss his proposal. They talked about the problem his paper is seeking to solve, what existing legal theories of liability can offer a software liability regime and where they fall short, and his three-part definition for software liability that involves a rules-based floor and a process-based safe harbor.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Introduction]

Jim Dempsey

So it basically says, how do you develop secure software? You, the software developer, define the risk environment, you identify risks, and then you address those risks, and you document what you're doing. Well, if the developer lowballs the risk environment, then they can lowball the controls, the security measures.

Stephanie Pell

I'm Stephanie Pell, Senior Editor at Lawfare, and this is the Lawfare Podcast, January 24, 2024.

Software liability has been dubbed the “third rail of cybersecurity policy.” But the Biden administration's National Cybersecurity Strategy directly takes it on, seeking to shift liability onto those who should be taking reasonable precautions to secure their software. What should a software liability regime look like? Jim Dempsey, a senior policy advisor at the Stanford Cyber Policy Center, recently published a paper as part of Lawfare's Security by Design project, entitled “Standards for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor,” where he offers a proposal for a software liability regime. I sat down with Jim to discuss his proposal. We talked about the problem his paper is seeking to solve, what existing legal theories of liability can offer a software liability regime and where they fall short, and his three-part definition for software liability that involves a rules-based floor and a process-based safe harbor.

It's the Lawfare Podcast, January 24th: Jim Dempsey on Standards for Software Liability.

[Main Podcast]

Jim, I want to start by having you talk about why we are having a discussion about imposing liability on the developers of insecure software. What is the catalyst or the various drivers of this discussion, and what problems are we trying to solve?

Jim Dempsey

Well, Stephanie, great to be with you, by the way. Always love talking to you about these issues. The immediate impetus for the work I did here was the Biden administration's cybersecurity strategy issued in March 2023. About 80 percent of that strategy was continuous with prior strategies: similar to what was in the Trump strategy, which was similar to what was in the Obama strategy. But, in a couple of respects, the Biden administration was groundbreaking and marked a sharp turn from cybersecurity policy that had existed in the United States for decades across administrations.

And one of the ways in which they broke brand new ground was in calling for legal reform to impose legal liability on the developers of software for the harms caused by defects in their products. Basically, the administration's strategy explicitly said that there was a market failure, that the incentives were not adequately aligned, and that software developers had shifted the risk through the use of disclaimers in their license terms. Most software nowadays is licensed, not sold, but whether it's a sale or a license, those contracts all, or most all, certainly from the big developers, say: "We disavow any warranty. We, the software developer, are not liable for any harms caused by our products. Our products are not guaranteed to be secure. They're not warranted to be secure. And if losses occur, you, the user of the software, bear the cost of those losses." And, of course, the losses from software and cybersecurity vulnerabilities generally are enormous, a huge drain upon our economy as well as a threat to our national security and to the personal privacy of each of us.

So, the administration said for the first time ever, in a national White House cybersecurity strategy, that we need change. And I think they recognized that this would take legislative change in order to overcome these disclaimers of liability. And to me, and it's sort of called out in the strategy, this raises the question of, ‘okay, liability for what?’ And that's what led me down the path of looking at how to define a standard of care for software development.

Stephanie Pell

And you begin your paper on software liability with a premise: because there is general agreement that the manufacturers of software should not be made insurers of their products, but rather should be liable only when a product is unreasonably insecure, getting software liability right turns a lot on defining the standard of care. So can you unpack that a bit for us?

Jim Dempsey

Yeah, so in calling for liability on software developers for bugs in their products, the administration was by no means saying that all flaws would be legally actionable. The goal is not to produce perfectly secure software.

It's widely agreed that there is no such thing. Or, if you could build it, it would be far too expensive and unusable. And in no other sector of the economy, really, do we expect product developers to produce a perfectly safe product. Take automobiles: car manufacturers are not expected or required to produce a perfectly safe car. That would be prohibitively expensive.

So across many, many other sectors, across many other kinds of products or services, the standard is not perfection. It's some standard of reasonableness, some standard of a reasonably safe or unreasonably dangerous product. And in a way, what the administration was saying, which I accepted as the starting point for my research here, was the same principle should apply to software. Software should not be this uniquely exempt sector. It should be subject to similar rules as other sectors, which have developed over time. And all of those other sectors are based upon the notion that the manufacturer doesn't pay for every harm. They don't have to eliminate every risk, but they have to eliminate unreasonable risks.

And then, that pushes you to the question of what is reasonable or what is unreasonable. And in the case of software, pushes you to the question of how secure is secure enough. And it seemed to me that if we are going to impose liability on software developers, that they deserve to know, and the users of software deserve to know, what that standard of care is.

Stephanie Pell

So just so we're on the same page about important terms that you use in the paper, can you explain how you are using the term “standard” in the paper?

Jim Dempsey

Yeah. So standard, as I use it in the paper, has two meanings. One, I refer to the legal concept of a standard of care. So in the context of software, that would be the rules of software design, non-compliance with which, when it causes legally actionable harm, will generate liability on the part of the software developer for that harm. That's how you define the legal standard of care across industries: automobile makers, toaster makers, and airplane manufacturers are all subject to a similar legal standard of care.

Now, in many, many cases, that legal standard of care is based on a technical standard. And when I say technical standards, I'm thinking about things like what is developed by the National Institute of Standards and Technology, or building codes, which in our country are actually primarily developed by private trade associations but then adopted at the local level by municipalities or counties and take on the force of law. There's a National Electrical Code, and there are other codes for other aspects of the construction trades and building. And there are many, many technical standards in the Federal Register for the safety of a wide range of products. There's now a very developed set of Federal Motor Vehicle Safety Standards. There are standards for lawnmowers. There are standards for a whole host of products, standards which talk about the technical design and construction of those products.

And to some extent they're all based upon this cost-benefit analysis. We're not going to have a perfectly safe lawnmower; after all, it's got a spinning blade that's there to cut the grass. We're not going to have a perfectly safe car; we're actually not even going to have a perfectly safe airplane. But we have a cost-benefit analysis that we apply, and we decide how much risk is enough risk. And that's expressed in these very, very specific rules: you must use such and such grade of copper wire up to a certain wattage or voltage; you must have a control system on the lawnmower that stops the blade from spinning within three seconds; the boiling point of a motor vehicle fluid cannot be below a certain temperature.

Product after product after product, human activity after human activity, we have developed over time these technical standards, which then take on the force of law because the technical standards become the definition of the legal floor. They become the definition of the standard of care. And what I was trying to do in this paper and in my research was to look at the bridge between the technical standards and the legal concept of liability.

Stephanie Pell

And in reaching clarity on this definition and developing an appropriate liability regime, which your paper helps us move closer to, what are the downsides or the harms in just letting the definition develop incrementally on a case-by-case basis, as often occurs with common law?

Jim Dempsey

Before the Federal Motor Vehicle Safety Standards were adopted, this was all done case-by-case, and it took decades. And we don't have decades, in my opinion. I look upon the cybersecurity issue as a matter of urgency, and I don't think we have time to go case-by-case, product-by-product, vulnerability-by-vulnerability, to develop a definition of reasonableness, or what is reasonably secure software.

And I don't think that would be good for the industry. I don't think it would be good for innovation. Industry craves, and deserves, certainty. Certainty provides part of the foundation for innovation. And also, case-by-case litigation is expensive. It involves a lot of discovery and a lot of lawyers. So I argue that, to ensure timely progress on what I view as an urgent issue, to reduce the cost of litigation, and to promote resource allocation to engineers rather than lawyers, we need a standard of care for software that is objectively measurable.

Stephanie Pell

So given all of those factors, in your paper you then offer a proposal for federal legislation to be applied in private litigation that involves a three-part definition. Can you walk us through the contours of that definition? And then we will address each part with more specificity.

Jim Dempsey

Yeah. And by the way, as you called out, Stephanie, I am definitely thinking of this in the context of private litigation. You could see a regulatory agency doing this as well, and that might work, but I'm not sure there's any regulatory agency that has sufficient expertise here as an enforcement agency and sufficient resources. Mainly what I'm thinking of is business-to-business litigation, because it's mainly the business users of insecure software who are currently bearing the costs. And the goal of this would be to push the costs, and therefore the incentives for development of more secure software, back to the developers.

And so, in order to do that, as I tried to think this through, I started with the premise, or the conclusion, that there are certain things we know about secure versus insecure software. Certain product features or behaviors: default passwords, for example; it should just be a no-no to send out a product that has a default password. Something known as path traversal, which allows attackers to use one feature of a software product to reach deeper into the software stack, by exploiting directory names, and corrupt the software more broadly. Buffer overflow, which has occurred in so many of the most notorious attacks, where the software allowed the attacker to insert malicious code by using some fill-in-the-blank kind of feature in the software.
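
[Illustration: a minimal, hypothetical Python sketch of the path traversal weakness Dempsey describes, and one way to guard against it. The file-serving functions, directory, and names below are invented for this example and are not drawn from the paper or the episode.]

# Hypothetical illustration of a path traversal weakness (CWE-22) and a fix.
# The handler, base directory, and filenames are invented for this sketch.
import os

BASE_DIR = "/srv/app/public"

def read_file_vulnerable(user_supplied_name: str) -> bytes:
    # VULNERABLE: a request for "../../etc/passwd" escapes BASE_DIR
    # via directory traversal.
    path = os.path.join(BASE_DIR, user_supplied_name)
    with open(path, "rb") as f:
        return f.read()

def read_file_safer(user_supplied_name: str) -> bytes:
    # Safer: resolve the path and confirm it stays inside the intended directory.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(base, user_supplied_name))
    if os.path.commonpath([path, base]) != base:
        raise ValueError("path escapes the allowed directory")
    with open(path, "rb") as f:
        return f.read()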

So, time and again, software developers are making these basic mistakes, and we know what they are, and we know how to avoid them. And there should be this minimum floor of no-nos, this minimum floor of features that either must be included or should not be included in software. Now, coming up with that list, it may be two dozen, three dozen practices, maybe a hundred practices or features that must be taken into account in software design. That clearly doesn't account for the complexity and dynamism of software development. It's not so much that it would be outdated; a lot of these no-nos are perennials, and as long as you have directories and path names, you want to prevent the exploitation of them. But it wouldn't capture the whole universe.

So then I went to the products liability tort law realm, where there's a concept of design defect, where there's a defect in a product—and again, you're not eliminating all defects. You're eliminating defects where there was a reasonable alternative design that could have been adopted without serious cost, without much more cost than the flawed design; and where the developer chose not to adopt the reasonable alternative and ended up producing a product that was unreasonably dangerous or not reasonably safe. That's products liability. Again, it applies to everything from automobiles to toasters. It doesn't apply to software. And it should. And that would pick up a lot of other flaws.

Now, then you have the problem that we don't want this liability to be unlimited. We don't want it to be unpredictable. We don't want it to be completely undefined. So then I looked at some of the secure software development practices that are being developed, picking up on a line in the administration's strategy where they said that industry deserves a safe harbor if they follow certain sound software development practices. If you did all of the good development practices and these unexpected, hard-to-detect flaws still crept into your product, then you would be protected from liability for those.

So a floor of no-nos, a floor of defined practices or features that we know are wrong, that we know how to avoid, and that we know are unfortunately made time and again. They've been exploited time and again by the attackers in cybersecurity attacks. That should be the floor. If you include those features in your software, it's almost per se liability.

Then, for other kinds of flaws, a products liability-type inquiry into design defects, but cabined and capped by a safe harbor based upon sound software development practices, which are out there, and we can talk more about how that safe harbor could be constructed.

Stephanie Pell

And I think it's important to acknowledge, as you discuss in the paper, that the law does offer a number of plausible theories of liability that could serve as a starting point for crafting a software development standard of care. And you assert that at the core of all of these theories is an important question: How do we distinguish software that is too insecure from software that is secure enough? You touched on this a little bit before, but can you talk a bit more about that challenge?

Jim Dempsey

So, in thinking about how to construct a legal regime, my mind at least goes—and I think the minds of other lawyers go—to what models we have. What other legal background, legal tradition can we draw upon? Rather than trying to create something new out of whole cloth, how have we dealt with liability problems in the past? And there's a variety of legal constructs, legal frameworks: warranty, negligence, products liability, certification. We'll talk a little bit more about each of those in depth if you want. But all of them boil down to this question, in the security context, how secure is secure enough? In the automobile context, how safe is safe enough? In the toaster context, how safe is safe enough to make sure the toaster doesn't catch on fire and burn your house down? How safe is safe enough in making sure that the wiring in your house doesn't electrocute you?

So across all of these regulated fields, and regardless of the legal theory that you adopt, the question is always, how good is good enough? And that's the question to my mind that has to be answered if we're going to impose liability on software developers.

Stephanie Pell

So let's then talk a bit more about these existing theories of liability. What are they? You talked about one of them a little, product liability, but what are they and where do they fall short for software liability? And perhaps you can start with warranty.

Jim Dempsey

Yeah, so warranty is an age old common law concept going way back to England, I guess, which we adopted in the United States. And warranty is the concept that says every sale of a product carries with it an implied warranty of merchantability. That's the word that has been used for centuries. Merchantability. Basically the merchant, when they put something up for sale to the public, they are implicitly saying that these goods are what they are supposed to be, and they'll do what they're supposed to do, and there's nothing significantly wrong with them.

Oh, and there's a second implied warranty, which is the implied warranty of fitness for a particular purpose. If the merchant says this product can be used for some specific purpose, they are implicitly warranting that it is in fact fit for that purpose.

Now, both of these common law implied warranties, merchantability and fitness for a particular purpose, were codified back in the 20th century in the Uniform Commercial Code, which has been adopted in every state except Louisiana. So where is this implied warranty of merchantability in the software context? Well, under common law and under the Uniform Commercial Code, merchants are able to disclaim the warranties if they use certain magic words like “as is” or “with all faults” or “as available,” which indicate that there is no warranty.

Now interestingly, at the state level, Massachusetts has a law for consumer products which says that merchants actually cannot disavow the warranty. And even at the federal level, we have some regulation of warranty law: in the Magnuson-Moss Warranty Act of 1975, Congress said that in certain circumstances, warranties cannot be disclaimed or modified or limited in consumer transactions.

The problem with this is, to begin with, that the implied warranties only apply to the sale of goods. And as I said before, today most software is not sold; it's licensed, and Article 2 of the Uniform Commercial Code doesn't cover licenses, it only covers the sale of goods. There's been this running argument in the courts about whether software is a good or a service. If it's a service, then it's outside of this whole framework in the first place. And in fact, software as a service is, by definition, or at least by name, not a good but a service.

So if we were to go the warranty route, we would need legislation saying that warranties apply to software, regardless of whether it's a good or a service, regardless of whether it's sold or licensed. You would have to say warranties apply, and then you would have to say you can't disclaim them. And then you're still stuck with the problem of what you are warranting. Are you warranting that the software is merchantable? Well, no one really knows what that means. So you're still coming back to the same problem of how good is good enough.

And again, if you went the common law route, or if you went the case-by-case route, that would be case after case after case with judges and juries trying to decide this was or wasn't merchantable, or it wasn't or was fit for a particular purpose. It would be chaos for decades.

Stephanie Pell

Does the liability theory of negligence offer a more promising direction?

Jim Dempsey

Well, Trey Herr and colleagues at the Atlantic Council have written to that effect, arguing that the negligence framework would, in fact, be the way to go. Now, again, they acknowledged that Congress would have to act either by creating a private right of action or by empowering a federal regulator to bring enforcement actions under some kind of negligence theory.

But there again, and Trey Herr and his colleagues state this explicitly in their paper, you still need to consider what the standard of care is. You still need to determine what, under a negligence theory, counts as negligent. And that pushes you back again to reasonableness. Not every flaw is going to be worthy of compensation. Not every flaw should be actionable. So again, you get back to this question of what the standard of care is.

And jumping ahead, Stephanie, products liability. Another type of tort law is products liability law. Chinmayi Sharma and Ben Zipursky from Fordham Law School have written a wonderful article on products liability law as applied to software. But then again, you get to the question of what a design defect is. Products liability law, at least the prong that they're focused on, focuses on design defect. Two other prongs of it, manufacturing defect and failure to warn, aren't really applicable here.

So you're looking at design defect, and that again comes down to, as I mentioned before, this reasonableness question. What is a reasonable alternative? And if the developer rejected the reasonable, safer approach, did that leave the product unreasonably dangerous? So again, this word reasonableness, which, by the way, appears throughout the law—I once looked this up, and the word reasonable appears, I think, literally thousands of times in the U.S. Code and the Code of Federal Regulations. And similar words like “feasible” or “appropriate.” Those are words that lawmakers have, literally for centuries, fallen back on to try to define something.

But again, my point here is that we don't have time to fight for years over what is reasonable or not reasonable. And by the way, even if you had a reasonableness standard under, let's say, a products liability regime, and that went to a jury, juries don't give reasons for their verdicts. The standard jury form—I looked at one of these, I forget from which state—simply says: the defendant produced an unreasonably dangerous product, yes or no. And then there's another line that says: and the defendant shall pay damages of blank. And that's all you get in a jury case. And again, that doesn't give the kind of certainty and clarity that the software development field deserves. And it just enriches lawyers to fight case-by-case over this is reasonable, that's not reasonable, and sort of roll the dice with a jury.

So products liability, while it has this good insight—and I give total credit to Sharma and Zipursky for coming up with this—products liability focuses on the product. It focuses on the outcome. And we've been saying for decades now that we want outcomes-based regulation. So products liability doesn't ask, where did this design come from, how did it get there, who made the decision, what's the email chain, what's the Slack channel, the dialogue back and forth on whether to do this or not do it. If it's got a flaw and it's unreasonable, you're liable. That's good. But we can't, I think, hang a software liability framework on the word “reasonable.”

Stephanie Pell

So I want to talk more about the focus on product rather than process. But before we do, I think there is one other existing theory of liability that you discuss in your paper but that you don't see as providing a useful path either.

Jim Dempsey

Yeah, and that's the approach of certification or licensing. Europe is heading down these lines with a voluntary certification process, but even though it's voluntary, it's extremely complicated. And I just don't think we have the stomach in the United States for anything nearly as complicated as what the Europeans are proposing.

We are doing certification or self-certification for government-purchased software. President Biden issued an executive order, separate from the strategy, back early in his term in 2021: Executive Order 14028, for those looking things up. It focuses particularly, though not exclusively, on the cybersecurity of U.S. government systems and the products purchased by the U.S. government. And it requires, in a process that's only now being implemented, developers of software for the federal government and those who sell software to the federal government to self-attest that the software complies with a certain standard articulated by NIST.

When you're dealing with the federal government and you make an assertion like that, if it is false under a contract, that subjects you to civil and criminal penalties under the False Claims Act, which is not available to private purchasers of software. And the False Claims Act, I think, can't be avoided by the disclaimers.

So there is a possibility that the government, through its procurement process, could impose liability on software developers. Now, let me say that DoD has been trying to do this for years with something called the Cybersecurity Maturity Model Certification process. And it's hard. A lot of companies make these certifications, and then there's no penalty for it. The government is trying to tighten that up; it remains to be determined whether it's going to be effective or not. And again, it doesn't work for the private sector, because the disclaimers would obviate everything.

And still, certification to what? License to what? What is it that you're self-attesting to? You still need to come up with the standard of care. And it needs to be objective. It needs to be enforceable. So that's, again, where I say if we're going to move forward on this issue—and I feel we do urgently need, as a nation, to move forward on this issue—it comes down to, how do you define how good is good enough?

Stephanie Pell

And another issue you identify as important to understand in this software liability discussion is that existing software development standards focus largely on process and not on product. What do you mean by that and what limitations does a focus on process entail?

Jim Dempsey

There are, out there, secure software development standards. And here I use the word “standard” broadly to include frameworks and guidelines. And certainly one of the leading publicly available standards for secure software development is a framework promulgated in February 2022 by NIST called the SSDF, the Secure Software Development Framework. But when you look at it, it by and large does not focus on outcomes. It does not focus on the product. It focuses on the process. And there's a lot in there that a company could easily comply with and still produce insecure software.

Starting with the fact that the NIST framework is based upon the notion of risk assessment, which is legitimate and necessary, in fact, but it allows the product developer itself to define risk. So it basically says, how do you develop secure software? You, the software developer, define the risk environment, you identify risks, and then you address those risks and you document what you're doing. Well, if the developer lowballs the risk environment, then they can lowball the controls, the security measures.

Here are some direct quotes from the NIST framework: “Track and maintain the software's security requirements.” Well, it doesn't say what those requirements must be. It allows the developer to set the requirements and says track them. Track the requirements, track the risks, track the design decisions, but all of that is self-decided by the developer.

Here's another one: “Follow all secure coding practices that are appropriate to the development languages and environment to meet the developer’s requirements.” So, follow all secure coding practices that are appropriate. And you, the developer, decide what's appropriate and not appropriate.

“Determine which compiler, interpreter, and build tool features shall be used and how each shall be configured, then implement and use the approved configurations.” It doesn't say how they shall be configured; it doesn't say what tools or compilers or interpreters should be used. It just says, choose them and then use them. So it really has nothing, or very little, to my mind, that is objective in the sense of outcomes. Nothing that really concretely or definitively defines a practice. And by the way, it's called a framework. It says you must consider this, you must consider that, but it doesn't say what the outcome must be, other than having considered it. And having considered it doesn't mean that you made the right decision.

And I've looked at an international framework for avionics software: “methods and tools should be chosen that aid error prevention and provide defect detection.” Well, what methods, what tools? It doesn't say. “System requirements, including safety related requirements are developed and refined.” It doesn't say what they have to be.

And CISA, the Cybersecurity and Infrastructure Security Agency, came out with a product in October of 2023 on secure-by-design software, but that's even more high-level: “convene routine meetings with company executive leadership to drive the importance of secure by design,” “use a tailored threat model during resource allocation.” Again, it allows the developer to choose their own threat model and to define their own threat environment.

So you look across all of these, and to me, they are very much process-focused and often at a very high level, such that to my mind they just could not serve as the basis for a legal determination, at least not if you're going to hold somebody liable and you actually want that liability to incentivize meaningful change that will produce better software, as opposed to just a checklist mentality of: oh yes, we had the meeting of our team, we've identified our risks, we've defined our security requirements, we've chosen the tools that we think are appropriate. Check, check, check, check, check.

These standards or frameworks or guidelines, whatever you want to call them, to my mind weren't designed for a legal liability purpose. And all of their developers, NIST among them, would say they weren't intending to define a legal standard here. And they are just not suited to defining a legal standard.

So that's what said to me, as I was thinking through this issue, we need to come up with something, at least for the floor, something more definitive, something that is clearer, something that is more objective. And taking that lesson from Sharma and Zipursky, focus on the product, not the process.

Stephanie Pell

So I want to now turn back to your specific proposal for a liability definition. And as we discussed before, it's a three-part definition. The first part states that a rules-based approach would define a floor, the minimum legal standard of care for software, focused on specific product features or behaviors to be avoided. And again, it's important to acknowledge that you are specifically calling for a focus on the product or product features in this part of the definition, not on process.

So, again, can you talk about exactly what you mean by a floor? And what the goal of such a floor for a software liability standard is?

Jim Dempsey

So, one thing that strongly drove my thinking on this was a publication from CISA in DHS, the Cybersecurity and Infrastructure Security Agency. Last year they published, as they've published now for a couple of years running, a list of the top routinely exploited vulnerabilities. It turns out that if you look at the data, if you look at actual cybersecurity exploits in the real world as they are happening, you see some exploits being disproportionately impactful. And of course, those are the ones we tend to read about in the paper, where dozens or hundreds or thousands or tens of thousands of corporate users have this software that they're reliant on, which turns out to have a flaw that the bad guys, the attackers, find and exploit.

And CISA tracks this; they identify which of these vulnerabilities are most frequently exploited, and they publish a list of them. And last year, they went one step further: they took the vulnerabilities, which are product-specific, and looked at what the underlying weakness was. What was the software flaw? What was the product feature that led to the vulnerability that was exploited, putting it on the list of top routinely exploited vulnerabilities? And the MITRE Corporation, which, as many of your listeners probably know, is a federally funded research center (I don't know what the exact term is), works mostly, if not exclusively, for the federal government.

They maintain a list of software weaknesses. They actually track them all and they've now identified, at least as of early this year, 934 weaknesses, which are common flaws that might appear across multiple products. If you look at vulnerabilities, there are literally hundreds of thousands of vulnerabilities in software products. CISA tracks those and the list of software vulnerabilities, I think, is growing at a rate of 25,000 a year.

Underlying those vulnerabilities are certain common weaknesses. And the same weaknesses show up time and again. In fact, MITRE has a list of the most frequently exploited weaknesses, these coding errors, and they found that some of these appear on the top list year after year. I think probably somewhere close to half of them have been on every list since they started compiling it, which was only in 2019. But still, people say, oh, software is developing so rapidly. Well, there are some evergreens among the weaknesses.

And this is what drove me to take what Sharma and Zipursky had said: focus on the products, focus on the features. And now we have this data from CISA and MITRE that actually looks at vulnerabilities as they are being exploited by the bad guys and links them to specific known weaknesses, which, as I say, recur again and again. And that's where I said, these should just be no-nos.

And Derek Bambauer, the law professor, argued three or four years ago, when he was thinking about critical infrastructure security from a systems operator perspective, for what he called “cybersecurity for idiots”: we should just focus on the really dumb stuff that people do time and again, like allowing 12345 as the password. Take that concept and apply it to software. We know what these vulnerabilities are, and we know what needs to be done to avoid them. That is, we know how to fix our code so that there's not a buffer overflow or a path traversal flaw. We know we shouldn't hard-code credentials such as default passwords.
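
[Illustration: a minimal, hypothetical Python sketch of the hard-coded default credential weakness mentioned above, alongside one safer first-boot setup pattern. The class, names, and flow are invented for this example and are not from the paper or the episode.]

# Hypothetical illustration of hard-coded credentials (CWE-798) and one alternative.
# The credential names and setup flow are invented for this sketch.
import secrets

# VULNERABLE: every shipped unit accepts the same factory password.
FACTORY_PASSWORD = "admin123"

def login_vulnerable(supplied: str) -> bool:
    return supplied == FACTORY_PASSWORD

# Safer pattern: no usable credential exists until the owner sets one during
# first-boot setup, and a random per-device secret gates that setup step.
class Device:
    def __init__(self) -> None:
        self.owner_password_hash = None           # must be set before login works
        self.setup_token = secrets.token_hex(16)  # unique per device, e.g. printed on the label

    def complete_setup(self, token: str, new_password_hash: str) -> None:
        # Constant-time comparison of the per-device setup token.
        if not secrets.compare_digest(token, self.setup_token):
            raise PermissionError("bad setup token")
        self.owner_password_hash = new_password_hash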

So that's what led me to conclude, let's go for the 80/20 rule. Let's go for the 80 percent of the stuff that we know and understand and can identify and define and put a label to (and NIST and MITRE have put a label to it; they have identified it). Let's make that the floor, and make those matters of, in essence, per se liability. If your product has this kind of flaw, it's the sort of thing that makes you slap your forehead and say, wow, how did they let that happen? Why did they let that happen? If your product has that kind of flaw, you should be liable for the consequences that flow from it. You should be liable for the harms that are suffered as a result of that flaw. And that's what I call the floor.

Stephanie Pell

So, your answer, I think, suggests that it is possible to define a floor. But it does raise the question, who should be the one defining the floor? Do you have thoughts on that?

Jim Dempsey

I do, and honestly, I would give it to CISA. I think they're halfway there already with the work they've done. Now, CISA is not a regulatory agency, and they don't want to be a regulatory agency, and I don't want to make them into a regulatory agency. That's why I would depend upon the private litigation system to actually implement this, as we do for so many other sectors. Or you could give it to the FTC to implement, etc.

But to define the floor, to go through this list of 934 weaknesses and identify those that are really concrete and can be put into this standard of care, I would look to CISA. I think they've matured remarkably as an agency. They did, I thought, a good job developing the cross-sector cybersecurity performance goals for network operators, which don't apply to software; they apply more to systems operators, critical infrastructure particularly. And so I would give that role to them.

I would say that it would probably have to be done through some sort of notice-and-comment process, not only for legal sustainability but also for industry buy-in. You would want to have some kind of public consultation process, some sort of notice process, for that. But I would hope that CISA would have the fortitude to stand up to industry, which clearly would want to dumb down the list. But that's where I would look, and I would have them start with what already exists in the MITRE list of common weaknesses and pluck out those that actually can be treated as a design defect.

Stephanie Pell

So I want to turn to the second part of your definition of liability, where you assert that, “a list of known coding weaknesses cannot suffice alone. Software is so complex and dynamic that a liability regime also needs to cover design flaws that are not so easily boiled down. For these, I propose a standard based on the defects analysis common to products liability law.”

So, we are moving past the floor into a defects analysis discussion. Can you unpack that a bit for us?

Jim Dempsey

So I was influenced here by what I was hearing from people with software coding knowledge. Over the past nine months or a year now, I've been helping to coordinate an informal working group of academics and others, including lawyers and law professors as well as computer science folks, and obviously some law professors who were actually coders in a former life. And we've been talking through this issue, just sharing ideas and exploring options. And again and again, I heard from the people on the software development side: look, software is way too dynamic, way too complex to boil down to just a list of dos and don’ts.

A list of dos and don'ts will not capture the universe of design defects. And if you're going to have a liability regime, it has to be flexible enough to take into account things that you haven't put on any particular list at any particular moment in time. And by the way, that's consistent with the law in other areas. We have literally tens of thousands, I think, of technical standards out there for all kinds of products, everything, again, from airplanes to toasters. But across the board in tort law, failure to comply with the technical standard that has been adopted and incorporated into law as the legal standard is evidence of liability, is evidence of a defect.

But complying with the entire standard doesn't insulate you from liability because plaintiffs, those who have been harmed, always have the opportunity to come in and say, you did everything in the prescribed standard, but you missed something else, which you should have found, which wasn't contemplated by the standard, but here it is. And if it otherwise fits the standard of reasonableness, then you're liable.

And so I said, well, I just could not think of any other place to go for that than the realm of products liability law: design defect, reasonable alternative, and reasonable safety. I just don't see any other concept out there that deals with everything in that zone of uncertainty, complexity, rapid development, things that are going to come up next week that we haven't thought of now.

And so for that, I would say, yes, go to the design defects analysis common to products liability law, which would mean case-by-case adjudication, but I see no way around that.

Stephanie Pell

But you also indicate that whatever the ultimate liability regime, you would want to draw on tort law principles so that liability attaches only when a design flaw is actually exploited and causes actionable damage to a particular person. Again, these are questions that could only be decided case-by-case. Why do you propose this particular limitation?

Jim Dempsey

Well, because I was worried. There was some sentiment, and some people in the working group argued, that there should be liability for any and all flaws. And I felt that would open up the floodgates to litigation of a kind we don't like, almost like a troll-type activity, where people are out there looking for flaws in software and seeking to get damages or some kind of remedy from the software developer, even if the flaw had not been exploited, even if the flaw had not led to any damage, any harm.

People argued, look, there are lots and lots and lots of flaws in software, and sometimes you will have flaws. But because of the way the software is going to be implemented, because of the uses that it's going to be put to, because of other ways that it's going to be configured, the flaw will never reasonably be exploited or is highly unlikely to be exploited. And you shouldn't force people to eliminate even the flaws that are irrelevant or that are not dangerous.

So the theory was, look, let's again make this a question of actual harm. It is supposed to be a compensatory scheme. It is supposed to use damages as an incentive, but it should be damages measured by actual harm. So, in a way, that's a give to industry, and sort of a big give to industry, but I think it's a reasonable one, because, again, I just didn't want to be proposing a system a little bit like the patent troll system, a gotcha kind of system where people are running around looking for cases to bring even if there's no harm otherwise.

Stephanie Pell

So then in the third part of your definition, you state that, “this liability should not be unlimited or unpredictable. As the Biden Administration's National Cybersecurity Strategy recognized, developers deserve a safe harbor that shields them from liability for hard-to-detect flaws above the floor. For that, I would turn to a set of robust coding practices.” Now, this is where you bring the concept of process back into your proposed liability regime. Can you talk a little bit more about the concern you're trying to address here?

Jim Dempsey

So in a way, I was taking the national strategy, the Biden strategy, at face value, and the drafters of the strategy had decided, in part as a matter of political reality, that there needed to be some sort of cap or ceiling or safe harbor, because industry would, honestly, just crush through its lobbying effort anything that could be framed as an unlimited risk of liability. And I actually agree with the administration, both in terms of their political judgment as well as in terms of the underlying substantive judgment that, again, liability should not be unlimited and completely unpredictable.

So that led me to grapple with the notion of what a safe harbor would look like. And for the safe harbor, I turned to process. And this notion, again, that there are ways to develop software that are more secure than others, and there are steps that you can take, processes that you can follow in the software development lifecycle, that, while not guaranteed to produce perfect software—and again, the goal isn't perfect software—and not guaranteed even to uncover or detect or eliminate serious flaws, are, if followed, more likely to produce better software. And those include various testing and design features, some of which are already in the NIST framework.

Earlier I cited stuff from the NIST Secure Software Development Framework that is pretty loosey-goosey, pretty fuzzy. There are other things in there that are quite specific. Things like obtaining provenance information, a software bill of materials, for each software component; that is, as you use, in your software development, stuff developed by other people, making sure you know what you're ingesting. Techniques called fuzz testing, which introduces sort of random parameters into software to see if you can break it. Static analysis of software code, for which there are tools that let you submit your software, as you're developing it, to analyses that may identify bugs.
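
[Illustration: a minimal, hypothetical Python sketch of the fuzz-testing idea described above: feed random inputs to a function and record any unexpected crash. The parse_record function is an invented stand-in for real code under test; it is not from the NIST framework or the paper.]

# Minimal sketch of fuzz testing: hammer a function with random bytes and
# record any unexpected exception. parse_record is a stand-in for real code.
import random

def parse_record(data: bytes) -> dict:
    # Invented example target: expects "key=value" pairs separated by ';'.
    text = data.decode("utf-8")           # may raise on random bytes
    pairs = [p.split("=") for p in text.split(";") if p]
    return {k: v for k, v in pairs}       # crashes if a pair lacks '='

def fuzz(runs: int = 10_000, seed: int = 0) -> list:
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_record(blob)
        except Exception as exc:          # each distinct crash is a lead on a bug
            failures.append((blob, repr(exc)))
    return failures

if __name__ == "__main__":
    print(f"{len(fuzz())} crashing inputs found")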

Now, we're not quite there. Again, the NIST framework, I think, falls back too much on generality, and again, it wasn't really designed to be a legal standard. Microsoft actually has software development guidance of its own in-house, which is pretty strict and pretty good in some ways. A guy named Carl Landwehr did some very significant work a couple of years ago on software design and his notion of “building codes for building code.”

So I think we need to come up with that framework. As I say, I don't think the NIST framework is quite it yet, but it can be sharpened. Again, I would fall back on CISA to do that. Landwehr, probably a decade or more ago, called on the software profession to do this, and they really didn't; no one picked up on it. Again, Microsoft did it in-house, and I assume others have in-house documents. But as a profession, the software profession didn't get together to do this jointly. I would welcome it if they were to do that. I think ultimately it would have to be looked at and blessed by some government agency, a little bit in the way that the critical infrastructure protection standards for the electrical power grid are developed by industry but then reviewed and approved, or sent back for more work, by FERC, the Federal Energy Regulatory Commission. So you could have sort of a joint public-private effort, but the final decision-maker has to be the governmental entity.

So that is going to require a little more work. And there's a lot of support for secure software development processes. But I would use them as the safe harbor, not as the floor.

Stephanie Pell

So one theme that I took from your proposal was not to let the perfect be the enemy of the good. Is that a fair statement?

Jim Dempsey

Yeah, a hundred percent. That sums up my own personal approach to policymaking after many, many years, a decade on Capitol Hill and many more years from the outside, and very much so here. The point is we need to act, in my view, quickly and make incremental progress. And that's why I think of the system of the floor. And again, I'm partly building on what Derek Bambauer did a couple of years ago.

Let's start with those clearly erroneous decisions, those clear flaws that we know about. Let's create the liability structure and incentivize companies to eliminate them. We can add to that as we go along. This doesn't have to be one and done. And just as CISA and MITRE revise their list of vulnerabilities and weaknesses regularly, that can easily be done. So let's start somewhere. Let's start relatively modestly. Start with the floor.

And I think we can incentivize the development of more secure software. We need to do better than we're doing now. There just has to be a better foundation for our whole economy, our government, and our personal lives than the buggy software that we're all subject to.

Stephanie Pell

We'll have to leave it there for today. Thank you so much for joining me.

Jim Dempsey

Fantastic. Thanks so much, Stephanie.

Stephanie Pell

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja Howell, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our music is performed by Sophia Yan.

As always, thank you for listening.


Stephanie Pell is a Fellow in Governance Studies at the Brookings Institution and a Senior Editor at Lawfare. Prior to joining Brookings, she was an Associate Professor and Cyber Ethics Fellow at West Point’s Army Cyber Institute, with a joint appointment to the Department of English and Philosophy. Prior to joining West Point’s faculty, Stephanie served as a Majority Counsel to the House Judiciary Committee. She was also a federal prosecutor for over fourteen years, working as a Senior Counsel to the Deputy Attorney General, as a Counsel to the Assistant Attorney General of the National Security Division, and as an Assistant U.S. Attorney in the U.S. Attorney’s Office for the Southern District of Florida.
Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012-2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
