Moving Slow and Fixing Things
Silicon Valley, and the U.S. tech sector more broadly, has changed the world in part by embracing a “move fast and break things” mentality that Mark Zuckerberg popularized but that pervaded the industry long before he founded Facemash in his Harvard dorm room. Consider that Microsoft introduced “Patch Tuesday” in 2003, beginning a monthly process of updating buggy code that has continued for more than 20 years.
While it is true that the tech sector has attempted to break with such a reactive and flippant response to security concerns, cyberattacks continue at an alarming rate. In fact, aided by the rapid evolution of artificial intelligence (AI), ransomware is getting easier to launch and is claiming more victims than ever before. According to reporting from The Hill, criminals stole more than $1 billion from U.S. organizations in 2023, the highest amount on record, while the number of victims grew 70 percent over 2022.
As a result, there are growing calls from regulators around the world to change the risk equation. An example is the 2023 U.S. National Cybersecurity Strategy, which argues that “[w]e must hold the stewards of our data accountable for the protection of personal data; drive the development of more secure connected devices; and reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.” This sentiment represents nothing less than a repudiation of the “Patch Tuesday” mentality and with it the willingness to put the onus on end users for the cybersecurity failings of software vendors. The Biden administration, instead, is promoting a view that shifts “liability onto those entities that fail to take reasonable precautions to secure their software.”
What exact form such liability should take is up for debate. Products liability law, with its defect model, is one clear option, and courts across the United States have already been applying it, using both strict liability and risk-utility framings, in a variety of cases, including litigation related to accidents involving Teslas. In considering this idea, we argue that it is important to learn from the European Union context; the EU has long been a global leader in tech governance, even at the risk of harming innovation. Most recently, the EU has agreed to reform its Product Liability Directive to include software. Combined with other developments, this reform is crystallizing a new liability regime built on accountability, transparency, and secure-by-design concepts. This new regime has major implications both for U.S. firms operating in Europe and for U.S. policymakers charting a road ahead.
The EU’s various levers to shape software liability, and more broadly the privacy and cybersecurity landscape, are instructive in charting possible paths ahead, and each deserves regime-effectiveness research to gauge its utility. These include:
- Extending Products Liability to Include Cybersecurity Failings: Following the EU’s lead in expanding the definition of “product” to include software and its updates, U.S. policymakers could explore extending traditional products liability to cover losses due to cybersecurity breaches. This would align incentives for businesses to maintain robust cybersecurity practices and offer clearer legal recourse for consumers affected by such failings.
- Adopting a “Secure by Design” Approach: New EU legislation, such as the Cyber Resilience Act, mandates that products be secure from the outset. U.S. policy could benefit from similar regulations that require cybersecurity to be an integral part of the design process for all digital products. This would shift some responsibility away from end users to manufacturers, promoting a proactive rather than reactive approach to cybersecurity.
- Enhancing Transparency and Accountability Through Regulatory Frameworks: Inspired by the EU’s comprehensive regulatory measures like the General Data Protection Regulation (GDPR) and the AI Act discussed below, the U.S. could benefit from creating or strengthening frameworks that enforce transparency and accountability in data handling and cybersecurity. Building on the recent guidance from the U.S. Securities and Exchange Commission that requires publicly traded companies to report material cybersecurity incidents within four days, this could include potential requirements for risk assessments, incident disclosures, and a systematic approach to managing cyber risks across all sectors, not just critical infrastructure.
Each of these themes is explored in turn.
Extending Products Liability to Include Cybersecurity Failings
The EU has taken a more detailed, and broader, approach to imposing liability on software developers than has commonly been proposed in the U.S. context.
In recognition that many products, from toasters to cars, have gotten increasingly “smart,” the EU began a process in 2022 to update its products liability regime, which had been in place and largely unchanged since 1985. The reforms agreed to under the Product Liability Directive include an expansion of what is considered a “product” to cover not just hardware but also stand-alone software, such as firmware, applications, and computer programs, along with AI systems. Exceptions apply to certain free and open-source software, which has long been an area of concern for proponents of more robust software liability regimes.
Relatedly, the concept of “defect” has been expanded to include cybersecurity vulnerabilities, including a failure to patch. What counts as “reasonable” cybersecurity in this context (for instance, whether a product provides the level of security a user can legitimately expect) builds on other EU acts and directives, discussed below.
Recoverable damages have also been broadened to include the destruction or corruption of data, along with mental health impacts following a breach. Covered businesses can also include internet platforms, the intent being that there is always an “EU-based business that can be held liable.” Even resellers who substantially modify products and put them back into the stream of commerce may be held liable. It is now also easier for Europeans to prove their claims, thanks to the introduction of a more robust, U.S.-style discovery process and class actions, an eased burden of proof on claimants, and an extension of the covered period from 10 to 25 years in some cases.
Although the EU has long been a global leader on data governance and products liability, the same has not necessarily been the case for cybersecurity—particularly pertaining to critical infrastructure protection. In 2016, the EU worked to change that through the introduction of the Network and Information Security (NIS) Directive, which was updated in 2023 as NIS2.
Among other things, NIS2 expanded the scope of coverage to new “essential” and “important” sectors, including cloud and digital marketplace providers, and required EU member states to designate Computer Security Incident Response Teams (CSIRTs) and to join the Cooperation Group, in essence an international information sharing and analysis center, or ISAC. Covered businesses must take “appropriate” steps to safeguard their networks, secure their supply chains, and notify national authorities in the event of a breach.
In sum, NIS2 regulates software in a manner more familiar in the U.S. context, relying on information sharing and a risk management approach to standardize common activities like incident reporting.
Further, the European Union’s Cybersecurity Act, which took effect in June 2019, establishes a comprehensive framework for certifying the cybersecurity of information and communications technology products, services, and processes. The regulation aims to bolster trust in the digital market by ensuring that certified products and services adhere to standardized cybersecurity criteria. The certification scheme is voluntary, but it affects manufacturers and service providers by enabling them to demonstrate compliance with high levels of cybersecurity, thereby enhancing market perception and consumer trust in their offerings. The act fits within the broader EU strategy of leveraging regulatory measures over direct state control, epitomized by the role of the European Union Agency for Cybersecurity (ENISA). ENISA has become a major entity in shaping and supporting the cybersecurity landscape across the EU, despite facing challenges in establishing its authority and influence.
From a products liability perspective, the Cybersecurity Act shifts the landscape by integrating cybersecurity into the core criteria for product safety and performance evaluations. By adhering to established certification standards, companies not only mitigate the risks of cyber threats but also reduce potential legal liabilities associated with cybersecurity failures. The act encourages transparency and accountability in cybersecurity practices, pushing companies to proactively manage and disclose cyber risks, which can influence their liability in cases of cyber breaches.
This approach aligns with the EU’s broader regulatory security state model, which emphasizes governance through regulation and expertise rather than through direct governmental intervention. This model is characterized by the deployment of indirect regulatory tools and reliance on the expertise and performance of various stakeholders to manage security issues, rather than solely depending on direct state power and authority. The voluntary nature of the standards has posed challenges, however, leading to uneven adoption and leaving vulnerabilities in products that do not meet these standards and the minimum security objectives set for organizations. Nevertheless, some studies note that the act has at least helped the European Union act in a coordinated fashion.
Adopting a “Secure by Design” Approach
In addition to the proposal to include software within the scope of products liability legislation, the EU has introduced unified cybersecurity requirements for products sold within the common market, including pure software products. The Cyber Resilience Act (CRA), a forthcoming EU regulation, combines detailed cybersecurity requirements, such as patch management and secure-by-design principles, with a comprehensive liability regime. The CRA is more comprehensive than California’s “Internet of Things” (IoT) security law: Its cybersecurity requirements go far beyond California’s reasonable-security-feature and password requirements, and it applies to both IoT and software products.
Fundamentally, the CRA requires that products be introduced to the market with all known vulnerabilities patched and that they have been developed on a “secure by design” basis. Developers are also required to conduct and maintain a cybersecurity risk assessment, provide a software bill of materials (SBOM) listing the third-party software components used in their products (a minimal sketch of such an inventory follows below), and ensure security updates are available for a period of at least five years. Developers and manufacturers of ordinary products can self-certify conformity with the legislation, while “important” and “critical” products will require a more in-depth assessment and an independent conformity assessment, respectively.
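To make the bill-of-materials requirement concrete, here is a minimal sketch, in Python, of the kind of component inventory a developer might generate, using the CycloneDX format (one widely used SBOM standard; SPDX is another). The component names and versions are hypothetical, chosen purely for illustration.

```python
import json

# A minimal, hypothetical CycloneDX-style SBOM: an inventory of the
# third-party components shipped inside a product. Names and versions
# here are illustrative only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
        {
            "type": "library",
            "name": "zlib",
            "version": "1.3.1",
            "purl": "pkg:generic/zlib@1.3.1",
        },
    ],
}

# Write the inventory to disk so it can ship alongside the product.
with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)
```

An inventory of this sort is what allows regulators, customers, and the vendor itself to quickly determine whether a product contains a component with a newly disclosed vulnerability.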
Penalties for noncompliance with the CRA follow the model used in the GDPR: A breach of the core requirements can result in a fine of up to 15 million euros or 2.5 percent of total revenue (whichever is larger), while other breaches can result in a fine of up to 10 million euros or 2 percent of total revenue. However, there is no mechanism under the act for a complainant to enforce the CRA directly; complainants must instead petition their local regulator if they believe the requirements have not been met.
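The “whichever is larger” mechanics of these penalty caps are simple to state precisely. Below is a minimal illustrative sketch in Python; the revenue figure is hypothetical.

```python
def cra_max_fine(total_revenue_eur: float, core_breach: bool) -> float:
    """Upper bound of a CRA fine: the greater of a fixed amount or a
    percentage of total revenue, per the tiers described above."""
    fixed_cap, share = (15_000_000, 0.025) if core_breach else (10_000_000, 0.02)
    return max(fixed_cap, share * total_revenue_eur)

# Hypothetical firm with 2 billion euros in revenue breaching a core
# requirement: 2.5 percent of revenue (50 million euros) exceeds the
# 15 million euro fixed amount, so the percentage governs.
print(cra_max_fine(2_000_000_000, core_breach=True))  # 50000000.0
```

The AI Act’s penalty tiers, discussed below, use the same greater-of structure, with the twist that the lower of the two figures applies to small and medium-sized enterprises.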
Enhancing Transparency and Accountability Through Regulatory Frameworks
The EU’s AI Act introduces a regulatory framework, in the name of safety and transparency, to protect users from harms caused by the failure of an AI system. The act classifies AI systems into three categories (prohibited, high-risk, and non-high-risk) and is reminiscent of the CRA in its comprehensive scope. Prohibited applications, such as those involving subliminal techniques or social scoring, are banned within the EU. High-risk applications, which include medical devices and credit scoring systems, must adhere to stringent requirements, including maintaining a risk management system, ensuring human oversight, and registering in the EU’s database of high-risk AI systems. Non-high-risk applications face minimal to no regulatory obligations.
The act also addresses general-purpose AI models, like foundation and large language models, imposing obligations similar to those for high-risk systems. These include maintaining a copyright policy and publishing a summary of the training data. Enforcement is managed by domestic regulators and coordinated at the EU level by the newly established European Artificial Intelligence Board and the European AI Office, where complaints can also be lodged against noncompliant AI providers.
There are penalties for noncompliance. Violations involving prohibited AI can result in fines of up to 30.3 million euros or 7 percent of total revenue. High-risk AI breaches may lead to fines of up to 15.14 million euros or 3 percent of total revenue, and providing misleading information to regulators can attract fines of up to 7.5 million euros or 1.5 percent of total revenue. Whether the higher or the lower figure applies depends on whether the entity is a large corporation or a small or medium-sized enterprise. One of the major limitations of the EU’s AI liability regime, however, lies in its broad categorization of risk. In reality, risk has many dimensions, to say nothing of how fairness in AI systems should be defined. In particular, “explainability” and “interpretability” of AI systems are often used interchangeably, and such loose language will make it difficult to enforce the act and promote trustworthy AI practices.
In the event that a user is harmed following their use of a high-risk AI system, they will be able to benefit from a proposed companion directive, which introduces additional civil liability requirements for AI systems. Under the proposed directive, the user will be able to seek a court order compelling the provider of the AI system to disclose relevant evidence relating to the suspected harm.
However, the claimant will be required to demonstrate to the relevant court that the provider has failed to comply with its obligations under the AI Act in order for their claim to succeed. Harm that occurs to the claimant despite the provider meeting its obligations under the AI Act is not recoverable under this legislation.
This approach, as is the case with data privacy in the EU context, is far more comprehensive than the Biden administration’s AI executive order and sets out accountability and transparency rules that are already shaping global AI governance.
Like the AI Act, the General Data Protection Regulation is comprehensive in scope. It came into effect in the European Union on May 25, 2018, aiming to give individuals sovereignty over their personal data and to simplify the regulatory environment for business. In particular, the GDPR requires that companies that process personal data be accountable for handling it securely and responsibly. This includes ensuring that data processing is lawful, fair, transparent, and limited to the purposes for which the data was collected. Product and service providers must disclose their data processing practices and, in many cases, seek explicit consent from users, making them directly liable for noncompliance. The GDPR also gives individuals the right to demand that a company delete their personal data or transfer it to another provider.
Although there are penalties for noncompliance for both primary data controllers and potential third parties, it has been very difficult to enforce the regulation and prove liability. For example, the European Union’s own internal analysis has explained how international data cooperation has been challenging due to factors like “lack of practice, shortcomings in the legal framework, and problems in producing evidence.” Furthermore, since consumers are often searching for specific information and have no other options, they simply consent to a site’s disclaimers to gain entry and never think twice about the data that was shared or the possibility of filing a lawsuit against a company for potential damages from, say, a data breach.
Furthermore, empirical studies generally point toward a negative effect of the GDPR on economic activity and innovation. Some studies have found that the GDPR led to a decline in new venture funding and new ventures, particularly in more data-intensive and business-to-consumer sectors. Others found that companies exposed to the GDPR incurred an 8 percent reduction in profits and a 2 percent decrease in sales, concentrated particularly among small and medium-sized enterprises. There is additional evidence that the GDPR led to a 15 percent decline in web traffic and a decrease in engagement rates on websites.
Finally, the Digital Services Act (DSA) “regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms.” It took effect in stages beginning in 2022 and promises risk reduction, democratic oversight, and improvement of online rights. Articles 6(1), 9(1), and 22 of the DSA could be significant after cyberattacks, while Articles 17 through 21 could provide crucial protections for users of online platforms whose accounts are suspended or terminated due to intrusions or misuse attributable to cyber threats. Article 9(1) obliges certain platforms to remove illegal material upon being served with notice of specific items by “judicial or administrative authorities.” Regarding online dangers other than intellectual property infringement and incitement to violence, Recital 12 of the DSA references “stalking” and “the unlawful non-consensual sharing of private images.”
In the United States, the law on loss of access to online accounts remains a patchwork, even in cases involving data breaches covered by federal statutes. While some courts allow breach of express or implied contract as a theory of recovery, others may not, and arbitration clauses are a formidable challenge in some cases. Articles 20(4) and 21 of the DSA strengthen the right to use online platforms and not to suffer arbitrary deprivation of access.
Settlements of class actions like those involving iPhone battery life and Google Chrome incognito mode suggest that claims of defective software and misleading marketing of technology already have traction in U.S. courts, even without further reforms. Products liability and data security litigation remains viable given the similarity of many U.S. states’ laws and the intention of the federal class-action procedure to make asserting small-dollar claims economical.
Lessons for Policymakers
A natural question is whether Europe has taken a more active regulatory approach because its technology sector is much smaller. While a smaller technology sector inevitably means different political economy dynamics, including lower returns to lobbying, there is nonetheless a growing recognition that the absence of clearer guidelines and regulations is a lose-lose situation in the long run. For instance, a voluminous body of empirical literature documents a rise in concentration and market power, particularly among digital intermediaries, that could be attributed to lax and ambiguous guidelines. Only recently did the U.S. Securities and Exchange Commission introduce guidance requiring that public companies report data breaches within four business days of determining that an incident is material.
The EU’s efforts to extend products liability law to software, adopt a secure by design approach similar to that called for in the 2023 U.S. National Cybersecurity Strategy, and enhance transparency and accountability across the digital ecosystem have solidified its place as a global leader in tech governance.
Several of these steps could be taken at once, perhaps as part of the proposed American Privacy Rights Act, which would give the Federal Trade Commission enhanced powers to investigate deceptive or defective products and would establish baseline privacy and cybersecurity expectations for American consumers.
At the highest level, if a products liability approach in the U.S. context is to be successful, Congress would need to introduce a package of reforms that would address various barriers to recovery, including the economic loss doctrine and the enforcement of liability waivers. Moreover, the array of EU initiatives surveyed above still give rise to uncertainty, such as a potential cap of 70 million euros on all claims for a specific defective item. And costs should not be underestimated—one U.S. House of Representatives Oversight and Accountability Committee report claimed 20.4 to 46.4 billion euros in new compliance and operation costs introduced by the DSA and the GDPR. Still, such estimates should be weighed against the staggering economic harm introduced by software vulnerabilities discussed above.
A best-case scenario would be for policymakers on both sides of the Atlantic, and beyond, to come together and find common ground to encourage the convergence of baseline software security expectations. This process could either be kicked off through a special event, such as a Global Responsible Software Summit modeled after recent ransomware and democracy summits, or be added to an upcoming major gathering.
No nation is an island in cyberspace, no matter how much some wish they were. How leading cyber powers—including the EU and the U.S.—approach the issue of software liability will make worldwide ripples, which, depending on how these interventions are crafted, could turn into a tsunami.