Redefining the Standard of Human Oversight for AI Negligence
By Nanda Min Htin - Edited by Shriya Srikanth
Nanda is an LL.B. candidate at the Singapore Management University (SMU). He serves as a Steering Committee member of the Asia-Pacific Legal Innovation and Technology Association (ALITA) and a Senior Advisor of the SMU Legal Innovation & Technology club (SMU-LIT).
We have moved rapidly from an era of deterministic, rule-based algorithms to one of probabilistic, deep learning systems that exhibit emergent behaviors. In response to the opacity and autonomy of these systems, global regulators have instinctively reached for a familiar mechanism to govern digital risk: the human being.
The logic is intuitive. If a machine errs, a human should be present to intervene, correct the course, and absorb the responsibility, while being incentivized to implement safety mechanisms. This approach is reflected in instruments ranging from the EU AI Act’s mandate for natural persons to oversee high-risk systems [1] to a California legislative proposal requiring real-time monitoring and human approval before AI executes actions in critical infrastructure [2].
While these laws mandate the presence of oversight, they fail to operationalize the capacity for oversight. They presume that the human in the loop possesses the cognitive bandwidth and technical insight to effectively monitor systems operating at superhuman capabilities. As AI systems become more complex, this creates a dangerous accountability gap between legal requirements and technical reality.
This article interrogates human oversight through the Learned Hand formula (B < P * L), a cornerstone of negligence law used to define the standard of care [3]. While human oversight is a starting point for AI safety, mere human presence is insufficient.
The article proposes a three-pillar framework to clarify the legal standard for human oversight:
- Impose a duty on the AI deployer to implement genuine human-AI collaboration frameworks.
- Impose a duty on the AI developer to demonstrate technical robustness.
- Impose on both a duty for post-market monitoring and failure reporting.
The Problem of the Human in the Loop
Current legal frameworks mandate that a natural person be "in the loop" to approve high-stakes decisions made by diagnostic algorithms in healthcare, predictive policing tools, semi-autonomous vehicles, and other AI systems. This assumes that the human is capable of intervening effectively. That premise is false.
Automation Bias and Vigilance Decrement
In a 2016 study, participants were guided by a robot during a simulated fire emergency. [4] Despite the robot having malfunctioned earlier in the experiment, every participant followed it toward a blocked exit rather than using a clearly marked safe route.
This reflects automation bias, a phenomenon where humans systematically over-trust automated decisions even in the presence of contradictory evidence. [5] In high-pressure environments like emergency rooms or air traffic control centers, humans default to the automated output, creating a single point of failure disguised as a dual-verification system.
Compounding automation bias is vigilance decrement, a deterioration in the ability to detect anomalies during passive monitoring tasks.
The 2021 crash of Sriwijaya Air Flight 182 illustrates this. [6] When the autothrottle malfunctioned, the pilots, lulled by the system's reliability, failed to monitor the engine instruments for several minutes. By the time the autopilot disengaged, the crew was too cognitively disengaged to recover from the fatal dive.
The human brain is evolutionarily wired for active engagement, not for prolonged periods of passive supervision. When an AI system performs correctly most of the time, the operator disengages, a state known as out-of-the-loop unfamiliarity. [7] The law can mandate that a human intervene at the edge-case moment when the system fails, but that is precisely when the operator is most likely to be cognitively disengaged and least capable of reacting.
The Moral Crumple Zone and the Liability Sponge
Much like the crumple zone of an automobile during a crash, the human operator in an AI system is positioned to absorb the legal, moral, and reputational impact of a system failure. This phenomenon effectively turns the operator into a "liability sponge." [8]
This was starkly illustrated in the 2018 fatal collision involving a self-driving Uber vehicle in Arizona, caused by an automated system that failed to correctly classify a pedestrian. [9] While the legal and public discourse largely fixated on the distraction of the safety driver (streaming The Voice on Hulu), this behavior illustrates the 'vigilance decrement' in action: when tasked with passive monitoring of a largely reliable system, the human brain disengages and seeks stimulation. While Uber reached a civil settlement with the victim's family, the driver was charged with negligent homicide [10] and became a liability sponge that absorbed blame for the failures of the automation design.
The moral crumple zone is pervasive in administrative decision-making as well. Social workers using predictive algorithms for child welfare or housing benefits are often the first to be blamed when the system produces biased outcomes.[11] They are caught in a catch-22: if they override the AI and a tragedy occurs, they are blamed for ignoring the data; if they follow the AI and it discriminates, they are blamed for enforcing bias.
When a regulator mandates a task that a human cannot reliably perform, they are effectively mandating a breach of duty and setting the industry up to fail. This creates a regulatory trap: the mere act of operating the system becomes a constructive breach of duty because the human cannot meaningfully satisfy the statutory requirement of control or oversight. Inevitable cognitive limitations might translate to automatic legal violations and cement the human's role as the liability sponge for systemic defects.
From Regulatory Compliance to Tort Liability
While regulations like the EU AI Act establish the statutory duty to oversee AI, the tort of negligence remains the primary legal mechanism for enforcing this duty and compensating victims when harm occurs. Although doctrines such as strict liability offer a route for manufacturing defects, negligence is the essential legal backstop for operational failures and the improper deployment of otherwise functional systems. The premise of negligence is that there is a duty to exercise reasonable care to avoid causing harm.
To determine what is “reasonable”, courts often rely on the Learned Hand formula (B < P * L). A party is negligent if the burden of precautions (B) is less than the probability of the harm (P) multiplied by the gravity of the loss (L). Granted, scholars have advocated for other non-economic options to determine negligence, such as corrective justice theory. Regardless, this analysis relies on the Learned Hand formula because it provides a popular, quantifiable framework capable of translating abstract corporate duties into concrete resource allocation decisions.
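To make the calculus concrete, the following minimal sketch (with purely hypothetical figures) shows how the Hand comparison turns a precaution decision into arithmetic:

```python
# A minimal sketch of the Learned Hand comparison, B < P * L,
# using hypothetical figures for a single precaution decision.

def is_negligent(burden: float, probability: float, loss: float) -> bool:
    """A party is negligent if the burden of the precaution (B) is less
    than the expected harm: probability of harm (P) times gravity of loss (L)."""
    return burden < probability * loss

# Hypothetical example: a $50,000 safety review (B) weighed against a 1%
# chance (P) of a $10,000,000 harm (L). The expected harm is $100,000,
# so forgoing the review would count as a breach under the formula.
print(is_negligent(burden=50_000, probability=0.01, loss=10_000_000))  # True
```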
However, AI systems do not fail in the predictable, linear manner of mechanical tools. They fail probabilistically, opaquely, and often at scale.
The Undefined Burden (B) of the Black Box
In the context of AI oversight, B represents the effort required by the human operator, the deployer or the developer to prevent the system from realizing a foreseeable (not just any) risk.
For the Deployer: If a hospital deploys an AI tool to help doctors detect never-before-seen tumors, B represents, among other things, the time and cognitive effort required to verify the AI's output. However, deep learning models often identify patterns in high-dimensional data (e.g., subtle pixel correlations) that are structurally invisible to human vision. If the AI is deployed precisely because it perceives what the human cannot, asking the clinician to verify the AI is asking them to perform a task that exceeds their sensory capabilities. B effectively approaches infinity, rendering the negligence calculus void; or, as some argue, the clinician is simply not negligent, and thus not liable, because the burden is so high.
For the Developer: In traditional software, testing and debugging code is manageable via line-by-line inspection. In neural networks, programming is replaced by training. The logic of the system is not written; it is induced. Debugging a neural network with billions of parameters to ensure it never acts erroneously in edge cases is extremely challenging.[12]
Because B is largely undefined, courts struggle to provide uniform guidance on what a "reasonable" operator or developer should have done. This ambiguity often defaults to a judgment based on hindsight bias: if an accident occurred, the precaution must not have been taken, therefore negligence exists. This circular reasoning fails to provide ex ante guidance to industry.
While hindsight bias plagues all negligence litigation, it is uniquely pernicious in AI because of the “interpretability gap”. In traditional engineering, experts can reconstruct the mechanism of failure (e.g., a sheared bolt) to establish what a reasonable professional could have foreseen. With deep learning, the mechanism of failure is often inscrutable even to the developers. Without a clear causal chain, courts are forced to rely on the outcome to adjudicate fault, effectively collapsing the distinction between an unforeseeable glitch and negligent design.
The Argument for Negligence over Strict Liability
Given the apparent failure of the Hand formula, scholars have argued for a shift to strict liability. [13] If AI is treated as a product with a defect, the manufacturer is liable regardless of fault. This forces the manufacturer to internalize the costs of harm (P * L) and ostensibly incentivizes cost-justified safety precautions (B) to minimize their total financial exposure.
It is beyond the scope of this article to pit the merits of negligence and strict liability against each other. That said, significant drawbacks of strict liability strengthen the case for a re-engineered negligence framework. This in turn necessitates clarifying the standard of care in the context of human oversight.
For one, strict liability for unforeseeable long-tail risks functions as an onerous tax on innovation. [14] By holding developers liable for every downstream misuse, it could strangle open-source development [15]. While critics argue that any liability regime may dampen innovation, negligence differs by offering a 'safe harbor': it limits liability to those who fail to exercise 'reasonable' care. Moreover, strict liability is ill-suited for systems that evolve post-deployment. Unlike static products, AI models change; negligence accommodates this by imposing a continuous duty of monitoring rather than a one-time defect test.
Strict liability is not a panacea. Indeed, tort law has historically adapted to technological shifts by initially imposing strict liability to manage unknown risks, then drifting toward negligence as risks become understood and manageable.[16] Negligence remains a compelling liability framework. The pivotal task, then, is to redefine the "Burden of Precaution" (B) from an abstract standard into a set of verifiable processes.
To operationalize this redefined burden, the following framework establishes three distinct pillars of duty. These pillars collectively transform the abstract 'standard of care' into concrete, verifiable obligations for industry.
Pillar 1: Deployer’s Duty to Implement Human-AI Collaboration
The first pillar operationalizes the shift from passive observation to active partnership between the human and the AI. Under this new legal standard, a deployer would be negligent if they place a human in the loop without implementing Human-Systems Integration (HSI) frameworks [17], especially for high-risk use cases. The law should demand that the "loop", meaning the interfaces through which humans manage AI systems, be designed around human cognitive capabilities. Codifying the adoption of HSI frameworks as a baseline for compliance makes the negligence calculus more concrete: if a deployer skips these design steps to save money (low B) and harm occurs (high L), the breach of duty will be mathematically evident.
Multiple HSI frameworks already exist. [18], [19] The Partnership on AI's Human-AI Collaboration Framework provides a robust, 36-question heuristic for designing and assessing these symbiotic systems. This framework forces designers and deployers to move beyond "stop" buttons and ask fundamental questions about the nature of the collaboration (e.g., "How much agency does the human have?") and the situation (e.g., "Is the human likely aware that they are interacting with an AI system?"). [20]
The legal definition of "reasonable care" must also distinguish between meaningful oversight and "warm body" roles where an operator lacks influence. Crootof, Kaminski, and Price have proposed a taxonomy of human roles to help courts adjudicate whether a human was truly "in the loop." [21] In our context, this includes:
Friction Roles: Prioritizing speed over accuracy invites negligence. "Friction roles" force the human to deliberately slow down the AI’s decision-making process by performing active tasks to prevent automation bias and maintain vigilance. For instance, a sentencing system might require judges to input their reasoning before seeing the risk score to prevent anchoring. [22] If the scores diverge, the system prompts a reconciliation process (a minimal sketch of such friction and handrail mechanisms appears after this taxonomy). A system designed without friction that leads to operator complacency in high-stakes domains should be viewed as defectively designed.
Resilience Roles: The human must be viewed as a source of resilience, a fail-safe for when the algorithm encounters edge cases. However, the system must provide "cognitive handrails" rather than opaque outputs to help the human rapidly assess the validity of an output. For instance, when an AI is used to make loan disbursement decisions, deployers must utilize systems that offer:
- Uncertainty Quantification: Clearly displaying when the model is operating outside its training distribution.
- Saliency Maps: Visualizing the data points the model is prioritizing.
The deployment of a "black box" without these interpretive aids in a high-stakes environment constitutes a breach of duty.
Training for Failure: It is insufficient to train operators on how to use the AI when it is functioning as intended. They must be trained on how to react when it fails. As the Sriwijaya Air disaster illustrates, pilots who are not drilled on specific symptoms of failure (i.e., uneven engine thrust) are often unable to recover when the system inevitably disengages. Likewise, a hospital using diagnostic AI would be liable if it did not provide specific simulation training for when the AI hallucinates or exhibits bias. This mirrors the Federal Railroad Administration's regulations for train control systems, which require training on the limitations of automation to prevent over-reliance. [23] Operators must be exposed to adversarial scenarios during training to build skepticism. The deployer's failure to invest in this "failure training" represents negligence in the preparation of the workforce.
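The sketch below, referenced in the friction-role discussion above, illustrates what these roles could look like in software. It is a hypothetical decision-review loop, not any existing product: the operator must record an independent estimate and rationale before the model's score is revealed (friction against anchoring), an out-of-distribution warning serves as a cognitive handrail, and divergence between human and machine triggers reconciliation. All names and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class ReviewOutcome:
    operator_rationale: str
    operator_estimate: float
    model_score: float
    ood_warning: bool
    requires_reconciliation: bool

def review_decision(case_features: dict,
                    model_score: float,
                    ood_distance: float,
                    elicit_operator_judgment: Callable[[dict], Tuple[float, str]],
                    divergence_threshold: float = 0.2,
                    ood_threshold: float = 3.0) -> ReviewOutcome:
    """Hypothetical friction-role workflow for a high-stakes decision."""
    # 1. Friction: collect the human's independent estimate and written
    #    rationale *before* revealing the model's score.
    operator_estimate, rationale = elicit_operator_judgment(case_features)

    # 2. Cognitive handrail: warn when the input lies far outside the
    #    model's training distribution (threshold is illustrative).
    ood_warning = ood_distance > ood_threshold

    # 3. Only now is the model score compared with the human estimate;
    #    large divergence or an OOD warning forces reconciliation.
    requires_reconciliation = (
        abs(operator_estimate - model_score) > divergence_threshold or ood_warning
    )
    return ReviewOutcome(rationale, operator_estimate, model_score,
                         ood_warning, requires_reconciliation)
```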
Pillar 2: Developer’s Duty to Demonstrate Technical Robustness
If the human operator cannot fully close the safety gap, the evidentiary burden in a negligence claim must shift. The burden should lie on the developer to prove they have provided a tool that is capable of being overseen. One way is to require developers to comply with technical standards that make models inherently understandable and that build in the necessary signals for human intervention.
To bring an action in negligence, the burden currently lies on the plaintiff to produce evidence of harm caused by some obscure mechanism of an AI system. In contrast, Pillar 2 shifts the burden to the developer to prove robustness via compliance with established technical standards. Rather than requiring the plaintiff to prove a specific breach, the occurrence of an unexplainable harm should trigger a rebuttable presumption of negligence. This presumption holds unless the developer can demonstrate that they adhered to a standardized risk management process. This clarifies the "Burden" (B) in the BPL equation: B is the cost of compliance with the standard.
This shift in presumption is also enshrined in the EU’s Product Liability Directive. [25] Crucially, the Directive introduces a presumption that if the technical complexity of an AI system makes it excessively difficult for a victim to explain how the AI caused the harm, the court must presume the causal link exists. This shifts the legal burden to the developer to rebut that presumption by producing evidence, such as logging data, proving they adhered to safety standards. In the US, absent a statutory shift like the EU Directive, courts would have to rely on the common law, informed by existing guidelines, to shift the burden.
This aligns with Calabresi’s cheapest cost avoider principle, which dictates that liability should rest with the party capable of preventing the harm with the least expenditure of resources. [24] The cost for a developer to implement interpretability features during training (B) is significantly lower than the aggregated cost of oversight that would otherwise fall on every downstream model user. Thus, a developer acts negligently when they offload the cost of safety onto the downstream user by shipping a "black box" that defies reasonable human supervision.
The NIST AI Risk Management Framework (RMF) [26] and ISO/IEC 42001 [27] provide a systematic approach to managing risks and improving "robustness and reliability". The EU Ethics Guidelines for Trustworthy AI likewise identify "Technical Robustness and Safety" as one of seven key requirements. [28] Courts should treat adherence to any of these technical standards as the minimum legal standard of care. For instance, if an AI causes harm, the developer is presumed negligent unless they can produce the RMF documentation proving they followed every step of the "GOVERN" and "MEASURE" functions.
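How such documentation might be kept in a producible, litigation-ready form can be sketched as follows. The record fields below are illustrative assumptions, not a schema prescribed by NIST or ISO; the point is simply that the Burden (B) of compliance becomes a concrete, auditable artifact rather than an abstraction.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlRecord:
    """One documented risk-management activity, retained as evidence that a
    recognized standard was actually followed. Field names are illustrative."""
    framework: str       # e.g., "NIST AI RMF 1.0" or "ISO/IEC 42001"
    function: str        # e.g., "GOVERN", "MEASURE"
    activity: str        # what was done (test, review, sign-off)
    evidence_uri: str    # pointer to logs, test reports, or approvals
    completed_on: date

@dataclass
class ComplianceFile:
    system_name: str
    records: list[ControlRecord] = field(default_factory=list)

    def covers(self, function: str) -> bool:
        # A court or auditor can ask whether any documented evidence
        # exists for a given function of the chosen framework.
        return any(r.function == function for r in self.records)
```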
Courts need not adjudicate the superiority of one framework over another to set a standard. Instead, they should adopt a “functional equivalence” or “safe harbor” approach: adherence to any widely recognized, consensus-based standard constitutes prima facie evidence of reasonable care. This approach has already been operationalized in legislation like the Texas Responsible AI Governance Act, which creates a safe harbor from liability for developers who comply with the NIST AI RMF or other "nationally recognized" frameworks. [29] Similarly, the NTIA advocates for liability safe harbors to incentivize the adoption of robust safety audits. [30] By recognizing compliance with any rigorous standard as a defense, courts allow the legal standard to remain flexible as technical best practices evolve.
Pillar 3: Deployer and Developer’s Shared Duty for Post-Market Monitoring and Failure Reporting
AI is a technology that evolves with use. Thus, Pillar 3 proposes a regime of continuous monitoring and feedback.
Ongoing Duty of Care
One element is to impose a dynamic standard of care. As new failure modes are discovered (e.g., a new LLM jailbreak), the Burden of Precaution (B) shifts. Developers should have a continuing duty to update their models. Failure to patch a known vulnerability discovered after deployment would constitute negligence, akin to a car manufacturer failing to issue a recall. Because AI remediation is often achievable via over-the-air updates, without the massive logistical costs of physical recalls, the economic burden (B) of fixing a known defect is drastically lower, justifying a stricter legal expectation of rapid remediation.
This framework draws upon the Food and Drug Administration’s guidance on Predetermined Change Control Plans (PCCPs), which requires manufacturers to specify pre-approved protocols for future model modifications. [31] The PCCP must include a description of planned modifications (e.g., re-training on new patient data, expanding to new demographics), a modification protocol for validating and implementing those changes, and a risk impact assessment.
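As a hedged illustration of how a developer might operationalize these three PCCP elements internally (the structure below is an assumption for this article, not the FDA's format), each planned modification could be blocked from deployment until its validation protocol and risk impact assessment are in place:

```python
from dataclasses import dataclass

@dataclass
class PlannedModification:
    description: str             # e.g., "re-train on new patient data"
    modification_protocol: str   # how the change will be validated and rolled out
    risk_impact_assessment: str  # how the change affects known risks

@dataclass
class ChangeControlPlan:
    """Illustrative stand-in for a Predetermined Change Control Plan:
    a modification is deployable only if its validation protocol and
    risk impact assessment have both been completed."""
    model_id: str
    modifications: list[PlannedModification]

    def deployable(self, mod: PlannedModification) -> bool:
        return bool(mod.modification_protocol.strip() and
                    mod.risk_impact_assessment.strip())
```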
By adapting PCCP logic to general negligence law, we can hold developers accountable throughout the lifecycle of the AI system, not just during pre-deployment.
Adverse Event Reporting and Centralized Databases
Granted, developers cannot be expected to monitor countless downstream deployment cases and foresee every type of harm. Moreover, AI failures could be hidden in non-disclosure agreements or internal logs held by the deployer, not the developer. [32] To quantify the Probability of Harm (P) for the Learned Hand formula, we need comprehensive data on AI failures.
Thus, deployers should be required to report major AI failures or even near-misses to a central database. Just as the FDA relies on its Adverse Event Reporting System [33] to track drug safety, proposals for mandatory AI incident reporting are gaining traction [34]. The EU AI Act requires providers of high-risk systems to report serious incidents to market surveillance authorities [35], while the OECD AI Incidents Monitor collects and classifies global AI incidents [36].
This data aggregation need not be the sole province of the state. In the absence of comprehensive federal legislation in the US, liability insurance is emerging as a key regulator of AI risk. [37] Insurers effectively apply the Learned Hand formula via premiums: if the premium (P * L) is higher than the cost of a safety measure (B), the firm will adopt the safety measure to lower their premium. [38] Furthermore, insurers act as central repositories for accident data, allowing them to see patterns of failure that individual firms might miss or hide. [39] This data aggregation can help establish the standard of care that negligence law currently lacks, creating a market-driven cycle of safety improvements.
This data allows the industry to quantify risk (P) and, in turn, to update the standard of care. A risk that was unforeseeable yesterday becomes a known risk today once it appears in the database a sufficient number of times. If a specific failure mode (e.g., a jailbreak prompt) is documented in the database, other developers have a duty to patch against it. Ignorance of the database would no longer be a defense. This creates a feedback loop where tort liability informs regulation, and regulation informs the standard of care. A significant function of tort law, and a reason it has weathered innovation since the industrial revolution, is that it incentivizes the creation of valuable data and provides feedback that helps actors avoid liability.
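A minimal sketch of this feedback loop, with illustrative fields and thresholds: once a failure mode appears in the shared database often enough, it is treated as foreseeable, its probability can be estimated from the reports, and the Hand comparison indicates whether leaving it unpatched amounts to negligence.

```python
from collections import Counter

def foreseeable_failure_modes(incident_reports: list[dict],
                              min_reports: int = 3) -> set[str]:
    """A failure mode documented at least `min_reports` times in the shared
    database is treated as foreseeable (the threshold is an assumption)."""
    counts = Counter(r["failure_mode"] for r in incident_reports)
    return {mode for mode, n in counts.items() if n >= min_reports}

def duty_to_patch(patch_cost: float, est_probability: float,
                  est_loss: float) -> bool:
    """Hand comparison for a known failure mode: if the cost of patching (B)
    is below the expected harm (P * L), leaving it unpatched looks negligent."""
    return patch_cost < est_probability * est_loss

# Hypothetical use: a jailbreak prompt reported five times, with an estimated
# 2% chance of causing a $1,000,000 loss, against a $5,000 patch cost.
reports = [{"failure_mode": "jailbreak-prompt-X"}] * 5
if "jailbreak-prompt-X" in foreseeable_failure_modes(reports):
    print(duty_to_patch(patch_cost=5_000, est_probability=0.02,
                        est_loss=1_000_000))  # True
```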
Conclusion
Human oversight is a starting point for AI governance, but it is far from the end goal. Legal frameworks must evolve beyond demanding mere human presence.
For illustration, this article clarifies the standard of care for stakeholders in the AI ecosystem as follows:
Old Model: B (Undefined Human Effort) < P (Largely Unforeseeable) * L (Unbounded)
- Result: Systemic failure; an effectively infinite vigilance burden (B) against unknown risks.
Optimal New Model: B (Developer Robustness + Deployer HSI) < P (More Foreseeable, Quantified by Data) * L (More Insurable)
- Result: Calculable risk and a rational allocation of B.
By operationalizing these duties, tort law can move beyond the "liability sponge" model and incentivize the development of truly robust, transparent, and safe AI systems.
[1] European Parliament and Council, Article 14 Regulation (EU) 2024/1689 (Artificial Intelligence Act), Official Journal of the European Union, 2024. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
[2] California State Senate, SB 833: Critical infrastructure: artificial intelligence systems: human oversight, Legiscan, 2024. https://legiscan.com/CA/text/SB833/id/3262151
[3] United States v. Carroll Towing Co., 159 F.2d 169, Second Circuit Court of Appeals, 1947. https://law.justia.com/cases/federal/appellate-courts/F2/159/169/1565896/
[4] Paul Robinette et al., Overtrust of Robots in Emergency Evacuation Scenarios, ACM/IEEE International Conference on Human-Robot Interaction, 2016. https://moralai.cs.duke.edu/documents/article_docs/robinette_overtrust_of_robots.pdf
[5] Raja Parasuraman and Dietrich H. Manzey, Complacency and Bias in Human Interaction with Automation, Human Factors: The Journal of the Human Factors and Ergonomics Society, 2010. https://journals.sagepub.com/doi/10.1177/0018720810376055
[6] Frances Mao, Sriwijaya Air crash which killed 62 people blamed on throttle and pilot error, BBC, 2022. https://www.bbc.com/news/world-asia-63579988
[7] Bernd Lorenz, Francesco Di Nocera, Stefan Röttger, and Raja Parasuraman, The Effects of Level of Automation on the Out-of-the-Loop Unfamiliarity in a Complex Dynamic Fault-Management Task during Simulated Spaceflight Operations, Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting, 2001. https://journals.sagepub.com/doi/10.1177/154193120104500209
[8] Karen Hao, When algorithms mess up, the nearest human gets the blame, MIT Technology Review, 2019. https://www.technologyreview.com/2019/05/28/65748/ai-algorithms-liability-human-blame/
[9] National Transportation Safety Board (NTSB), Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian (Uber Report), NTSB, 2019. https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf
[10] Superior Court of Arizona, State of Arizona v. Rafaela Vasquez, Maricopa County, 2020.
[11] Paul Michael Garrett, ‘Magic moments’: AI and the ‘disappearance’ of social work ethics?, The British Journal of Social Work, 2025. https://academic.oup.com/bjsw/advance-article/doi/10.1093/bjsw/bcaf230/8306959
[12] Sajid Ali et al, Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence, Information Fusion, 2023. https://www.sciencedirect.com/science/article/pii/S1566253523001148
[13] Renee Henson, "I Am Become Death, the Destroyer of Worlds": Applying Strict Liability to Artificial Intelligence as an Abnormally Dangerous Activity, University of Missouri School of Law Faculty Scholarship, 2024. https://scholarship.law.missouri.edu/facpubs/1193/
[14] John McGinnis, The Seen and Unseen of AI Liability, Law & Liberty, 2025. https://lawliberty.org/the-seen-and-unseen-of-ai-liability/
[15] Angela Luna, Open-Source AI: The Debate That Could Redefine AI Innovation, American Action Forum, 2024. https://www.americanactionforum.org/insight/open-source-ai-the-debate-that-could-redefine-ai-innovation/
[16] Donald Gifford, Technological Triggers to Tort Revolutions: Steam Locomotives, Autonomous Vehicles, and Accident Compensation, University of Maryland Francis King Carey School of Law Faculty Scholarship, 2017. https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=2594&context=fac_pubs
[17] Guy André Boy, Human Systems Integration of Human-AI Teaming, IEEE 4th International Conference on Human-Machine Systems, 2024. https://ieeexplore.ieee.org/do...
[18] Microsoft, The HAX Toolkit Project, Microsoft Research, 2025. https://www.microsoft.com/en-us/research/project/hax-toolkit/
[19] IBM, Design for AI, IBM Design, 2025. https://www.ibm.com/design/ai/
[20] Partnership on AI (PAI), Human-AI Collaboration Framework and Case Studies, PAI, 2019. http://partnershiponai.org/wp-content/uploads/2021/08/CPAIS-Framework-and-Case-Studies-9-23.pdf
[21] Rebecca Crootof, Margot E. Kaminski, and W. Nicholson Price II, Humans in the Loop, Vanderbilt Law Review, 2023. https://scholarship.law.vanderbilt.edu/cgi/viewcontent.cgi?params=/context/vlr/article/4845/&path_info=Humans_in_the_Loop__Crootof_Kaminski_Price.pdf
[22] Chiara Natali, Brett Frischmann and Federico Cabitza, Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases (preface), Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence, 2024. https://boa.unimib.it/bitstream/10281/565781/3/Natali%20et%20al-2025-CEUR%20Workshop%20Proceedings%20Hybrid%20Human-Artificial%20Intelligence-VoR.pdf
[23] Federal Railroad Administration (FRA), Title 49 Code of Federal Regulations Part 243, Training, Qualifications, and Oversight for Safety-Related Railroad Employees, FRA, 2024. https://railroads.dot.gov/railroad-safety/divisions/safety-partnerships/training-standards-rule
[24] Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis, Yale University Press, 1970. https://yalebooks.yale.edu/book/9780300011159/the-cost-of-accidents/
[25] European Parliament and Council, Directive (EU) 2024/2853 on Liability for Defective Products, Official Journal of the European Union, 2024. https://eur-lex.europa.eu/eli/dir/2024/2853/oj
[26] National Institute of Standards and Technology (NIST), AI 100-1: Artificial Intelligence Risk Management Framework (AI RMF 1.0), U.S. Department of Commerce, 2023. https://www.nist.gov/itl/ai-risk-management-framework
[27] ISO/IEC, ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System, International Organization for Standardization, 2023. https://www.iso.org/standard/81230.html
[28] High-Level Expert Group on AI, Ethics Guidelines for Trustworthy AI, European Commission, 2019. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
[29] Texas Legislature, The Texas Responsible Artificial Intelligence Governance Act (House Bill 149), Texas Legislature Online, 2025. https://capitol.texas.gov/tlodocs/89R/analysis/html/HB00149S.htm
[30] National Telecommunications and Information Administration (NTIA), AI Accountability Policy Report, U.S. Department of Commerce, 2024. https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report
[31] U.S. Food and Drug Administration (FDA), Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles, FDA, 2025. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles
[32] Sebastien Gittens, Stephen Burns, Matt Flynn, Caroline Poirier and David Wainer, "We signed what?!": The Hidden Hazards of Vendor AI Terms and Conditions, Bennett Jones Blog, 2025. https://www.bennettjones.com/Insights/Blogs/The-Hidden-Hazards-of-Vendor-AI-Terms-and-Conditions
[33] U.S. Food and Drug Administration (FDA), FDA Adverse Event Reporting System (FAERS) Public Dashboard, FDA, 2023. https://www.fda.gov/drugs/fdas-adverse-event-reporting-system-faers/fda-adverse-event-reporting-system-faers-public-dashboard
[34] Georgetown Center for Security and Emerging Technology (CSET), CSET’s Recommendations for an AI Action Plan, CSET, 2025. https://cset.georgetown.edu/publication/csets-recommendations-for-an-ai-action-plan/
[35] European Parliament and Council, Article 73 Regulation (EU) 2024/1689 (Artificial Intelligence Act), Official Journal of the European Union, 2024. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
[36] OECD.AI, OECD AI Incidents Monitor (AIM), OECD, 2024. https://oecd.ai/en/incidents
[37] Anat Lior, Insuring AI: The Role of Insurance in Artificial Intelligence Regulation, Harvard Journal of Law & Technology, 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4266259
[38] Peter Grossman, Reed Cearley, Daniel Cole, Uncertainty, Insurance, and the Learned Hand Formula, Law, Probability and Risk, 2007. https://academic.oup.com/lpr/article-abstract/5/1/1/990799?redirectedFrom=PDF
[39] Anat Lior, Holding AI Accountable: Addressing AI-Related Harms Through Existing Tort Doctrines, The University of Chicago Law Review, 2024. https://lawreview.uchicago.edu/online-archive/holding-ai-accountable-addressing-ai-related-harms-through-existing-tort-doctrines