The European Parliament has approved a sweeping AI liability directive that creates the world's most comprehensive framework for determining responsibility when artificial intelligence systems cause harm.
The legislation, which passed with a decisive 421-95 vote, establishes a tiered liability system that assigns responsibility based on the level of risk an AI application poses. High-risk systems used in healthcare, transportation, and critical infrastructure face the strictest requirements.
Under the new rules, companies deploying AI must maintain detailed documentation of their systems' decision-making processes. When an AI system causes harm, the burden of proof shifts to the deploying company to demonstrate that adequate safeguards were in place.
"This is about ensuring that the AI revolution benefits everyone, not just those who build the systems," said the Parliament's rapporteur for the legislation. "People deserve to know that when AI makes decisions that affect their lives, there is accountability."
The framework draws a clear distinction between AI developers and deployers. While developers must meet baseline safety standards and provide thorough documentation, the companies that deploy AI in customer-facing applications bear primary liability for outcomes.
Reaction from major technology companies has been mixed. Some have praised the clarity the framework provides, while others warn it could stifle innovation. Legal experts note that the legislation will likely influence similar regulations worldwide, much as GDPR set the standard for data privacy.
The directive takes effect in 18 months, giving companies time to audit their AI systems and ensure compliance. Industry groups are already forming working groups to develop best practices for meeting the new requirements.