Automakers Advance 'Eyes-Off' AI Driving Amid Safety and Liability Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Automakers, including Ford, are developing Level 3 autonomous driving systems that use AI to allow drivers to take their eyes off the road. The push raises significant safety and liability concerns because these systems may demand sudden human intervention, though no actual harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems in the form of Level 3 autonomous driving technology. The discussion centers on the potential safety and liability risks of these systems, which could plausibly lead to harm if the technology fails or is misused. However, no actual harm or incident has been reported. The concerns about safety and liability, as well as the technological and regulatory challenges, indicate a credible risk of future harm. Thus, the event fits the definition of an AI Hazard, as it describes circumstances where AI system development and use could plausibly lead to an AI Incident, but no direct or indirect harm has yet occurred.[AI generated]
AI principles
Safety, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (injury), Physical (death)

Severity
AI hazard

AI system task
Recognition/object detection, Reasoning with knowledge structures/planning


Articles about this incident or hazard


Carmakers push toward 'eyes-off' driving, raising questions of safety, liability (by Reuters)

2026-02-23
Investing.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of Level 3 autonomous driving technology. The discussion centers on the potential safety and liability risks of these systems, which could plausibly lead to harm if the technology fails or is misused. However, no actual harm or incident has been reported. The concerns about safety and liability, as well as the technological and regulatory challenges, indicate a credible risk of future harm. Thus, the event fits the definition of an AI Hazard, as it describes circumstances where AI system development and use could plausibly lead to an AI Incident, but no direct or indirect harm has yet occurred.

Carmakers push toward 'eyes-off' driving, raising questions of safety, liability

2026-02-23
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Level 3 autonomous driving technology, which uses AI to control vehicle speed and steering and to detect when human intervention is needed. Although no actual harm or accident is reported, the discussion centers on the plausible safety risks and liability challenges that could arise from these systems' use, including the possibility of accidents caused by the handover between AI and human drivers. This meets the criteria for an AI Hazard, as the development and use of these AI systems could plausibly lead to injury or harm to persons. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential safety and liability risks of AI systems in vehicles.

Carmakers push toward 'eyes-off' driving, raising questions of safety, liability

2026-02-23
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 3 autonomous driving technology, which uses AI to control vehicle functions and requires human intervention when alerted. The discussion centers on the potential safety and liability risks associated with these systems, indicating plausible future harm if the technology is deployed without adequate safeguards. However, no actual harm or incident has been reported. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm but has not yet resulted in an AI Incident. It is not Complementary Information since it is not an update or response to a past incident, nor is it unrelated as it directly concerns AI systems and their risks.

Carmakers Push Toward 'Eyes-Off' Driving, Raising Questions of Safety, Liability

2026-02-23
Republic World
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Level 3 autonomous driving) and discusses their use and development. However, no direct or indirect harm has occurred yet, nor is there a report of a specific incident involving these systems causing injury, property damage, rights violations, or other harms. The concerns raised are about plausible future safety and liability risks if these systems are deployed widely without adequate regulation or technological maturity. Therefore, this qualifies as an AI Hazard, as the article outlines credible potential risks and challenges that could plausibly lead to AI incidents in the future if not properly addressed.

Carmakers advance 'eyes-off' driving as safety and liability concerns grow

2026-02-23
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 3 autonomous driving technology, which uses AI to control vehicle functions and detect when human intervention is needed. While it discusses the potential for safety and liability issues, no actual harm or incident has occurred yet. The concerns raised are about plausible future harms related to safety and liability if the technology is deployed without sufficient regulatory frameworks or technological maturity. Therefore, this qualifies as an AI Hazard because it plausibly could lead to incidents involving injury or liability disputes, but no direct or indirect harm has been reported at this time.

The Auto Industry's Race to Eyes-Off Driving: Challenges and Debates

2026-02-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
While Level 3 autonomous driving systems are AI systems with potential safety implications, the article does not describe any actual harm, malfunction, or incident resulting from their use or development. It highlights challenges and debates about their feasibility and safety but does not report any direct or indirect harm or credible near-miss events. Therefore, this is not an AI Incident or AI Hazard. The content provides context and background on AI development and industry responses, fitting the definition of Complementary Information.

Carmakers Push Toward 'Eyes-Off' Driving, Raising Questions of Safety, Liability

2026-02-23
Claims Journal
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 3 autonomous driving systems that use AI for vehicle control and driver assistance. However, it does not describe any realized harm or incidents caused by these systems. The concerns raised are about plausible future safety and liability risks if these systems are deployed widely without adequate regulation or technological maturity. Therefore, the event is best classified as an AI Hazard, as it discusses credible potential risks and challenges that could plausibly lead to harm but does not document any actual harm or incidents yet.

Report: BMW Backs Off Eyes-Off Self-Driving

2026-02-24
Kbb.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (driver assistance and self-driving technologies) but does not describe any realized harm or incident resulting from their use or malfunction. The decision to back off from Level 3 autonomy is a strategic and economic choice rather than an event involving harm or risk of harm. There is no indication of an AI incident or hazard, nor is the article focused on societal or governance responses or updates to previous incidents. Therefore, this is best classified as Complementary Information providing context on AI system deployment and industry trends.

Carmakers push toward 'eyes-off' driving despite safety, liability questions | Honolulu Star-Advertiser

2026-02-24
Honolulu Star Advertiser
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Level 3 autonomous driving technology, which uses AI to control vehicle speed and steering and to detect when human intervention is needed. The discussion centers on the use and development of these AI systems and the plausible future harms related to safety and liability concerns. No actual harm or incident is described, so it does not meet the criteria for an AI Incident. The article is not merely general AI news or product launch information, as it focuses on the risks and challenges that could lead to harm. Thus, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no harm has yet occurred.

Carmakers Push Toward 'Eyes-Off' Driving, Raises Safety, Liability Questions - Carrier Management

2026-02-24
Carrier Management
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 3 autonomous driving technology that uses AI to control vehicle speed and steering and to detect when human intervention is needed. The discussion centers on the potential safety and liability risks associated with these systems, which could plausibly lead to harm such as accidents or legal disputes. However, no actual harm or incident is reported. Therefore, this event fits the definition of an AI Hazard, as it describes circumstances where AI system use could plausibly lead to an AI Incident in the future.

Automakers Push Toward "Eyes-Off" Driving Despite Mounting Doubts

2026-02-26
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 3 autonomous driving systems that use AI to control vehicles without constant human supervision. The discussion centers on the use and potential risks of these AI systems. However, it does not report any actual harm or accident caused by these systems, nor does it describe a specific event where harm was narrowly avoided. Instead, it outlines concerns and uncertainties about safety, legal liability, and market acceptance, which are potential future risks but not immediate hazards. Therefore, the event is best classified as Complementary Information, as it provides context and insight into the evolving AI ecosystem and governance challenges without describing a concrete AI Incident or AI Hazard.

China introduces new regulations for autonomous driving - electrive.com

2026-02-26
electrive.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving systems at Level 3 and Level 4) and their development and use. However, the article focuses on the introduction of safety regulations and standards to mitigate risks and ensure safe operation, rather than describing any realized harm or accident caused by these AI systems. Since the regulations are not yet in force and no harm has occurred, the event represents a plausible future risk scenario addressed by governance measures. Therefore, it qualifies as Complementary Information, providing context on societal and governance responses to AI-related risks in autonomous driving.

Carmakers push towards 'eyes-off' driving, raising questions of safety, liability

2026-02-27
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Level 3 autonomous driving technology, which uses AI to control vehicle functions and requires human intervention under certain conditions. The discussion centers on the potential safety and liability risks of these systems, indicating that their use could plausibly lead to harm (e.g., accidents due to handover failures or driver inattention). However, no actual harm or incidents are reported. This fits the definition of an AI Hazard, as the development and deployment of these AI systems could plausibly lead to incidents involving injury or liability issues in the future. The article does not describe any realized harm or incident, nor does it primarily focus on responses or updates to past incidents, so it is not an AI Incident or Complementary Information.

Autonomous driving advances, but creates an impasse among automakers

2026-02-23
Olhar Digital - The future arrives here first
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Level 3 autonomous driving systems) whose use directly relates to safety and legal concerns. Although no specific accident or harm is reported, the article highlights significant challenges and uncertainties about safety and liability, implying plausible future harm from the use of these AI systems. The discussion of ongoing deployment and regulatory approval in China further supports the potential for future incidents. Since no actual harm or incident is described, but credible risks and challenges are detailed, this qualifies as an AI Hazard rather than an AI Incident. The article is not merely general AI news or product announcements but focuses on the potential risks and challenges of AI autonomous driving systems.

BMW doesn't want a car that "drives itself" and makes a radical decision

2026-02-25
Canaltech
Why's our monitor labelling this an incident or hazard?
The event involves AI systems related to autonomous driving technology. However, the article describes a strategic business decision to withdraw or reduce the use of such AI systems rather than an incident or harm caused by their malfunction or misuse. There is no indication of realized harm or a plausible imminent risk of harm resulting from the AI systems themselves. Instead, it reflects industry responses and market considerations, which fall under complementary information about AI development and deployment trends.

Autonomous driving is further from becoming reality

2026-02-25
RD - Jornal Repórter Diário
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems (ADAS and semi-autonomous driving systems) and their involvement in causing accidents (harm to persons), which qualifies as an AI Incident. However, the article does not report a new or specific incident but rather discusses the general state, challenges, and perspectives of autonomous driving technology, including past incidents. Therefore, the main content serves as Complementary Information, providing context and updates on AI systems' development and safety concerns rather than reporting a new AI Incident or AI Hazard.

European automakers retreat from Level 3 autonomous driving because of high costs

2026-02-25
R7 Notícias
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (Level 3 autonomous driving) and discusses their development and deployment decisions. However, it does not report any harm or incident caused by these AI systems, nor does it indicate a plausible future harm scenario. Instead, it focuses on the economic and regulatory challenges leading to a strategic shift away from Level 3 autonomy. This fits the definition of Complementary Information, as it provides important context and industry response updates without describing an AI Incident or AI Hazard.

BMW and Mercedes abandon Level 3 autonomous driving because it is too expensive | TugaTech

2026-02-23
TugaTech
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically autonomous driving AI at Level 3 and Level 2+. However, there is no indication of any harm, malfunction, or incident caused by these AI systems. The article focuses on the discontinuation of Level 3 systems due to cost and limited utility, and the adoption of Level 2+ systems, which is a development and governance-related update. Therefore, it fits the definition of Complementary Information, as it provides context and updates on AI system deployment and industry responses without reporting any AI Incident or AI Hazard.

Autonomous driving: automakers encourage taking your eyes off the road; is it safe?

2026-02-23
Portal N10
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Level 3 autonomous driving technology, which makes real-time decisions and controls the vehicle. Although no actual accident or harm is reported, the discussion centers on the risks and responsibilities related to these systems potentially causing accidents if the driver is inattentive or if the system fails. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to injury or harm to persons. The article does not describe a realized harm or incident, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on the plausible risk of harm from AI system use.

The auto industry is ready to offer cars in which you don't need to pay attention to the road while driving, raising concerns about safety and responsibility.

2026-02-24
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology (Level 3 autonomy) that infers from sensor inputs and controls vehicle operation. It discusses the use and development of these AI systems and the potential safety and liability harms that could arise if drivers divert attention from the road. No actual harm or incident is described, but the credible risk of accidents and responsibility issues is emphasized. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the discussion.

Autonomous driving is further from becoming reality

2026-02-28
AutoPapo
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI-related systems such as Level 3 autonomous driving and ADAS, which qualify as AI systems. It notes that Tesla's semi-autonomous system has caused accidents, which is an AI Incident, but these are mentioned as background information rather than the main focus. The article's main narrative is about the challenges and progress in autonomous driving technology, costs, and safety concerns, without reporting a new AI Incident or AI Hazard. The recall mentioned relates to battery defects, not AI systems. Thus, the article fits best as Complementary Information, providing context and updates on AI system development and related safety issues in the automotive sector.

Autonomous driving is further from becoming reality - Acelerando por aí

2026-02-27
Acelerando por aí
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems involved in autonomous driving and ADAS, which qualify as AI systems under the definitions. It acknowledges that Tesla's semi-autonomous system has caused accidents, which is an AI Incident, but these are mentioned as past events, not the main focus of the article. The main content is about the state of development, challenges, and future prospects of autonomous driving technology, which fits the definition of Complementary Information. There is no new AI Incident or AI Hazard described in the article itself; rather, it provides supporting context and updates on the ecosystem and challenges of AI in automotive applications.

Autonomous driving is further from becoming reality

2026-03-02
Vrum
Why's our monitor labelling this an incident or hazard?
The article mentions AI-based advanced driver assistance systems (ADAS) and autonomous driving levels, which involve AI systems. However, it does not report any actual harm or incident caused by AI malfunction or misuse. The recall of Volvo EX30 due to battery fire risk is a safety issue but not directly attributed to AI system failure. The autonomous driving discussion focuses on the challenges and uncertainties in development and deployment, indicating potential future risks but not a specific AI Hazard event. Thus, the content fits best as Complementary Information, providing background and updates on AI-related automotive technologies and safety issues without describing a concrete AI Incident or Hazard.

Xpeng Motors CEO: "Fully autonomous driving will be achieved within three years" -- China - Excite News

2026-03-04
Excite
Why's our monitor labelling this an incident or hazard?
The article focuses on future technological developments and ambitions regarding AI-powered autonomous driving. It does not describe any realized harm, malfunction, or misuse of AI systems, nor does it indicate any direct or indirect harm caused by these AI systems. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides context on AI system development and industry progress, which fits the definition of Complementary Information.

Angle: Automakers put the brakes on the push for autonomous driving, as massive development costs and safety concerns stand in the way

2026-02-28
Newsweek Japan Official Site
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI at Level 3, which can control vehicle operations without constant human supervision. However, it does not describe any actual harm, accident, or violation caused by these systems. Instead, it discusses the plausible risks, high development costs, safety concerns, and regulatory uncertainties that could hinder or delay the deployment of these AI systems. Since no harm has yet occurred but there is a credible risk and challenge associated with these AI systems, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a previously reported incident, nor is it unrelated as it clearly concerns AI systems in autonomous driving.

Helm.ai achieves camera-only autonomous driving in urban areas... supports Level 2+ through Level 4 | Response.jp

2026-03-02
Response.jp
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (autonomous driving software) and its advanced capabilities, which fits the definition of an AI system. However, it does not describe any realized harm or incident caused by the AI system, nor does it report any plausible future harm or hazard scenario. The focus is on the system's design, training, and demonstration under safe conditions with a safety driver, indicating no malfunction or harm occurred. This aligns with the definition of Complementary Information, as it provides supporting data and context about AI development and deployment without reporting an incident or hazard.