
Ethical Considerations of Autonomous Driving in the Xiaomi SU7 Incident

At 22:44 on March 29, 2025, a Xiaomi SU7 Standard Edition was involved in a severe traffic accident while traveling on the Chi-Qi section of the De-Shang Expressway, resulting in the tragic deaths of three occupants. Prior to the accident, the vehicle was in NOA (Navigate on Autopilot) smart driving assistance mode, traveling at a speed of 116 km/h. The section of the road where the accident occurred was under construction and maintenance, with barriers closing off the lane and diverting traffic to the opposite lane. After detecting the obstacle, the vehicle issued a warning and began to decelerate. Subsequently, the driver took over control of the vehicle and entered manual driving mode, continuing to slow down and steer the vehicle. The vehicle then collided with a concrete barrier. The system’s final recorded speed before the collision was approximately 97 km/h. Following the accident, the police immediately launched an investigation, and Xiaomi Corporation submitted the vehicle’s driving data and system operation information to the police on the evening of March 31. (The above information is sourced from the accident description posted on the official Weibo account of the Xiaomi spokesperson.)

Autonomous driving has long been fraught with ethical questions. The most contentious point in this incident is the ambiguity of the criteria for apportioning responsibility between the automaker and the driver, which creates an ethical dilemma of blame attribution. The ultimate assignment of responsibility depends on multiple factors: how the application scenarios for assisted driving are defined, whether the system's deceleration and warnings were appropriate and timely, and whether the driver's takeover was correctly timed and executed.

First and foremost, we state that prior to any ethical analysis, the determination of legal responsibility must follow the police's final announcement. The relationship between automaker and driver is essentially contractual: provided that consumers' purchases are informed, consensual, and voluntary, the contract between the two parties delineates the legal liability the enterprise bears toward the consumer.

However, law is not synonymous with ethics. Although autonomous driving may appear "user-friendly" in operation, the underlying code is highly specialized. AI algorithms are probabilistic and still under continuous development; they produce correct, stable results when processing events within a fixed framework. Unlike the stable mid-journey environments of unmanned aircraft or subway operation, road driving is an open scenario: variable speeds, ever-changing road conditions, occlusions, lighting, and countless other factors. Even if an autonomous driving system can promptly supply a large model with decision parameters at the scale of hundreds of millions, requiring it to respond within mere milliseconds still increases algorithmic error, and the unpredictability of the code's outcomes is hard to bound. In other words, given the current state of technology, algorithmic flaws will always exist in the practical application of autonomous driving.
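The millisecond budget can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: the latency values below are assumptions chosen for the example, not measured parameters of any real system; only the 116 km/h figure comes from the incident report.

```python
# Illustrative only: how far a vehicle travels while a driving stack
# is still "thinking". Latencies are hypothetical; 116 km/h is the
# reported speed in this incident.

def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres travelled at constant speed during a processing delay."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * (latency_ms / 1000.0)

for latency in (50, 100, 500, 1000):    # assumed perception-to-action delays, ms
    d = distance_during_latency(116, latency)
    print(f"{latency:>5} ms at 116 km/h -> {d:5.1f} m")
```

At 116 km/h the vehicle covers roughly 32 metres every second, so even a modest perception-to-action delay consumes several metres of road before any braking begins.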

The vast majority of consumers lack the specialized knowledge to assess the reliability of autonomous driving systems. Much like a patient visiting a hospital, purchasers can hardly fully comprehend the AI algorithms at work during autonomous driving; their purchasing decisions rest essentially on the automakers' product presentations.

Moreover, drivers are the primary bearers of the consequences of a crash. In both professional knowledge and physical well-being, they are in a vulnerable position. Thus, in consumer ethics there exists a perspective that transcends legal definitions. We still do not know whether the system's deceleration and warnings in this incident were adequate, or whether the post-collision fire and locked doors were the main cause of the occupants' deaths. But on this view, even if the driver took over in time, the company should bear the socio-ethical cost of the consumers' casualties; that is, the company should be held accountable for the external injuries caused by unavoidable defects in product design. The viewpoint has merit. When external social costs are reflected in product prices, companies are pushed to minimize product defects, improve design efficiency, and reduce the likelihood of accidents, which in turn relieves social pressures such as insurance claims, medical expenses, and accident disputes. Furthermore, the anticipated injury costs of product defects are spread across all consumers at the time of purchase rather than borne solely by the few accident victims, a more equitable distribution of cost.

However, this perspective also gives rise to new issues. Once companies assume the cost of external injuries, consumers may become more careless in using the products, increasing the likelihood of harm. And as drivers come to rely on vehicle automation, their manual driving skills tend to deteriorate.
When an emergency exceeds the system's capacity and the driver must urgently take over, it is doubtful whether he or she can respond accurately to the crisis. In this accident, the scenario involved nighttime expressway driving, construction diverting traffic into the opposite lane, and a speed of 116 km/h under assisted driving. The system prompted the driver to hold the steering wheel at 22:36:48 and warned of an obstacle at 22:44:24; the driver took over about one second later, and the collision occurred at 22:44:26. Both the timing of the takeover and the subsequent maneuvers have sparked much controversy. As the driver's mother revealed, mother and daughter had once argued over the "convenience and safety" of autonomous driving, which suggests the driver's over-reliance on it.

The driver has passed away, and the investigation is still ongoing. It is hoped that a just explanation can ultimately be given to the family and to society, for three precious lives are at stake. After Xiaomi's official statement on April 1, the company's stock price (01810.HK) at one point fell by more than 5%, and the accident has triggered a public crisis of trust in autonomous driving.

Under the standard GB/T 40429-2021, Level 0 is no automation, Level 1 driver assistance, Level 2 partial automation, Level 3 conditional automation, Level 4 high automation, and Level 5 full automation. The autonomous driving technologies of all current automakers (Levels 2-3) remain assisted driving, in which the driver must maintain constant control of the vehicle. A responsible attitude demands great caution toward "zero driver takeover." Yet we cannot guarantee that every consumer is rational.
For example, a drunk owner sitting in the passenger seat and letting the car drive itself home raises not only ethical questions but also legal ambiguity over whether this constitutes drunk driving. Does the Xiaomi SU7's collision and fire mean that humanity should abandon the research and application of autonomous driving? If one day the accident rate of autonomous driving falls significantly below that of human drivers, and its algorithms can minimize overall harm, then it is not impossible for autonomous driving to replace humans. But in extremely complex environments that demand ultra-fast responses, will code defects multiply? Even if autonomous driving surpasses human drivers in safety, we will still face new ethical dilemmas such as "sacrificing the few to save the many" and the quantification of individual health and life. AI lacks emotion and cannot understand human ethical contexts; this is the conflict between machine language and human language. We can never create an "ethically flawless" autonomous driving system. Ethical trade-offs, and even concessions, are the eternal dilemmas that future driverless vehicles will face.
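The reported timeline can also be put in rough kinematic terms. The speeds and timestamps below come from Xiaomi's public statement as quoted above; the constant-deceleration model and the 8 m/s² emergency-braking figure are simplifying assumptions for illustration, not facts about the vehicle.

```python
# Rough kinematic check of the reported timeline.
# Assumptions: constant deceleration over the 2 s window; 8 m/s^2 is a
# hypothetical dry-road emergency-braking level, not a measured value.
v0 = 116 / 3.6   # speed at the obstacle warning, m/s
v1 = 97 / 3.6    # recorded speed just before impact, m/s
t  = 2.0         # seconds between warning (22:44:24) and collision (22:44:26)

decel = (v0 - v1) / t                 # achieved deceleration, ~2.6 m/s^2
dist_covered = (v0 + v1) / 2 * t      # ground covered in those 2 s, ~59 m

# Distance needed to stop fully from 116 km/h at ~8 m/s^2,
# excluding any human reaction time:
stop_dist = v0 ** 2 / (2 * 8.0)       # ~65 m

print(f"deceleration achieved:      {decel:.1f} m/s^2")
print(f"distance covered in 2 s:    {dist_covered:.0f} m")
print(f"full-stop distance needed:  {stop_dist:.0f} m")
```

Under these assumptions the two-second window was simply too short: stopping from 116 km/h would require more road than the vehicle had between the warning and the barrier, regardless of who was in control.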

