When Human Brains Guide Self-Driving Cars
According to a report by TechXplore, while vehicles developed by companies such as Tesla already promise hands-free driving, a series of recent accidents has highlighted the limitations of current autonomous driving systems—particularly in high-risk, rapidly changing environments.
In a new study published in the peer-reviewed journal Cyborg and Bionic Systems, a team of Chinese scientists has taken a major step toward the next generation of autonomous vehicles. The researchers designed an innovative system capable of reading passengers’ brain signals to assess stress and perceived danger, and then instantaneously adjusting the vehicle’s driving strategy. This approach offers a potential solution to persistent safety challenges faced by autonomous cars when encountering unexpected situations.
From Passenger Brain Signals to Vehicle Control
The system relies on functional near-infrared spectroscopy (fNIRS), a non-invasive and safe neuroimaging technique that monitors brain activity by shining near-infrared light through the skull and measuring how it is absorbed by oxygenated blood in the underlying cortex. Using this method, the system tracks activity in brain regions associated with stress, emotional arousal, and risk evaluation in real time.
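Purely as an illustration of the general idea (the study's actual signal-processing pipeline is not described here), turning raw fNIRS readings into a single stress indicator might look something like the following sketch; the channel counts, window sizes, and threshold are all hypothetical:

```python
import numpy as np

def stress_index(oxy_hb: np.ndarray, baseline: np.ndarray) -> float:
    """Hypothetical stress score from fNIRS oxygenated-hemoglobin signals.

    oxy_hb:   recent samples from channels over stress-related regions (channels x time)
    baseline: resting-state samples for the same channels (channels x time)

    Returns a z-score-like value; higher means a larger deviation from rest.
    """
    # Compare the recent window's mean activation with the resting baseline,
    # normalized by each channel's baseline variability, then averaged.
    recent_mean = oxy_hb.mean(axis=1)
    base_mean = baseline.mean(axis=1)
    base_std = baseline.std(axis=1) + 1e-6  # avoid division by zero
    z = (recent_mean - base_mean) / base_std
    return float(z.mean())

# Example: flag a "perceived danger" event when the index crosses a chosen threshold.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(8, 500))   # 8 channels of resting data
recent = rng.normal(1.5, 1.0, size=(8, 50))      # window with elevated activation
if stress_index(recent, baseline) > 1.0:          # threshold is illustrative only
    print("elevated stress detected")
```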
The collected neural data are immediately fed into an intelligent algorithm based on deep reinforcement learning. This algorithm learns to interpret human reactions and translate them into driving decisions. When the system detects a sudden rise in stress or a heightened sense of danger in the passenger’s brain, it automatically switches the vehicle into a more conservative driving mode. Such adjustments may include reducing speed, increasing the distance from the vehicle ahead, or executing smoother and less aggressive maneuvers.
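The study itself relies on deep reinforcement learning; as a hedged sketch of the general idea rather than the authors' actual controller, the step from a detected stress spike to a more conservative driving profile could be expressed along these lines, with all parameter names and values invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DrivingPolicy:
    target_speed_mps: float    # desired cruising speed
    headway_s: float           # time gap to the vehicle ahead
    max_lateral_accel: float   # limit on how aggressively the car corners

NORMAL = DrivingPolicy(target_speed_mps=30.0, headway_s=1.5, max_lateral_accel=3.0)
CONSERVATIVE = DrivingPolicy(target_speed_mps=22.0, headway_s=2.5, max_lateral_accel=1.5)

def select_policy(stress_index: float, threshold: float = 1.0) -> DrivingPolicy:
    """Switch to the conservative profile when passenger stress exceeds a threshold.

    In a reinforcement-learning setup, the same neural signal could instead be
    folded into the reward (penalizing actions taken while the passenger is
    stressed), so the policy learns caution rather than having it hard-coded
    as in this simplified sketch.
    """
    return CONSERVATIVE if stress_index > threshold else NORMAL

# Example control-loop step: lower speed and widen the following gap
# as soon as the neural signal indicates perceived danger.
policy = select_policy(stress_index=1.8)
print(policy)
```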
Experimental Results: Improved Safety and Passenger Comfort
In simulated driving experiments, the performance of this hybrid human–machine system was compared with that of conventional autonomous vehicles. The results demonstrated clear advantages across several key metrics:
Faster learning: By incorporating human reactions, the algorithm adapts more rapidly to hazardous conditions.
Higher safety: Conservative decision-making during perceived danger reduces the likelihood of accidents.
Greater passenger satisfaction: Deceleration that coincides with a passenger’s own sense of risk feels reasonable and reassuring, enhancing trust in the system.
Limitations and Future Outlook
Despite the promising findings, the technology remains at an experimental stage. The relatively simple test scenarios, along with the limited and homogeneous group of participants, raise questions about how well the results can be generalized to real-world driving conditions and diverse populations.
Led by Professor Xiaofei Zhang of Tsinghua University, the research team has announced that its next goal is to conduct tests in more complex environments and to integrate brain data with the vehicle’s full suite of sensors, including cameras, radar, and lidar. Such integration could enable an exceptionally precise and comprehensive assessment of driving risk.
Overall, the study opens a new horizon in human–machine interaction, suggesting that human intuition and emotional responses could become one of the most critical “sensors” in future autonomous vehicles. Nevertheless, significant technical, ethical, and practical challenges must still be addressed before this concept can be commercialized.