Engineers develop artificial intelligence to predict car crashes

Researchers at Johns Hopkins University have engineered a novel artificial intelligence tool designed to forecast the location of future vehicle collisions and analyze the complex variables that lead to them. The system, named SafeTraffic Copilot, functions as an advanced analytical partner for traffic safety experts and policymakers, offering them the ability to test potential infrastructure changes and better understand how to prevent incidents on public roads.

The new tool leverages a powerful form of AI known as a large language model (LLM) to process and learn from immense and diverse datasets related to traffic incidents. Unlike previous predictive systems, which could evaluate only conditions resembling the historical crash data they were trained on, this generative AI can create forecasts for novel scenarios, such as how altering a traffic light’s timing might affect an intersection’s safety. The research, published in *Nature Communications*, aims to provide a more trustworthy and detailed method for mitigating the thousands of traffic-related fatalities that occur annually in the United States.

A New Model for Roadway Analysis

Developed by a team led by Hao “Frank” Yang, an assistant professor of civil and systems engineering at Johns Hopkins, SafeTraffic Copilot was created to simplify the intricate web of factors contributing to crashes. Motor vehicle accidents are complex events influenced by countless variables, including environmental conditions, traffic density, roadway design, and the actions of drivers. This AI tool is intended to untangle that complexity, providing clear, data-driven insights that can help officials make more informed decisions to enhance road safety.

The system moves beyond simply identifying existing high-risk areas based on past events. Older methods in traffic safety often relied on machine learning models that could only assess the safety of a road if its characteristics closely matched samples in their training data. If a proposed road design or traffic pattern was entirely new, these systems could not generate a reliable prediction. The new tool overcomes this limitation by using generative AI to explore “what-if” scenarios, giving planners a proactive way to assess the potential consequences of their interventions before they are implemented.

How the Predictive System Functions

Data and Multimodal Training

The foundation of SafeTraffic Copilot is its extensive training on a wide array of data sources. The LLM was trained using information from more than 66,000 different accidents. This dataset was not limited to simple statistics; it included rich, descriptive texts detailing road conditions, numerical figures such as blood alcohol levels, and visual data from satellite imagery and on-site photographs of crash scenes. By integrating these varied data types—a multimodal approach—the AI can identify subtle patterns and correlations that would be difficult for human analysts to detect across thousands of incidents. This comprehensive training enables the model to understand the interplay between different risk factors, such as how poor weather might amplify the danger of a poorly designed intersection.
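To make the multimodal setup concrete, the sketch below shows how a single crash record might combine free text, numeric fields, and image references into one training example. It is a minimal illustration in Python under stated assumptions; the field names and the flattening step are hypothetical, not the actual data schema used by the Johns Hopkins team.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CrashRecord:
    """One multimodal training example (all field names are hypothetical)."""
    narrative: str                   # free-text description of road conditions and events
    weather: str                     # e.g. "heavy rain", "clear"
    blood_alcohol: Optional[float]   # numeric risk factor in g/dL, if recorded
    satellite_image: bytes = b""     # overhead imagery of the crash location
    scene_photos: list[bytes] = field(default_factory=list)  # on-site photographs

def to_training_text(record: CrashRecord) -> str:
    """Flatten structured fields into one document so the model can learn
    across text, numbers, and image references jointly."""
    bac = "unknown" if record.blood_alcohol is None else f"{record.blood_alcohol:.2f} g/dL"
    return (
        f"Conditions: {record.weather}. Blood alcohol: {bac}.\n"
        f"Narrative: {record.narrative}\n"
        f"Imagery: {1 + len(record.scene_photos)} attached image(s)."
    )
```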

Evaluating Scenarios and Interventions

A primary innovation of the Johns Hopkins model is its ability to forecast outcomes for hypothetical situations. For example, a traffic engineer could ask the system to predict the change in crash frequency if a specific traffic light’s timing is extended by 10 seconds. The AI can generate a forecast of how that single change could increase or decrease accidents at that location, providing a quantitative basis for the decision. This predictive power extends to various potential interventions, including changes in speed limits, the addition of new signage, or modifications to the physical geometry of a road. This capacity for proactive analysis marks a significant step forward from reactive measures that are typically implemented only after a pattern of crashes has already emerged.
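In practice, such a query could be as simple as composing a structured prompt and asking the model for a comparative forecast. The sketch below is a hypothetical interface, not the actual SafeTraffic Copilot API; the `forecast_intervention` function and the `model.generate` call are assumptions made for illustration.

```python
# Hypothetical "what-if" query; this is not the actual SafeTraffic
# Copilot interface, and model.generate is an assumed method.

def forecast_intervention(model, location: str, intervention: str):
    """Ask the model to compare crash frequency before and after a change."""
    prompt = (
        f"Location: {location}\n"
        f"Proposed change: {intervention}\n"
        "Predict the expected change in annual crash frequency "
        "and list the main contributing factors."
    )
    return model.generate(prompt)  # assumed to return a structured forecast

# Example: extending a signal phase by 10 seconds at one intersection
# forecast_intervention(model,
#                       location="N Charles St & E 33rd St",
#                       intervention="extend northbound green phase by 10 seconds")
```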

Building Trust in AI-Driven Safety

A major hurdle for the adoption of AI in high-stakes fields like public safety has been the “black box” problem, where the internal reasoning of the AI is not transparent to its users. Decision-makers are often reluctant to rely on predictions when they cannot understand how the conclusions were reached. To address this, the engineers of SafeTraffic Copilot designed the system to quantify its own certainty.

Confidence Scores and Accuracy

Instead of just providing a prediction, the tool also generates a confidence score, which tells the user how much faith to place in a given forecast. For instance, the system might accompany a forecast for a real-world scenario with a 70% confidence score, giving policymakers a clear sense of how much weight the prediction deserves. This feature is crucial for responsibly integrating AI into critical safety decisions. Yang emphasized that the technology is meant to augment human expertise, not replace it. “Rather than replacing humans, LLMs should serve as copilots—processing information, identifying patterns, and quantifying risks—while humans remain the final decision-makers,” Yang stated.
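One simple way to operationalize that “copilot” framing is to attach the confidence score to every forecast and route low-confidence cases to a human expert rather than acting on them automatically. The sketch below illustrates such a policy; the data structure and the 0.7 threshold are assumptions for the example, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    predicted_change: float   # e.g. -0.15 means 15% fewer crashes expected
    confidence: float         # model's self-reported certainty, 0.0 to 1.0

def review_policy(forecast: Forecast, threshold: float = 0.7) -> str:
    """Keep humans as the final decision-makers: low-confidence
    forecasts are routed to expert review, not acted on automatically."""
    if forecast.confidence >= threshold:
        return "present to planner with confidence score attached"
    return "flag for more data collection and human review"

print(review_policy(Forecast(predicted_change=-0.15, confidence=0.70)))
```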

Key Findings on Crash Causation

Beyond its predictive capabilities, the model has already yielded significant insights from its analysis of existing crash data. The AI identified that aggressive driving and alcohol impairment are the most dangerous factors on the road, contributing to three times more crashes than other causes. The system’s ability to analyze both individual risk factors and their combined effects provides a more granular understanding of why crashes happen. For example, it can assess how the risk from speeding is compounded by wet road conditions or low visibility.
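As a purely generic illustration of what “compounded” risk means, a simple multiplicative model shows how two moderate hazards can combine into a larger one. The multipliers below are invented for the example and are not figures from the study.

```python
# Invented multipliers for illustration only; these are not
# figures from the study.

RISK_MULTIPLIERS = {
    "speeding": 2.0,
    "wet_road": 1.5,
    "low_visibility": 1.4,
}

def combined_relative_risk(factors: list[str]) -> float:
    """Multiply per-factor risks to show how hazards can compound."""
    risk = 1.0
    for factor in factors:
        risk *= RISK_MULTIPLIERS.get(factor, 1.0)
    return risk

print(combined_relative_risk(["speeding"]))              # 2.0x baseline
print(combined_relative_risk(["speeding", "wet_road"]))  # 3.0x baseline
```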

This analytical depth allows for more targeted safety campaigns and enforcement strategies. If a particular corridor shows a high-risk profile for incidents related to distracted driving, authorities can implement specific countermeasures. The system’s continuous learning loop ensures that its insights become more refined over time; as more crash data is fed into the model, its predictions and analyses grow progressively more accurate.
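A continuous learning loop of that kind can be pictured as newly reported crashes flowing back into the training pool on a schedule. The sketch below is only a schematic; the study does not describe its retraining pipeline, so the function and commented-out steps are placeholders.

```python
# Schematic only: the study does not describe its retraining pipeline,
# so the steps below are placeholders.

training_data: list[dict] = []   # accumulated crash records

def ingest_new_crashes(new_records: list[dict]) -> int:
    """Fold newly reported crashes into the training pool; a real
    system would then fine-tune and re-evaluate the model."""
    training_data.extend(new_records)
    # fine_tune(model, training_data)    # placeholder retraining step
    # evaluate(model, holdout_records)   # placeholder accuracy check
    return len(training_data)
```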

Future Applications and Development

The research team plans to continue developing the model and hopes to see it applied in local communities to directly benefit residents. Yang has expressed a particular interest in using the tool to improve safety in Baltimore City, Baltimore County, and the state of Maryland. Because the model can be customized with additional information, it can be adapted for the unique traffic patterns, geography, and regulations of different cities and states.

The principles behind SafeTraffic Copilot could also serve as a blueprint for the responsible use of AI in other critical public sectors. By focusing on transparency, quantifying uncertainty, and keeping humans in control, this approach to AI development could be extended to fields like public health, emergency response, and infrastructure management. The ongoing research aims to further explore how these powerful AI models can be integrated safely and ethically into high-risk decision-making environments.
