Late last year, ATA was finally able to obtain a copy of a government-mandated review of FMCSA's SafeStat system. Conducted for DOT by the Oak Ridge National Laboratory (ORNL), the report is titled: “Review of the Motor Carrier Safety Status Measurement System (SafeStat).”
ORNL uncovered some significant flaws in the current SafeStat model — flaws that will not be readily resolved. The study was designed to assess the impact of late and/or missing data and whether the “expert judgment” method of weighting crashes and inspections provides results that are similar to methods using more sophisticated techniques.
SafeStat primarily uses three pieces of information to compare safety among trucking companies: crashes, roadside inspections and on-site compliance review results.
The original data is collected and sent to the Motor Carrier Management Information System (MCMIS) by various state and federal agencies. And therein lies the problem. There is no consistency among agencies in the way they handle the information, resulting in big differences in the quality and completeness of the data.
Inspection data, for example, is generally collected by trained enforcement officers, who enter the information into a computer during the roadside inspection process. The data is then uploaded directly to MCMIS. ORNL notes that the data is very reliable, with few errors or omissions. It's also timely, with the lion's share arriving at MCMIS within 21 days.
The process for collecting and transmitting crash information, however, suffers a number of ills. First, ORNL determined that a significant portion of the crash data is never sent to the MCMIS database. Exacerbating the issue is the fact that the extent of missing data varies from year to year and state to state.
Second, the time required for crash data to arrive at MCMIS is excessive. It takes 178 days for 50% of the data to arrive, but 497 days for the data to be 90% complete. Timeliness also varies considerably from one state to the next.
Why does the crash data suffer these ills? First, crash reports are filled out by a variety of state, county and municipal crash-scene responders, many of whom have not been trained in the federal truck crash reporting standards. Second, the majority of these reports are filled out on paper, which means they have to be re-keyed before they can be transferred to MCMIS. Third, accident-scene responders often keep the reports until after their investigations have been completed.
Not surprisingly, ORNL determined that the crash data issues of completeness and timeliness significantly compromise the SafeStat rankings. The researchers quantified the extent of the problem by comparing the results of a simulated rerun of the March 2003 SafeStat ranking with the original results published on Volpe's SafeStat website, www.ai.volpe.dot.gov.
The findings were troubling. First, researchers determined that when the late data was taken into consideration, an additional 1,297 trucking companies were ranked “at risk” (SafeStat categories A & B). This amounts to 33% of the total identified in Volpe's original analysis. Second, the late data also led to the removal of 595 companies (15%) from the “at risk” category. Netting those two effects (33% added minus 15% removed), we can say that because of late-arriving data, the SafeStat model fails to identify about 18% of high-risk carriers. This is not good news.
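For readers who want to check the arithmetic, the sketch below reproduces the percentages from the counts ORNL reported. The original at-risk total is not stated in the article; it is inferred here from the fact that 1,297 carriers equal 33% of it, so treat that figure as an assumption.

```python
# Back-of-envelope check of the ORNL percentages.
added = 1297    # carriers newly ranked "at risk" once late data arrived
removed = 595   # carriers dropped from "at risk" by the late data

# ORNL says the 1,297 additions equal 33% of the original at-risk total,
# which implies an original count of roughly 1297 / 0.33 (an inferred
# figure, not one the article states).
original_at_risk = round(added / 0.33)

added_pct = 100 * added / original_at_risk      # ~33%
removed_pct = 100 * removed / original_at_risk  # ~15%
net_missed_pct = added_pct - removed_pct        # ~18%

print(f"implied original at-risk count: {original_at_risk}")
print(f"added {added_pct:.0f}%, removed {removed_pct:.0f}%, "
      f"net not identified {net_missed_pct:.0f}%")
```

The net figure matches the article's 18% claim: the 33% of high-risk carriers missed by the original ranking, offset by the 15% that were removed once the complete data arrived.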
Next month, I'll look at what we can do to improve the SafeStat system.
Jim York is the manager of Zurich Service Corp.'s Risk Engineering Transportation Team, based in Schaumburg, IL.