There's been a great deal of interest in government safety data recently, especially since many insurance underwriting teams now include SafeStat evaluations in their pricing calculations.
A number of cutting-edge fleets have begun to realize that SafeStat data contains a treasure trove of information they can use to differentiate themselves from their competitors. We've been fielding an increasing number of questions about the sources, interpretation and use of safety data, including whether reporting delays could affect SafeStat rankings. (A reporting delay is the amount of time between when a roadside event takes place and when it's entered into the federal truck safety database.)
To find the answer, we picked 15 large to medium-sized trucking companies at random. On a given SafeStat run, we tabulated the most recent six months of crash, inspection and moving violation data for each fleet. We looked at that same information on the next SafeStat run and compared how many new events had been added to the months we'd already tabulated.
For example, the September 2000 SafeStat revealed that one carrier had 24 driver-out-of-service incidents during August 2000. The March 2001 SafeStat showed 34 incidents for that same month. In other words, 10 events, or 29%, were not posted in the original report.
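The comparison described above reduces to simple arithmetic: for the same month, subtract the event count in the earlier SafeStat run from the count in the later run, then divide by the later total. A minimal sketch, using the example carrier's numbers (the function name is ours, not SafeStat's):

```python
def late_report_share(first_run_count, later_run_count):
    """Fraction of a month's events that were missing from the
    earlier SafeStat run but appeared in a later one."""
    added = later_run_count - first_run_count
    return added / later_run_count

# August 2000 driver-out-of-service events for one carrier:
# 24 posted by the September 2000 run, 34 by the March 2001 run.
share = late_report_share(24, 34)  # 10 of 34 events, about 29%
```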
Our findings indicate that driver-out-of-service and moving-violation events are reported relatively quickly: 81% were reported within 30 days, 8% were delayed 60 days, and 4% were delayed 90 days.
Surprisingly, 44% of vehicle-out-of-service events were delayed 30 days, and 20% by 60 days or more. State-reported crashes suffer the longest delays. Just over 60% were delayed 60 days or more. Even worse, it appears that about 10% of crashes don't get into the system for at least five months.
Why this disparity? First, crash reports are prepared in hard copy format, while states use laptop computers and special software to enter inspection results at the roadside. Second, once the crash reports are prepared, they must pass through a longer chain of command. Third, once the reports are forwarded to the appropriate agency, they must be re-keyed prior to being uploaded to the federal database.
The problem is compounded because SafeStat uses a “time” value in weighting events. Crashes that occur within six months of SafeStat's ranking date carry a weight of 3, those within seven to 18 months carry a weight of 2, and those that are 19 to 30 months old carry a weight of 1. Given the snail's pace of accident reporting, very few carriers are assessed the true negative impact of all recent (within six months) crashes.
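The time-weighting tiers described above can be sketched as a simple lookup. The tier boundaries come from the text; the function name and the treatment of crashes older than 30 months (weight 0, i.e., dropped) are our assumptions:

```python
def crash_time_weight(age_in_months):
    """SafeStat time weight for a crash, per the tiers above:
    0-6 months -> 3, 7-18 months -> 2, 19-30 months -> 1,
    older crashes assumed to fall out of scoring entirely."""
    if age_in_months <= 6:
        return 3
    elif age_in_months <= 18:
        return 2
    elif age_in_months <= 30:
        return 1
    return 0

# A crash that occurred 5 months before the ranking date should
# carry full weight; if its report is still missing on that run and
# only appears when the crash is 8 months old, it is never scored
# at weight 3 on any run.
weight_if_on_time = crash_time_weight(5)  # 3
weight_when_late = crash_time_weight(8)   # 2
```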
Smaller carriers may be affected disproportionately because they have fewer vehicles and thus fewer accidents to report. A single delay that changes the weighting factor could result in a large SafeStat difference. A large carrier's ranking would be relatively unaffected by one late report since they're based on a large number of vehicles.
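The small-carrier sensitivity argument can be illustrated with hypothetical numbers (the fleet sizes and point totals below are ours, chosen only to show the scale effect):

```python
def relative_shift(total_weighted_points, single_event_change):
    """Fractional change in a carrier's weighted event total when
    one late report moves a crash by a single weight point."""
    return single_event_change / total_weighted_points

# Hypothetical: a small fleet carrying 5 weighted crash points vs a
# large fleet carrying 500. One crash slipping from weight 3 to 2:
small_fleet_swing = relative_shift(5, 1)    # 0.20, a 20% swing
large_fleet_swing = relative_shift(500, 1)  # 0.002, a 0.2% swing
```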
How can you use this information to help your fleet? First of all, make sure your safety professionals understand reporting delays. If you're using carrier profiles to identify your bad apples, make sure you review the previous 90-day period every time you get a new report.
One of our clients fell into the habit of only paying attention to the most recent 30-day period when they looked at the monthly subscription profiles. Consequently, they missed a “driving while disqualified” incident altogether because there had been a 50-day reporting delay. It's also important to be aware of crash reporting delays. Any accident trend analysis that includes the most recent six months of state-reported crashes will be fraught with inconsistencies.
Our industry deserves more expeditious reporting of crash data. The improvements we have seen in inspection reporting are primarily the result of federal seed projects and benchmarking standards. We must foster similar innovations to make the necessary improvements to crash reporting.
Jim York is the manager of Zurich North America's Risk Engineering Team, based in Schaumburg, IL.