March 12, 2015
One of the most important lessons for federal agencies caught asleep at the wheel in a safety crisis is: do everything you can to validate your earlier, poor decisions that led to the crisis. Spare no expense in proving yourself right, while appearing to take a stern stance against industry. By no means should you ever focus on the field failures.
Thus we get yesterday’s Federal Highway Administration report on the controversial ET-Plus energy-absorbing guardrail end terminal, purporting to show that its most recent tests of the highway device are valid, and that there is only one version of the ET-Plus. The report, Task Force on ET-Plus 4” Dimensions, should be viewed as a precursor to the FHWA declaring there is nothing wrong with the ET-Plus, and that it was right all along to ignore the manufacturer’s failure to disclose important dimensional changes to the device, in violation of federal regulations.
It elides the central question: What is causing the ET-Plus to fail in the field, leading to injuries and deaths? When a vehicle strikes a guardrail, the rail is supposed to be extruded through this type of energy-absorbing end terminal into a flat metal ribbon that curls away from the vehicle. In some versions of the ET-Plus, however, the rail instead folds into a spear that penetrates the vehicle and causes severe harm to its occupants.
For those who have not been following the Trinity saga: In 2012, Joshua Harman, the owner of a competitor company, alleged that Trinity Industries, a Dallas-based manufacturer of roadside safety equipment, had altered the critical dimensions of its ET-Plus energy-absorbing end terminal. Trinity officials made the change in 2005, as revealed in internal company memos, to save money on labor and materials, and deliberately did not inform the FHWA, which certifies that highway safety equipment has been properly tested and has not been altered in design or manufacture. Harman sued Trinity on behalf of the federal government under the qui tam provisions of the False Claims Act. In October, a federal jury found that Trinity had defrauded the government and awarded $175 million in damages, an amount subject to trebling under the Act.
The conclusion of the trial forced the FHWA – which had previously dismissed the allegations as insignificant fallout from a business dispute – to take action. It ordered Trinity to re-run the high-speed crash tests, and a couple of months ago the Southwest Research Institute (SwRI) conducted eight such tests. All went well until the last run, in which the guardrail jammed in the feeder chute and folded in half, almost penetrating the Geo Prizm test vehicle. This is what the FHWA has to make go away. Critics had already faulted the tests for omitting a low-angle impact condition that mimicked the field failures. They have also alleged that Trinity – in recognition of the problem – made further undisclosed changes to the ET-Plus around 2012 to improve its performance.
Yesterday’s Task Force on ET-Plus 4” Dimensions is a survey of the dimensions of 1,048 guardrails measured in five states. The 15-page report answered four questions: whether the devices crash-tested at SwRI are representative of those installed in the field; whether the right size guardrail systems were tested; whether the dimensional changes affect field performance; and whether the worst-case configuration had to be tested.
The FHWA answers are: yes, they are representative; yes, the right size guardrail systems were tested; no, we don’t know if changing the dimensions affects the field performance; and we don’t have to test the worst-case scenario, so there.
Here are the reality-based answers:
The sample, described as “for all practical purposes, a random sample,” is not a random sample. In statistics, a sample is random or it is not; there is no “for all practical purposes” category of randomness. Nor could the survey control for date of manufacture, because only one state, South Carolina, kept records. And apparently those records weren’t all that good, because the measurers were out looking for “shiny” guardrails as a proxy for date of manufacture.
Surely, the guardrails tested by SwRI were representative of some installed in the field, but that doesn’t get at the only question that matters: Do the dimensions affect performance?
And for that answer, we’ll quote the task force: “The task force could not determine, based on the data or material it reviewed, whether or not dimensional variances beyond the design tolerances, either individually or in combination, would affect the performance of the ET-Plus device.”
Really? Then why the frig were you spending so much time on our nation’s highways measuring guardrails with rulers?
Finally, what are manufacturers required to test for? According to no less an authority than the FHWA itself:
“The developer should also carefully choose which version of the device to be tested. If a number of different sizes are proposed for use, then the “worst case” conditions, if predictable, should be tested. It may be that “worst case” conditions are not obvious and more than one version of a device will need to be tested. The FHWA Office of Engineering is willing to review a proposed test program to assist in determining an adequate number of tests to fully qualify a device and its variants.”
The FHWA has been issuing and re-issuing this guidance since 1997. It would be nice if agency officials followed their own advice.
We have dead bodies and severed limbs from close encounters with ET-Plus energy-absorbing end terminals. No amount of measuring how many angels can dance in the exit gap is going to get the agency or Trinity around that rather graphic evidence.
U.S. Senator Richard Blumenthal (D-Conn.), who has done his best to keep the heat on the agency, issued a brief statement that nicely sums it up. So we’ll give him the last word:
“As demonstrated again today, FHWA’s guardrail testing has been consistently dumbfounding and deficient. FHWA repeatedly relies on guesswork, unsupported assumptions, and arbitrary choices. The agency neglected key measurements, rejected critical manufacturer information and completely ignored devices used in New England and the Northeast. FHWA’s lack of transparency and persistently-flawed methodology leaves the fundamental question: Are the 200,000 ET-Plus devices on our roads safe? After years of delay from FHWA, months of insufficient and outdated testing, failure to analyze real-world data, and lack of transparency, we need answers from DOT and we need them now.”