When it comes to disasters, how bad is bad? Was an earthquake more damaging than a flood? These aren’t just semantic questions: knowing the relative severity of disasters could help guide how we spend money, both on prevention and on preparing to deliver aid after a disaster. Because it’s hard to be rational in the face of suffering, a helpful new rating system puts disasters on a scale, using 12 different parameters, letting us compare their severity more objectively.
The new system, proposed in the journal PLOS Currents Disasters, can help public health officials respond appropriately to disasters.
Developing disaster metrics across the many aspects of disaster medicine and public health preparedness is essential, said Johns Hopkins University professor and study author Jamil D. Bayram. “Metrics are the cornerstone for quantitative and objective measurement, which should be the guiding compass in the decision-making process regarding resource allocation, funding, training, and education.” He added that metrics can reduce subjective variation and error in assessment, help officials measure progress over time, and allow comparison across complex emergencies.
Bayram and his co-authors proposed a Public Health Impact Severity Scale that uses 12 disaster parameters, ranging from the number of excess deaths and the number of cases of acute communicable disease to the quality and quantity of water and the amount of gender-based violence.
Each parameter was given a severity score; for example, water quality was rated on a scale of 1 to 10 based on the percentage of water samples polluted with fecal coliform bacteria. The authors note that in 2009, a sample of 240 drinking water sources in Nyala City in South Darfur, Sudan, found 45.2% of sources contaminated with fecal coliforms, which would score a 5 out of 10 on the scale.
Other parameters had their own scales. The level of sanitation facilities, for example, was rated from 0 to 5: a 5 represented the complete lack of sanitary facilities, while a 0 signaled an intact sanitation system.
Five of the 12 parameters, worth a maximum of 40 out of 100 points, reflect the health status of the community affected by a complex humanitarian emergency, said Bayram. He said this subset was given more weight by design because health status reflects the end result of many other parameters.
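The scoring scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the published scale: the linear 10%-per-point banding for water quality is an assumption (it happens to reproduce the 45.2% → 5/10 example), and the parameter names, weights, and sample values are hypothetical.

```python
def water_quality_score(pct_contaminated):
    """Map the percentage of water samples with fecal coliforms to a 1-10 score.

    Assumes a simple linear banding (each 10% adds one point); the real
    scale's cutoffs may differ.
    """
    return max(1, min(10, int(pct_contaminated // 10) + 1))

def total_severity(scores, max_scores):
    """Sum per-parameter scores and normalize to a 0-100 severity total."""
    return round(100 * sum(scores.values()) / sum(max_scores.values()), 1)

# Hypothetical three-parameter example (the full scale uses 12 parameters,
# with health-status parameters weighted at 40 of 100 points).
scores = {
    "water_quality": water_quality_score(45.2),  # 5, per the Nyala City example
    "sanitation": 4,        # hypothetical value on the 0-5 sanitation scale
    "excess_deaths": 7,     # hypothetical value on an assumed 0-10 scale
}
max_scores = {"water_quality": 10, "sanitation": 5, "excess_deaths": 10}
print(total_severity(scores, max_scores))  # 64.0
```

Normalizing by the maximum possible points keeps partial assessments comparable even when not all 12 parameters have been measured yet.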
So what does the metric actually say? Bayram cautioned that there are several steps to get through before it can be put into practice. The metric needs to be discussed and evaluated by the disaster-response community. Once validated, the metrics will be pilot-tested for feasibility and then revised based on the results. Finally, large-scale application of the metrics with robust statistical analysis will establish their reliability and validity, he said. The parameters could change in the future to better reflect real situations.
After assessing a few of the metrics for the Japan and Haiti earthquakes, Bayram said it was clear that the Haiti disaster had far more impact on the public health sector than the one in Japan. In the future, such a scale might make it easier to evaluate situations immediately and plan a better response.