Detecting Damage After Natural Disasters

Implemented by Peltarion (Sweden)
Updated 3 years ago

Following a natural disaster, one of the most critical tasks is to determine the priority areas for emergency response teams. A crucial component of this determination is the degree of damage an area has received, and quickly identifying the most severely affected areas can make a massive difference in lives saved.

Computer vision - an area of deep learning that focuses on analyzing images - can improve the speed and efficiency of emergency response by identifying damaged areas in satellite images.

Due to climate change, natural disasters are rapidly increasing in severity, costing over $200 billion in global damages each year. These disasters can cause lasting economic and agricultural disruptions, displace entire populations, and result in decades of community recovery. While natural disasters cannot wholly be prevented, the efficiency of emergency response can be improved.

The current approach to damage identification relies on human experts classifying images by hand. This is not only time-consuming, but it also depends on numerous personnel with the requisite training. In contrast, deep computer vision models can be trained quickly and classify images with greater speed and consistency.

An image classification model predicts the probability that a given image has a particular label. These labels could be as simple as “damaged” and “undamaged,” or more detailed, reflecting various types or degrees of disaster damage. During training, the model learns the complex relationship between the characteristics of an image and the corresponding label. For example, the model could output a 34% probability that an image has the “undamaged” label and a 66% probability that it has the “damaged” label. We can then tune the threshold the model uses to decide whether a probability is high enough: the higher the threshold, the higher the predicted probability will need to be for the model to give an image that specific label. Figuring out the “right” threshold value will largely depend on your use case and needs.
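The thresholding step above can be sketched in a few lines. This is a minimal illustration, not part of any particular platform's API; the function name and default threshold are assumptions chosen for the example.

```python
# Hypothetical post-processing of a model's output: the model predicts a
# probability that an image is "damaged", and a tunable threshold converts
# that probability into a label. The 0.5 default is illustrative only.
def apply_threshold(p_damaged: float, threshold: float = 0.5) -> str:
    """Return the assigned label given the predicted damage probability."""
    return "damaged" if p_damaged >= threshold else "undamaged"

# With the default threshold, a 66% damage probability gets the label.
print(apply_threshold(0.66))                  # damaged
# Raising the threshold makes the model more conservative with that label.
print(apply_threshold(0.66, threshold=0.8))   # undamaged
```

Note how the same prediction can yield different labels depending on the threshold, which is why the choice should be driven by the relative cost of false positives and false negatives in your response workflow.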

To ensure a successful model, the data should reflect the characteristics of the typical data gathered following a natural disaster. For example, this could be high or low altitude satellite imagery reflecting damaged and undamaged areas or it could be imagery of different types of damage like flooding or debris. The images should all be the same shape and, ideally, we should have similar numbers of images for each label. In order for the model to learn the connections between the images and the disaster types, you’ll need to have a table that lists the image names and the corresponding labels.
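A simple way to build the table connecting image names to labels is a two-column CSV index. This is a sketch under assumed file names and labels; nothing about the format beyond "image name plus label" is prescribed by the use case.

```python
import csv
from collections import Counter

# Hypothetical index rows: image file names and their labels. In practice
# these would come from your annotated satellite imagery.
rows = [
    ("tile_0001.png", "damaged"),
    ("tile_0002.png", "undamaged"),
    ("tile_0003.png", "damaged"),
    ("tile_0004.png", "undamaged"),
]

# Write the index table the training process will read.
with open("index.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "label"])
    writer.writerows(rows)

# Quick balance check: ideally each label has a similar number of images.
label_counts = Counter(label for _, label in rows)
print(label_counts)
```

Checking the label counts up front is an easy way to catch class imbalance before training rather than after.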

In the Evaluation view, you’ll be able to take a closer look at how your model is performing. In particular, you’ll want to look at the rate of false positives and false negatives. While priorities may differ across emergency response organizations, in general, the consequences of falsely labeling a damaged area as undamaged are higher than labeling an undamaged area as damaged. If instead you decided to use multiple labels in your model (e.g. types of damage), you’ll want to look at the model’s performance for each label. Are there certain types of damage that your model is better at identifying than others? Understanding these results will help you make the best decisions using the model’s output.
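The false-positive/false-negative trade-off described above can be made concrete with a small counting sketch. The labels and predictions here are invented for illustration; "damaged" is treated as the positive class.

```python
# Minimal evaluation sketch: count false positives (undamaged areas flagged
# as damaged) and false negatives (damaged areas missed) for one label.
def confusion_counts(y_true, y_pred, positive="damaged"):
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return fp, fn

# Made-up ground truth and predictions for four image tiles.
y_true = ["damaged", "undamaged", "damaged", "undamaged"]
y_pred = ["damaged", "damaged", "undamaged", "undamaged"]

fp, fn = confusion_counts(y_true, y_pred)
print(f"false positives: {fp}, false negatives: {fn}")
```

In this toy example one undamaged area was flagged and one damaged area was missed; for emergency response, the missed damage (the false negative) is usually the costlier error, which argues for a lower threshold on the "damaged" label.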

Attributes

Vision
Clustering, Image Analysis
Image Data

Resources

If you’re interested in building this use case, our car damage classifier tutorial is a great place to start. This tutorial walks you through how to build an image classifier that can predict multiple types of damage to cars.

You can also check out our cheat sheets in the Knowledge Center for single label image classification and multi-label image classification.