Automated multiple concrete damage detection using instance segmentation deep learning model

Byunghyun Kim, Soojin Cho

Research output: Contribution to journal › Article › peer-review

50 Scopus citations


In many developed countries with a long history of urbanization, there is an increasing need for automated computer vision (CV)-based inspection to replace conventional labor-intensive visual inspection. This paper proposes a technique for the automated detection of multiple types of concrete damage based on a state-of-the-art deep learning framework, Mask R-CNN, developed for instance segmentation. The structure of Mask R-CNN, which consists of three stages (region proposal, classification, and segmentation), is optimized for multiple concrete damage detection. The optimized Mask R-CNN is trained with 765 concrete images including cracks, efflorescence, rebar exposure, and spalling. The performance of the trained Mask R-CNN is evaluated on 25 actual test images containing damage as well as environmental objects. Two types of metrics are proposed to measure localization and segmentation performance. On average, 90.41% precision and 90.81% recall are achieved for localization, and 87.24% precision and 87.58% recall for segmentation, which indicates the excellent field applicability of the trained Mask R-CNN. This paper also qualitatively discusses the test results, explaining how the architecture of Mask R-CNN, which is optimized for general object detection, could be modified in further research to better detect the long and slender shapes of cracks, rebar exposure, and efflorescence.
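The abstract reports precision and recall for both localization and segmentation. As a minimal sketch of how such figures are derived from detection counts (the counts below are illustrative placeholders, not the paper's data):

```python
# Hedged sketch: computing precision and recall from counts of
# true positives (TP), false positives (FP), and false negatives (FN),
# as is standard for detection metrics like those in the abstract.

def precision_recall(tp, fp, fn):
    """Return (precision, recall) from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical example: a detector reports 47 damage regions, of which
# 43 match ground truth; 48 damage regions exist in total.
p, r = precision_recall(tp=43, fp=4, fn=5)
print(f"precision={p:.2%}, recall={r:.2%}")  # precision=91.49%, recall=89.58%
```

The paper averages such per-class scores over the four damage types to obtain the reported localization and segmentation figures.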

Original language: English
Article number: 8008
Pages (from-to): 1-17
Number of pages: 17
Journal: Applied Sciences (Switzerland)
Issue number: 22
State: Published - 2 Nov 2020


Keywords

  • Concrete crack
  • Deep learning
  • Efflorescence
  • Mask R-CNN
  • Multiple damage
  • Rebar exposure
  • Spalling


