CN116434085B - Pit identification method, device, equipment and medium based on texture feature analysis - Google Patents


Info

Publication number
CN116434085B
CN116434085B (application CN202310267566.XA)
Authority
CN
China
Prior art keywords
pit
image
super
pixel
center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310267566.XA
Other languages
Chinese (zh)
Other versions
CN116434085A (en)
Inventor
潘勇
陈伟乐
李毅
黄少雄
郑晓东
汪新天
邹威
兰建雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Nanyue Transportation Investment & Construction Co ltd
Guangzhou Tianqin Digital Technology Co ltd
Bay Area Super Major Bridge Maintenance Technology Center Of Guangdong Highway Construction Co ltd
Original Assignee
Guangdong Nanyue Transportation Investment & Construction Co ltd
Guangzhou Tianqin Digital Technology Co ltd
Bay Area Super Major Bridge Maintenance Technology Center Of Guangdong Highway Construction Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Nanyue Transportation Investment & Construction Co ltd, Guangzhou Tianqin Digital Technology Co ltd, Bay Area Super Major Bridge Maintenance Technology Center Of Guangdong Highway Construction Co ltd filed Critical Guangdong Nanyue Transportation Investment & Construction Co ltd
Priority to CN202310267566.XA priority Critical patent/CN116434085B/en
Publication of CN116434085A publication Critical patent/CN116434085A/en
Application granted granted Critical
Publication of CN116434085B publication Critical patent/CN116434085B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Computation (AREA)
  • Remote Sensing (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pit identification method, device, equipment and medium based on texture feature analysis. The method comprises the following steps: carrying out HSV-based background recognition on the pit image to obtain a background mask image; performing super-pixel division on the background mask image; acquiring gray level co-occurrence matrices of the super-pixels in four directions, calculating the energy and entropy of the gray level co-occurrence matrices, taking the energy and entropy as texture features of the super-pixels, and clustering the super-pixels by means of these texture features; and establishing a marker image, selecting suspected pit areas by a morphological method, and removing pseudo pit areas to realize pit identification. The invention addresses the problems of inaccurate outlines and low positioning precision in existing pit identification: during pit identification, HSV-based background recognition effectively removes complex background content from the pit image, the subsequent super-pixel step preserves the accurate edges of the pit, and the use of texture features greatly improves the accuracy of pit identification.

Description

Pit identification method, device, equipment and medium based on texture feature analysis
Technical Field
The invention relates to a pit identification method, a pit identification device, pit identification equipment and pit identification media based on texture feature analysis, and belongs to the technical field of computer image processing.
Background
After an asphalt highway is built, various pavement diseases occur under the influence of rain and snow, high temperature, overloading and other conditions, and one of the most serious is the pit (pothole). Pits can cause vehicles travelling at high speed to jolt, leading to accidents. Traditional road maintenance relies on manual inspection, which disrupts road traffic and carries safety risks, so automated detection has become a trend. Automated inspection can be carried out with an inspection vehicle or an unmanned aerial vehicle. At present, automatic inspection based on unmanned aerial vehicles has become one of the most promising means of automated detection: the unmanned aerial vehicle does not affect normal traffic, produces stable images with high definition, and carries a positioning device, so road surface diseases, especially pits, can be detected automatically by applying image processing methods to the unmanned aerial vehicle images.
A pit detection method based on image processing works as follows: after the unmanned aerial vehicle image is acquired, it is analyzed with image processing techniques and the pit is detected automatically. Among existing image-processing-based pit detection approaches, some identify pits from image texture information using an SVM; some researchers comprehensively identify diseases from the shape information of the image content through heuristic decision logic; and some identify pits using binarization. These existing methods use texture information alone, use only shape information, or apply clustering directly, so they cannot fully exploit the texture, color and shape information of the image, and pit identification is therefore not accurate enough.
Road surface images captured by unmanned aerial vehicles often contain vehicles, trees, soil, standing water, cracks and pits. The image color is not uniform, so binarization methods easily lose image information; different image contents may share the same color yet have different textures; and larger cracks may have texture similar to that of a pit but a different shape.
Disclosure of Invention
In view of the problems of inaccurate outlines and low positioning accuracy in existing pit identification, the invention provides a pit identification method, device, computer equipment and storage medium based on texture feature analysis, suitable for image processing operations that detect pits on roads and bridges and calculate their widths based on super-pixel texture feature analysis of unmanned aerial vehicle images.
A first object of the present invention is to provide a pit identification method based on texture feature analysis.
A second object of the present invention is to provide a pit identification device based on texture feature analysis.
A third object of the present invention is to provide a computer device.
A fourth object of the present invention is to provide a storage medium.
The first object of the present invention can be achieved by adopting the following technical scheme:
a pothole recognition method based on texture feature analysis, the method comprising:
acquiring a pit image;
carrying out HSV-based background recognition on the pit image to obtain a background mask image;
performing super-pixel division on the background mask image;
acquiring gray level co-occurrence matrixes of the super pixels in four directions, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
and establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area by a morphological method, and removing a pseudo pit area to realize pit identification.
Further, the performing HSV-based background recognition on the pit image to obtain a background mask image specifically includes:
HSV color space transformation is carried out on the pit images;
primarily identifying the background of the pit image based on the HSV value;
and carrying out outline detection on the area occupied by the background color based on the convex hull to obtain a background mask image.
Further, the background of the pit image is preliminarily identified based on the HSV values, using the following formula:
wherein min is the lower limit of the basic color range and max is the upper limit; the pixel points with S between 0 and 0.1176 and V between 0.8667 and 1 are white areas; the pixels with H between 0.1950 and 0.4318, S between 0.1686 and 1, and V between 0.1804 and 1 are green areas.
Further, the performing superpixel division on the background mask image specifically includes:
converting the background mask image into a CIELab format;
selecting the super-pixel center of the converted image by a chessboard method to make the super-pixels uniform;
after the super-pixel center is selected, super-pixel division is realized according to a simple linear iterative clustering method.
Further, the super-pixel center of the converted image is selected according to the following formula:
wherein a point is extracted from each s×s region of the image as an initial center, center_x represents the center coordinate in the vertical direction, and center_y represents the center coordinate in the horizontal direction; M and N are the width and height of the image, and m and n denote that the current center point lies in the (m, n)-th image block; dex_1, dex_2, dey_1 and dey_2 are the discrimination parameters of the formula, calculated as follows:
when the center is the center point of the last block in the horizontal direction, dex_1 = 0 and dex_2 = 1; otherwise, dex_1 = 1 and dex_2 = 0; when the center is in the last block in the vertical direction, dey_1 = 0 and dey_2 = 1; otherwise, dey_1 = 1 and dey_2 = 0; the range of values of m and n is
Further, the gray level co-occurrence matrix of the super pixel in four directions is obtained, and the following formula is shown:
wherein I_s represents a super-pixel region, m_1 and m_2 represent coordinates of points in the super-pixel region, and i and j represent gray levels;
and calculating the energy and entropy of the gray level co-occurrence matrix, wherein the energy and entropy of the gray level co-occurrence matrix are represented by the following formula:
wherein S_E represents the energy, which can be used to measure the consistency of the textures of different super-pixel regions, and S_cov is the statistic that accounts for the complexity inside the super-pixel, i.e. the entropy.
Further, according to the super-pixel clustering result, a mark image is established, a suspected pit area is selected through a morphological method, and a pseudo pit area is removed, so that pit identification is realized, and the method specifically comprises the following steps:
based on the super-pixel clustering result, a marked image is established, and the areas with similar texture features are set to be the same label;
selecting, from the marked image, a region whose area ratio is smaller than a first preset ratio as a suspected pit region by a morphological method;
among the suspected pit regions, taking a region whose area ratio is smaller than a second preset ratio as a pseudo pit region, wherein the second preset ratio is smaller than the first preset ratio;
and removing the pseudo pit area to obtain a final pit area.
The second object of the invention can be achieved by adopting the following technical scheme:
a pothole recognition device based on texture feature analysis, the device comprising:
the image acquisition module is used for acquiring pit images;
the background recognition module is used for carrying out HSV-based background recognition on the pit image to obtain a background mask image;
the super-pixel dividing module is used for performing super-pixel division on the background mask image;
the clustering module is used for acquiring gray level co-occurrence matrixes in four directions of the super pixels, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
and the pit identification module is used for establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area through a morphology method, removing a pseudo pit area and realizing pit identification.
The third object of the present invention can be achieved by adopting the following technical scheme:
the computer equipment comprises a processor and a memory for storing a program executable by the processor, and is characterized in that the pit identification method based on texture feature analysis is realized when the processor executes the program stored by the memory.
The fourth object of the present invention can be achieved by adopting the following technical scheme:
a storage medium storing a program which, when executed by a processor, implements the pit identification method based on texture feature analysis described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, HSV color space conversion is carried out on the pit image, and the convex hull of the area where the background color is located is calculated, so that the influence of the complex background on pit identification is effectively removed. The gray level co-occurrence matrix is utilized to obtain texture feature vectors, four directions are considered during matching, accurate pits and pavement areas are obtained, and finally efficient pit identification is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a simplified flowchart of a pit identification method based on texture feature analysis according to embodiment 1 of the present invention.
Fig. 2 is a detailed flowchart of a pit identification method based on texture feature analysis according to embodiment 1 of the present invention.
Fig. 3 is a flowchart of preprocessing based on super pixel division in embodiment 1 of the present invention.
Fig. 4 is a flow chart of the extraction of super pixel texture features based on gray co-occurrence matrix in embodiment 1 of the present invention.
Fig. 5 to 7 are diagrams showing examples of pit identification results according to embodiment 1 of the present invention.
Fig. 8 is a block diagram showing the structure of a pit identification device based on texture feature analysis according to embodiment 2 of the present invention.
Fig. 9 is a block diagram showing the structure of a computer device according to embodiment 3 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
Example 1:
This embodiment provides a pit identification method based on texture feature analysis. When detecting a pit, the method uses color prior information to preliminarily select, by an HSV method, the area where the basic background colors are located, and then uses super-pixels to preserve the accurate edges of the pit, thereby reducing the difficulty of pit identification and accurate edge acquisition; texture features for identifying the pit are then obtained by calculating the gray level co-occurrence matrix and its statistics, which improves the precision of pit identification.
As shown in fig. 1 and 2, the pit identification method of the present embodiment includes the steps of:
s201, acquiring pit images.
The pit image in this embodiment may be acquired directly by a camera of the unmanned aerial vehicle, or may be retrieved from a database, for example: pit images shot by the unmanned aerial vehicle are stored in a database in advance, and the pit image is then retrieved from the database.
S202, carrying out HSV-based background recognition on the pit image to obtain a background mask image.
Step S202 performs HSV-based background recognition. When a pit image free from the influence of a complex background is subjected to feature analysis, more accurate pit recognition can be performed according to the texture information in the image. The images used in this embodiment are road images that often contain roadside trees, white zebra crossings and the like, whose colors differ greatly from those of the pit and the road but whose texture features may be similar, so background recognition is performed on the pit image first. In the HSV color space, the basic colors can be judged from the values of hue (H), saturation (S) and value (V), so the difference between the pit color and the surrounding background colors can effectively improve the efficiency and accuracy of pit identification.
Further, the step S202 specifically includes:
s2021, HSV color space transformation is carried out on the pit image.
In this embodiment, the formula of the HSV color space transform is as follows:
V=Dmax
wherein R', G', B' represent the normalized R, G and B channels, Dmax represents the maximum of the normalized color channels, and Δ represents the difference between this maximum and the corresponding minimum Dmin. They are calculated as follows:
R'=R/255
G'=G/255
B'=B/255
Dmax=max(R',G',B')
Dmin=min(R',G',B')
Δ=Dmax-Dmin
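Only the value channel V = Dmax is reproduced in this text. As a reference only, a standard RGB-to-HSV conversion consistent with the definitions of Dmax, Dmin and Δ above is sketched below; this is a reconstruction, not necessarily the exact form used in the patent, and H is written normalized to [0, 1] so that it matches the threshold ranges quoted in step S2022.

```latex
% Standard RGB-to-HSV conversion (reconstruction); H normalized to [0, 1].
V = D_{\max}, \qquad
S = \begin{cases} \Delta / D_{\max}, & D_{\max} \neq 0 \\ 0, & D_{\max} = 0 \end{cases}, \qquad
H = \frac{1}{6} \times
\begin{cases}
\bigl((G' - B')/\Delta\bigr) \bmod 6, & D_{\max} = R' \\
(B' - R')/\Delta + 2,                 & D_{\max} = G' \\
(R' - G')/\Delta + 4,                 & D_{\max} = B' \\
0,                                    & \Delta = 0
\end{cases}
```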
s2022, primarily identifying the background of the pit image based on the HSV value.
In this embodiment, the background of the pit image is primarily identified based on the HSV values, using the following formula:
wherein min is the lower limit of the basic color range and max is the upper limit; the pixel points with S between 0 and 0.1176 and V between 0.8667 and 1 are white areas; the pixels with H between 0.1950 and 0.4318, S between 0.1686 and 1, and V between 0.1804 and 1 are green areas; the gray areas are mainly retained, and the remaining regions can be removed by the above formula.
S2023, carrying out outline detection on the area occupied by the background color based on the convex hull to obtain a background mask image.
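As an illustration of steps S2021 to S2023, a minimal sketch using OpenCV is given below. The function name background_mask is hypothetical, the thresholds are the ranges quoted above rescaled to OpenCV's 8-bit convention (H in [0, 180), S and V in [0, 255]), and filling convex hulls of external contours is one plausible realization of the convex-hull outline detection, not necessarily the patent's exact procedure.

```python
import cv2
import numpy as np

def background_mask(bgr_image):
    """Return a mask covering the convex hulls of white/green background regions."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)  # H in [0, 180), S and V in [0, 255]

    # White areas: S in [0, 0.1176], V in [0.8667, 1]
    white = cv2.inRange(hsv,
                        np.array([0, 0, int(0.8667 * 255)], dtype=np.uint8),
                        np.array([180, int(0.1176 * 255), 255], dtype=np.uint8))
    # Green areas: H in [0.1950, 0.4318], S in [0.1686, 1], V in [0.1804, 1]
    green = cv2.inRange(hsv,
                        np.array([int(0.1950 * 180), int(0.1686 * 255), int(0.1804 * 255)], dtype=np.uint8),
                        np.array([int(0.4318 * 180), 255, 255], dtype=np.uint8))
    background = cv2.bitwise_or(white, green)

    # Outline detection of the background-colored areas, filled as convex hulls
    contours, _ = cv2.findContours(background, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(background.shape, dtype=np.uint8)
    for contour in contours:
        cv2.fillConvexPoly(mask, cv2.convexHull(contour), 255)
    return mask  # 255 = background, 0 = pavement / candidate pit area
```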
S203, performing super-pixel division on the background mask image.
Step S203 is preprocessing based on super-pixel division. As shown in fig. 3, after the background area is detected, the image is divided into super-pixels. Specifically, the background mask image is converted into the CIELab format; the super-pixel centers of the converted image are then selected by a checkerboard method so that the super-pixels are as uniform as possible, that is, the initial centers of the super-pixels are selected as uniformly as possible, which improves the accuracy of the super-pixels and the dividing effect of the simple linear iterative clustering (Simple Linear Iterative Clustering, SLIC) method; after the super-pixel centers are selected, super-pixel division is carried out according to the simple linear iterative clustering method.
In this embodiment, the super-pixel center of the converted image is selected according to the following formula:
wherein a point is extracted from each s×s region of the image as an initial center, center_x represents the center coordinate in the vertical direction, and center_y represents the center coordinate in the horizontal direction; M and N are the width and height of the image, and m and n denote that the current center point lies in the (m, n)-th image block; dex_1, dex_2, dey_1 and dey_2 are the discrimination parameters of the formula, calculated as follows:
when the center is the center point of the last block in the horizontal direction, dex_1 = 0 and dex_2 = 1; otherwise, dex_1 = 1 and dex_2 = 0; when the center is in the last block in the vertical direction, dey_1 = 0 and dey_2 = 1; otherwise, dey_1 = 1 and dey_2 = 0; the range of values of m and n is
In order to prevent the seed points from falling on boundaries with large gradients and to reduce the probability of noise points being taken as seed points, each initial seed point is moved to the position with the minimum gradient within its 3×3 neighborhood.
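A rough sketch of this initialization with scikit-image follows. The block size s = 40, the Sobel gradient and the helper name initial_centers are illustrative assumptions, and skimage's slic is used only as a stand-in for the simple linear iterative clustering step (it performs its own internal grid initialization rather than accepting external seeds).

```python
import numpy as np
from skimage import color, filters
from skimage.segmentation import slic

def initial_centers(lab_image, s=40):
    """Place one seed per s-by-s block, then shift each seed to the minimum-gradient
    position in its 3x3 neighborhood (an approximation of the checkerboard initialization)."""
    gray = lab_image[..., 0]            # use the L channel as intensity
    grad = filters.sobel(gray)          # gradient magnitude
    height, width = gray.shape
    centers = []
    for y0 in range(s // 2, height, s):
        for x0 in range(s // 2, width, s):
            ys = slice(max(y0 - 1, 0), min(y0 + 2, height))
            xs = slice(max(x0 - 1, 0), min(x0 + 2, width))
            window = grad[ys, xs]
            dy, dx = np.unravel_index(np.argmin(window), window.shape)
            centers.append((ys.start + dy, xs.start + dx))
    return centers

# Usage sketch: convert the masked image to CIELab, pick the seeds, then run SLIC.
# lab = color.rgb2lab(masked_rgb)
# seeds = initial_centers(lab, s=40)
# labels = slic(masked_rgb, n_segments=len(seeds), compactness=10, start_label=1)
```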
S204, acquiring gray level co-occurrence matrixes of the super pixels in four directions, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics.
Step S204 extracts the super-pixel texture features based on the gray level co-occurrence matrix. As shown in FIG. 4, the gray level co-occurrence matrices of each super-pixel in four directions (0°, 45°, 90°, 135°) are acquired, secondary statistics are performed on the four matrices, the energy and entropy of the gray level co-occurrence matrix are calculated as the texture features of the super-pixel, and the minimum difference value of the region is selected; the super-pixels are then clustered by the texture feature vectors composed of these texture features so as to merge super-pixels with the same characteristics, in preparation for the final pit identification.
In this embodiment, a gray level co-occurrence matrix is calculated as follows:
wherein I_s represents a super-pixel region, m_1 and m_2 represent the coordinates of points in the super-pixel region, and i and j represent gray levels; the formula is essentially a count of the pixel pairs that have the same gray levels and the same positional relationship.
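The matrix expression itself is not reproduced in this text; a standard co-occurrence-matrix definition consistent with the symbols above is sketched below as a reconstruction, where d denotes the unit displacement of the chosen direction.

```latex
% Reconstructed standard GLCM definition; #\{\cdot\} counts the qualifying pixel pairs
% and d is the displacement of the chosen direction (0°, 45°, 90° or 135°).
P(i, j \mid d) = \#\bigl\{ (m_1, m_2) \in I_s \times I_s \;:\;
                 m_2 = m_1 + d,\ I_s(m_1) = i,\ I_s(m_2) = j \bigr\}
```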
In this embodiment, the energy and entropy of the gray level co-occurrence matrix are calculated as follows:
wherein S_E represents the energy, which is used to measure the consistency of the textures of different super-pixel regions: the more uniform the gray level distribution within the super-pixel, the finer the image texture and the larger the energy; conversely, when the gray level distribution is too dense, the image texture is coarse and the energy is very small. S_cov is a statistic describing the complexity inside the super-pixel, i.e. the entropy: the larger the entropy value, the more texture information the image contains and the greater the randomness.
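A minimal sketch of this feature extraction with scikit-image is given below. It assumes each super-pixel has been cropped to a rectangular gray patch, quantizes the patch to 16 gray levels, and computes the energy (sum of squared matrix entries) and the entropy by hand, since graycoprops does not provide entropy. The function name and the level/distance choices are illustrative assumptions; the resulting vectors could then be clustered, for example with k-means.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_texture_features(gray_patch, levels=16):
    """Energy and entropy of the GLCM in four directions (0, 45, 90, 135 degrees)."""
    quantized = (gray_patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(quantized,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels,
                        symmetric=True,
                        normed=True)               # shape: (levels, levels, 1, 4)
    features = []
    for k in range(glcm.shape[3]):
        p = glcm[:, :, 0, k]
        energy = np.sum(p ** 2)                          # S_E: sum of squared entries
        nonzero = p[p > 0]
        entropy = -np.sum(nonzero * np.log(nonzero))     # S_cov: Shannon entropy
        features.extend([energy, entropy])
    return np.array(features)  # one 8-dimensional texture vector per super-pixel patch
```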
S205, establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area through a morphological method, and removing a pseudo pit area to realize pit identification.
The step S205 is a morphology-based removal of the pseudo pit, and specifically includes:
s2051, based on the super-pixel clustering result, a marker image is established, and the areas with similar texture features are set to be the same labels.
S2052, selecting, from the marked image, regions whose area ratio is smaller than a first preset ratio by a morphological method, that is, the regions occupying a relatively small proportion of the image, as suspected pit areas.
S2053, among the suspected pit areas, taking the regions whose area ratio is smaller than a second preset ratio, the second preset ratio being smaller than the first preset ratio, as pseudo pit areas.
S2054, removing the pseudo pit area to obtain a final pit area.
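The following sketch illustrates this area-ratio filtering with scikit-image; the label value pit_label and the two ratio thresholds (0.05 and 0.001) are illustrative placeholders rather than values given by the patent.

```python
import numpy as np
from skimage.measure import label, regionprops

def select_pit_regions(marker_image, pit_label, first_ratio=0.05, second_ratio=0.001):
    """Keep connected regions of the pit-textured class whose area ratio lies between
    the two preset ratios (regions below second_ratio are treated as pseudo pits)."""
    total_pixels = marker_image.size
    candidate = (marker_image == pit_label)
    labeled = label(candidate)
    pit_mask = np.zeros_like(candidate, dtype=bool)
    for region in regionprops(labeled):
        ratio = region.area / total_pixels
        if second_ratio <= ratio < first_ratio:   # suspected pit, but not a pseudo pit
            pit_mask[labeled == region.label] = True
    return pit_mask
```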
The feasibility of the pit identification method provided by this embodiment was verified by a specific test. The pit identification method of the invention is developed and analyzed in terms of pit detection and pit width calculation:
1) Working conditions
The experiment of this embodiment uses a PC with an Intel Core i7-7700K CPU running Windows 10 and one GeForce GTX 1070 graphics card; the programming language is Matlab.
2) Experimental content and results analysis
Fig. 5, fig. 6 and fig. 7 show example results for three pit images, in which the edge of the pit is marked and the pit width is annotated at a designated position. Comparing the pit width shown in the figures with the actual number of pixels at that position in the image shows that the result obtained by the pit identification method of this embodiment is accurate, which verifies the accuracy of the method.
The three groups of experimental results show that the HSV method effectively removes the influence of the background area on identification, and the super-pixel method better preserves the pit edges; the texture feature analysis based on the gray level co-occurrence matrix then addresses the problem of insufficient pit recognition precision.
It should be noted that while the method operations of the above embodiments are described in a particular order, this does not require or imply that the operations must be performed in that particular order or that all of the illustrated operations be performed in order to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
Example 2:
as shown in fig. 8, the present embodiment provides a pit identification device based on texture feature analysis, where the device includes an image acquisition module 801, a background identification module 802, a superpixel division module 803, a clustering module 804, and a pit identification module 805, and specific functions of the modules are as follows:
an image acquisition module 801, configured to acquire a pit image;
the background recognition module 802 is configured to perform HSV-based background recognition on the pit image to obtain a background mask image;
the super-pixel dividing module 803 is configured to perform super-pixel division on the background mask image;
the clustering module 804 is configured to obtain gray level co-occurrence matrices in four directions of the super pixel, calculate energy and entropy of the gray level co-occurrence matrix, use the energy and entropy as texture features of the super pixel, and perform clustering of the super pixel through the texture features;
and the pit identification module 805 is configured to establish a marker image according to the super-pixel clustering result, select a suspected pit area by using a morphological method, remove a pseudo pit area, and implement pit identification.
It should be noted that, the apparatus provided in this embodiment is only exemplified by the division of the above functional modules, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure is divided into different functional modules, so as to perform all or part of the functions described above.
Example 3:
The present embodiment provides a computer apparatus. As shown in fig. 9, it includes a processor 902, a memory, an input device 903, a display 904 and a network interface 905 connected through a device bus 901. The processor provides computing and control capabilities; the memory includes a nonvolatile storage medium 906 and an internal memory 907; the nonvolatile storage medium 906 stores an operating system, a computer program and a database, and the internal memory 907 provides an environment for running the operating system and the computer program in the nonvolatile storage medium. When the processor 902 executes the computer program stored in the memory, the pit identification method based on texture feature analysis of the above embodiment 1 is implemented, as follows:
acquiring a pit image;
carrying out HSV-based background recognition on the pit image to obtain a background mask image;
performing super-pixel division on the background mask image;
acquiring gray level co-occurrence matrixes of the super pixels in four directions, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
and establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area by a morphological method, and removing a pseudo pit area to realize pit identification.
Example 4:
the present embodiment provides a storage medium, which is a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, implements the pit identification method based on texture feature analysis of the above embodiment 1, as follows:
acquiring a pit image;
carrying out HSV-based background recognition on the pit image to obtain a background mask image;
performing super-pixel division on the background mask image;
acquiring gray level co-occurrence matrixes of the super pixels in four directions, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
and establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area by a morphological method, and removing a pseudo pit area to realize pit identification.
The computer readable storage medium of this embodiment may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor apparatus or device, or any combination of the foregoing. More specific examples of the computer readable storage medium include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this embodiment, the computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution apparatus or device. A computer readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with a computer readable program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution apparatus or device. A program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wire, optical fiber cable, RF (radio frequency) and the like, or any suitable combination of the foregoing.
The program code for carrying out the operations of this embodiment may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Python and C++, and conventional procedural programming languages such as the C language or similar languages. The program may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In summary, the invention effectively removes the influence of complex background on pit identification by performing HSV color space conversion on the pit image and calculating the convex hull of the area where the background color is located. The gray level co-occurrence matrix is utilized to obtain texture feature vectors, four directions are considered during matching, accurate pits and pavement areas are obtained, and finally efficient pit identification is achieved.
The above description covers only the preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art to the technical solution and inventive concept of the present invention within the scope of the disclosure shall fall within the protection scope of the present invention.

Claims (8)

1. A pothole recognition method based on texture feature analysis, the method comprising:
acquiring a pit image;
carrying out HSV-based background recognition on the pit image to obtain a background mask image;
performing super-pixel division on the background mask image;
acquiring gray level co-occurrence matrixes of the super pixels in four directions, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
according to the super-pixel clustering result, a marked image is established, a suspected pit area is selected through a morphological method, a pseudo pit area is removed, and pit identification is realized;
the super-pixel division of the background mask image specifically comprises the following steps:
converting the background mask image into a CIELab format;
selecting the super-pixel center of the converted image by a chessboard method to make the super-pixels uniform;
after selecting the super-pixel center, realizing super-pixel division according to a simple linear iterative clustering method;
and selecting a super-pixel center of the converted image, wherein the super-pixel center is represented by the following formula:
wherein a point is extracted from each s×s region of the image as an initial center, center_x represents the center coordinate in the vertical direction, and center_y represents the center coordinate in the horizontal direction; M and N are the width and height of the image, and m and n denote that the current center point lies in the (m, n)-th image block; dex_1, dex_2, dey_1 and dey_2 are the discrimination parameters of the formula, calculated as follows:
when the center is the center point of the last block in the horizontal direction, dex_1 = 0 and dex_2 = 1; otherwise, dex_1 = 1 and dex_2 = 0; when the center is in the last block in the vertical direction, dey_1 = 0 and dey_2 = 1; otherwise, dey_1 = 1 and dey_2 = 0; the range of values of m and n is
2. The pit identification method according to claim 1, wherein the performing HSV-based background identification on the pit image to obtain a background mask image specifically comprises:
HSV color space transformation is carried out on the pit images;
primarily identifying the background of the pit image based on the HSV value;
and carrying out outline detection on the area occupied by the background color based on the convex hull to obtain a background mask image.
3. The pit identification method of claim 2, wherein the HSV-based value initially identifies the background of the pit image by:
wherein min is the lower limit of the basic color range and max is the upper limit; the pixel points with S between 0 and 0.1176 and V between 0.8667 and 1 are white areas; the pixels with H between 0.1950 and 0.4318, S between 0.1686 and 1, and V between 0.1804 and 1 are green areas.
4. The pit identification method of claim 1, wherein the acquiring the gray level co-occurrence matrix of the super-pixel in four directions is as follows:
wherein I_s represents a super-pixel region, m_1 and m_2 represent coordinates of points in the super-pixel region, and i and j represent gray levels;
and calculating the energy and entropy of the gray level co-occurrence matrix, wherein the energy and entropy of the gray level co-occurrence matrix are represented by the following formula:
wherein S_E represents the energy, which can be used to measure the consistency of the textures of different super-pixel regions, and S_cov is the statistic that accounts for the complexity inside the super-pixel, i.e. the entropy.
5. The pit identification method according to any one of claims 1 to 4, wherein the creating a marker image according to the super-pixel clustering result, selecting a suspected pit area by a morphological method, removing a pseudo pit area, and implementing pit identification specifically comprises:
based on the super-pixel clustering result, a marked image is established, and the areas with similar texture features are set to be the same label;
selecting, from the marked image, a region whose area ratio is smaller than a first preset ratio as a suspected pit region by a morphological method;
among the suspected pit regions, taking a region whose area ratio is smaller than a second preset ratio as a pseudo pit region, wherein the second preset ratio is smaller than the first preset ratio;
and removing the pseudo pit area to obtain a final pit area.
6. A pothole recognition device based on texture feature analysis, the device comprising:
the image acquisition module is used for acquiring pit images;
the background recognition module is used for carrying out HSV-based background recognition on the pit image to obtain a background mask image;
the super-pixel dividing module is used for performing super-pixel division on the background mask image;
the clustering module is used for acquiring gray level co-occurrence matrixes in four directions of the super pixels, calculating energy and entropy of the gray level co-occurrence matrixes, taking the energy and entropy as texture characteristics of the super pixels, and clustering the super pixels through the texture characteristics;
the pit identification module is used for establishing a marked image according to the super-pixel clustering result, selecting a suspected pit area through a morphological method, and removing a pseudo pit area to realize pit identification;
the super-pixel division of the background mask image specifically comprises the following steps:
converting the background mask image into a CIELab format;
selecting the super-pixel center of the converted image by a chessboard method to make the super-pixels uniform;
after selecting the super-pixel center, realizing super-pixel division according to a simple linear iterative clustering method;
and selecting a super-pixel center of the converted image, wherein the super-pixel center is represented by the following formula:
wherein a point is extracted from each s×s region of the image as an initial center, center_x represents the center coordinate in the vertical direction, and center_y represents the center coordinate in the horizontal direction; M and N are the width and height of the image, and m and n denote that the current center point lies in the (m, n)-th image block; dex_1, dex_2, dey_1 and dey_2 are the discrimination parameters of the formula, calculated as follows:
when the center is the center point of the last block in the horizontal direction, dex_1 = 0 and dex_2 = 1; otherwise, dex_1 = 1 and dex_2 = 0; when the center is in the last block in the vertical direction, dey_1 = 0 and dey_2 = 1; otherwise, dey_1 = 1 and dey_2 = 0; the range of values of m and n is
7. A computer device comprising a processor and a memory for storing a program executable by the processor, wherein the processor, when executing the program stored in the memory, implements the pit identification method of any one of claims 1-5.
8. A storage medium storing a program which, when executed by a processor, implements the pit identification method according to any one of claims 1 to 5.
CN202310267566.XA 2023-03-20 2023-03-20 Pit identification method, device, equipment and medium based on texture feature analysis Active CN116434085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310267566.XA CN116434085B (en) 2023-03-20 2023-03-20 Pit identification method, device, equipment and medium based on texture feature analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310267566.XA CN116434085B (en) 2023-03-20 2023-03-20 Pit identification method, device, equipment and medium based on texture feature analysis

Publications (2)

Publication Number Publication Date
CN116434085A (en) 2023-07-14
CN116434085B (en) 2024-03-12

Family

ID=87088197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310267566.XA Active CN116434085B (en) 2023-03-20 2023-03-20 Pit identification method, device, equipment and medium based on texture feature analysis

Country Status (1)

Country Link
CN (1) CN116434085B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630311B (en) * 2023-07-21 2023-09-19 聊城市瀚格智能科技有限公司 Pavement damage identification alarm method for highway administration

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780259A (en) * 2021-11-15 2021-12-10 中移(上海)信息通信科技有限公司 Road surface defect detection method and device, electronic equipment and readable storage medium
CN115424232A (en) * 2022-11-04 2022-12-02 深圳市城市交通规划设计研究中心股份有限公司 Method for identifying and evaluating pavement pit, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765941B (en) * 2019-10-23 2022-04-26 北京建筑大学 Seawater pollution area identification method and equipment based on high-resolution remote sensing image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780259A (en) * 2021-11-15 2021-12-10 中移(上海)信息通信科技有限公司 Road surface defect detection method and device, electronic equipment and readable storage medium
CN115424232A (en) * 2022-11-04 2022-12-02 深圳市城市交通规划设计研究中心股份有限公司 Method for identifying and evaluating pavement pit, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Identification and extraction of asphalt pavement potholes based on image texture; Wang Penghui et al.; Application Research of Computers; Vol. 35, No. 5; pp. 1597-1598, Section 1 *

Also Published As

Publication number Publication date
CN116434085A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN109389163B (en) Unmanned aerial vehicle image classification system and method based on topographic map
CN112287912B (en) Deep learning-based lane line detection method and device
CN116434085B (en) Pit identification method, device, equipment and medium based on texture feature analysis
Shaikh et al. A novel approach for automatic number plate recognition
CN110175556B (en) Remote sensing image cloud detection method based on Sobel operator
CN112052777B (en) Method and device for extracting water-crossing bridge based on high-resolution remote sensing image
CN112686264A (en) Digital instrument reading method and device, computer equipment and storage medium
CN114898097B (en) Image recognition method and system
CN106875407A (en) A kind of unmanned plane image crown canopy dividing method of combining form and marking of control
CN115760762A (en) Corrosion detection method, detection device and storage medium
Huang et al. Deep convolutional segmentation of remote sensing imagery: A simple and efficient alternative to stitching output labels
CN117037082A (en) Parking behavior recognition method and system
CN114627463B (en) Non-contact power distribution data identification method based on machine identification
CN114821078B (en) License plate recognition method and device, electronic equipment and storage medium
CN115511815A (en) Cervical fluid-based cell segmentation method and system based on watershed
CN114862889A (en) Road edge extraction method and device based on remote sensing image
CN115376106A (en) Vehicle type identification method, device, equipment and medium based on radar map
CN112183556B (en) Port ore heap contour extraction method based on spatial clustering and watershed transformation
CN115690470A (en) Method for identifying state of switch indicator and related product
CN111062309B (en) Method, storage medium and system for detecting traffic signs in rainy days
CN111507287B (en) Method and system for extracting road zebra crossing corner points in aerial image
CN111832103B (en) Rapid implementation method for merging traffic subareas based on road network closed land parcel
CN114445814A (en) Character region extraction method and computer-readable storage medium
Sun et al. Automatic pavement cracks detection system based on Visual Studio C++ 6.0

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant